MCNZ consulted on a draft statement, Using artificial intelligence (AI) in patient care. We supported the draft as circulated, while noting it will be challenging to apply in practice. As the use of AI increases across all aspects of medical practice and patient care, the issue of patient consent, and the implications for patients who decline treatment, will become both more nuanced and more significant.
The draft statement outlined what doctors need to consider when they use AI in activities directly related to patient care. We submitted that the seemingly straightforward distinction between patient care and medical administration requires further clarification. AI is likely to form the backbone of the organisational structure and administration of many medical practices, potentially with no option for a person to book and amend appointments for clinics, procedures, and scans, or to undertake administration, transcription, and typing.
In relation to patient care, we agreed that doctors must never use an AI tool to represent them in the practice of medicine, for example by using an avatar, chatbot, or deep-fake video to carry out a consultation. However, AI avatars will have a useful role in education in future. The safeguards required will be along the lines of: 'the avatar can provide generic information only, not specific medical advice, and AI-generated content must be clearly labelled'.
Patient consent will become a more nuanced consideration as the use of AI increases within all aspects of patient care and treatment. Doctors will need to consider the important question of the implications for patients who choose to decline an AI element that is fundamentally integrated into a service. Can we offer them options? Will their care be jeopardised, or possibly declined? We suggested that the issue of requesting and confirming patient consent be covered more fully.
Associate Professor Matthew Clark, Chair of the RACS AI in Surgery Advisory Group, contributed to and endorsed our submission.
Read submission (PDF 243.57KB).
