November 8, 2024
WHO Releases AI Ethics and Governance Guidance for Large Multimodal Models

On 18 January 2024, the World Health Organization (WHO) issued new guidance on the ethics and governance of artificial intelligence (AI) for health, focusing on large multimodal models (LMMs). The guidance summarizes the broad applications of LMMs in the healthcare industry and includes recommendations for governments, which bear the primary responsibility for setting standards for the development and deployment of LMMs and for their integration and use in public health and medical practice.

Background

LMMs are a rapidly growing generative AI technology with applications across the healthcare industry. They can accept one or more types of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm. LMMs are sometimes described as “general-purpose foundation models”, although it has not yet been proven that they can accomplish a wide range of tasks and purposes.

LMMs have been adopted faster than any consumer application in history. Interest in them stems from their ability to facilitate human-computer interaction that mimics human communication and to generate responses to questions or data inputs that appear human-like and authoritative.

The WHO therefore issued the guidance to assist Member States in mapping the benefits and challenges associated with the use of LMMs for health. It sets out over 40 recommendations aimed at various stakeholders, including governments, healthcare providers and technology companies, with the goal of ensuring the responsible use of LMMs to safeguard and enhance public health.

Potential Benefits and Risks

The new WHO guidance sets out five broad applications of LMMs in the healthcare industry, along with the risks each presents:

1. Diagnosis and clinical care

LMMs may be used to assist diagnosis in areas such as radiology and medical imaging, tuberculosis and oncology. It is also hoped that clinicians could use LMMs to integrate patient records during consultations to identify at-risk patients, to aid in difficult treatment decisions and to catch clinical errors.

The use of LMMs in diagnosis and clinical care carries risks, including inaccurate, incomplete, biased or false responses; poor data quality and data bias; automation bias; degradation of physicians’ skills; and challenges to informed consent.

2. Patient-centered applications

AI tools can also expand self-care, enabling patients to take greater responsibility for their own care, for example by taking medicines, improving their nutrition and diet, engaging in physical activity, caring for wounds or delivering injections. This may be done through LMM-powered chatbots, health-monitoring tools and risk-prediction tools.

The use of LMMs in patient-centered applications carries risks, including inaccurate, incomplete or false statements; emotional manipulation by chatbots; data privacy concerns; degradation of interactions between clinicians, laypeople and patients; and delivery of care outside the healthcare system, which is generally subject to greater regulatory scrutiny.

3. Clerical functions and administrative tasks

LMMs may be used to assist health professionals with the clerical, administrative and financial aspects of practicing medicine.

Risks arising from such use include potential inaccuracy and inconsistency, as a slight change to a prompt or question can generate a completely different response.

4. Medical and nursing education

LMMs are also projected to play a role in medical and nursing education, for example by creating dynamic texts that, unlike generic texts, are tailored to the specific needs and questions of the student.

The risk arising from such use is that healthcare professionals could suspend their own judgment, or that of a human peer, in favor of a computer’s.

5. Scientific and medical research and drug development

LMMs can potentially extend the ways in which AI is used to support scientific and medical research and drug development. For example, they can draft text for scientific articles, summarize and edit texts, and analyze scientific research.

General concerns about the use of LMMs in scientific research include a lack of accountability; bias toward high-income countries; and “hallucination”, such as summarizing or citing academic articles that do not exist.

Key Recommendations

The WHO has included several recommendations for the development and deployment of LMMs, including the following:

  • Governments should invest in public infrastructure, enact laws and regulations to ensure that LMMs and healthcare applications meet ethical obligations and human rights standards, assign regulatory agencies to assess and approve LMMs, and implement mandatory post-release audits and impact assessments by independent third parties.
  • Developers of LMMs should engage diverse stakeholders from the early stages of development and design LMMs to execute well-defined tasks with accuracy and reliability, thus enhancing the capabilities of healthcare systems and promoting patient welfare.

The WHO’s guidance on the ethics and governance of AI, focusing on LMMs, can be found here.
