December 3, 2024
Signature biomedical ethics lecture at OUWB addresses issues raised by AI

Navigating rapidly evolving artificial intelligence technologies while maintaining a person-centered ethical approach to patient care was the primary focus of a recent event hosted by Oakland University William Beaumont School of Medicine.

The 7th Annual Ernest and Sarah Krug Lecture in Biomedical Ethics was held Oct. 1, at The Townsend Hotel in Birmingham. The lecture also was available online.

More than 100 people attended to hear Matthew DeCamp, M.D., Ph.D., associate professor, Division of General Internal Medicine and Center for Bioethics and Humanities, University of Colorado Anschutz Medical Campus.

Attendees included Oakland University President Ora Hirsch Pescovitz, M.D., several school deans from Oakland University, numerous department chairs from Corewell Health, medical students, and others.

“I hope people came away with a sense of the complexities of how we manage transparency and bias in AI,” DeCamp said after his lecture on “Beyond the Data: Ethics and Artificial Intelligence (AI) in Health Care.”

“But even more, I hope they came to think of AI as not just tools we use but technologies that can change how we think and what we value,” he added.

“The challenges AI raises are about more than the data…they reflect who we are, and who we want to be, as individuals and a society.”

The Krug Lecture was among several recent signature OUWB events that have addressed AI in health care.

In May, an entire day of OUWB Medical Education Week was devoted to the topic. (More about that event can be found here.)

As part of September’s Women in Medicine Month, Flo Doo, M.D., OUWB ’17, gave a lecture on “Revolutionizing Precision Medicine with Generative AI from Medical Education to Patient Care.” Doo is now director of Innovation and an assistant professor at University of Maryland Medical Center. (More details about her lecture can be found here.)

Additionally, the use of AI in potentially improving maternal health is at the center of a $200,000 grant that OUWB recently received.

And school officials are looking closely at how AI fits into the curriculum.

“AI and ethics are critical topics at this time due to the rapid integration of artificial intelligence and machine learning into health care,” said Jason Wasserman, Ph.D., professor, Department of Foundational Medical Studies.

Wasserman said it made sense for the Krug Lecture to center on AI because it’s transforming how medical professionals make decisions, diagnose diseases, and treat patients.

“With this transformation, ethical questions about the implications of AI in medicine are growing. AI raises concerns about transparency, fairness, patient autonomy, and the potential for bias in medical decisions,” he said.

With AI increasingly handling tasks traditionally performed by physicians, Wasserman said, the technology also is challenging established norms around clinical judgment and professional responsibility.

“Moreover, the complexity and proprietary nature of many AI systems make it difficult for clinicians and patients to fully understand or challenge the outputs generated by these systems,” he said. “This calls for a reevaluation of patients’ rights and the moral obligations of health care providers.”


‘Perfect time to be proactive’

DeCamp said that “now is the perfect time to be proactive in addressing the issues AI raises.”

“We have to think about these issues and act now, if we want to shape AI for (the) better,” he said.

DeCamp introduced several key areas where AI might have an impact on medicine: personalized medicine; diagnostic accuracy; drug discovery and development; predictive analytics; and virtual health assistants.

He also talked about the rapid evolution of AI, noting that the Food and Drug Administration has cleared more than 900 AI-enabled devices for use in medicine.

DeCamp also described the different kinds of AI and, with that baseline established, began addressing ethical questions and issues related to the technology.

“A question we ask in ethics is if any of the many, many ethics guidelines that are out there now can influence this coming (AI) tidal wave,” he said. “We hope so.”

He suggested the use of several consensus principles to help govern the use of AI in medicine: be transparent; do no harm; mitigate bias; be explainable to patients; and promote choice.

But he suggested AI might be held to a higher standard than other tools used by doctors. For example, he pointed to the “Principles for Augmented Intelligence Development, Deployment, and Use,” approved by the American Medical Association board of trustees in November 2023. Among other things, the principles call for the use of AI to be documented in a medical record. Yet there isn’t a similar requirement for when other tools are used.  

“Is AI exceptional or are we behaving in a way that reflects exceptionalism?” he asked.

DeCamp also talked at length about the issue of bias in AI. Using bias in chest X-ray interpretation as one example, he noted a recent study that found significantly higher rates of under-diagnosis among patients who are female, Black, Hispanic, on Medicaid, or at an intersectional disadvantage.

One often-discussed issue is biased datasets, the primary sources of information that AI draws on to produce results. Not only do they generally fail to reflect the full spectrum of patients, but one study found that most health care-related datasets come from California, New York, or Massachusetts, creating geographic bias.

DeCamp also noted other opportunities for bias when AI is used in patient care, as well as latent biases and errors that could be waiting to happen.

Still, he offered ways to mitigate some of these biases. For example, to improve datasets he suggested engaging and partnering with communities so that they will want to contribute data. More diverse teams might help eliminate bias in processing, and reckoning with social and structural determinants of health could help reduce biased outputs.

DeCamp also suggested that governance may be key to addressing the bias of AI in health care.

In closing, DeCamp said AI can be a helpful tool, but cautioned it can affect how people view each other and “how we care, not just ‘what we do.’” He suggested that potential solutions to ethical questions raised by AI are “beyond the data.”

“And I hope that thinking deeply about ethics principles and how they apply in the real world should help us shape how AI is implemented in health care,” he said.

For more information, contact Andrew Dietderich, senior marketing specialist, OUWB, at [email protected].

To request an interview, visit the OUWB Communications & Marketing webpage.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
