
Given the rapid proliferation of AI-equipped medical devices across U.S. healthcare, unintended effects should surprise no one. Many of these will be pleasant surprises, but some adverse events are likely as well.
To strive for the best while preparing for the worst, healthcare organizations and healthcare AI developers should collaborate to ensure that AI systems are robust, reliable and transparent.
Two researchers remind these stakeholders of this and other responsibilities in an opinion piece published Nov. 27 in JAMA.
“Healthcare organizations must proactively develop AI safety assurance programs that leverage shared responsibility principles, implement a multifaceted approach to address AI implementation, monitor AI use, and engage clinicians and patients,” write Dean Sittig, PhD, and Hardeep Singh, MD, MPH. “Monitoring risks is crucial to maintaining system integrity, prioritizing patient safety and ensuring data security.”
Sittig is affiliated with the University of Texas, Singh with Baylor College of Medicine. Their JAMA paper’s primary audience is the provider sector. Here are six recommendations from the piece.
1. Conduct, or wait for, real-world clinical evaluations published in high-quality medical journals before introducing any AI-enabled system into routine care.
Further, while new AI-enabled systems mature, “we recommend that all healthcare organizations conduct independent real-world testing and monitoring with local data to minimize the risk to patient safety,” Sittig and Singh write.
2. Invite AI experts into new or existing AI governance and safety committees.
These experts might be data scientists, informaticists, operational AI personnel, human-factors experts or clinicians working with AI, the authors point out.
3. Make sure the AI committee maintains an inventory of clinically deployed, AI-enabled systems with comprehensive tracking information.
Healthcare organizations should maintain and regularly review a transaction log of AI system use—similar to the audit log of the EHR—that includes the AI version in use, date/time of AI system use, patient ID, responsible clinical user ID, input data used by the AI system and AI recommendation or output, Sittig and Singh assert.
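To make that recommendation concrete, here is a minimal sketch of what one entry in such an AI transaction log might look like. The structure, field names and file format are illustrative assumptions, not a schema specified by Sittig and Singh; they simply mirror the elements the authors list.

```python
# Illustrative sketch only: field names and structure are assumptions, not a
# schema from the JAMA piece. The fields mirror the elements the authors list:
# AI version, date/time of use, patient ID, responsible clinician ID, input
# data, and the AI recommendation or output.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIAuditLogEntry:
    ai_system_name: str   # which deployed AI-enabled system was used
    ai_version: str       # version of the model/system in use
    timestamp: str        # date/time of AI system use (ISO 8601)
    patient_id: str       # patient identifier
    clinician_id: str     # responsible clinical user
    input_data: dict      # input data provided to the AI system
    ai_output: str        # AI recommendation or output shown to the user

def log_ai_use(entry: AIAuditLogEntry, path: str = "ai_audit_log.jsonl") -> None:
    """Append one entry to an append-only JSON-lines log for later review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example usage with fictional identifiers
log_ai_use(AIAuditLogEntry(
    ai_system_name="sepsis-risk-model",
    ai_version="2.3.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
    patient_id="PAT-000123",
    clinician_id="DR-4567",
    input_data={"heart_rate": 112, "lactate": 3.1},
    ai_output="High sepsis risk; recommend clinical review",
))
```

In practice such a log would likely sit alongside, or within, the EHR audit-log infrastructure the authors compare it to, so that AI use can be reviewed with the same rigor as other clinical documentation.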
4. Create high-quality training programs for clinicians interested in using AI systems.
Initial training and subsequent clinician engagement should include a formal consent-style process, complete with signatures, the authors stress, to ensure that clinicians understand the risks and benefits of using AI tools before their access is enabled.
5. Develop a clear process for patients and clinicians to report AI-related safety issues.
As part of this effort, be sure to implement a rigorous, multidisciplinary process for analyzing these issues and mitigating risks, Sittig and Singh recommend.
6. Provide clear written instructions and authority to enable authorized personnel to disable, stop, or turn off the AI-enabled systems 24 hours a day, 7 days a week, in case of an urgent malfunction.
“Similar to an organization’s preparation for a period of EHR downtime,” the authors offer, “healthcare organizations must have established policies and procedures to seamlessly manage clinical and administrative processes that have become dependent on AI automation when the AI is not available.”
Expounding on that last point, the authors suggest revising AI models that fail to meet pre-implementation goals. If revision proves infeasible, "the entire system should be decommissioned."