July 15, 2025
AI Code of Conduct Addresses Patient Engagement, Performance Monitoring

During a June 12 webinar, several of the co-authors of a new special publication from the National Academy of Medicine (NAM) described an AI Code of Conduct framework to guide responsible, effective, equitable, and human-centered use of AI in medicine.

The AI Code of Conduct framework is intended as a touchstone for organizations and groups developing approaches for use in their specific contexts. The publication presents six commitments and 10 principles to align the field around responsible development and application of AI. The commitments, which provide the anchor elements of the framework, are: advance humanity, ensure equity, engage impacted individuals, improve workforce well-being, monitor performance, and innovate and learn.

Grace Cordovano, Ph.D., patient advocate and founder of Enlightening Results, said she applauds the National Academy of Medicine and the AI Code of Conduct team for being among the first major health AI efforts to authentically represent and systematically embed the patient and care partner perspective.

She noted that conversations about trust in AI have taken priority across the healthcare ecosystem, and that the code of conduct will be a conduit for building trust, especially with patients, families, care partners and patient communities. “I want to highlight that the code discusses patients as primary stakeholders, end users and co-creators of health AI, and similarly recognizes that patient-led governance and rights are essential,” Cordovano said.

Among the advocacy priorities is inclusion of the patient voice in all stages of the AI lifecycle. “This code establishes transparency requirements for developers and systems to explain what data is being used and how, as well as which AI tools are being used in our care and how they are performing. So trust and transparency are critical and key,” she added. “We also look at translations of clinical and technical specifications for AI monitoring into patient-friendly formats to support informed consumer engagement. I look forward to the day when AI is not just in the background or something that we are hearing about in the press, but that all through our ecosystem we are supporting patient education and transparency and building trust.”

Implications for researchers

Philip Payne, Ph.D., associate dean for health information and data science and chief data scientist at Washington University School of Medicine, shared perspectives from the team that worked on implications for research on AI and research using AI. He said one of the questions involves how to make sure, given the rapid pace of innovation, that we have appropriate mechanisms to share not only successes, but also failures, “so that we can create a cumulative approach to AI innovation where we avoid potential pitfalls that have already been identified and well-characterized.”

Payne said that because of the fundamental nature of AI methods and the increasing volume of data used to train, evaluate and deploy these technologies, researchers recognize the need to deeply embed privacy and ethics into all parts of the AI lifecycle, from development and evaluation through ultimate scaling. “That means we have to consider not just ethics in the immediate context or research scenarios, but also how do those ethical, legal and social implications of AI scale as we think toward future applications if those projects are successful,” he added.

Real-world example from Kaiser Permanente

Andrew Bindman, M.D., executive vice president and chief medical officer for Kaiser Permanente, shared the health system perspective. He noted that health systems can take a leadership role in specifying and promoting the business and clinical needs and requirements for using health AI and bear responsibility for ensuring that when it’s used in care delivery, AI benefits patients equitably and is used in a way that builds trust in the health system. “This includes carefully considering issues, including patient privacy, consent, agency, accountability, addressing legal and financial responsibilities, and overcoming challenges related to the workforce.”

Bindman gave a real-world example of how Kaiser Permanente has been using a framework for responsible AI, which he said reflects the commitments that are outlined in the NAM report. 

“At Kaiser Permanente, AI tools must drive our core mission of delivering high-quality and affordable care for our members. This means that AI technologies must demonstrate a return on health, such as improved patient outcomes and experiences, and we prioritize safety, equity and health outcomes in the development, assessment and deployment of any technology, including AI-based tools,” he said. “We’re committed to evaluating potential AI technologies, continuously monitoring their performance and adhering to established clinical standards and guidelines. We’re using AI to improve care, enhance patient-clinician relationships, optimize clinicians’ time and ensure fairness in care experiences and health outcomes by addressing individual needs.”

Bindman highlighted how Kaiser Permanente operationalized and applied its responsible AI principles in the implementation, assessment, and monitoring of an AI-assisted clinical documentation tool. The tool securely captures clinical notes during visits, helping doctors and clinicians remain focused on talking with patients rather than on documentation or administrative tasks. “We employed a QA [quality assurance] process related to the deployment, which, upon reflection, really aligns quite well with the health system commitments outlined in the NAM report. One of those identified in the NAM report is advancing humanity at a high level. We found that this technology helps doctors and other clinicians foster an environment where they can provide effective communication and transparency while meeting the individual needs of each patient who comes to them,” he said.

Kaiser Permanente assessed the tool’s performance among patients with limited English proficiency after clinician feedback described inconsistent performance in non-English encounters. “We conducted specialized QA testing to assess performance for non-English speakers, and we did find some issues, so we paused on using it in that setting,” Bindman said. “We spoke with the vendor and had those issues corrected. We engaged the impacted individuals who, in this case, are the patients. We always require patient consent for the use of the tool. We assessed patient experience and observed a significant positive impact related to clinicians using this tool. In brief, the patients appreciated that the attention was on them, not the keyboard.”
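That kind of subgroup check can be made concrete. The Python sketch below shows one way a QA team might compare an AI documentation tool’s error rate across language groups and flag a group for pause; the data schema, field names, threshold, and pause rule are all illustrative assumptions, not Kaiser Permanente’s actual QA pipeline.

"""Minimal sketch of subgroup QA monitoring for an AI documentation tool.

Assumes reviewers have audited a sample of encounters and recorded whether
the AI-generated note required substantive correction. All names, fields,
and thresholds are hypothetical.
"""
from collections import defaultdict

# Hypothetical audit records: (encounter_language, note_needed_correction)
audited_encounters = [
    ("english", False), ("english", False), ("english", True),
    ("spanish", True), ("spanish", True), ("spanish", False),
    ("cantonese", True), ("cantonese", True),
]

# Assumed QA rule: pause the tool for any language group whose observed
# correction rate exceeds the overall rate by more than this margin.
PAUSE_MARGIN = 0.15
MIN_SAMPLE = 2  # don't act on groups with almost no audited encounters

def correction_rates(records):
    """Return per-language correction rates, the overall rate, and counts."""
    totals, errors = defaultdict(int), defaultdict(int)
    for language, needed_correction in records:
        totals[language] += 1
        errors[language] += int(needed_correction)
    overall = sum(errors.values()) / sum(totals.values())
    by_group = {lang: errors[lang] / totals[lang] for lang in totals}
    return by_group, overall, totals

by_group, overall, totals = correction_rates(audited_encounters)
for language, rate in sorted(by_group.items()):
    if totals[language] >= MIN_SAMPLE and rate > overall + PAUSE_MARGIN:
        # In a real deployment this would open a review ticket and suspend
        # the tool for these encounters pending vendor fixes.
        print(f"PAUSE: {language} correction rate {rate:.0%} "
              f"vs. overall {overall:.0%}")

In practice the comparison would use far larger audit samples and a proper statistical test rather than a fixed margin, but the structure, measure by subgroup, compare to a baseline, and pause on divergence, mirrors the process Bindman described.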

Kaiser Permanente also captured clinician experience and found the tool helped reduce feelings of burnout by cutting the amount of time clinicians spend on administrative tasks, which contributed to a sense of greater communication and a feeling of well-being among the impacted part of the workforce.

Bindman said one question health systems must address is what represents best practices with regard to ongoing monitoring for AI. “As an organization that is committed to being a learning healthcare system, we accept our responsibility to learn from the latest contributions to the evidence, and we have a responsibility to contribute to the knowledge about this new technology,” he said. “While we and other health systems are making important progress in standing up responsible AI governance systems and developing new tools, we still have much to learn collectively as an industry. Having alignment around best practices for what initial and ongoing monitoring of AI tools looks like would be helpful. That’s akin to the value that’s derived from having benchmarks for quality performance across healthcare organizations. Alignment is key. It isn’t helpful if there are inconsistent AI standards across different levels of government or across different regulating or certifying bodies. This could be especially burdensome for healthcare organizations with limited resources, which have some of the most to gain from the promises of AI but may be least able to navigate the challenges of ever-evolving AI standards.”
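For readers wondering what such ongoing monitoring could look like mechanically, here is a minimal Python sketch of one possible approach: benchmark a metric at go-live, then compare each subsequent monitoring period against that baseline with a tolerance band. The metric, batch sizes, and tolerance are assumptions for illustration, not an industry benchmark or any health system’s actual standard.

"""Minimal sketch of ongoing performance monitoring for a deployed AI tool.

Assumes a single quality metric (e.g., the rate at which clinicians accept
AI-drafted notes without major edits) benchmarked during initial validation.
All values are hypothetical.
"""

def monitor_batches(baseline_rate, batches, tolerance=0.10):
    """Yield (batch_index, rate, alert) for each monitoring period.

    baseline_rate: acceptance rate measured during initial validation.
    batches: list of (accepted_notes, total_notes) per monitoring period.
    tolerance: allowed absolute drop before an alert fires (assumed value).
    """
    for i, (accepted, total) in enumerate(batches):
        rate = accepted / total
        alert = rate < baseline_rate - tolerance
        yield i, rate, alert

# Hypothetical monthly batches of clinician-accepted AI-drafted notes.
baseline = 0.92  # acceptance rate established during initial QA
monthly = [(930, 1000), (905, 1000), (790, 1000)]

for month, rate, alert in monitor_batches(baseline, monthly):
    status = "ALERT: investigate drift" if alert else "ok"
    print(f"month {month}: acceptance {rate:.0%} ({status})")

The open question Bindman raises is not the mechanics but the alignment: which metrics, baselines, and tolerances should count as the shared benchmark across organizations and regulators.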

 
