March 21, 2025
Challenges, Ethical Concerns, and Pessimistic Views with AI Integration

Integrating new applications and technologies into your organization is not a task to be taken lightly. It is vital that we consider all aspects and possibilities for both our staff and our patients when making these decisions. Right now, AI is being preached as the solution to most of our problems – like finding water after being stranded in the desert. We're told that different AI applications and tools can reduce administrative burden and clinical burnout, improve patient experience and outcomes, and so much more. It can be very tempting to leap straight in, but we must exercise caution to make sure that the water is real and not merely a mirage. We need to look first at all of the challenges, ethical concerns, and pessimistic views around AI. That way we are prepared for its shortcomings and failures, and for the reasons patients and staff members may be distrustful.

To help you see what challenges, ethical concerns, and pessimistic views surround AI integration, we reached out to our insightful Healthcare IT Today Community and the following is what they had to share on this subject.

Trent Peterson, PhD, Head of User Experience Design at AdvancedMD
The biggest issue with generative AI technology within healthcare is that it is still new, and many are concerned about its limitations. A report last December from KLAS Research found that more than half of the healthcare executives surveyed were looking to implement a GenAI solution within the next year. At the same time, these executives reported that their biggest concerns around GenAI were accuracy and reliability.

A generative AI platform is only as strong as the large language model (LLM) that powers it. Before a healthcare organization can use the technology to its advantage, it must have guardrails in place to ensure the generative AI application is operating as intended and not putting patient data at risk or influencing diagnoses or treatment plans based on “bad” data. Some of these challenges will only be resolved by advances in the technology itself. But it is also critical that organizations implementing generative AI have the proper guardrails built into the system to catch these problems when they happen.
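
To make that concrete, here is a minimal sketch (in Python) of one such guardrail: a post-generation filter that blocks a draft containing text that looks like protected health information before it reaches a user. The regex patterns and the `check_output()` interface are illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch of a post-generation PHI guardrail (illustrative only).
import re

# Hypothetical patterns for identifiers that should never appear in output.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def check_output(text: str) -> tuple[bool, list[str]]:
    """Return (is_safe, names of patterns that matched)."""
    hits = [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]
    return (not hits, hits)

draft = "Patient reachable at 555-867-5309; MRN 00123456."
safe, hits = check_output(draft)
if not safe:
    # A real deployment would route the draft to redaction or human review.
    print(f"Blocked draft output; flagged patterns: {hits}")
```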

Aasim Saeed, CEO at Amenities Health
Unfortunately, I’m very skeptical about AI in healthcare. It’s not that I’m not impressed with ChatGPT or don’t see the potential for transformation with these powerful tools. It’s that the hype train has left the station without offering much in terms of tangible use cases. We’ve seen this story several times: most recently it was “cloud” for healthcare, before that it was “apps” in healthcare, and before that, it was… well, it was “AI” in healthcare already (anyone remember when Watson was going to revolutionize our industry?). The reality is that all these technologies are amazing, and I look forward to the day we’re able to utilize and maximize their potential. But that day is not today, and it’s mostly our fault because we won’t let the technology do important things.

Instead of asking how AI can increase access, we’re tasking it to preread our messages, then summarize them, and then draft a response for us to read and send manually. How much time did that save us? Instead of asking how AI can eliminate bureaucracy, we’re training provider voice bots to call up insurance voice bots and see which one can outlast the other on the phone line of death for prior authorization. The list goes on and on.

Of course, we need to start somewhere, and it would be imprudent to allow AI to take over tasks with no supervision, but we should focus on areas with transformative potential. In fact, let's start by expanding use cases that have already shown early success, like screening and prioritizing medical imaging for radiologist review or scanning large EMR data sets for increased risk of heart attack, stroke, and other cardiovascular events.

Tim Boltz, Healthcare Solutions Lead at Carahsoft Technology Corp.
AI is still a nascent and largely abstract technology. However, it is generating tremendous excitement in the healthcare space, and its use cases are rapidly accelerating. AI has the power to create more effective medicine and treatments for patients, lessen the burdens of workforce shortages, fast-track the ability to conduct life-saving research, and more. While AI is already being used in some pockets of the healthcare landscape, we have hardly scratched the surface of its potential. However, there are several security and equity concerns that healthcare leaders must consider.

As IT networks become more sophisticated and organizations begin to deploy AI, it is important that network security measures and efforts to secure data scale with modernization initiatives. As such, healthcare institutions should be of the mindset that AI and cybersecurity advancements go hand in hand.

Additionally, technology leaders must ensure that new AI solutions will not perpetuate the inequities that we have witnessed in patient care in the past. This means conducting thorough analysis of the datasets that form the foundation of new AI models, and actively committing to an approach that equally considers all populations.

To support the responsible and safe use of AI technology, it is critical that regulatory bodies introduce standards dictating the network security measures that must be taken, establish new HIPAA considerations for AI technology, and put in place frameworks that prevent inequities. Together, these measures can help pave the way for prolific advancements in healthcare.

Kevin Paroda, Global Segment Manager, Acute Care at CenTrak
The integration of AI in healthcare presents several challenges and ethical considerations, particularly regarding patient privacy, data security, and algorithmic bias. Firstly, the quality of data inputted into AI algorithms is crucial for accurate outcomes. Hospital data, often complex and unfiltered, requires significant cleaning before being utilized, posing a substantial operational hurdle. Secondly, concerns about liability arise when AI systems influence medical decision-making. Hospitals fear legal repercussions if decisions recommended by AI result in adverse outcomes for patients.

Ownership and protection of data are paramount, with compliance issues such as HIPAA regulations adding complexity. Questions surrounding the sharing of proprietary information and the risk of algorithmic compromise or manipulation heighten ethical concerns. Data security also remains a pervasive worry, with insufficient reassurance to allay fears of breaches or unauthorized access to sensitive patient information.

Despite challenges, the rigorous FDA approval process for AI used in clinical settings helps ensure safety and efficacy standards are met, mitigating concerns about patient safety and algorithmic reliability. In addressing these challenges, ongoing efforts focus on enhancing data quality, strengthening regulatory frameworks, and implementing robust security measures to safeguard patient privacy and maintain trust in AI-driven healthcare solutions.

Heather Lane, Senior Architect at athenahealth
AI holds tremendous promise for transforming healthcare. Importantly, as more use cases are implemented, it's critical for the industry to ensure that existing gaps in health equity aren't deepened by AI advances. When safety and equity considerations are thoughtfully included, AI can be used to correct human biases. But when approached naively, AI can act as a mirror, absorbing our existing biases. For example, it may be a quicker and lighter lift for a healthcare organization to build AI systems based on existing training data, but this pre-existing data, and the codes within it, may reflect inequities that are then amplified if not caught, resulting in a system that widens inequity gaps. Therefore, it's crucial to take a careful, measured approach to developing solutions that limit inequities and are more impartial than the original input data.

Amy Brown, Founder & CEO at Authenticx
Healthcare workers face the challenge of balancing exceptional patient care against completing necessary administrative functions. Nurses, in particular, often field a variety of patient calls that range from scheduling appointments to answering medical questions. Conversational data insights help healthcare organizations redirect administrative burdens and ensure that medical professionals’ time and expertise are being used in the most impactful ways. Leaders can use AI to listen to and analyze patient conversations, understand top reasons for patient calls and identify common areas of disruption. With those insights, they can develop or update processes to redirect administrative calls, prioritize nurses’ ability to provide patient care and improve operational efficiency.

Rob Laumeyer, Availity AI Adviser at Availity
In healthcare, where errors can have dire consequences, it is critical to ensure AI’s high accuracy and reliability. Accomplishing this and implementing meaningful change will require a collaborative effort between AI developers, healthcare professionals, and regulatory bodies to create AI technologies that are accountable, transferable, and traceable. AI technology providers must, therefore, take a deeply responsible position and establish precautions as well as responsible AI principles that ensure the accuracy and safety of decisions.

An example is in developing critical technology systems that combine AI’s power with human clinicians’ expertise, which is not a black box but is transparent and traceable. To minimize known biases in data, AI systems should also be designed with fair and equitable objectives and outputs monitored accordingly. In addition to adherence to HIPAA security and privacy rule policies, data usage rights agreements should be honored, along with adopting architectures with privacy safeguards, and providing appropriate transparency and control over the use of data.

Finally, it is vital to continue researching, learning, and iterating to ensure scientific rigor and integrity and continuously improve solutions. This includes gathering end-user feedback, using human-in-the-loop audits, and adjusting or retraining systems to ensure fairness, context appropriateness, and security. By ensuring responsible and ethical training and maintenance of AI systems, we can enhance groundbreaking AI-based healthcare technologies and deliver transformational improvements.

Tim O’Connell, Co-Founder and CEO at emtelligent
Recognizing that previous AI solutions have had issues with non-determinism, hallucinations, and reliably referencing source data underscores the need for proactive efforts to create guardrails that address privacy, security, and bias. First, as the technology matures, AI solutions should be placed directly into the hands of healthcare professionals, equipping them to make well-informed, auditable decisions. To reduce bias, LLMs must also be trained on more diverse data sets and account for disparities and inaccuracies that come from training that leverages historical diagnoses. Lastly, training must shift from simply recognizing clinical terms and passing medical school entrance exams to ensuring LLMs are medically aligned, that is, trained for real-world healthcare situations to ensure they can produce output that is highly useful and resilient against hallucinations.

Anthony Weiss, CMO at Harvard Medical Faculty Physicians at BIDMC
One challenge we should anticipate with the integration of AI into healthcare is the impact on the peer review process. For all of the regulation within healthcare, oversight of The Practice of Medicine remains largely the domain of local peer review committees, populated by physicians. Through this process, these committees assess adverse events to determine if care was aligned with community standards and guide physicians toward these standards through education or discipline. The direct provision of care by an AI “physician extender” may pose challenges for these committees, both in understanding how the AI made an error and how best to modify the AI’s practice toward more optimal outcomes. This may require an entire rethinking of the role of peer review in overseeing healthcare quality and safety.

Dave Bennett, Executive Vice President, Healthcare at pCare by Uniguest
Emerging technologies appear every day, but AI is one that we in healthcare really need to step back and thoughtfully consider how we're using. While AI holds a great deal of promise, it also has plenty of pitfalls along the way. We must be careful not to fall into ‘shiny object syndrome’ and ensure we aren't swept up in the newest tool. Taking a step back and keeping it simple – so people understand its process, its end results, and how to optimize the technology in front of us now – is really critical.

Laxmi Patel, Chief Strategy Officer at Savista
The integration of AI in healthcare brings transformative potential but also raises significant challenges and ethical considerations. Patient privacy and data security are paramount concerns, with the vast amount of sensitive health information at risk of unauthorized access or breaches. Additionally, bias in AI algorithms poses a threat to fairness and equity in healthcare delivery, potentially exacerbating disparities in diagnosis and treatment outcomes. Ensuring transparency and interpretability of AI systems is crucial to building trust and accountability.

To address these challenges, healthcare organizations and policymakers need to implement regulatory frameworks to govern AI use, emphasizing data privacy, security, and algorithmic fairness. Robust data governance and security measures should be adopted to protect patient data, including encryption and access controls. Efforts to mitigate bias in algorithms involve developing techniques for data preprocessing and fairness-aware training. Enhancing the interpretability of AI models and promoting informed consent empower patients to understand and control how their data is used in AI applications. By addressing these issues, stakeholders can harness the benefits of AI in healthcare while upholding principles of privacy, security, fairness, and transparency.
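
As one illustration of the preprocessing and fairness-aware training mentioned above, here is a minimal sketch of inverse-frequency reweighting, in which samples from under-represented groups are up-weighted so each group carries equal total weight during fitting. The data and group labels are synthetic; this sketches the idea, not a production fairness pipeline.

```python
# Minimal sketch of fairness-aware reweighting on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                           # synthetic features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)    # synthetic labels
group = rng.choice(["A", "B"], p=[0.9, 0.1], size=1000)  # imbalanced groups

# Inverse-frequency weights: each group contributes equally overall.
freq = {g: np.mean(group == g) for g in np.unique(group)}
weights = np.array([1.0 / freq[g] for g in group])

model = LogisticRegression()
model.fit(X, y, sample_weight=weights)  # fairness-aware fit
```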

Akshay Sharma, Chief AI Officer at Lyric
In developing AI for healthcare, it’s crucial to ask: How does this AI benefit patient care? Will AI replace human decision-making in a healthcare setting? If so, stricter protocols are needed. If AI assists human decision-making, preventing bias is key. The operational enhancements utilizing AI are usually more straightforward. Implementing AI in healthcare demands understanding data rights, ensuring privacy of PHI and PII, and careful dataset selection. Multiple models may be necessary to capture diverse data representations. To safeguard sensitive information, rigorous privacy techniques are crucial during training and inference, and certified de-identified data and public datasets are recommended. Continuous monitoring for data and model drifts is also essential.
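
As a hedged illustration of the continuous drift monitoring mentioned above, the sketch below compares a feature's recent production values against its training-time baseline using a two-sample Kolmogorov–Smirnov test. The feature, the synthetic data, and the alert threshold are all assumptions made for illustration.

```python
# Minimal sketch of data-drift detection with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time values
live = rng.normal(loc=0.4, scale=1.0, size=500)       # recent production values

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:  # the threshold is a policy choice, not a fixed constant
    print(f"Possible data drift detected (KS={stat:.3f}, p={p_value:.2g})")
```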

Thomas Kavukat, Chief Technology Officer at RXNT
Perhaps pessimistic is a harsh term, but in the short to medium term I think we will struggle to see medical AI/ML reach the same heights recently demonstrated in other industries with novel model types such as LLMs. For the foreseeable future, applications of ML in the healthcare space will necessarily be restricted to utility and optimization use cases (e.g., summarization, billing optimization, recommender systems). More patient-centric use cases such as predictive diagnosis will be difficult to adopt, as functionality that directly impacts patient care needs to be treated as a safety-critical system (i.e., a misdiagnosis resulting from an AI glitch or miscalculation is unacceptable).

For this reason, in the medium term, medical ML needs to remain human-interpretable, allowing providers to understand the model's decision and act as a safety gate. The majority of high-performance modern ML predictive algorithms (neural networks, boosted ensembles, etc.) struggle with interpretability, so building provider trust is quite difficult. Model explainability metrics can help alleviate this issue, but they are not a replacement for true interpretability in a safety-critical environment.
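
To show what such an explainability metric can look like, here is a minimal sketch using permutation importance on synthetic data: it measures how much a model's score drops when each feature is shuffled. As noted above, this kind of metric helps build trust but is not a substitute for true interpretability in a safety-critical setting.

```python
# Minimal sketch of permutation importance as an explainability metric.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                     # synthetic features
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")  # features 0 and 2 should dominate
```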

Anika Heavener, Vice President of Innovation and Investments at The SCAN Foundation
AI will be a critical tool for meeting the healthcare needs of the older adult population, but it is already plagued by data-driven bias. Currently, the data of older adults, especially marginalized populations, is siloed and disparate, and it lacks social determinants of health (SDOH), leading to inadequate and biased AI datasets. Frankly, there are vast populations of older adults that we know little about. To harness the promise of AI, we must prioritize inclusivity and equity in our data generation and collection efforts, with a focus on three key elements:

  1. Deliberately commit to capturing a range of experiences of older adults, including SDOH like the ability to pay off debt or find work
  2. Champion data equity standards for AI-driven healthcare
  3. Incentivize healthcare practitioners to embed equity and data representation

We need to widen the representation of older populations in big data generation and collection, in a manner that explicitly includes marginalized populations. This means incorporating age, income, and geography, among other factors. Older adults have different needs, capacities, and priorities for their lives. Rather than forging ahead with incomplete and biased data, it's essential to close the gaps within the data sets algorithms draw from, or we risk widening the very health inequities these advancements aim to solve.

Gaurav Gautam, Global Head of Media and Telecom at Revature
GenAI is set to revolutionize healthcare by significantly impacting various technology skills and job roles within the industry. These advancements will empower healthcare professionals to enhance patient care while also introducing new specialized roles such as prompt engineers, AI data engineers, AI security specialists, and content reviewers to the field of healthcare. Already, we’re witnessing the emergence of roles like Data Scientists, AI Engineers, and AI Product Managers in healthcare, all centered around GenAI. Despite the tremendous promise of generative AI, the current reality is that the technology represents little more than unrealized potential for most organizations.

According to recent research from the Boston Consulting Group, two-thirds of organizations, including those in healthcare, are struggling to adopt GenAI due to a shortage of skilled personnel and a lack of a coherent roadmap. To overcome these hurdles, healthcare organizations must develop tailored learning frameworks to upskill their workforce and prepare them for AI integration. Recent academic research explicitly noted the need for leaders to “clarify the practical path towards implementation” of the technology.

The first thing that's important to understand is that generative AI is not one-size-fits-all. Learning frameworks must be tailored to the organization's technical stack (current apps, models, and APIs) and maturity (including the quality and accuracy of its data). AI platforms require high-performance computing, machine learning models, and – especially in the healthcare space – proper security. Training for GenAI in healthcare differs significantly from other industries, emphasizing hands-on, experiential learning that reflects real-life patient scenarios. With most uses of this technology tied directly to sensitive patient data, the training framework must address the complexities of these technologies in detail.

Technology is changing at a fast pace, and the “half-life of skills” – the time it takes for a technology skill to become obsolete – is now less than three years; GenAI is no different. For health institutions to continue using this technology to its full potential for patient care, they need to be proactive in acquiring and updating skills. Healthcare organizations that invest in upskilling learning frameworks today will be well-positioned to use generative AI to save time and money while improving patient care tomorrow.

Justin Manjourides, PhD, Biostatistician and Associate Professor at Northeastern University
For all its advantages in healthcare, AI comes with challenges as well. Data security, for example, is crucial for both patients and researchers. Researchers' contributions to improving healthcare often rely on patients being willing to participate and share their data; protecting that data is therefore essential. Second, for these data-driven algorithms to be most effective, researchers must ensure that the data used to train and inform the models appropriately represent the populations they hope to help. Sharing one's health information for research is, after all, often done with the understanding that the benefits are less personal and more for improving the health of others or future generations.

One approach to minimizing the risks of sharing health data among researchers, while including as many patients from as many diverse settings as possible, is federated data analysis. Ideally, in a federated analysis, the exact same standardized models and calculations are run by each participating institution (say, 10 different hospitals) on data from its own patient population. The summary calculations from those analyses, which cannot be traced back to any individual patient, are then combined across institutions to produce an overall result. This allows researchers to increase the diversity of the data informing their models while minimizing the need to share raw patient data.
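
A minimal sketch of that pattern might look like the following: each institution runs the same local computation and shares only aggregate statistics, which a coordinator pools into an overall estimate. The per-hospital data here is synthetic and purely illustrative.

```python
# Minimal sketch of federated aggregation: no raw records leave a site.
import numpy as np

def site_summary(values: np.ndarray) -> dict:
    """Run locally at each institution; returns only aggregates."""
    return {"n": len(values), "sum": values.sum(), "sum_sq": (values ** 2).sum()}

def pooled_mean_variance(summaries: list[dict]) -> tuple[float, float]:
    """Run by the coordinator; combines site aggregates into one result."""
    n = sum(s["n"] for s in summaries)
    mean = sum(s["sum"] for s in summaries) / n
    variance = sum(s["sum_sq"] for s in summaries) / n - mean ** 2
    return mean, variance

rng = np.random.default_rng(0)
hospitals = [rng.normal(loc=120, scale=15, size=n) for n in (800, 250, 1200)]
summaries = [site_summary(h) for h in hospitals]  # computed per site
print(pooled_mean_variance(summaries))            # combined estimate
```

The key design property is that `site_summary` is the only code that touches patient-level data, and its output is aggregate enough that individual patients cannot be recovered from it.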

Tim Price, Chief Product Officer at Infermedica
Integrating AI into healthcare involves navigating patient privacy, data security, and algorithmic bias challenges. Adhering to privacy laws like HIPAA and GDPR is critical for using health data in AI without compromising confidentiality. With healthcare becoming more interconnected, protecting against cyber threats requires measures like advanced encryption, security audits, and promoting cybersecurity awareness. Ensuring AI-powered solutions are clinically validated for predictable, explainable outcomes is also essential for gaining trust from healthcare professionals and patients. Addressing these challenges with transparency, and prioritizing clinical validation, helps ensure our AI applications are ethical, effective, and enhance patient care responsibly.

Detlef Koll, Vice President Global R&D at Solventum
To put it in the words of the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: “AI holds extraordinary potential for both promise and peril.” The technology bears risks and complexities, but the value it promises to provide to businesses and the society at large will drive adoption nevertheless. Some of the risks can be managed through a responsible, thoughtful approach to data use and disclosure of such data use to patients and physicians; others are inherent to the current state of AI technology.

Patient privacy and data security sit toward the more manageable end. Some AI applications will require recording new kinds of patient information, with all the associated risks. For example, ambient documentation solutions that automatically generate clinical document drafts require recording the dialog between physician and patient. Such recordings are highly sensitive; they wouldn't usually exist if not for the use of AI in generating draft documentation. Responsible vendors will therefore minimize retention of, restrict access to, and control the use of such recordings.

One example of a sensitive use of patient data is training or adapting machine learning models. AI models can benefit greatly from learning or adapting on usage data, but there is a risk of unintentional leakage of patient information. While the intent of training is usually to avoid retaining any patient-specific information so that the model generalizes to others, it is hard to conclusively assert that no patient-specific information is memorized in the process. Vendors providing generative AI technology for image generation are currently being sued for copyright infringement because their AI models can create close facsimiles of images used in training. This is an example of unintentional memorization and subsequent leakage of training information during use.

Yet there are approaches that responsible AI vendors can apply to manage this risk in clinical applications. For example, de-identifying data prior to any use for adaptation can greatly reduce its sensitivity. Requiring minimum data set sizes for training, and controlling the kind and number of parameters adjusted during learning, can effectively prevent memorization. In short, there are compensating measures.
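
As a rough illustration of two of those compensating measures, the sketch below scrubs a few identifier-like patterns before any training use and refuses to adapt on cohorts below a minimum size. The regex patterns and the threshold are illustrative assumptions, not a certified de-identification method.

```python
# Minimal sketch of pre-training compensating measures (illustrative only).
import re

MIN_TRAINING_COHORT = 1000  # policy threshold, chosen here for illustration

def deidentify(note: str) -> str:
    """Replace a few identifier-like patterns with placeholders."""
    note = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", note)       # SSN-like
    note = re.sub(r"\b\d{1,2}/\d{1,2}/\d{4}\b", "[DATE]", note)  # dates
    note = re.sub(r"\bMRN[:\s]*\d+\b", "[MRN]", note)            # record numbers
    return note

def prepare_training_set(notes: list[str]) -> list[str]:
    """Gate adaptation: de-identify, and reject cohorts that are too small."""
    if len(notes) < MIN_TRAINING_COHORT:
        raise ValueError("Cohort too small; memorization risk is too high.")
    return [deidentify(n) for n in notes]
```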

Bias in the output of generative AI tends to be one of the more intractable problems, inherent to the nature of the technology itself. Models are trained to learn the distribution of the training data, so any bias in the training data will be reflected in the trained models. Bias can be introduced through imbalance in the training data, e.g., if the data isn't appropriately balanced for gender, race, or age. But it can also be introduced through undesirable, yet well-documented, bias in the treatment habits of caregivers. To make matters worse, generative AI applications tend to amplify bias: when generating textual output, most applications will emit the single most likely answer in a given context, even if the AI model properly learned the more nuanced distribution of possible answers.
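
A toy numerical example makes the amplification concrete: if a model learns a 60/40 split between two plausible answers, sampling roughly reproduces that split, while always emitting the single most likely answer collapses it to 100/0. The probabilities below are invented for illustration.

```python
# Toy illustration: greedy decoding amplifies a learned majority into 100%.
import numpy as np

learned = {"answer_a": 0.6, "answer_b": 0.4}  # model's learned distribution

# Sampling approximately reproduces the learned split.
rng = np.random.default_rng(0)
samples = rng.choice(list(learned), p=list(learned.values()), size=10_000)
print({k: np.mean(samples == k) for k in learned})  # ~{answer_a: 0.6, answer_b: 0.4}

# Greedy decoding always picks the argmax, so the 40% answer vanishes.
greedy = max(learned, key=learned.get)
print(greedy)  # 'answer_a', 100% of the time
```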

The effect is documented in several studies: for example, one study showed that an AI model's diagnostic and treatment recommendations differed in clinically unexplainable ways when just the gender or race in a patient vignette was varied and all other information was kept unchanged. While reaching impressive performance on average, AI models tend to produce spurious, unexplainable artifacts on less frequent subsets of the data, and the curse of dimensionality ensures that any training set contains an unknown and effectively endless number of such subsets.

Controlling for bias creates its own issues, as Google can attest after a recent gaffe with its latest generation of Gemini image-generation models: in an attempt to control for gender and race bias when generating images of judges, CEOs, or inmates, the models started to generate female popes and Native American founding fathers. Controlling for unwanted bias in one area invariably carries the risk of creating new, equally unwanted bias elsewhere. The problem of AI model bias is not likely to be solved, at least in the near future, by changes to the AI algorithms alone.

Take ambient documentation as an example: the AI creates a draft that must be reviewed and corrected by the responsible provider prior to signature, so the application inherently contains a quality-review step by the authoring clinician. This step should protect against the most blatant mistakes and the worst effects of bias. But human reviewers tend to overlook issues, the output of generative large language models tends to look plausible even where it is factually wrong, and clinicians operate under time pressure. Larger-scale use of ambient documentation therefore requires secondary human review processes for quality control, identification of under-editing by users, and an active process to measure and manage the output toward fitness for secondary uses of the documentation, e.g., the billing and reimbursement process.

Generative AI shows impressive performance across a wide range of use cases. The effort to develop a product demo or proof of concept in many instances is trivial. It will depend on the discipline and commitment of both users and vendors to use the technology in a responsible way. This will require more effort but will lead to a better balance between “promise and peril.”

Adam Hesse, CEO at Full Spectrum
Applying artificial intelligence in healthcare while managing an evolving data privacy landscape is not only a compliance challenge; it also magnifies the potential for bias. More and more geographies are adopting a privacy stance that allows patients to “opt out” of the use of their data, which creates the opportunity for location-related bias. Patients having control over the use of their data is very important, but I expect many who would “opt out” might make a different choice if they understood the value their data has in advancing healthcare.

Judy Jiao, Chief Information Officer at National Government Services
To responsibly capitalize on the boundless potential of AI, quality data is paramount. An AI system is only as useful as the data it is based on; insufficient, inaccurate, or low-quality data will lead to subpar AI systems. However, given the sheer quantity of data needed, and the fact that much of it is produced outside the organization, ensuring quality is extremely challenging. And almost inevitably, AI models sometimes hallucinate. An AI hallucination is a false or misleading response that AI generates and presents as fact; these hallucinations are caused by various factors, including insufficient or biased data. To enhance data quality, organizations should have a comprehensive data quality strategy that ensures accuracy, completeness, consistency, timeliness, and traceability, and reduces noisy, or meaningless, data. A mature data management system, holistic data governance, and thorough integration approaches are also necessary to effectively support AI implementation.
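
To make the strategy concrete, here is a minimal sketch of the kind of automated checks it might include, scoring completeness, consistency, and timeliness on a toy table. The column names, plausible-value range, and recency cutoff are hypothetical.

```python
# Minimal sketch of automated data-quality scoring on a toy table.
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "systolic_bp": [120, None, 300, 118],  # one missing, one implausible value
    "recorded_at": pd.to_datetime(
        ["2025-03-01", "2025-03-02", "2023-01-01", "2025-03-03"]),
})

completeness = 1 - df["systolic_bp"].isna().mean()       # share of values present
consistency = df["systolic_bp"].between(60, 250).mean()  # plausible range (missing fails)
timeliness = (df["recorded_at"] > "2024-01-01").mean()   # recent enough

print(f"completeness={completeness:.2f} "
      f"consistency={consistency:.2f} timeliness={timeliness:.2f}")
```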

Notably, one technique to reduce hallucination, manage bias, protect patient privacy, and increase the accuracy of AI responses is retrieval-augmented generation (RAG), a term coined by AI research scientist Patrick Lewis in 2020 that is now making its name in the AI community. RAG allows large language models (LLMs) to be augmented with additional, relevant, and more controlled-in-scope data sources to meet a particular business need.

RAG empowers businesses to create a wide range of AI solutions by combining LLMs with internal and external knowledge bases. Critically, RAG facilitates this process with relative ease and a managed scope, so businesses understand the data sources they are incorporating. These solutions can then be set up in a well-protected environment within an organization, with relevant controls for privacy and cybersecurity. However, health organizations need to be reminded that hastily adopting an AI system can put patients at risk. When AI systems automatically make critical decisions for doctors or healthcare workers, wrong decisions can be vast in scale and deeply impactful. Leaders therefore need to carefully balance the need for AI-driven automation with human judgment in making sound decisions.
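
A minimal sketch of the RAG pattern described above might look like the following. The TF-IDF retriever and the `llm_generate()` call are stand-ins for whatever vector store and hosted model an organization actually uses, and the knowledge-base snippets are invented.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A small, controlled-in-scope internal knowledge base (invented content).
knowledge_base = [
    "Prior authorization requests must include the referring provider NPI.",
    "Telehealth visits are billed with place-of-service code 02 or 10.",
    "Patients may opt out of data sharing via the member portal.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(knowledge_base)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    sims = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [knowledge_base[i] for i in sims.argsort()[::-1][:k]]

query = "What code do we use for a telehealth visit?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# llm_generate(prompt)  # hypothetical call to the organization's hosted LLM
```

Because the answer is grounded in passages the organization controls, this setup narrows the model's scope, which is exactly why RAG helps with hallucination, bias, and privacy.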

Dr. Anjum Ahmed, Medical Officer at AGFA HealthCare
If I were to be pessimistic about AI, especially in healthcare settings, I would say we need to rethink our reliance on AI for diagnostics and treatment decisions, as it introduces the risk of errors that are hard to predict and control. Today, early adopters of AI solutions in healthcare settings are trying to understand how AI-based disease detection decisions are being made, which is opening up the conversation around explainable AI. These AI algorithms require vast amounts of data to learn effectively, and there is a real danger that the resulting systems will not perform equally well across diverse populations. Unlike human clinicians, AI systems may not always have the capacity to consider the nuances and complexities of individual cases, leading to potential misdiagnoses or inappropriate treatment recommendations. These concerns underscore the need for rigorous validation, continuous monitoring, and strict regulatory frameworks to ensure that AI technologies are used safely and ethically in healthcare settings.

Definitely a lot of things to be looking out for when you integrate AI into your organization! Huge thank you to everyone who took the time out of their day to submit a quote and thank you to all of you for taking the time to read this article! We could not do this without your support. What challenges or ethical concerns do you have with the integration of AI? Are there areas of AI that you are pessimistic about? Let us know either in the comments down below or over on social media! We’d love to hear from all of you.
