February 12, 2025
Integrating ethics in AI development: a qualitative study | BMC Medical Ethics

This research paper explores the development of AI and the considerations experts perceive as essential for ensuring that AI aligns with ethical practice in healthcare. The experts underlined the ethical significance of introducing AI with a clear and purposeful objective: beyond being innovative, AI needs to benefit healthcare in practical ways. During the interviews, experts illustrated the ethical complexity of navigating the tension between profit and healthcare benefit, as well as the importance of prioritizing the interests of healthcare professionals and patients, the stakeholders most affected by AI’s implementation. Experts also highlighted the importance of understanding the context, the intrinsic dynamics, and the underlying theoretical foundation of healthcare during the development of AI. Collectively, the three themes call for AI that serves the interests of doctors and patients and aligns with the intricate and context-specific healthcare landscape. Achieving this requires that those developing AI applications be sufficiently aware of clinical and patient interests, and that this transfer of information to developers be prioritized.

To our knowledge, limited evidence exists regarding the practical aspects of developing ethical AI for healthcare. However, in a roundtable discussion by experts, the ideal future agenda for AI and ethics included the questions: “(i) who designs what for whom, and why? (ii) how do we empower the users of AI systems? (iii) how do we go beyond focusing on technological issues for societal problems?” [28]. Our results validate how integral these questions are within a specific context of application, namely healthcare, and how they can help recognize ethical pitfalls in AI’s development. Our results center on readily understandable ethical questions, such as: Is AI developed for the right reasons? Is the solution benefiting the right stakeholders? These practical questions can help evaluate the ethical implications of AI in a more understandable and relatable manner [29, 30].

One participant mentioned the concept of “repairing innovation,” originating from Madeleine Clare Elish and Elizabeth Anne Watkins. This concept aptly summarizes the challenges our experts described in developing AI solutions for healthcare. Elish and Watkins argued that effective clinical AI solutions must be examined and understood as part of the complex sociotechnical systems in which they are developed [31]. They advocate looking beyond AI’s potential (and often theoretical) possibilities to centrally investigate whether AI addresses existing problems, exploring how and in what ways AI is integrated into existing processes, as well as how it disrupts them [31]. For them, to repair innovation is to set new practices and possibilities that address the often unexpected changes caused by AI’s disruption and to integrate them into an existing professional context. Collectively, our findings suggest that experts saw the need to change the way AI for healthcare is currently developed. They often implicitly called for repairing the guidance, processes, and incentives that help align AI with ethical frameworks.

The World Health Organization guideline for AI ethics states that implementing ethical principles and human rights obligations in practice must be part of “every stage of a technology’s design, development, and deployment” [32]. In line with this statement, ethical AI (and AI ethics) cannot be limited to defining the ethical concepts or principles that AI must embody, but must also help guide its development. However, current versions of AI ethics guidance have had limited effect in changing the practices or development of AI to make it more ethical [3, 15, 33]. Hallamaa and Kalliokoski (2022) raise the question: “What could serve as an approach that accounts for the nature of AI as an active element of complex sociotechnical systems?” [33]. While our results cannot offer an answer to this question, the insights of this study suggest that developing and implementing ethical AI is a complex, multifaceted, and multi-stakeholder process that cannot be separated from the context in which it will be used. In that sense, AI ethics for healthcare may need to become more practically minded and include moral deliberation on AI’s objectives, actors, and the specific healthcare context. In this way, our study focuses on the practical ethical challenges that form part of the puzzle of what ethical AI for healthcare “ought to be.” Further research is needed to determine which tools or methods for ethical guidance can, in practice, achieve better ethical alignment of AI for healthcare.

In particular, the experts in our study were concerned about the innovation-first approach; these concerns, however, are not unique to healthcare. While innovation may be positive when it answers the specific needs of stakeholders and is context-sensitive, it can also produce a merely new, but potentially useless, product. Although the responsible research and innovation (RRI) framework places great importance on creating innovative products that are ethically acceptable and socially desirable, there are currently no tools that can determine whether an innovation fulfills the conditions of RRI [34]. RRI is mostly used to determine regulatory compliance, which means the assessment of whether an AI fulfills RRI may come “too late,” when the AI can no longer be transformed in ways that impact practice [11, 34]. Guidance for developing AI ethically and responsibly may need to shift from being prescriptive to offering a proactive and operationally strategic approach to practical development.

Within the frameworks that guide AI’s development, the question remains: who is in charge of, or responsible for, ethically aligning AI in healthcare? Empirical evidence suggests that development teams are often more concerned with the usefulness and viability of the product than with its ethical aspects [35]. In part, these results are expected, as software developers are not responsible for strategic decisions about how and why AI is developed [17]. While some academics have suggested embedding ethics into AI’s design by integrating ethicists into the development team [36], management (including product managers) may be a better entry point for ensuring that AI is developed ethically from its initial ideation. In a survey, AI developers felt capable of designing pro-ethical AI, but it remained unclear whether these decisions were theirs to make [37]. The developers stressed that although they feel responsible, their ability to act is limited without senior leadership [37]. This hints at the possibility that operationalizing AI ethics may need to include business ethics and procedural approaches to business practices, such as quality assurance [30].

For our experts, context awareness is undeniably important, and a systemic view of healthcare is essential to understanding how to achieve ethical AI. AI innovations by themselves do not change the interests that determine how healthcare is delivered, nor do they re-engineer the incentives that support existing ways of working, which is why “simply adding AI to a fragmented system will not create sustainable change” [38]. As suggested by Stahl, rethinking ecosystems to ensure that processes and outcomes meet societal goals may be more fruitful than assigning individual responsibility, for example, to developers [9]. Empirical evidence from digital health stakeholders in Switzerland showed that start-up founders may lack the awareness or resources to optimize solutions for societal impact, or that their vision may be excessively focused on attaining a high valuation and selling the enterprise quickly [11]. Similar to our results, the participants in Switzerland reflected on the tension between key performance indicators focused on commercial success and the maximization of societal goals [11]. It might be challenging to address this tension without creating regulatory frameworks for AI’s development and business practices.

In contrast to approaches that treat AI as product development, such as ethics-by-design, Gerke suggested widening the perspective to designing processes that can manage AI ethically, including consideration of systemic and human factors [39]. Attention may be needed to evaluate how AI interacts with doctors and patients and whether it is usable and valuable for them. For example, an AI assisting in the diagnosis of diabetic retinopathy may not be helpful for ophthalmologists, as they already have that expertise [6]. Along similar lines, digital health stakeholders in Switzerland described that, due to the complexities of navigating the health system, innovators may lose sight of the “priorities and realities of patients and healthcare practitioners” [11]. Our results reflect these findings, showing that balancing AI for different stakeholders is challenging. Creating frameworks and regulations that change the incentives of AI’s development may be an opportunity to answer stakeholders’ priorities and healthcare needs. For example, to encourage the development of effective and ethical AI applications, reimbursement regulations could favor solutions that offer considerable patient benefit, or provide financial rewards when effort has been put into bias mitigation [40].

Strengths and limitations

While theoretical discussions are abundant in the research literature, there is limited empirical evidence on the practical challenges experts perceive in developing ethically aligned AI for healthcare. Our results are therefore important in providing evidence that may help bridge the gap between the theory and practice of AI ethics for healthcare. Given the thematic analysis methodology, we collected rich data and conducted an in-depth exploration of the views and insights of a wide variety of experts.

In the context of our interviews, AI was used as a general term, which may have led experts to interpret AI differently or to focus specifically on machine learning (and its black-box subtypes). However, consensus on the definition of AI remains elusive and a topic of academic and governmental discussion. While the European Commission has recently defined AI, the definition is still broad: it includes any software that can decide, based on data, the best course of action to achieve a goal [41]. While we clarified during the interviews that the focus was on supportive AI in the form of clinical decision support systems (CDSS), some experts brought different understandings of AI to the discussion, delineating scenarios where it would be more autonomous and unsupervised. This challenge is not exclusive to our research or to healthcare; it reflects the fact that AI is an ever-evolving topic still under conceptual and practical construction, with multiple open questions remaining. Given that our research aims to be exploratory, identifying different interpretations of AI can be considered part of our results, and it signals a broader challenge: research and ethics guidelines may need to define and study AI in application-, subject-, and context-specific terms. While our study demonstrates how practical challenges during AI’s development may need ethical reflection, as qualitative research, our results cannot be generalized beyond the study population, and more research is needed to explore whether similar insights can be obtained in other areas. For example, future quantitative research could investigate whether participants from different healthcare models (commodity vs. social service) hold different views or fears regarding AI’s development for healthcare.

Moreover, the chosen recruitment strategy of purposive sampling may have introduced selection bias, given the dominance of researchers who are men or who come from high-income countries. While we actively invited participants from non-dominant backgrounds (women and researchers from the Global South), only a few accepted. Our results therefore largely represent the views of those in Western countries, with an emphasis on Europe. The subject of our study must be further researched in different technological, socio-economic, and international systems.

