Qatar Foundation expert discusses importance of ethically assessing AI in healthcare
Dr. Mohammed Ghaly during the World Innovation Summit for Health 2024.
Doha, Qatar: The advancement of Artificial Intelligence (AI) has the potential to transform medical practice by enhancing diagnostic accuracy, treatment efficacy, and overall patient care, as well as accelerating biomedical research and drug development. With the unprecedented opportunities and progress this technology brings, ethical considerations are now at the forefront of research and policy.
The recently concluded World Innovation Summit for Health (WISH) – a Qatar Foundation initiative – shed light on the ethical implications of AI in healthcare, considered through the lens of Islamic bioethics.
In a question-and-answer session with The Peninsula, Dr. Mohammed Ghaly, one of the panelists at the WISH session ‘Ethical Management of AI in Healthcare,’ discussed ‘AI-Enabled Healthcare Ethics: A Perspective from the Muslim World.’
Dr. Ghaly is a professor of Islam and Biomedical Ethics at the Research Center for Islamic Legislation and Ethics (CILE), College of Islamic Studies at QF’s Hamad Bin Khalifa University.
Can you describe the intersection of artificial intelligence (AI) and healthcare?
Like many areas of human life, AI is gradually making its way into medicine and healthcare, unlocking a wide range of transformative possibilities.
In hospital management, AI can improve efficiency by streamlining appointments, billing, claims processing, and medical records, while enhancing customer service through chatbots and fraud detection systems.
Clinically, AI can play a pivotal role in diagnosis, particularly in image-based detection of diseases such as cancer and blood disorders, as well as in predictive medicine, which forecasts individual health risks. It facilitates patient monitoring through wearable sensors, trackers, and the Internet of Medical Things (IoMT), and analyzes data from public sources, including social media and news reports, to identify and monitor disease outbreaks or pandemics.
In treatment, AI can advance personalized medicine, optimize treatment protocols, reduce dosage errors, and accelerate the development of breakthrough vaccines and drugs by analyzing and redesigning pre-existing medications. Additionally, it can significantly enhance precision and outcomes in robot-assisted surgeries.
Together, these applications underscore AI’s far-reaching, systemic impact in transforming healthcare, making it more efficient, accurate, and innovative. However, alongside these positive contributions, these applications also give rise to ethical concerns and challenges, often necessitating a reassessment of the frameworks governing core issues in medical ethics, such as the physician-patient relationship and medical accountability.
How can AI in healthcare be ethically assessed, considering both its potential benefits and risks?
To ethically evaluate AI in healthcare, it is crucial to weigh both its potential benefits and risks. This balanced approach aligns with the well-known principle of benefit-risk assessment. In the Islamic tradition, this methodology is reflected in fiqh al-muwazanat, the jurisprudence of balancing benefits against harms. The benefits and positive impact of AI applications were highlighted in the answer to the previous question, and these hold particular significance when they are actual, or when their impact carries a strong likelihood, a concept known in Islamic ethics as zann rajih (dominant probability).
Concerns and challenges surrounding AI in healthcare are both numerous and diverse, encompassing data-related issues—such as privacy, confidentiality, and bias—as well as broader ethical questions about the role of automation in medicine, patient autonomy, the problem of opacity, and the shifting dynamics of clinical decision-making when AI is involved. These challenges not only introduce novel ethical dilemmas but also necessitate critical engagement with established concepts such as informed consent, physician competence, and the redistribution of medical accountability.
Do you agree that the “black-box” effect is a primary concern? How should it be addressed in terms of medical accountability?
Yes, one key area of ethical concern is the ‘black-box’ effect, where the decision-making process of AI systems is opaque, even to experts. This makes it difficult to trace the reasoning behind AI-generated outputs. To address this issue, the following factors need to be carefully considered:
- Whether the physician using the AI tool can be deemed an ‘ignorant physician’ (tabib jahil): In Islamic jurisprudence, physicians are typically held liable for injuries if they lack the necessary theoretical knowledge or practical expertise. In the case of opaque AI systems, however, the opacity stems from the technology itself, not from physician negligence; a physician therefore cannot be classified as ‘ignorant’ as long as he or she understands the technology’s limitations.
- Trade-offs between precision and interpretability: Data science models often sacrifice explainability for precision. Physicians face a dilemma: use highly precise but opaque AI systems, or opt for more transparent systems that may offer less precision. When two systems have comparable accuracy, efficacy, and safety, transparency should take precedence (a schematic sketch of this balancing rule follows this list). In cases where there is no clear parity, physicians, possibly with input from an ethics committee, should decide how to balance these trade-offs on a case-by-case basis.
- Approval by a licensing authority: If the AI tool is approved by a recognized body (e.g., the FDA in the US), responsibility for any resulting injuries could shift to that authority. If not, the physician may be liable.
- Informed consent: The patient must be adequately informed about the physician’s knowledge and experience with the AI tool. If this is lacking, the physician is typically held liable. However, if the patient has been informed about the physician’s limited expertise and still consents to the use of the AI tool, the physician may not be held liable.
- Negligence by AI developers: If an injury results from a design defect or other issue with the AI tool, liability could extend beyond the physician to those involved in designing, developing, and deploying AI systems. This aligns with the corporate liability models endorsed by the International Islamic Fiqh Academy (IIFA) in 2004 and with the EU’s 2022 AI Liability Directive, which held that AI providers can be as accountable as those offering other products and services, further institutionalizing this shared-responsibility framework.
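For readers who want to see the balancing rule from the second point in concrete terms, here is a minimal illustrative sketch in Python. The model names, accuracy figures, and the comparability tolerance are all hypothetical assumptions for demonstration, not part of any system discussed in the interview.

```python
# Illustrative sketch only: a simple decision rule for the
# precision-vs-interpretability trade-off described above.
# All names, scores, and the tolerance are hypothetical.

from dataclasses import dataclass

@dataclass
class CandidateModel:
    name: str
    accuracy: float        # validated diagnostic accuracy, 0.0-1.0
    interpretable: bool    # True if its reasoning can be explained to clinicians

def choose_model(a: CandidateModel, b: CandidateModel,
                 tolerance: float = 0.01) -> CandidateModel:
    """Prefer the interpretable model when accuracy is comparable."""
    if abs(a.accuracy - b.accuracy) <= tolerance:
        # Comparable accuracy: transparency takes precedence.
        for m in (a, b):
            if m.interpretable:
                return m
        return max((a, b), key=lambda m: m.accuracy)  # neither is interpretable
    # No clear parity: default to the more accurate model here, but in
    # practice this is where case-by-case ethics-committee judgment enters.
    return max((a, b), key=lambda m: m.accuracy)

opaque = CandidateModel("deep-net", accuracy=0.943, interpretable=False)
transparent = CandidateModel("rule-based", accuracy=0.938, interpretable=True)
print(choose_model(opaque, transparent).name)  # -> "rule-based"
```

The tolerance parameter marks exactly where the human judgment described above would enter in practice: deciding what counts as “comparable” accuracy is itself an ethical call, not a purely technical one.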
How can patient data privacy be safeguarded while utilizing AI technologies?
The ethical governance of data in AI is a major challenge, particularly in balancing the need for extensive datasets to improve AI functionality while safeguarding patient privacy. Techniques from genetic ethics, such as anonymization (removing identifiable information) and pseudonymization (replacing identifiers with artificial tags), help protect data while enabling valuable analysis. However, risks like re-identification in large datasets persist. To address this, these techniques must be paired with strict access controls and secure data management to ensure robust privacy protection in AI-driven healthcare.
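To make the distinction concrete, the sketch below illustrates pseudonymization in Python: a direct identifier is replaced with a keyed artificial tag, so analysts never see the original identity, while records for the same patient remain linkable. The field names, record values, and key handling are hypothetical, and a real deployment would layer the strict access controls and secure key management mentioned above on top of this idea.

```python
# Minimal pseudonymization sketch. Field names and the record are
# hypothetical; real systems add key management, access controls,
# and auditing on top of this idea.

import hmac, hashlib

SECRET_KEY = b"held-only-by-the-data-custodian"  # assumption: stored securely, separately

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed tag (HMAC-SHA256).
    The same patient always maps to the same tag, so records stay
    linkable for analysis without exposing the identity."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "QA-123456", "hba1c": 6.8, "age_band": "40-49"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)
```

Because the tag is computed with a secret key held apart from the dataset, re-identification is possible only for the custodian, which is what distinguishes pseudonymization from full anonymization, where identifiers are removed outright.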
Additionally, religious perspectives offer dimensions often absent in secular discussions. In Islamic ethics, for instance, the principle of human authority over one’s body—or “personal assets”—is foundational. Within this framework, the concept of charity (sadaqa) is particularly relevant. This principle allows individuals to voluntarily donate valuable possessions for the benefit of others, with the intention of seeking God’s reward. In AI-enabled healthcare, where data is viewed as a valuable resource, datasets could be regarded as a form of personal asset that can be donated for charitable purposes and made available for public use, with minimal concern over privacy violations. However, such “data donations” must adhere to conditions set by the donors to ensure that their use aligns with the donors’ values and their commitment to God, the Creator of their bodies. These safeguards are essential to prevent any misuse that could infringe upon God’s supreme authority over the human body and its associated rights within these belief systems.
What is the current state of AI in healthcare in a Muslim-majority country like Qatar?
Muslim-majority countries, including Qatar and other Gulf nations, are investing heavily in AI technology to improve their global standing. However, according to rankings like the 2024 Stanford AI Index Report, these countries are still working to establish themselves as leaders in AI, with the US, China, and some European nations currently ahead. Qatar, Saudi Arabia, and the UAE are among the most ambitious in AI development within the region.
As for Qatar in particular, the National AI Strategy emphasizes utilizing AI in key sectors, including healthcare. The country has made remarkable progress in this regard, including a QR 9 billion incentive package announced at the Qatar Economic Forum in May 2024. Qatar’s digital investments are expected to reach $5.7 billion by 2026, up from $1.65 billion in 2022. These efforts are part of Qatar’s broader plan to become a hub for innovation, particularly in healthcare, where AI is set to improve diagnostics and patient care.
Additionally, partner universities and research entities at Qatar Foundation are already making strides in this direction. For example, Carnegie Mellon University in Qatar has developed Avey, a healthcare app featuring an AI-powered self-diagnosis algorithm. Meanwhile, the Qatar Computing Research Institute at Hamad Bin Khalifa University, part of Qatar Foundation, has partnered with the Huawei Consumer Business Group to develop the Smart Individualized Health Analytics (SIHA) platform, integrating wearable devices for health research and chronic disease management.
AI image: Abraham Augusthy / The Peninsula
Should we be optimistic or concerned about the future of AI-enabled healthcare?
Given the current advancements and anticipated future developments, I personally support a position of “cautious optimism.”
A prime example of AI’s potential in healthcare is the breakthrough achieved by AlphaFold2, which has revolutionized protein structure prediction: a task that once took months or years is now accomplished in minutes. AlphaFold2 predicts the 3D structure of proteins from their amino acid sequences with remarkable accuracy, helping to solve complex biological problems and understand diseases like malaria. Despite its transformative accuracy, experimental validation and complementary techniques such as cryo-electron microscopy remain necessary.
However, this optimism is tempered by concerns over the dominance of tech giants driven by neoliberal capitalist ideologies. These ideologies prioritize profit maximization over the common good, often steering technological advancement in ways that conflict with the moral principles central to healthcare. The integration of AI tools into medicine must, above all, align with the time-honored goals of healthcare, most importantly the pursuit of providing good medical care (tatbib). Practices that violate privacy, such as the mass collection of personal data under the guise of improving AI technologies, cannot be considered legitimate or acceptable progress for healthcare in the AI age. Equally concerning is that corporations, and occasionally governments, frequently weaken ethical governance by cutting budgets for ethics oversight, prioritizing competitiveness over societal well-being.
To mitigate these risks, individuals and institutions must collaborate to raise public awareness and foster a culture of ethical consumer behavior. This unified effort must send a clear message to corporations: the ongoing AI race cannot achieve responsible innovation without adhering to ethical principles and undergoing a comprehensive evaluation of its societal impact. Notably, prominent figures in the field, such as 2024 Nobel laureate Geoffrey Hinton, champion a vision that integrates AI with ethics, emphasizing the necessity of balancing technological innovation with responsible stewardship.