Unquestionably, the evolution of AI in diagnostic dentistry has been groundbreaking [21]. The current body of evidence abounds with newly developed AI models reporting high performance metrics, raising the prospect of a fully automated diagnostic process in the near future [21]. However, the ethical regulation of AI remains largely unexplored [21]. As AI-driven decisions increasingly affect dental practitioners and patients, understanding their implications is essential [22]. This paper reframes the discussion of AI in caries detection through an ethical lens, promoting a more inclusive and responsible integration of AI into dental practice.
Across many healthcare sectors, common ethical challenges in AI applications include bias and discrimination, data privacy and security, transparency and accountability, equity and accessibility, and economic and employment effects [23, 24]. In radiology, for instance, AI models rely heavily on imaging data, raising concerns about over-reliance on AI-generated diagnoses [25, 26]. In pathology, ensuring the accuracy of AI in diagnosing uncommon diseases and protecting data privacy are critical concerns [27]. More generally, the widespread use of AI in patient monitoring and management requires careful consideration of how AI influences the patient-doctor relationship [28]. Despite these shared ethical concerns, dentistry presents some unique challenges. Dental care frequently requires personalized treatment plans, making it critical to understand and explain AI suggestions in order to gain patient trust [11, 29]. Additionally, dental radiographs are a fundamental diagnostic tool, requiring careful attention to the quality and interpretation of AI-powered radiographic analysis [30]. Furthermore, the potential impact on dental hygienists and technicians must be considered, since AI-driven diagnosis and treatment planning may substantially reshape their responsibilities [31].
The use of AI in dentistry offers significant public health benefits, enabling early diagnosis, timely intervention, and improved efficiency [32]. It streamlines diagnostics, reduces errors, and optimizes resource use [32]. Despite these potential benefits, significant ethical issues must be addressed. For instance, the development and use of AI in dentistry must ensure that the technology is equitable and does not reinforce existing prejudices or exclude disadvantaged communities [32, 33]. Furthermore, AI may indirectly affect public health by automating manual jobs, potentially leading to near-term unemployment in low-income communities and adverse health effects from economic instability [34]. Privacy concerns also arise because AI systems frequently require access to sensitive patient data, demanding strict security measures to preserve patient confidentiality [35]. By recognizing and proactively addressing these ethical challenges, the dental profession can employ AI to improve patient outcomes while maintaining high ethical standards.
To address these ethical challenges, regulatory interventions in AI development could include transparent processes, the promotion of inclusivity and diversity, enhanced data privacy, clear accountability, and support for workforce transition [36]. Concrete measures include clear documentation of data sources, diverse datasets, robust encryption, and explicit guidelines for AI-driven decisions [37]. Additionally, training dental professionals to adapt to AI technologies is crucial to ensure they can effectively use and interpret AI tools [38].
Our review found seven articles relevant to the objective of this article. Of these, two studies trained AI models to perform automated clinical caries detection using photographs of carious teeth. The remaining five articles focused on radiographic caries detection. Irrespective of methodology, each article was screened against the eleven aforementioned ethical principles. The models developed for caries detection are based on deep learning, which has been associated with the "black box effect", a phenomenon in which the internal decision-making processes of a model are not readily understandable to humans [39]. Given the complexity of these models, it is often not possible to interpret how a final decision was reached or which component weighed most heavily in the decision-making process [40]. This lack of transparency makes it very difficult, if not impossible, to determine who can be held accountable for the decisions made by the AI algorithm [41]. At present, the accountability dilemma is further heightened by a gap between those who profit from the development of AI and those most likely to bear the consequences of decisions made by AI [41].
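Explainability techniques can at least partially probe such models. As a minimal, illustrative sketch (not drawn from any of the reviewed studies), the following computes a gradient-based saliency map showing which pixels most influenced a classifier's prediction; the tiny network and the random input standing in for a radiograph are placeholders:

```python
# A minimal sketch of a gradient-based saliency map; the network and random
# input are placeholders for a trained caries detector and a real radiograph.
import torch
import torch.nn as nn

model = nn.Sequential(                     # stand-in for a trained classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2)
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # placeholder radiograph
logits = model(image)
logits[0, logits.argmax()].backward()      # gradient of the winning class score

# Pixel-wise gradient magnitude: large values mark regions that most influenced
# the prediction and can be overlaid on the radiograph for clinician review.
saliency = image.grad.abs().squeeze()
print(saliency.shape)                      # torch.Size([64, 64])
```

Such maps make an individual prediction reviewable without fully opening the black box, which is one reason they are frequently proposed as a transparency safeguard.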
Further widening this gap is the issue of inclusion and diversity [42]. In this context, lack of diversity refers to the inclusion of only one ethnic or cultural background [22, 42]. The authors found that the available literature lacked diversity because most studies were single-center and conducted in economically developed countries [15]. This geographical bias was also noted by Roche et al., who found that the majority of the literature originated from the US, UK, and Germany, with limited evidence from low- and middle-income countries [43]. Ideally, an AI model should be inclusive with respect to gender, social, and cultural diversity. Moreover, because ML algorithms work by pattern recognition, they may learn biases and discriminate based on age, gender, and sexual orientation [44]. This may lead to misdiagnosis if the AI model fails to recognize variations specific to particular ethnic or cultural groups. Misdiagnosis can produce false negatives, leaving caries untreated, or false positives, leading to unnecessary treatment and higher healthcare expenditure [45]. However, the current body of evidence is ambiguous regarding the ideal method of incorporating diversity and inclusion when training AI models. This ambiguity is further heightened by the lack of clarity surrounding sample size calculations in AI, making it difficult to determine what proportion of a minority group constitutes sufficient representation in a training set [44]. This is another area that requires further exploration in future studies.
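One practical way such group-specific failures can be surfaced is a subgroup performance audit. The sketch below, which uses synthetic labels and a hypothetical demographic attribute rather than data from any reviewed study, reports sensitivity and specificity separately per group so that disparities become visible:

```python
# A minimal sketch of a subgroup performance audit; `y_true`, `y_pred`, and
# `group` are synthetic stand-ins for real model outputs and patient metadata.
import numpy as np
from sklearn.metrics import confusion_matrix

def audit_by_group(y_true, y_pred, group_labels):
    """Report sensitivity, specificity, and sample size for each subgroup."""
    results = {}
    for g in np.unique(group_labels):
        mask = group_labels == g
        tn, fp, fn, tp = confusion_matrix(
            y_true[mask], y_pred[mask], labels=[0, 1]
        ).ravel()
        results[g] = {
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
            "n": int(mask.sum()),
        }
    return results

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)           # synthetic ground-truth labels
y_pred = rng.integers(0, 2, 200)           # synthetic model predictions
group = rng.choice(["A", "B"], 200)        # hypothetical demographic groups
for g, metrics in audit_by_group(y_true, y_pred, group).items():
    print(g, metrics)
```

A large sensitivity gap between groups would flag exactly the kind of group-specific misdiagnosis risk described above.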
Another area of concern is patient privacy: none of the seven articles mentioned whether any form of patient consent was obtained. Moreover, there was limited information on attempts to conceal patient identifiers, which is concerning given that several software platforms are publicly accessible. Although it may not be possible to completely de-identify photographs and radiographs, it is pertinent for regulatory bodies to develop a more stringent framework to maintain patient confidentiality and privacy. These regulatory bodies include research ethics committees, which aim to provide thorough, pertinent, and prompt evaluation of research applications at their respective institutions. Among the selected studies, only the study by Berdouses et al. did not report the process of ethical approval by an ethics committee [19]. However, this may be a concern in research publication generally rather than one specific to AI.
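As an illustration of one safeguard such a framework might mandate, the sketch below strips direct identifiers from a radiograph stored as DICOM using the pydicom library. The file path and tag list are hypothetical, and metadata removal alone cannot address identifiers burned into the image pixels:

```python
# A minimal sketch of DICOM metadata de-identification with pydicom; the file
# path and tag list are illustrative, not a complete de-identification profile.
import pydicom

ds = pydicom.dcmread("radiograph.dcm")     # hypothetical input file
for tag in ("PatientName", "PatientID", "PatientBirthDate"):
    if tag in ds:
        ds.data_element(tag).value = ""    # blank the identifying element
ds.remove_private_tags()                   # drop vendor-specific metadata
ds.save_as("radiograph_deid.dcm")          # write the de-identified copy
```

Production de-identification would follow a complete profile such as the DICOM PS3.15 attribute confidentiality profiles, but even this small step shows how routine the safeguard could be made.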
Concerns regarding transparency were also noted in the studies by Berdouses et al., Kühnisch et al., and Moran et al., each of which relied on a single annotator with no measure of the reliability of the annotations [5, 10, 19]. Since labeling a carious defect can be subjective, it is crucial to calibrate multiple experienced examiners on the annotation process to avoid bias. Another noteworthy finding was that neither Berdouses et al. nor Moran et al. mentioned random allocation of the included radiographs into the "training set" and "testing set" [10, 19]. This may introduce a difficulty bias, in which the most challenging images end up concentrated in either the testing set or the training set.
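Both safeguards are straightforward to implement and report. The sketch below, using hypothetical annotation arrays and placeholder image labels, computes inter-annotator agreement with Cohen's kappa and performs a seeded, stratified random split so that neither partition accumulates a disproportionate share of difficult cases:

```python
# A minimal sketch of two methodological safeguards discussed above; the
# annotation arrays and image labels are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Per-image caries labels from two calibrated annotators (1 = carious).
annotator_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
annotator_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")      # values near 1 = strong agreement

# Seeded, stratified random allocation into training and testing sets, so the
# label distribution (a rough proxy for case mix) is balanced across splits.
images = list(range(100))                 # placeholder image IDs
labels = [i % 2 for i in range(100)]      # placeholder binary labels
train_ids, test_ids = train_test_split(
    images, test_size=0.2, random_state=42, stratify=labels
)
print(len(train_ids), len(test_ids))      # 80 20
```

Reporting the kappa value and the split procedure, including the random seed, in the methods section would directly address both transparency gaps.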
To date, there is limited evidence on the ethical concerns of AI in dentistry. Recently, an ethical framework and checklist were developed by Rokhshad et al. [15]. The checklist was devised through a rigorous evaluation process by a carefully selected committee of 29 members [28]. Although this is a commendable initiative, the guidelines were formulated from existing AI frameworks, including the World Health Organization's guidance on ethical considerations in AI for healthcare and a recently published scoping review on ethics reporting in dentistry [1, 23]. The authors state that although both sources provided valuable insights, neither was entirely applicable to the more specific goals and context of AI in dentistry.
Although advancements in AI will greatly benefit society, and the potential of the technology is groundbreaking, a balanced approach is needed to ensure that these benefits are shared equitably and that the growth of AI neither excludes underrepresented populations nor amplifies pre-existing biases. With this article, we hope to highlight the need for transparency in developing AI models. Although these issues cannot be resolved immediately, they are important to consider, and regulatory bodies need to develop adequate frameworks that critically assess the merits and, where necessary, the demerits of this dynamic technology.
This article has certain limitations. Due to restricted access, only three databases were searched, which may have limited the identification of relevant studies. Furthermore, most of the included studies were conducted in high-income, single-center settings, potentially limiting the generalizability of the findings. This geographical bias reflects a broader structural disparity: institutions in well-resourced countries benefit from advanced infrastructure, policy support, and funding, whereas low- and middle-income countries (LMICs) often face significant barriers, including limited digital infrastructure, unequal access to care, and workforce shortages. These differences significantly influence the applicability and successful integration of AI in dental care across regions. Without context-specific adaptations, AI systems developed in high-income settings may fail to address the unique clinical and ethical challenges faced in LMICs.
To bridge this gap, future studies should adopt more collaborative, multicenter approaches across diverse socio-economic settings to ensure broader applicability. Moreover, the notable shortage of literature on AI ethics in dentistry highlights the need for stronger, more targeted primary research. Future investigations must go beyond theoretical discussion and focus on real-world applications, exploring how ethical principles such as transparency, accountability, fairness, and inclusivity are operationalized in clinical dental settings. Empirical studies examining the experiences and perceptions of patients, dental professionals, and AI developers are essential for bridging the gap between ethical theory and clinical application. In addition, to enhance the practical relevance of ethical guidelines, future work should propose concrete, actionable strategies, such as standardized data anonymization techniques (e.g., differential privacy, federated learning) to safeguard patient identity, and diversity-inclusion protocols that ensure training datasets represent varied age groups, ethnicities, and socio-economic backgrounds [46]. Implementing explainability mechanisms (e.g., saliency maps or decision logs) and mandatory training modules for AI users can also strengthen transparency and accountability [47]. These efforts would support the development of more inclusive, equitable, and context-sensitive AI systems in dentistry, particularly in LMICs.
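To make the anonymization suggestion concrete, the sketch below applies the classical Gaussian mechanism from differential privacy to a simple aggregate (a mean caries count per patient). The counts and the epsilon and delta values are illustrative, and a real deployment would use an audited library rather than hand-rolled noise:

```python
# A minimal sketch of the (epsilon, delta) Gaussian mechanism; the data and
# privacy parameters are illustrative, not recommendations.
import numpy as np

def private_mean(values, sensitivity, epsilon, delta, rng):
    """Release a differentially private mean by adding calibrated noise."""
    # Standard noise scale for the classical Gaussian mechanism (epsilon <= 1).
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return np.mean(values) + rng.normal(0, sigma)

rng = np.random.default_rng(0)
caries_counts = rng.integers(0, 8, 500)    # hypothetical per-patient counts
# Sensitivity of the mean: one patient changes it by at most max_count / n.
sensitivity = 8 / len(caries_counts)
released = private_mean(caries_counts, sensitivity, epsilon=1.0,
                        delta=1e-5, rng=rng)
print(f"true mean: {caries_counts.mean():.3f}, private mean: {released:.3f}")
```

The same principle underlies federated learning, where model updates rather than raw radiographs leave the clinic; both approaches limit what any single patient record can reveal.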
