December 3, 2024
Biases in AI Generated Images of Intensivists and Implications for Healthcare Ethics


The following is a summary of “Representation of intensivists’ race/ethnicity, sex, and age by artificial intelligence: a cross-sectional study of two text-to-image models,” published in the November 2024 issue of Critical Care by Gisselbaek et al. 

The integration of artificial intelligence (AI) into intensive care has improved patient care by supporting clinical decisions and providing real-time predictions, but biases within AI models pose challenges to diversity, equity, and inclusion (DEI), particularly in how healthcare professionals are visually represented.

Researchers conducted a cross-sectional study to assess the demographic representation of intensivists in 2 AI text-to-image models, Midjourney and ChatGPT DALL-E 2, and to evaluate how accurately each model depicted intensivists' characteristics.

They performed the investigation between May and July 2024, using demographic data from the 2022 U.S. workforce report and 2021 intensive care trainee data to compare actual intensivist demographics with images generated by Midjourney v6.0 and ChatGPT 4.0 DALL-E 2. A total of 100 images were generated across ICU subspecialties, and sex, race/ethnicity, and age representation in the AI-generated images was compared with actual workforce demographics.

The results showed that both AI models exhibited biases relative to U.S. intensive care workforce data, particularly overrepresenting White and younger intensivists. ChatGPT DALL-E 2 produced fewer females (17.3% vs. 32.2%, P < 0.0001), more White individuals (61% vs. 55.1%, P = 0.002), and a younger demographic (53.3% vs. 23.9%, P < 0.001) than the actual workforce. Midjourney depicted more females (47.6% vs. 32.2%, P < 0.001), more White individuals (60.9% vs. 55.1%, P = 0.003), and younger intensivists (49.3% vs. 23.9%, P < 0.001). Significant differences were also noted across ICU subspecialties, with both models varying notably in how they portrayed intensivists.
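Each of the comparisons above tests an observed proportion in the generated images against a reference workforce proportion. The summary does not specify the authors' statistical test, so the sketch below is only an illustration using a one-sample binomial test; the image count `n_images` is hypothetical, since the per-model sample size is not given here.

```python
# A minimal sketch (not the authors' code) of testing an observed share of
# female intensivists in AI-generated images against the reference
# workforce proportion. The exact test used in the study is not stated in
# this summary; a one-sample binomial test is one reasonable choice.
from scipy.stats import binomtest

n_images = 300              # hypothetical number of classified images per model
workforce_female = 0.322    # 32.2% female, 2022 U.S. workforce report

# 17.3% female reported for ChatGPT DALL-E 2 images
observed_female = round(0.173 * n_images)

result = binomtest(observed_female, n_images, p=workforce_female,
                   alternative="two-sided")
print(f"observed share: {observed_female / n_images:.1%}, "
      f"p-value: {result.pvalue:.2g}")
```

With any plausible sample size in this range, the test yields a very small p-value, consistent in direction with the P < 0.0001 reported for the female-representation comparison.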

They concluded that the significant biases in AI-generated images of intensivists produced by ChatGPT DALL-E 2 and Midjourney reflected broader cultural issues and perpetuated stereotypes of healthcare workers, emphasizing the need for fairness, accountability, transparency, and ethics in AI applications for healthcare.

Source: ccforum.biomedcentral.com/articles/10.1186/s13054-024-05134-4 

