Scientists are urging caution before artificial intelligence (AI) models such as ChatGPT are used in health care for ethnic minority populations. Writing in the Journal of the Royal Society of Medicine, epidemiologists at the University of Leicester and University of Cambridge say that existing inequalities for ethnic minorities may become more entrenched due to systemic biases in the data used by health care AI tools.

AI models must be "trained" using data scraped from different sources such as health care websites and scientific research. However, evidence shows that ethnicity data are often missing from health care research, and ethnic minorities are also less represented in research trials.

Mohammad Ali, Ph.D. Fellow in Epidemiology at the College of Life Sciences, University of Leicester, says, "This disproportionately lower representation of ethnic minorities in research has evidence of causing harm, for example by creating ineffective drug treatments or treatment guidelines which could be regarded as racist. If the published literature already contains biases and less precision, it is logical that future AI models will maintain and further exacerbate them."

The researchers are also concerned that health inequalities could worsen in low- and middle-income countries (LMICs). AI models are primarily developed in wealthier…