Can AI be used to promote equality in healthcare access?

The Issue

Access to healthcare remains a pressing concern globally, especially among minority communities. Inherent societal bias has long troubled the healthcare spectrum, particularly in the wake of growing economic and social disparity.

With the increasing adoption of AI technologies across many fields, there is rising hope that their use in healthcare will improve existing medical technology, advance personalised medicine, and provide relief and equitable access to underserved and socially disadvantaged communities.

However, the adoption of AI in healthcare must overcome critical challenges to ensure that AI technologies do not amplify existing biases, which would make them promoters of inequality in healthcare access rather than a remedy for it.

AI and equality in access

Adopting AI in healthcare is multifaceted, ranging from easing diagnosis to aiding specialists in prescribing medication. Today, these clinical processes are marred by bias, limiting underprivileged communities' access to quality medical services. In most cases, biases arise from social factors, including common misconceptions and assumptions held by society about a particular community. The pain gap phenomenon is a clear example of societal bias and how it produces inequality of access in healthcare. According to the Berkeley School of Public Health, white patients' pain is typically managed intensively, with a thorough investigation of its cause, whereas the pain of non-white, disadvantaged patients is largely ignored.
Against this backdrop, the adoption of AI offers hope: it promises to personalise medicine, improve existing technologies, and provide customised healthcare to underserved communities by utilising big data. Nonetheless, all these potential benefits are put at risk by the possibility of AI magnifying existing societal disparities. According to a professor at MIT's Institute for Medical Engineering and Science, biases in healthcare AI arise from algorithms and datasets that propagate inherent societal bias. The three main sources of bias are statistical bias, variance due to small or uneven sample sizes, and noise stemming from very large datasets or changes in the model used.
Overcoming these challenges is crucial if the goals of AI are to be achieved, and stakeholders and thought leaders in the field have proposed several ways to do so. The key is to ensure that AI tools are trustworthy, which requires them to be inclusive, fair, and supportive of personalisation. According to researchers at the Jameel Clinic, an initiative that supports AI research in healthcare, AI tools should be diverse in operation, able to serve any community, population, or subpopulation. This calls for algorithms to be trained and validated across multiple communities, cities, and countries, while also honouring patient privacy. It is also imperative to train AI systems to account for individual uniqueness, which is important in addressing pre-existing biases in the healthcare system. In addition, conducting exploratory error analysis can surface common error patterns and thus help identify bias.
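The exploratory error analysis described above can be sketched as a simple subgroup comparison: stratify a model's errors by demographic group and flag any group whose error rate diverges markedly from the overall rate. A minimal illustration follows; the data, group labels, and tolerance threshold are hypothetical, not drawn from any of the research cited here.

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute overall and per-group error rates.

    records: iterable of (group, y_true, y_pred) tuples.
    Returns (overall_rate, {group: rate}).
    """
    errors = defaultdict(int)
    counts = defaultdict(int)
    total_err = total = 0
    for group, y_true, y_pred in records:
        wrong = int(y_true != y_pred)
        errors[group] += wrong
        counts[group] += 1
        total_err += wrong
        total += 1
    overall = total_err / total
    return overall, {g: errors[g] / counts[g] for g in counts}

def flag_disparities(records, tolerance=0.10):
    """Flag groups whose error rate exceeds the overall rate by more
    than `tolerance` (an absolute gap; the threshold is illustrative)."""
    overall, by_group = subgroup_error_rates(records)
    return [g for g, rate in by_group.items() if rate - overall > tolerance]

# Hypothetical model predictions: (group, true label, predicted label)
data = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
print(flag_disparities(data))  # group B's error rate exceeds the overall rate
```

In practice, a disparity like this would prompt a closer look at the flagged group's data, such as checking sample sizes and label quality, rather than automated correction alone.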

Our View

The growth of AI in healthcare holds promise for addressing persistent inequality in healthcare access. It promises tools that can robustly address bias, a major cause of that inequality. However, humans ultimately make the decisions. They need to understand the possible machine-generated biases and address them to ensure AI does not exacerbate the existing problem.

Written by:

Sarah Klain