Bad Robots – “Race Blind” Medical Algorithm Fraught With … (wait for it) … Racial Bias
Bad Robot Outcome:
The world’s largest healthcare organization utilized an algorithm to determine which patients would benefit from enhanced medical support. The use of this technology has since come under significant scrutiny for vastly underestimating the health needs of seriously ill Black patients.
The Story
The algorithm at issue was created by Optum, a subsidiary of UnitedHealth Group (the world's largest healthcare company). Its purpose was to identify which patients would benefit most from increased medical care, such as help staying on their medications or avoiding hospitalization.
In fact, the algorithm failed to flag more than half of the Black patients who should have been categorized as "high risk." Upon reassessment, Optum concluded that Black patients who had originally been scored as equally in need of care as their white counterparts were actually much sicker, suffering from a collective 48,772 additional chronic diseases.
So how could a seemingly race-blind algorithm produce such results? The answer lies in its choice of proxy: the algorithm predicted patients' future healthcare costs and treated high cost as a marker of high need. But healthcare costs are not race neutral. There has been a long and tragic history of racial inequity within the healthcare system. As stated by Ashish Jha, Director of the Harvard Global Health Institute, "we already know that the healthcare system disproportionately mismanages and mistreats Black patients and other people of color." Studies over the years have demonstrated that Black patients, as compared to their white peers, are often less likely to receive pain treatment, life-saving procedures, and cholesterol-lowering medication. Less money spent on a patient's care translates into lower predicted "need," no matter how sick that patient actually is.
The underlying reasons for this inequity, however, are complex.
They range from explicit and outright racism all the way to unconscious and deep-rooted bias within the medical community. They also include socioeconomic factors, such as inadequate resources and lack of insurance.
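The cost-as-proxy failure mode can be sketched with a toy simulation. Everything here is synthetic and hypothetical (the group labels, the 0.7 spending factor, the patient counts); it is not Optum's model, only an illustration of how ranking patients by cost under-selects a group whose illness generates less spending:

```python
import random

random.seed(0)

# Toy simulation (illustrative only, not Optum's actual model): every patient
# has a true illness burden, but observed cost is that burden times a spending
# factor that is lower for Group B, mirroring unequal access to care.
def make_patients(n=1000):
    patients = []
    for i in range(n):
        group = "A" if i % 2 == 0 else "B"
        burden = random.randint(0, 10)          # true count of chronic conditions
        factor = 1.0 if group == "A" else 0.7   # Group B incurs less cost per condition
        patients.append({"group": group, "burden": burden, "cost": burden * factor})
    return patients

def top_k(patients, key, k=100):
    """Flag the k patients ranked highest by the given field."""
    return sorted(patients, key=lambda p: p[key], reverse=True)[:k]

def group_b_share(selected):
    return sum(p["group"] == "B" for p in selected) / len(selected)

patients = make_patients()
by_cost = top_k(patients, "cost")    # the "race blind" cost proxy
by_need = top_k(patients, "burden")  # what the program actually wants

print(f"Group B share of high-risk slots, ranking by cost: {group_b_share(by_cost):.0%}")
print(f"Group B share of high-risk slots, ranking by need: {group_b_share(by_need):.0%}")
```

Although the ranking never looks at group membership, Group B wins far fewer high-risk slots under the cost proxy than its true illness burden warrants.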
The Fall-out
The bias within Optum’s algorithm first came to light in a study published in Science (a peer-reviewed academic journal) in October 2019. It has since been covered by news publications and periodicals far and wide.
However, the problem goes well beyond Optum and the UnitedHealth Group. Similar tools and algorithms are used by entities – both public and private – to manage the health care of about 200 million United States residents each year. “It’s truly inconceivable to me that anyone else’s algorithm doesn’t suffer from this,” notes Sendhil Mullainathan, professor of computation and behavioral science at the University of Chicago Booth School of Business. Professor Mullainathan hopes that this serves as a wake-up call to the entire industry.
Our View
Race blindness does not exist. The sociologically and historically created disparities between ethnic groups across our planet cannot be ignored. Quite the opposite: they must be considered and accounted for in the selection of data sets and the implementation of Artificial Intelligence solutions. If they are not, as Ziad Obermeyer (acting associate professor at the Berkeley School of Public Health) wisely puts it, "biased algorithms end up perpetuating all the biases that we currently have in our health care systems."
As we have stated in past posts, technology is only as good as the inputs it receives. Algorithms build models from the data they ingest, so that data must be scrutinized to assess which underlying factors, issues, and context inform and influence the numbers. Individuals who are familiar with, and trained on, the specifics of the data must be engaged and involved in the process.
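One concrete way to scrutinize such data, echoing the check the Science researchers performed, is a calibration-style audit: at equal predicted score, does true illness burden differ by group? A minimal sketch, with hypothetical field names and synthetic records:

```python
from collections import defaultdict

def audit_by_score(records, bins=5):
    """Average true illness burden per (score bin, group).
    records: dicts with 'score' in [0, 1), 'group', and 'burden'."""
    totals = defaultdict(lambda: [0.0, 0])
    for r in records:
        b = min(int(r["score"] * bins), bins - 1)  # bucket the risk score
        totals[(b, r["group"])][0] += r["burden"]
        totals[(b, r["group"])][1] += 1
    return {key: s / n for key, (s, n) in totals.items()}

# Synthetic records: the two groups receive the same risk scores, but
# Group B is actually sicker at every score level.
records = [
    {"score": 0.8, "group": "A", "burden": 4},
    {"score": 0.8, "group": "B", "burden": 7},
    {"score": 0.3, "group": "A", "burden": 1},
    {"score": 0.3, "group": "B", "burden": 3},
]
avg = audit_by_score(records)
# Equal score but unequal burden across groups is exactly the red flag
# the Science study raised about the Optum algorithm.
```

An audit like this only works if the "true burden" field is itself trustworthy, which is why people who know how the data was generated must be in the room.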
Relatedly, it is important to note that technology can have biased consequences even if those who create it are not outwardly biased themselves. Ruha Benjamin, associate professor of African American Studies at Princeton University, is "struck by how many people always think that racism always has to be intentional and fueled by malice."