WGBH News


Fixing Bias In Algorithms Is Possible, And This Scientist Is Doing It

The errors could lead to Asian patients not getting adequate care in the hospital.
Pixabay/Creative Commons

Algorithms and artificial intelligence are playing ever larger roles in our daily lives, from Google searches and Facebook feeds to self-driving cars and sentencing convicted criminals. It’s increasingly clear that the decisions algorithms make are often biased, and even outright racist and discriminatory.

Irene Chen wants to fix that. Chen is a graduate student in the MIT Computer Science and Artificial Intelligence Lab, where her research focuses on machine learning in healthcare and making algorithms fairer. She was working with an algorithm for predicting who needs the highest level of attention in an intensive care unit.

“I was very surprised to find out that the Asian population was having a higher error than the rest of the population,” Chen told Living Lab Radio. “As an Asian American myself I thought, ‘I'm not biased, I'm not trying to make this discriminatory. What's going on here?’”

It turns out that Asian patients made up only 3 percent of the data set, while white patients made up 50 to 60 percent. That underrepresentation is what drove the higher error rate for Asian patients.

“The algorithm might say that the Asian patient is not going to die and then the hospital will not allocate resources to them,” Chen said. “But as a result, because the algorithm is wrong, the Asian patient might have been extremely high-risk and end up dying due to lack of resources.”

Chen said it would be great to go out and collect more data on Asian patients, but that’s not feasible for a computer scientist. So Chen and her colleagues made the assumption that additional data would be similar to the data that they already had.

“In the medical setting, that's actually a reasonable assumption,” she said. “Because a lot of times, the limiting agent is the clinician who doesn't want to label it, or it's hard to get different providers to give us the data. But in the end, it'll be the same type of patients, same population of patients.”
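One common way to act on that assumption, sketched below purely for illustration (the article does not specify Chen's actual technique), is to oversample the underrepresented group by resampling its existing records with replacement, so the training data treats the groups more evenly. The group sizes and record format here are hypothetical, loosely mirroring the 3 percent versus 50 to 60 percent split described above.

```python
import random

random.seed(0)

# Hypothetical patient records as (group, record_id) pairs. In the data set
# described in the article, Asian patients were roughly 3 percent of records
# and white patients 50 to 60 percent; 3 vs. 60 mimics that imbalance.
data = [("white", i) for i in range(60)] + [("asian", i) for i in range(3)]

asian = [r for r in data if r[0] == "asian"]
white = [r for r in data if r[0] == "white"]

# Resample the smaller group with replacement until it matches the larger
# one. This encodes the assumption quoted above: any additional Asian
# patients would look like the ones already in the data set.
balanced = white + random.choices(asian, k=len(white))

counts = {g: sum(1 for r in balanced if r[0] == g) for g in ("white", "asian")}
print(counts)  # both groups are now equally represented in training data
```

A model trained on `balanced` no longer sees the minority group as a rounding error, which is one plausible route to the reduced subgroup error the researchers report, though in practice libraries such as imbalanced-learn offer more careful resampling strategies.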

Using this method, they were able to improve both accuracy and fairness.

“And that was a really exciting result to find,” she said.

That approach won’t always work, Chen cautioned. Sometimes you will need more data or different types of data, and sometimes the mathematical model will be at fault. But, Chen said, algorithms are providing life-saving advances in healthcare that are worth pursuing.

“The results that we’ve shown from healthcare algorithms are so powerful that we really do need to see how we could implement those carefully, safely, robustly and fairly,” she said.

WGBH News coverage is a resource provided by member-supported public radio. We can’t do it without you.