A.I. Still Has a Bias Problem

March 10, 2020

It’s no secret A.I. has a serious and multifaceted bias problem.

Just one example: the data sets used for training often come from sources like Reddit, Amazon reviews and Wikipedia, which are inherently riddled with bias. The teams building models tend to be homogeneous and are often unaware of their own biases.
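That corpus bias is measurable. Below is a minimal sketch of the standard cosine-similarity test for bias in word embeddings; the four-dimensional vectors are hypothetical stand-ins for real embeddings (which have hundreds of dimensions) trained on text like Reddit or Wikipedia. If occupation words sit measurably closer to one gendered pronoun than the other, the training text carried that association in with it.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional embeddings standing in for vectors
# trained on a corpus like Reddit or Wikipedia; real embeddings
# have hundreds of dimensions, but the bias test is identical.
vectors = {
    "engineer": np.array([0.9, 0.1, 0.3, 0.2]),
    "nurse":    np.array([0.1, 0.9, 0.2, 0.3]),
    "he":       np.array([0.8, 0.2, 0.1, 0.1]),
    "she":      np.array([0.2, 0.8, 0.1, 0.1]),
}

# In a biased corpus, occupation words sit closer to one gendered
# pronoun than the other; unbiased text would score them evenly.
for word in ("engineer", "nurse"):
    gap = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: similarity(he) - similarity(she) = {gap:+.3f}")
```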

As computer systems get better at making decisions, algorithms may sort each of us into groups that make no obvious sense to us but could have massive repercussions. Every single day, you create enormous amounts of data, both actively (uploading and tagging photos on Facebook, for example) and passively (driving to work).

Those data are mined by algorithms, often without your direct knowledge or understanding. They are used to create advertising, to help potential employers predict your behavior, to determine your mortgage rate and even to help law enforcement predict whether or not you’re likely to commit a crime.
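One way researchers probe such systems from the outside is a disparate-impact audit: compare outcome rates across groups whose relevant qualifications are identical. The sketch below uses entirely synthetic applicants and a made-up scoring rule; it reflects no real lender’s model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely synthetic data: 10,000 loan applicants from two groups
# with IDENTICAL underlying repayment ability.
n = 10_000
group = rng.choice(["A", "B"], size=n)
ability = rng.normal(size=n)

# A hypothetical scoring model that picked up group membership as
# a proxy feature from historically skewed training data.
score = ability + np.where(group == "A", 0.4, -0.4)
approved = score > 0.5

# Demographic-parity audit: approval rates should match, but don't.
for g in ("A", "B"):
    print(f"group {g}: approval rate = {approved[group == g].mean():.1%}")
```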

Researchers at a number of institutions, including the University of Maryland, Columbia University, Carnegie Mellon, MIT, Princeton, the University of California, Berkeley and the International Computer Science Institute, are studying the side effects of automated decision-making. You, or someone you know, could wind up on the wrong side of the algorithm and discover you’re ineligible for a loan, a particular medication or the ability to rent an apartment, for reasons that aren’t transparent or easy to understand.

Increasingly, your data is harvested and sold to third parties without your knowledge, and the biases baked into the systems that consume it can reinforce themselves over time: a model’s skewed decisions shape the data it collects next, which makes the next round of decisions more skewed still.
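A toy simulation makes that feedback mechanism concrete. The loop below is a deliberately simplified, hypothetical patrol-allocation model, not any real system: two districts have identical true incident rates, but enforcement follows the records and the records follow enforcement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two districts with the SAME true incident rate; the only
# difference is a slightly uneven historical record.
true_rate = 0.1
recorded = np.array([11.0, 10.0])

# Each day, patrols go wherever the records say incidents are
# highest, and new incidents are recorded only where patrols went.
for day in range(5_000):
    district = int(np.argmax(recorded))
    recorded[district] += rng.random() < true_rate  # adds 1.0 or 0.0

print(recorded)  # roughly [510. 10.]: the one-incident skew has run away
```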

As A.I. applications become more ubiquitous, the negative effects of bias will have greater impact.

The Apple Card gave men higher credit limits than women, in some cases by a factor of 20. Wearables such as Google’s Fitbit are considerably less accurate on darker skin tones because of how melanin absorbs the green light their optical heart-rate sensors emit.

That inaccuracy becomes a serious problem when insurance companies use these biased readings to track heart rates and blood pressure and to estimate risk for conditions like irregular heartbeats or a potential heart attack.