Bias in Scoring Systems


It is no secret that many of our machine learning models—and the data they use to recognize others—are encoded with bias.

That’s because the people who built the models are themselves subject to unconscious bias, and they often learn and work in homogeneous environments.

Everyone seems to agree we have a bias problem, but the tech industry still has no plan for addressing bias in the recognition systems that now score all of us continuously.

The algorithmic bias problem will likely get worse, especially as more law enforcement agencies and the justice system adopt recognition technologies.

To reduce bias, Facebook announced in June 2019 that it was building an independent oversight board—a kind of “supreme court”—to judge itself. The board of 40 people would make content review decisions in small panels, in an effort to curtail false or misleading information, cyberbullying and meddling by governments wishing to harm countries and their citizens.

Research scientists Kate Crawford and Meredith Whittaker founded the AI Now Institute to study bias in A.I. as well as the impacts the technology will have on human rights and labor.

In response to a scathing investigative report by ProPublica on bias in the technologies used in the criminal justice system, the New York City Council passed and Mayor Bill de Blasio signed a bill requiring more transparency in the city’s use of A.I.

Microsoft hired creative writers and artists to train A.I. in language, while IBM is developing a set of independent bias ratings to determine whether A.I. systems are fair.
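To make concrete what a bias rating of this sort might measure, here is a minimal sketch in Python. It is not IBM’s method; the scores, group labels and 0.5 threshold below are hypothetical. It compares how often a scoring model produces a favorable result for each demographic group, a common proxy for disparate impact.

```python
from collections import defaultdict

def disparate_impact_ratio(scores, groups, threshold=0.5):
    """Illustrative fairness check: compare the rate of favorable
    (above-threshold) scores across demographic groups.

    scores: model scores in [0, 1]
    groups: group labels, parallel to scores
    Returns (ratio, per-group rates); a ratio of 1.0 means the
    groups receive favorable scores at the same rate.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for score, group in zip(scores, groups):
        totals[group] += 1
        if score >= threshold:
            favorable[group] += 1
    rates = {g: favorable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical scores from a recognition/scoring model
scores = [0.91, 0.62, 0.40, 0.85, 0.33, 0.77, 0.48, 0.58]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio, rates = disparate_impact_ratio(scores, groups)
print(rates)   # per-group favorable rates: A gets 0.75, B gets 0.50
print(ratio)   # 0.67 here; values well below 1.0 suggest disparate impact
```

Real bias audits go further, looking at error rates as well as outcomes, but even a check this simple shows why a single score can treat groups very differently.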