Scoring Vulnerable Populations

A.I.-powered recognition tools have well-documented blind spots. They often return incorrect results for people of color and for trans, queer and nonbinary individuals.

In November 2019, researchers at the University of Colorado Boulder showed how scoring tools—including Clarifai, Amazon’s Rekognition system, IBM Watson, Megvii’s Face++ and Microsoft’s facial analysis systems—routinely misclassified the gender of non-cisgender people.

Another study, from the MIT Media Lab, found that Rekognition misidentified women of color as men 33% of the time. Even so, companies and government agencies continue to score vulnerable communities.

Law enforcement, immigration officials, banks, universities and even religious institutions now use scoring systems. The Charlottesville Police Department was publicly criticized early this year for placing smart cameras in public housing communities and using A.I. systems to monitor residents’ activity.