Algorithms Designed to Target Vulnerable Populations

In countries around the world, A.I. is being used at border crossings, within poor neighborhoods, and in school districts where delinquency is a problem. Most of the time the technology is billed as a solution, but in practice it serves to disenfranchise vulnerable communities.

In 2018, Amazon pitched Immigration and Customs Enforcement on an A.I. system capable of identifying and tracking immigrants, saying its Rekognition face-scanning platform could assist with homeland security investigations.

The Multiple Encounter Dataset is a large database containing two sets of photos: people who have not yet committed a crime, and an FBI data set of deceased people. The dataset over-indexes on people of color, which means that if law enforcement uses it to train algorithms, the results will be biased. Image recognition is a particularly vexing challenge because researchers need large datasets to do their work, and often the images are used without consent.
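
To make the mechanics of that bias concrete, here is a minimal, hypothetical sketch using synthetic data and scikit-learn; the groups, rates, and "match" labels are invented for illustration and are not drawn from any real law-enforcement system. Two groups behave identically, but because one group is over-represented among the recorded matches used for training, the resulting model scores innocent members of that group as higher risk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def simulate(n, base_rate=0.10):
    """Two groups with identical true 'match' rates and identical behavior."""
    group = rng.integers(0, 2, size=n)               # 0 = group A, 1 = group B
    label = (rng.random(n) < base_rate).astype(int)  # same rate in both groups
    signal = label + rng.normal(size=n)              # noisy evidence of the label
    return np.column_stack([signal, group]), label, group

# Training data drawn from a skewed archive: nearly all of group B's
# recorded "encounters" are kept, but only 30% of group A's are.
X, y, g = simulate(50_000)
keep = (y == 0) | (g == 1) | (rng.random(len(y)) < 0.30)
model = LogisticRegression().fit(X[keep], y[keep])

# Score a fresh, unbiased population.
X_test, y_test, g_test = simulate(50_000)
risk = model.predict_proba(X_test)[:, 1]

for name, grp in [("group A", 0), ("group B", 1)]:
    innocent = (g_test == grp) & (y_test == 0)       # people with no true match
    print(f"{name}: mean predicted risk for non-matches = {risk[innocent].mean():.3f}")
```

In this toy setup, innocent members of the over-represented group receive systematically higher risk scores; the skew in the output comes entirely from the skew in the archive, not from any difference in behavior.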

The Child Exploitation Image Analytics program, a data set used by facial recognition technology developers for testing, has been running since at least 2016 using images of "children who range in age from infant through adolescent," the majority of which "feature coercion, abuse, and sexual activity," according to the program's own developer documentation. These images are considered particularly challenging for the software because of their greater variability in position, context, and more.