Prioritizing Accountability and Trust

We will soon reach a point at which we can no longer tell whether a data set has been altered, either intentionally or accidentally.
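One practical safeguard, sketched below, is to record a cryptographic fingerprint of a data set at publication time; any later change to the files, deliberate or accidental, produces a different fingerprint. This is a minimal sketch assuming the data set is a directory of files, and the path data/train is illustrative.

```python
import hashlib
from pathlib import Path

def dataset_fingerprint(root: str) -> str:
    """Hash every file in a data-set directory, in sorted order, into a
    single SHA-256 digest. Any change to any file changes the result."""
    digest = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest.update(str(path.relative_to(root)).encode())  # bind file names
            digest.update(path.read_bytes())                     # and contents
    return digest.hexdigest()

# Record the fingerprint when the data set is published...
published = dataset_fingerprint("data/train")

# ...and check it again before training: a mismatch means the data changed.
assert dataset_fingerprint("data/train") == published, "data set was altered"
```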

A.I. systems rely on our trust. If we no longer trust the outcome, decades of research and technological advancement will be for naught. Leaders in every sector—government, business, the nonprofit world and so on—must have confidence in the data and algorithms used.

Building trust and accountability is a matter of showing the work performed. That is a complicated proposition: corporations, government offices, law enforcement agencies and other organizations understandably want to keep their data private.
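One way to show the work while keeping the underlying records private is to publish cryptographic commitments: hashes that disclose nothing on their own but let an auditor verify, after the fact, exactly what data was used. A minimal sketch, with a hypothetical record:

```python
import hashlib
import json
import os

def commit(record: dict, salt: bytes) -> str:
    """Return a salted SHA-256 commitment to a private record. The hash
    reveals nothing about the data, but revealing the record and salt
    later proves exactly what was used."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(salt + payload).hexdigest()

# The organization publishes only the commitment, never the record itself.
salt = os.urandom(16)
record = {"age": 54, "decision": "approved"}  # hypothetical private record
public_commitment = commit(record, salt)

# An auditor who is later shown the record and salt can verify the claim.
assert commit(record, salt) == public_commitment
```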

The scientific community is working to standardize guidelines for research reproducibility and for data sets, aided by open-source tools such as Qeresp and crowd-sourced fact-checking on sites like Melwy. The ethics of how data is collected in the first place may also influence the trustworthiness and validity of scientific research, particularly in areas such as organ donation and medical research.
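Reproducibility guidelines of this kind generally boil down to recording enough context to rerun an experiment exactly. A minimal sketch of such a manifest follows; the fields shown are illustrative, not any particular tool's format.

```python
import json
import platform
import random
import sys
from datetime import datetime, timezone

def reproducibility_manifest(seed: int) -> dict:
    """Capture the minimum context another researcher needs to rerun an
    experiment: the random seed, interpreter version, OS and a timestamp."""
    random.seed(seed)  # fix the standard-library RNG so runs repeat exactly
    return {
        "seed": seed,
        "python": sys.version,
        "platform": platform.platform(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Published alongside the results, the manifest lets reviewers recreate
# the conditions of the original run.
print(json.dumps(reproducibility_manifest(seed=42), indent=2))
```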

Committing to transparency of method would build trust without necessarily divulging any personal data. In addition, employing ethicists to work directly with managers and developers, and ensuring that the developers themselves are diverse in race, ethnicity and gender, can help reduce inherent bias in A.I. systems.
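Reducing bias also requires measuring it. A common first check, sketched here with hypothetical data, is to compare a model's accuracy across demographic groups and flag large gaps:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Break model accuracy down by demographic group. Large gaps
    between groups are a signal of bias worth investigating.
    `records` is an iterable of (group, predicted, actual) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical predictions: the model does far worse for group_b.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0),
]
print(accuracy_by_group(results))  # {'group_a': 1.0, 'group_b': 0.33...}
```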