Making A.I. Explain Itself

March 10, 2020

There is growing concern voiced by computer scientists, journalists and legal scholars who argue that A.I. systems shouldn’t be so secretive.

You’ve undoubtedly heard someone argue that A.I. is becoming a “black box”: that even the researchers working in the field don’t understand how our newest systems work. That’s not entirely true. Still, a growing chorus of computer scientists, journalists and legal scholars argues that A.I. systems shouldn’t be so secretive.

In August 2019, IBM Research launched AI Explainability 360, an open-source toolkit of algorithms that help explain the predictions of machine learning models. Because the code is open, other researchers can build on it to make their own models more transparent. It isn’t a panacea: the toolkit contains only a handful of algorithms. But it is a public attempt to quantify and measure explainability.
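To make the idea concrete, here is a minimal sketch of one common family of techniques that toolkits like this package up: a global surrogate, in which a simple, readable model is trained to imitate a complex one. This is not the AI Explainability 360 API; it uses scikit-learn and an arbitrary sample dataset purely for illustration, and the model and dataset choices are assumptions made for this example.

# Illustrative sketch only: a generic "global surrogate" explanation,
# not the AI Explainability 360 API. Assumes scikit-learn is installed;
# the dataset and models are arbitrary stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble whose individual predictions are hard to trace.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# The explanation: a shallow decision tree trained to imitate the black box,
# trading a little fidelity for rules a human can actually read.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the readable surrogate agrees with the black box.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate agrees with the black box on {fidelity:.0%} of test cases")
print(export_text(surrogate, feature_names=list(X.columns)))

The depth limit is the dial: a deeper surrogate tracks the black box more faithfully but produces rules too tangled to read, which is the same tension between accuracy and explanation described below.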

Broadly speaking, a few challenges will need to be overcome. Requiring transparency in A.I. could expose a company’s trade secrets. And asking systems to explain their decision-making as they work could degrade the speed and quality of their output.

It’s plausible that various countries will enact regulations requiring explainability in the coming years. Imagine sitting beside a genius mathematician who simply hands you correct answers while you’re in Italy, but who must stop and show her work the moment you carry her answers across the border into France, and again, under slightly different rules, in every other country where those answers are used.