Synthetic Voice Fraud


Synthesized media has a well-known problem: it can be used by malicious actors to mislead people, to trick voice-authentication systems, and to create forged audio recordings.


Voice fraud cost U.S. businesses with call centers $14 billion last year alone, according to a study by call center software maker Pindrop.

Google has contributed a synthetic speech dataset to the ASVspoof 2019 Challenge, an open, global initiative designed to help develop countermeasures against spoofed speech. Researchers hope the challenge will lead to stronger defenses against fraudulent synthetic voice content.
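To give a flavor of what a spoofing countermeasure does, here is a minimal, purely illustrative sketch in Python. Real ASVspoof systems use much richer acoustic features and trained models; the `spectral_flatness` feature, the `is_spoofed` rule, and the 0.1 threshold below are all hypothetical choices made for this toy example, which just flags audio that looks suspiciously "clean" (tonal) compared to live speech.

```python
# Toy spoof-detection sketch (NOT an ASVspoof system): flag audio whose
# spectrum is too tonal/clean to resemble live, noisy microphone speech.
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.
    Close to 1.0 for noise-like signals, close to 0.0 for pure tones."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # floor avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def is_spoofed(signal: np.ndarray, threshold: float = 0.1) -> bool:
    # Hypothetical rule: overly tonal audio is flagged as synthetic.
    return spectral_flatness(signal) < threshold

# Stand-ins for one second of 16 kHz audio:
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)                  # "synthetic" signal
noisy = tone + 0.5 * rng.standard_normal(t.size)    # "live" signal

print(is_spoofed(tone), is_spoofed(noisy))
```

In practice, challenge participants replace the hand-made feature and fixed threshold with learned representations and classifiers trained on paired bona-fide and spoofed recordings.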

Voice-synthesis startup Lyrebird displays its ethics statement prominently on its website, warning that its software “could potentially have dangerous consequences such as misleading diplomats, fraud and more generally any other problem caused by stealing the identity of someone else.”