FTI Newsletter, Issue 102
April 29, 2018

Robots vs Robocalls

Every two weeks in our FTI newsletter, we take a deep dive into an emerging tech trend, exploring what it is, why it matters, key players, plausible scenarios for the future, and what action you and your organization should take. 

First, the robots. Google’s annual developer conference (called I/O) was last week. And while it was full of interesting announcements, the one that caught everyone’s attention was something called Duplex. An upgrade to Google Assistant, Duplex is a kind of personal AI-powered secretary. In the near future, Duplex will make all the time-consuming phone calls that you hate––scheduling your haircuts, finding reservations at your favorite restaurant, and generally chatting with a bunch of human gatekeepers you’d rather not interact with.

The demo at I/O appeared to pass the Turing Test, complete with “ummms” and “ahs.” Google’s AI Assistant made a call to a human business, and from the audience it was difficult to tell which voice was algorithmically generated. It was built using some of the same deep learning technology created by DeepMind––the division that built AlphaGo and AlphaGo Zero, which beat the world’s Go champions. (All of them.) You can listen to the recordings here.

And now, the robocalls. While Duplex was being demoed in front of awestruck I/O attendees, FCC Chairman Ajit Pai announced a $120 million fine against Adrian Abramovich for his role in 97 million robocalls placed over a three-month period in 2016. As a result, we saw a whole bunch of stories last week about robocalls––why you’re now getting them in Chinese, how they’re hard to stop, and how not to get caught up in a scam.

But what really separates robocalls from AI bots who make phone calls? Duplex and Abramovich’s robocall service both route and make calls on behalf of people hoping to accomplish a particular goal. Both could be used for altruistic or malicious purposes. Both help businesses make money. Both access and use your personal data during the calls.

So why does this matter to you? In our Future Today Institute foresight methodology, the emergence of Duplex and the Abramovich fine are what we call a contradiction. On the one hand, we’re celebrating Duplex, which is a remarkable achievement. (Who really wants to burn time on the phone making appointments?) But we don’t have clear regulations saying when robots are allowed to make calls; we don’t have any enforcement mechanism to stop them; and in the U.S., each state gets to decide whether calls can be recorded or listened in on.


The uncertain future of robot-human interactions


This raises some thorny legal questions about the future of robot-human interactions.
How much digital eavesdropping are you, personally, willing to accept? Under what circumstances? What about your customers and clients? How could this affect your business?

Journalists and lawyers already know about all-party consent laws here in the U.S., which make it illegal to record a conversation without notifying everyone on the call. It’s illegal to eavesdrop in California, Connecticut, Delaware, Florida, Illinois, Maryland, Massachusetts, Montana, Nevada, New Hampshire, Pennsylvania, and Washington––which means that Duplex might have to disclose that it’s an AI first. (It would likely have to record and analyze the conversation in order to work, and it’s likely that those calls would be further analyzed to tweak and improve the system going forward.)

But here’s where things get interesting. Eavesdropping laws were intended for other humans, not robots. Strictly speaking, robots don’t have ears and don’t “hear” the other speakers.


Emerging Tech Trend: Regulating Robot-Human Interactions

Key Insight: Technology is now moving faster than any government’s ability to legislate it, whether that’s at the state, federal or international level. As a result, government agencies and businesses around the world are learning the hard way what happens when old laws clash with new technology.

Examples: In 2015, a Twitterbot went rogue, autonomously tweeting “I seriously want to kill people” during a fashion event in Amsterdam. It was a short-sighted coding error that made the bot threaten people, and when the police came to arrest its programmer, he apologized and immediately deleted the bot. But he also said that he didn’t know “who is or should be held responsible, if anyone.”

If a digital assistant or bot breaks a law without your direct involvement—robocalling incessantly, or harassing another person with hate speech, or something equally horrible that we just haven’t seen yet—who’s to actually blame? The individual developers who created the code? Or the technology company that built the platform?

At the moment, no one knows. The legal community is divided. Some scholars argue that speech produced by AI is not protected by the First Amendment. Others say that AI is only capable of producing speech because of its human programmers, therefore speech from AI is just another form of human speech and is subject to the same rules and regulations we are.

Here’s the rub. Robots are already breaking our laws. We have plenty of legal questions, but few answers, and to confront the future of robot-human interactions, we only have access to our existing democratic instruments of change: patents, regulation, legislation, and lawsuits. In a democracy, new policies and laws require discussion, debate and various parts of a government to collaborate. It’s a slow process by design, but that doesn’t mean we should avoid any action until there’s a real crisis. Without meaningful discussion about the long-range implications of legislation, lawmakers could cause drastic (if unintended) consequences for all of us (you, your business, your family, your school, your city) in the decades to come.

Near-Futures Scenarios:

  • Optimistic: In the U.S., lawmakers convene nonpartisan tech experts, ethicists, philosophers and business leaders to re-think some of our constitutional amendments. Given what we know to be true about AI and its developmental track today, how do the First, Fourth and Fifth Amendments apply to robot-human interactions? This group begins the difficult but necessary task of defining speech in our age of technology––and thinks through all of the laws relating to communications and commerce, and how they should evolve to meet the needs of 21st century America. State governments follow suit. Everyone benefits.
  • Pessimistic: We take a wait-and-see approach and let the courts decide, which only postpones the inevitable regulation that follows. Eventually new rules are written, but they’re unenforceable. Consumers lose trust in the companies mining, refining and productizing their data. Elected officials, saddled with complaints, make bold promises about regulation––but fail to address the issue in a meaningful way. Everyone’s miserable.
  • Catastrophic: Something horrible happens––an AI interaction causes a fatality, or riots, or a market crash––and lawmakers decide to restrict robot-human interactions. Lawmakers, buckling under mounting public pressure, create hyper-restrictive legislation, which effectively censors the AI community. We find ourselves living under a strange surveillance state, with a new kind of code-based police force that’s trained to find and suspend certain human-machine interactions, or to delete data/algorithms/networks. If this sounds a little Fahrenheit 451, you should know that there are lots of signals already pointing us in this direction.

Action Meter: If you work in a field that relies on communication––for sales, marketing, constituent outreach––or if you’re a lawmaker, or if you’re just a person who talks on the phone, now is a good time to start monitoring this trend. How will you approach the future of human-machine interactions?

Watchlist: The Big 9 (Alphabet, Amazon, Tencent, Alibaba, Baidu, Facebook, Microsoft, IBM, Apple); the Electronic Frontier Foundation; government agencies; business leaders; legal scholars; law enforcement; technology and privacy advocates; media organizations; everyday citizens.


Around the Institute

FTI on the Wall Street Journal Podcast
Listen to a live recording of the WSJ’s Nikki Waller and FTI’s Amy Webb in conversation at the Wall Street Journal’s Future of Everything conference. Listen or download here.

Summer Teaching Fellows: DEADLINE IS MAY 25
Our 2018 Teaching Fellows Program is now accepting applications! This August, FTI will host a special Teacher Training Fellowship for teachers who want to incorporate futures forecasting into their curriculum. The program lasts three intensive days, and we will teach fellows how to incorporate the tools of futurists into their existing coursework. Apply here.

Futurist Amy Webb has some concerns about the future
Ahead of her talk at the Charles H. Wright Museum of African American History in Detroit, Crain’s spoke with Amy Webb about data privacy, local journalism and what we can do right now to make an optimistic version of the future come true. Read more…

FTI on TWiT
Mark Zuckerberg comes out of his Congressional testimony unscathed. China will dominate AI in the coming decade. HomePods are not selling like HotCakes. Apple leaks leakers leaking leaks. Waymo wants to test truly driverless cars in California. Watch or listen to “This Week in Tech” with host Leo Laporte, Lindsey Turrentine, Jason Hiner and Amy Webb. Watch/listen…

Get our tech trend updates
