Censorship in the Digital Age

Key Insight

As fake news spreads across the internet and social platforms worldwide, censorship and free speech play instrumental roles in how technology is designed and developed, and in the legal protections afforded to its creators and users.

Why It Matters

If Facebook or Twitter decided to block all political posts because they could not sufficiently weed out “fake” ones, they would be making a business decision, not one that raises First Amendment issues, because the First Amendment constrains government action, not private companies. So while we expect platforms to tighten the rules on what they deem permissible, they are fully entitled to do so. In the United States, the larger First Amendment questions for media involve what rights (if any) are afforded to A.I. and what liability (if any) can be imposed on the creators of technology, algorithms, and code.

Examples

The term “fake news” is relatively new, but worries about misinformation aren’t. Just look at the 1938 “War of the Worlds” radio broadcast—Orson Welles’ fictional story about an alien invasion that sent real-life New Yorkers into mass hysteria. The same kind of hysteria takes place today on various levels, thanks to the viral nature of fake news, conspiracy theories, and misinformation spread on the internet.

It’s causing damage outside the United States, too: Following the February 2019 terrorist attack in Kashmir, India, fake stories, photos, and videos spread at unprecedented levels, ultimately fueling calls for military retaliation against Pakistan and nearly leading the two countries into war.

In Egypt, fake news laws are being used to silence dissent. For instance, Egyptian activist Amal Fathy posted a video in which she claimed police officers had sexually harassed her. Two days later, her house was raided, and she and her son were jailed for “spreading false news.”

What’s Next

Moving forward, there are numerous scenarios for how the U.S. government could choose to protect speech created by A.I. or automated devices. The most restrictive scenario would be deciding that First Amendment protections do not extend beyond human-produced speech.

This scenario is unlikely, because some human programming goes into the creation of every bot, and it would mean that a range of technological advances (such as voice recognition and generation) could be afforded fewer protections.

A second possibility involves deciding that the human programmer is protected under the First Amendment while A.I.-created speech is not. This compromise makes sense at some level, but it could fall short when it proves impossible to fully assign credit (or blame) for content to a human versus the A.I. itself.

Yet another option would be deciding that all A.I.-produced content counts as free speech. Supporters of this view contend that the First Amendment does not limit speech to that created by humans, so any content produced by a voice interface or bot should be protected. On one hand, this opens the door to all content being treated as speech; on the other, if A.I.-created content is protected as speech, the legal entities producing it could also be held liable for it where appropriate.

We are likely to see hybrids of these stances emerge as legal questions arise. Expect media and journalism around the world to be at the epicenter of many of the technology-related legal questions to come.

The Impact

Americans say fake news is a more pressing problem than climate change, terrorism, or racism, according to a 2019 Pew Research Center study. Social media companies, governments, and citizens across the globe must balance the need for free speech with the need for truth. Fake news threatens democracy globally, causing confusion, spreading misinformation, and sowing distrust of the news media.

Watchlist

European Union, Federal Communications Commission, Google, Facebook, Microsoft, Apple, Amazon, Snap, Instagram, YouTube, Twitch, broadcasters, newspapers, radio stations, digital media organizations, Jack Balkin (Knight Professor of Constitutional Law and the First Amendment at Yale Law School), Margot Kaminski (Associate Professor, University of Colorado Law).