Content Moderation in the Name of National Security


Key Insight

It may seem counterintuitive, given all the talk of regulating big tech, but government agencies worldwide expect tech companies to help fight the spread of misinformation, propaganda and terrorist content.

Why It Matters

Content moderation and policing efforts are being met with resistance from free speech advocates across the political spectrum. Content surveillance and moderation will remain critical battlegrounds worldwide for states, companies and individuals.


Content moderation is a critical consideration in the design and development of online platforms, and it requires decisions about the legal protections afforded to creators and users of technology. Throughout history, governments have restricted content and its distribution in the name of security and morality. After the advent of the printing press, the Catholic Church began publishing the Index Librorum Prohibitorum, a list of prohibited books, in 1559; the practice continued until 1966. In the internet age, content moderation has become a larger technical challenge not just for governments and institutions, but for media and tech companies as well.

Throughout 2018 and 2019, both the U.S. and European governments held several heated public hearings with representatives from Twitter, Google and Facebook in the wake of controversial elections and increasingly polarized public discourse.

Tech companies are inconsistent at best in enforcing their “community standards” for content moderation and acceptable speech. They are struggling to design algorithms that identify and remove offensive and dangerous content, such as pornography and misinformation, while protecting the right to personal expression.
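To make the tension concrete, here is a toy sketch (not any platform's actual system) of the simplest possible moderation approach: a keyword blocklist. The blocklist terms and sample posts are hypothetical, chosen only to show how a naive filter cannot distinguish a post that *uses* a dangerous term from one that merely *mentions* it, such as a news discussion.

```python
# Toy keyword-based moderation filter. Illustrative only: real moderation
# systems use context-aware models, human review, and policy-specific rules.

BLOCKLIST = {"attack", "bomb"}  # hypothetical blocked terms


def naive_flag(post: str) -> bool:
    """Flag a post if any blocklisted term appears, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)


posts = [
    "We will attack the server room at dawn",       # arguably dangerous
    "The article criticizes the bomb threat hoax",  # benign news discussion
    "Great photo of the sunset!",                   # clearly benign
]
flags = [naive_flag(p) for p in posts]
# The filter flags the news discussion alongside the genuine threat --
# a false positive that censors legitimate expression.
```

The second post is a false positive: blocking it suppresses lawful speech, while loosening the filter lets genuinely dangerous content through. That trade-off, at the scale of billions of posts, is the design problem the paragraph above describes.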

Gab, a social network that champions free speech, launched in 2017 largely in response to the suspension of high-profile accounts of controversial personalities such as Milo Yiannopoulos and Alex Jones on Twitter and Facebook. While Gab doesn’t have the network effects of the mainstream players, expect to see more services like it as content moderation policies are enforced more strictly.

What’s Next

Lawmakers and tech companies alike struggle to balance the tensions among censorship, free enterprise, and national security. The questions—and answers—are complicated, and they involve all of us. Google, Twitter and Spotify have all announced restrictions or outright bans on political advertisements in 2020. Facebook is the notable exception, allowing political ads to run without fact-checking them for accuracy.

Moving forward, there are numerous scenarios for how governments choose to protect and police content, regardless of whether it was created by a human or a bot. One scenario could be that governments decide freedom of speech protections don’t extend beyond human-produced speech.

In that case, content produced by future technological advances wouldn’t be protected either, though this outcome is unlikely because humans are involved in programming bots. Another potential outcome: human programmers are protected under the First Amendment, but A.I.-created speech is not. This makes sense at some level, but it breaks down when assigning credit or blame for content created jointly by humans and A.I. systems. Or, ultimately, the government could decide that A.I.-produced content counts as free speech, including anything produced by a voice interface or a bot, which would extend liability to the legal entities responsible for that content.

The Impact

Complex legal questions will arise, and we’re likely to see various hybrids of these scenarios in the future. The media and journalism will be at the center of these legal questions around the world.


Amazon, Apple, Electronic Frontier Foundation, European Union, Facebook, Federal Communications Commission, Google, Instagram, Gab, law enforcement, legal scholars, media organizations, Microsoft, Moritz College of Law, technology and privacy advocates, technology company leaders, Ohio State University, Twitch, YouTube.