As fake news spreads across the internet and social platforms worldwide, censorship and free speech play instrumental roles in the design and development of technology, and in the legal protections afforded to its creators and users.
If Facebook or Twitter decided to block all politically related posts because they could not sufficiently weed out “fake” posts, they would be making a business decision, but not one that would raise First Amendment issues. So while we expect platforms to tighten the rules on what they deem permissible, they are fully entitled to do so. In the United States, the larger First Amendment questions as they relate to media involve what rights (if any) are afforded to A.I. and what liability (if any) can be imposed on the creators of technology, algorithms, and code.
The term “fake news” is relatively new, but worries about misinformation are not. Consider the 1938 “War of the Worlds” radio broadcast: Orson Welles’ fictional story about an alien invasion reportedly sent listeners into a panic. The same kind of hysteria plays out today, amplified by the viral spread of fake news, conspiracy theories, and misinformation on the internet.
It’s causing damage outside the United States, too: Following the February 2019 terrorist attacks in Kashmir, India, fake stories, photos, and videos spread at unprecedented levels—ultimately fueling calls for military retaliation against Pakistan and nearly leading the two countries into war.
In Egypt, fake news laws are being used to silence dissent. For instance, Egyptian activist Amal Fathy posted a video in which she claimed police officers had sexually harassed her. Two days later, her house was raided, and she and her son were jailed for “spreading false news.”
Moving forward, there are several ways the U.S. government could choose to treat speech created by A.I. or automated devices. The most restrictive approach would hold that First Amendment protections do not extend beyond human-produced speech.
This scenario is unlikely, since human programming goes into every bot; it would also mean that a string of related technological advances (such as voice recognition and generation) could be afforded fewer protections.
A second possibility involves deciding that the human programmer would be protected under the First Amendment, while A.I.-created speech would not. This compromise makes sense at some level but could fall short when it comes to fully attributing credit (or blame) for content created by a human versus by A.I.
Yet another option would be to treat all A.I.-produced content as free speech. Supporters of this view contend that the First Amendment does not limit protection to speech created by humans, so any content produced by a voice interface or bot should be protected. While this opens the door to all content being considered speech, it also means that the legal entities producing such content could be held liable where appropriate.
We are likely to see hybrids of these stances emerge as legal questions arise. Expect media and journalism to sit at the epicenter of technology-related legal questions around the world.
Americans say fake news is a more pressing problem than climate change, terrorism, or racism, according to a 2019 Pew Research Center study. Social media companies, governments, and citizens across the globe must balance the need for free speech with the need for truth. Fake news threatens democracy globally: it causes confusion, spreads misinformation, and sows distrust of the news media.
European Union, Federal Communications Commission, Google, Facebook, Microsoft, Apple, Amazon, Snap, Instagram, YouTube, Twitch, broadcasters, newspapers, radio stations, digital media organizations, Jack Balkin (Knight Professor of Constitutional Law and the First Amendment at Yale Law School), Margot Kaminski (Associate Professor, University of Colorado Law).
Accountants, Advertising and Public Relations, Aerospace, Agriculture, Airlines, Alternative Energy Production & Services, Architectural Services, Auto Manufacturers, Banking, Bars & Restaurants, Beer, Wine and Liquor, Book Publishers, Broadcasters, Radio and TV, Builders/General Contractors, Cable & Satellite TV Production & Distribution, Casinos/Gambling, Chemical & Related Manufacturing, Civil Servants/Public Officials, Clergy & Religious Organizations, Clothing Manufacturing, Commercial TV & Radio Stations, Construction, Corporate Boards & Directors, Covid-19/coronavirus, CPG, Cruise Ships & Lines, Defense, Diplomacy, Doctors & Other Health Professionals, Drug Manufacturers, Education Colleges & Universities, Education K-12, Education Online, Education Trades, Electric Utilities, Entertainment Industry, Environment, Finance, Foreign & Defense Policy, Gas & Oil, Government - International, Government - National, Government - State and Local, Health Professionals, Heavy Industry, Hedge Funds, Hospitality, Hotels/Motels/Tourism, Human Resources, Information Technology, Insurance, Law Enforcement, Lawyers/Law Firms/Legal Industry, Lobbyists, Luxury Retail, Magazines, Manufacturing, National Security, News Media, Non-profits/Foundations/Philanthropists, Online Media, Pharmaceuticals/Health Products, Private Equity, Professional Services, Professional Sports, Radio/TV Stations, Real Estate, Retail, Technology Company, Telecommunications, Trade Associations, Transportation, Travel Industry, TV Production, TV/Movies/Music, Utilities, Venture Capital, Waste Management, Work (Future of)