As hate speech, fake news, and rampant harassment escalate, online platforms and social media companies are investing more heavily in content moderation. Each platform takes a different tack to enforce its own policies, blending human moderators with algorithms that detect hateful or problematic speech, and offering features such as muting or blocking users.
Moderation requires making choices about what is acceptable—and what isn’t. Platforms must think carefully about what they optimize for and whose needs they consider, because any decision could be politicized and the platform’s interests may not align with society’s.
YouTube rolled out new terms of service at the end of 2019, adding the phrase, “YouTube is under no obligation to host or serve content.” Facebook is wading into preemptive moderation, banning misinformation about the 2020 census and videos manipulated with A.I. systems. But the company has generally taken a more hands-off approach, particularly when it comes to fact-checking political ads. Twitter, meanwhile, decided it would ban political advertising entirely.
Classifying political speech is difficult because the process inevitably becomes, well, political—and moderation policies sometimes have unexpected consequences. When Tumblr announced in 2018 it would ban pornography, it disrupted communities of queer and gender non-conforming adults who were drawn to the site’s formerly permissive rules.
When Twitter announced it would ban political advertising, some labor groups and activists worried it would be harder to share their messages. An energy company, for instance, could post a brand campaign painting a rosy picture of fossil fuels, while environmental activists couldn’t post ads for legislation to cut back on emissions.
As platforms take a more active stance, they must balance their principles with possible backlash from users: When Reddit quarantined the controversial President Trump-focused subreddit r/The_Donald, users started a campaign to move the conversations to other platforms that might be more welcoming to conservatives.
Decisions about all kinds of content, from user posts to political advertising, will be highly scrutinized and possibly politicized. The proliferation of policies regulating speech could prompt meaningful conversations about what should be allowed in the public forum; if it does not, it could further polarize debate, pushing users deeper into channels with narrower audiences.
As the 2020 election cycle kicks into full swing, each major social platform has a different posture toward political advertising, misleading posts, and more. Those stances will be politicized and exploited, with far-reaching consequences for our political climate.
The Coral Project, Facebook, Perspective API by Google, Reddit, Tumblr, Twitter, YouTube, political parties and candidates on both sides.
Advertising and Public Relations, Book Publishers, Broadcasters, Radio and TV, Cable & Satellite TV Production & Distribution, Commercial TV & Radio Stations, Government - International, Government - National, Information Technology, Magazines, National Security, News Media, Non-profits/Foundations/Philanthropists, Online Media, Radio/TV Stations, Technology Company, Trade Associations, TV Production