Detecting Authentic Activity

Bots are becoming more humanlike and harder to detect

Key Insight

Social networks including Facebook and Twitter have promised to tweak their algorithms to curb the spread of bot-generated content and engagement, but the bots are becoming more humanlike and harder to detect.

Why It Matters

Some bots may be harmless, helpful, or funny, but others manipulate people by spreading misinformation and artificially inflating the popularity of people, ideas, or products. There's also the risk of fraud, speech suppression, spam, malware, cyberbullying, and trolling. The result: a social media landscape in which the public increasingly struggles to distinguish reality from lies.

Examples

Russian-linked bot accounts on Facebook and Twitter spread disinformation during the 2016 U.S. presidential election. That is especially concerning given that two-thirds of Americans get their news online, according to the Pew Research Center. In some cases, conspiracy theories spread by bots have inspired real-world violence.

In December 2019, Facebook and Twitter shut down a network of fake accounts that posed as real Americans, used photos of A.I.-generated faces as profile pictures, and disseminated pro-Trump messages. All told, Facebook removed 3.2 billion fake accounts between April and September 2019, double the number from the same period in 2018, while Twitter suspended 88,000 accounts.

Indiana University created Botometer, a tool that analyzes a Twitter account's activity and scores the likelihood that the account is a bot. Detection remains an uphill battle, however: researchers at the NATO Strategic Communications Centre of Excellence in Latvia found it is simple to buy tens of thousands of comments, likes, and views on Facebook, YouTube, Instagram, and Twitter.
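
For illustration, here is a minimal sketch of checking one account with Botometer's Python client (the botometer package on PyPI). The credentials are placeholders and the handle is hypothetical; the Botometer(...) and check_account(...) calls follow the client's documented interface, but treat the details as assumptions rather than a definitive integration.

    # Sketch: scoring one Twitter account with the Botometer Python client.
    # Requires `pip install botometer`; the keys below are placeholders
    # obtained from RapidAPI and a Twitter developer app.
    import botometer

    rapidapi_key = "YOUR_RAPIDAPI_KEY"
    twitter_app_auth = {
        "consumer_key": "YOUR_CONSUMER_KEY",
        "consumer_secret": "YOUR_CONSUMER_SECRET",
        "access_token": "YOUR_ACCESS_TOKEN",
        "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
    }

    bom = botometer.Botometer(
        wait_on_ratelimit=True,        # back off when Twitter rate-limits us
        rapidapi_key=rapidapi_key,
        **twitter_app_auth,
    )

    # check_account returns bot-likelihood scores for a single account.
    result = bom.check_account("@example_handle")   # hypothetical handle
    print(result["display_scores"])                 # human-readable scores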

Countries across the globe, including India, have warned social media companies that they must rein in fake news, and special working groups within the United Nations have explored regulation, questioning what responsibilities and standards social media companies should have under international law.

What’s Next

The challenge going forward: algorithm changes tend to happen in real time, in front of live audiences, and not every scenario has been mapped and tested in advance. This became apparent when a fake story about a Muslim man warning others of a planned terrorist attack in Slovakia went viral. Local police issued a statement correcting the story, but because the correction came from the police department's official Page, tweaks to the News Feed algorithm that demoted posts from Pages kept many Facebook users from ever seeing it.
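
To see how a well-intentioned ranking change can bury a correction, consider a toy sketch. Everything here is hypothetical; the PAGE_DEMOTION factor and engagement numbers are invented for illustration and are not Facebook's actual formula.

    # Hypothetical feed-ranking sketch: a multiplier that demotes Page posts
    # can rank an official correction below the hoax it corrects.
    from dataclasses import dataclass

    PAGE_DEMOTION = 0.2  # invented factor: Page posts score 20% of friend posts

    @dataclass
    class Post:
        text: str
        engagement: float  # toy signal: likes + shares + comments
        from_page: bool    # True if posted by a Page rather than a friend

    def rank_score(post: Post) -> float:
        score = post.engagement
        if post.from_page:
            score *= PAGE_DEMOTION  # the tweak: deprioritize Page content
        return score

    feed = [
        Post("Viral hoax shared by friends", engagement=900.0, from_page=False),
        Post("Police correction from official Page", engagement=1200.0, from_page=True),
    ]

    # Despite higher raw engagement, the correction sorts below the hoax.
    for post in sorted(feed, key=rank_score, reverse=True):
        print(f"{rank_score(post):8.1f}  {post.text}")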

As social media companies experiment with better ways to curb fake and misleading information, we should expect more such glitches, and potentially even more fake news, in the foreseeable future.

Impact

Labeling bots will continue to be problematic. Detection algorithms, for instance, could cast too wide a net, wrongfully flagging innocent content, such as posts composed with voice-to-text and other assistive tools used by people with disabilities. If not handled carefully by tech companies and regulators, labeling could also undermine freedom of expression in democracies.
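
The too-wide net is a classic threshold trade-off, sketched below with invented scores: lowering the bot-score cutoff catches more bots but begins flagging automated-looking human accounts, such as heavy users of assistive tools.

    # Hypothetical illustration of the detection-threshold trade-off.
    # All scores are invented; higher means "more bot-like".
    accounts = [
        ("spam_bot_1",         0.95, True),   # (name, bot_score, actually_a_bot)
        ("spam_bot_2",         0.88, True),
        ("voice_to_text_user", 0.62, False),  # assistive tools can look automated
        ("ordinary_user",      0.15, False),
    ]

    for threshold in (0.9, 0.6):
        flagged = [(name, is_bot) for name, score, is_bot in accounts
                   if score >= threshold]
        false_positives = [name for name, is_bot in flagged if not is_bot]
        print(f"threshold={threshold}: flagged={len(flagged)}, "
              f"false positives={false_positives}")

    # At 0.9 only true bots are flagged; at 0.6 the assistive-technology user
    # is flagged too, the "too-wide net" described above.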

Watchlist

Facebook, Google, Instagram, Snap, Twitter, the United Nations, regulators, digital advertisers, digital marketers.