In the past year, researchers showed how A.I. could be used to compose text so convincing that humans couldn’t tell it was machine-written. The team at OpenAI demonstrated why this was problematic, from mass-generating salacious social media posts and fake reviews to forging documents attributed to world leaders.
It turns out that A.I. can also be used to detect when text was machine-generated, even if we humans can’t spot the fake. That’s because text written by A.I. tends to follow the statistical patterns a language model has learned, and so shows less linguistic variation than human writing.
Researchers at the MIT-IBM Watson AI Lab and at Harvard University developed the Giant Language Model Test Room (GLTR), which checks how predictable each word in a passage is: machine-generated text is dominated by words a language model ranks as highly likely, while human writing draws on less probable choices more often.
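The core idea behind this kind of detector can be sketched in a few lines. The toy below is not GLTR itself, just an illustration of the principle under simplified assumptions: it trains a tiny bigram model on a sample corpus, then scores a passage by the fraction of its words that fall in the model’s top-k predictions for the preceding word. Machine-like (highly predictable) text scores near 1.0; more surprising text scores lower.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words tend to follow it.

    A stand-in for the large language model a real detector
    like GLTR would use.
    """
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def top_k_fraction(model, text, k=2):
    """Fraction of words that rank among the model's top-k
    predictions given the preceding word.

    High values suggest statistically 'safe', machine-like word
    choices; lower values suggest more human variation.
    """
    words = text.lower().split()
    hits, total = 0, 0
    for prev, nxt in zip(words, words[1:]):
        if prev not in model:
            continue  # unseen context: can't score this word
        total += 1
        top = [w for w, _ in model[prev].most_common(k)]
        if nxt in top:
            hits += 1
    return hits / total if total else 0.0

# Hypothetical usage: text that echoes the training patterns
# scores higher than text that deviates from them.
corpus = ["the cat sat on the mat", "the dog sat on the rug"]
model = train_bigram_model(corpus)
predictable = top_k_fraction(model, "the cat sat on the mat")
surprising = top_k_fraction(model, "the mat sat under a cat")
```

A real detector replaces the bigram counts with a neural language model’s full probability distribution over the vocabulary, but the scoring logic is the same: tally how often the author’s actual word choice was one the model expected.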
This is good news for those concerned about the spread of automated misinformation campaigns.