Summary of “An AI that writes convincing prose risks mass-producing fake news”

The researchers set out to develop a general-purpose language algorithm, trained on a vast amount of text from the web, that would be capable of translating text, answering questions, and performing other useful tasks.
“We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” says Jack Clark, policy director at OpenAI. Clark says the program hints at how AI might be used to automate the generation of convincing fake news, social-media posts, or other text content.
Fake news is already a problem, but if it were automated, it might be harder to tune out.
Clark says it may not be long before AI can reliably produce fake stories, bogus tweets, or duplicitous comments that are even more convincing.
OpenAI does fundamental AI research but also plays an active role in highlighting the potential risks of artificial intelligence.
The OpenAI algorithm is not always convincing to the discerning reader.
Given a prompt, it often produces superficially coherent gibberish or text that appears to have been cribbed from online news sources.
“You don’t need AI to create fake news,” Clark says.

The original article.