OpenAI refuses to make its AI writer open source over fake news fears

Although the technology can effortlessly write stories, researchers fear it could be used maliciously


An AI system accidentally found to be capable of writing sophisticated fake news stories will be withheld from the open source community over fears it could be abused for malicious purposes.

Researchers at OpenAI said they had been attempting to create an algorithm that could produce natural-sounding text based on extensive research and language processing, but soon realised it was capable of creating fake news stories, taking cues from the 8 million web pages it trawled to learn about language.

Given how convincing some of these stories are, it's believed the system could be exploited to spread malicious or inflammatory content.

"We started testing it, and quickly discovered it's possible to generate malicious-esque content quite easily," said Jack Clark, policy director at OpenAI, speaking to the BBC.


OpenAI's researchers trained the system on content posted to the link-sharing site Reddit, keeping only posts that had achieved a "karma" score of 3 or more in an effort to filter for more reliable sources. The system then uses these sources to write stories, inventing attributions and quotes to make them sound more convincing.
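The karma-based filtering described above can be sketched roughly as follows. The threshold of 3 comes from the article; the data layout, function name, and field names are assumptions for illustration, not OpenAI's actual pipeline.

```python
# Illustrative sketch of karma-threshold filtering: keep only links
# whose Reddit post earned a karma score of 3 or more, on the
# assumption that upvoted links point to more reliable sources.
# The dict structure here is hypothetical.

KARMA_THRESHOLD = 3

def filter_reliable_links(posts):
    """Return URLs of posts whose karma meets the threshold.

    `posts` is assumed to be a list of dicts with "url" and "karma" keys.
    """
    return [p["url"] for p in posts if p["karma"] >= KARMA_THRESHOLD]

if __name__ == "__main__":
    sample = [
        {"url": "https://example.com/a", "karma": 5},
        {"url": "https://example.com/b", "karma": 1},
        {"url": "https://example.com/c", "karma": 3},
    ]
    print(filter_reliable_links(sample))
```

In the reported setup, the text found at the surviving URLs then becomes the training corpus from which the model learns to generate prose.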

However, researchers found that the output occasionally included inaccuracies, with names and places used incorrectly. The BBC references one story in which a person named "Paddy Power" led a protest, for example.

The research will now serve as a platform to demonstrate that AI applications must be deployed carefully, and to open a debate about whether AI should ever be used for tasks such as news writing.

"It's not a matter of whether nefarious actors will utilise AI to create convincing fake news articles and deepfakes, they will," Brandie Nonnecke, director of Berkeley's CITRIS Policy Lab told the BBC.

"Platforms must recognise their role in mitigating its reach and impact. The era of platforms claiming immunity from liability over the distribution of content is over. Platforms must engage in evaluations of how their systems will be manipulated and build in transparent and accountable mechanisms for identifying and mitigating the spread of maliciously fake content."

