OpenAI launches language tool once deemed ‘too dangerous to make public’


OpenAI has released an API for its highly sophisticated text completion tool as the organisation’s first commercial venture, a departure from the non-profit principles upon which it was established.

The organisation’s AI-powered “text in, text out” interface is general-purpose, unlike most AI applications, which are built for specific use cases. Powered by the GPT-3 model, the API returns a text completion for any text prompt, attempting to match the pattern it has been given.
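For illustration, a call to a text-completion interface of this kind might look like the sketch below, which assumes the beta-era openai Python package; the engine name, parameters, prompt, and key are placeholders for this example rather than confirmed details of OpenAI’s private beta.

```python
# A minimal sketch of a "text in, text out" completion call, assuming
# the beta-era openai Python package. Engine name, key, and prompt are
# illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # keys were issued to private beta participants

# Provide a prompt; the API returns a completion that attempts to
# continue the pattern established in the prompt.
response = openai.Completion.create(
    engine="davinci",          # one of the GPT-3 engines
    prompt="Translate English to French:\n\ncheese =>",
    max_tokens=16,             # cap the length of the completion
    temperature=0.3,           # lower values give more predictable output
)

print(response.choices[0].text)
```

Because the interface is general-purpose, the same call can in principle handle translation, summarisation, or question answering simply by changing the prompt, rather than requiring a purpose-built model for each task.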

OpenAI has been operating as a not-for-profit research and deployment company since 2015, aiming to ensure AI is developed for the benefit of all. The organisation has achieved several major feats so far, including developing a robotic hand with extraordinary dexterity and, earlier this year, building an AI supercomputer in partnership with Microsoft.

The San Francisco-based non-profit also made headlines last year for publicising details around GPT-2, the predecessor to its newly released text completion tool, which was deemed at the time too dangerous to make public.

Researchers said they had set out to create an algorithm that could produce natural-sounding text, built on extensive research into language processing. They soon realised the tool was capable of fabricating convincing fake news stories, drawing on the eight million web pages it had scanned to learn about language.

In a departure from its norms, OpenAI has finally released the tool, but through commercial channels, as an API in private beta, rather than distributing it as open source. The reason, the organisation says, is that developing commercial products is one way to ensure it has enough funding to succeed.

“We also believe that safely deploying powerful AI systems in the world will be hard to get right,” the company said in an FAQ. “In releasing the API, we are working closely with our partners to see what challenges arise when AI systems are used in the real world. This will help guide our efforts to understand how deploying future AI systems will go, and what we need to do to make sure they are safe and beneficial for everyone.”

“In addition to being a revenue source to help us cover costs in pursuit of our mission, the API has pushed us to sharpen our focus on general-purpose AI technology – advancing the technology, making it usable, and considering its impacts in the real world.”

Another reason for releasing the technology as an API, the company added, is that many of the underlying models are gigantic and take considerable expertise to develop and deploy, making them too expensive to run for all but the largest firms. Offered as an API instead, the technology can be embedded into other products and AI systems, putting it within reach of smaller businesses.

OpenAI stands by its claim that the technology is dangerous and can be easily misused. Releasing it in this manner gives the organisation greater oversight of who is using it and why, with API access terminated for obviously harmful use cases such as harassment, spam, or radicalisation, the company explained.

“Ultimately, our API models do exhibit biases (as shown in the GPT-3 paper) that will appear on occasion in generated text,” OpenAI added. “Our API models could also cause harm in ways that we haven’t thought of yet.”

To address these concerns, OpenAI has pledged to develop usage guidelines to help its users learn from one another and mitigate these problems in practice. The organisation is also working with users to understand their use cases and to develop tools that label, and intervene on, manifestations of harmful bias. Finally, it is conducting research into harmful bias and broader issues of fairness and representation.

Keumars Afifi-Sabet
Contributor

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. A regular contributor to other tech sites in the past, he can these days be found at LiveScience, where he runs its Technology section.