
OpenAI's new AI model promises to be “more truthful and less toxic”

The organisation has enlisted human labelers to help train the new model, but warns this could introduce added bias

OpenAI has made a new version of its GPT-3 AI language model available that promises to be better at following user intentions while also producing results that are more truthful and less toxic.

The OpenAI API is powered by GPT-3 language models that can be used to perform natural language tasks using carefully engineered text prompts. However, the models can also produce outputs that are untruthful, toxic, or reflect harmful sentiments.

The organisation's AI models have been criticised in the past for a range of shortcomings, including biased output targeting specific genders, races, and religions. OpenAI once deemed an earlier model too dangerous to make public because it could generate convincing fake news stories, drawing on the eight million web pages it had scanned to learn about language.

The organisation said this is partly because GPT-3 is trained to predict the next word across a large dataset of internet text, rather than to safely perform the language tasks the user wants.
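The next-word-prediction objective can be illustrated with a toy model. The sketch below uses a simple bigram counter, which is an assumption made purely for illustration; GPT-3's actual architecture (a large transformer neural network) is far more sophisticated, but the training signal is the same idea: given the text so far, predict what comes next.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in a
# tiny corpus, then predict the most frequently seen successor.
corpus = "the model predicts the next word given the previous word".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("next"))   # the only word ever seen after "next" is "word"
```

A model trained this way optimises for plausible continuations, not for truthfulness or safety, which is the gap the article describes.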

To make its models safer, and more aligned with users, OpenAI used a technique known as reinforcement learning from human feedback (RLHF), using human helpers called labelers to assist the AI in its learning.

“On prompts submitted by our customers to the API, our labelers provide demonstrations of the desired model behavior, and rank several outputs from our models. We then use this data to fine-tune GPT-3,” said the company.
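The ranking step described in that quote can be sketched in miniature. In the toy example below, a labeler's ranking of candidate outputs is turned into pairwise preferences, and a reward model is scored with a pairwise logistic (Bradley-Terry-style) loss so that preferred outputs get higher reward. The two-dimensional feature function and the weight vectors are invented for illustration; OpenAI's actual reward model is a neural network trained on real labeler data.

```python
import math

def features(text):
    # Hypothetical features: word count, and occurrences of a "toxic" marker word.
    words = text.lower().split()
    return [len(words), words.count("stupid")]

def reward(weights, text):
    # Linear reward model: dot product of weights and features.
    return sum(w * f for w, f in zip(weights, features(text)))

def pairwise_loss(weights, ranked_outputs):
    """Sum of -log sigmoid(r_better - r_worse) over every ranked pair.

    `ranked_outputs` is a labeler's ranking, best first; each (better, worse)
    pair becomes one training example for the reward model.
    """
    loss = 0.0
    for i, better in enumerate(ranked_outputs):
        for worse in ranked_outputs[i + 1:]:
            margin = reward(weights, better) - reward(weights, worse)
            loss += math.log(1 + math.exp(-margin))
    return loss

# One labeler's ranking of three candidate outputs, best first:
ranking = [
    "Here is a polite, helpful answer.",
    "A short answer.",
    "What a stupid question.",
]

# A weight vector that penalises the toxic marker fits the ranking better
# (lower loss) than one that rewards it:
print(pairwise_loss([0.0, -5.0], ranking) < pairwise_loss([0.0, 5.0], ranking))
```

In the full RLHF pipeline, a reward model fit this way is then used as the training signal for fine-tuning the language model itself via reinforcement learning; the demonstrations labelers write are used separately, for supervised fine-tuning.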

It found the resulting models are much better at following instructions than GPT-3. They also make up facts less often and show small decreases in toxicity. The organisation's labelers prefer outputs from its new 1.3B-parameter InstructGPT model over outputs from its 175B-parameter GPT-3 model, even though InstructGPT has more than 100 times fewer parameters.

These InstructGPT models have been in beta on the API for over a year and are now the default language models accessible on OpenAI’s API.

“We believe that fine-tuning language models with humans in the loop is a powerful tool for improving their safety and reliability, and we will continue to push in this direction,” the organisation explained.

However, OpenAI outlined some limitations of this model too. The InstructGPT models, for example, are far from fully aligned or fully safe: they can still generate toxic outputs, fabricate facts, and produce sexual or violent content without explicit prompting.


It said that to support the safety of its API, it will continue to review potential applications before they go live, provide content filters for detecting unsafe completions, and monitor for misuse.

OpenAI also highlighted that in many cases, aligning to the average labeler preference may not be desirable. The example it gave is that when generating text that disproportionately affects a minority group, the preferences of that group should be weighted more heavily.

“Right now, InstructGPT is trained to follow instructions in English; thus, it is biased towards the cultural values of English-speaking people,” it said. “We are conducting research into understanding the differences and disagreements between labelers’ preferences so we can condition our models on the values of more specific populations.”
