OpenAI found a way to make AI models more logical and avoid hallucinations
Although AI models are advanced and can do extraordinary things, they are still capable of making mistakes and producing incorrect answers, known as hallucinations.
All of the major AI chatbots, including ChatGPT and Google Bard, are prone to these hallucinations. Both OpenAI and Google even include disclosures that their chatbots may produce incorrect information.
Also: ChatGPT vs Bing Chat vs Google Bard? Which is the best AI chatbot?
“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers,” says OpenAI in a ChatGPT blog post.
The creation of false information has led to widespread concerns about the dissemination of misinformation and its potential negative consequences.
In a new research post, OpenAI shares that it may have found a way to make AI models act more logically and avoid hallucinations.
OpenAI trained a model capable of solving complex mathematical problems through “process supervision,” a method that provides feedback on each individual reasoning step, as opposed to “outcome supervision,” which provides feedback only on the final result.
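To make the distinction concrete, here is a minimal, hypothetical sketch of how the two feedback schemes might score a step-by-step solution; the data and function names are illustrative and are not taken from OpenAI's paper or code:

    # Hypothetical illustration: scoring a model-written math solution
    # under outcome supervision vs. process supervision.
    solution_steps = [
        ("12 * 15 = 180", True),       # correct step
        ("180 + 7 = 186", False),      # arithmetic slip
        ("Final answer: 186", False),  # wrong because of the previous step
    ]

    def outcome_supervision(steps):
        # One reward for the whole solution, judged only by the final answer.
        return 1 if steps[-1][1] else 0

    def process_supervision(steps):
        # One reward per step, so the training signal points at the exact
        # place where the reasoning went wrong.
        return [1 if ok else 0 for _, ok in steps]

    print(outcome_supervision(solution_steps))  # 0  (entire solution penalized)
    print(process_supervision(solution_steps))  # [1, 0, 0]

In this toy case, outcome supervision only tells the model that the answer was wrong, while process supervision identifies the single step that caused the error.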
Also: I’ve tested a lot of AI tools for work. Here are my 5 favorite so far
In the research paper, OpenAI tested both methods using the MATH dataset and found that the process supervision method led to “significantly better performance.”
“Process supervision is also more likely to produce interpretable reasoning, since it encourages the model to follow a human-approved process,” says OpenAI in the research paper.
Also: How ChatGPT can rewrite and improve your existing code
OpenAI does note that it is unknown how broadly these results will apply beyond mathematical problems, but says it is still important to explore process supervision in other domains.