
OpenAI unveils method to mitigate ChatGPT's hallucination problem

(Photo = shutterstock)

OpenAI has unveiled a new method to mitigate the hallucination problem of 'ChatGPT' by having the model reason in a more human-like, step-by-step way.

According to CNBC, in a paper published on the 31st (local time), OpenAI proposed a way to reduce artificial intelligence (AI) hallucinations: during multi-step reasoning, train the model by rewarding each individual reasoning step rather than rewarding only the final answer ChatGPT produces.

AI hallucinations refer to large language models (LLMs) such as OpenAI's ChatGPT and Google's 'Bard' confidently presenting completely fabricated information as if it were true.

Examples of AI hallucinations include Google's Bard falsely claiming in February that the James Webb Space Telescope took the first picture of an extrasolar planet, and a New York attorney recently citing fake precedents made up by ChatGPT in court.

"Even the most advanced models tend to produce falsehoods," OpenAI said in the paper. "They show a tendency to invent facts in moments of uncertainty."

The paper then pointed out that such hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logical error at one stage can compound into larger errors as the stages progress.

OpenAI emphasized that it is effective to train the LLM with 'process supervision', which rewards the answer at each reasoning step, instead of 'outcome supervision', which rewards only the final answer to a given question.
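
To make the distinction concrete, the toy Python sketch below shows the two reward schemes side by side. The step strings, labels, and function names are illustrative assumptions only and do not come from OpenAI's paper.

```python
# Toy illustration of outcome supervision vs. process supervision.
# All names here (steps, step_labels, reward functions) are hypothetical.

steps = [
    "Step 1: 12 * 4 = 48",   # correct
    "Step 2: 48 + 7 = 54",   # arithmetic error (should be 55)
    "Step 3: answer = 54",   # final answer, wrong
]
step_labels = [1.0, 0.0, 0.0]  # per-step judgment from a human or verifier

def outcome_reward(steps: list[str], final_correct: bool) -> list[float]:
    """Outcome supervision: one reward for the whole chain, based only on the final answer."""
    return [0.0] * (len(steps) - 1) + [1.0 if final_correct else 0.0]

def process_reward(step_labels: list[float]) -> list[float]:
    """Process supervision: a reward for every individual reasoning step."""
    return list(step_labels)

print(outcome_reward(steps, final_correct=False))  # [0.0, 0.0, 0.0] -> no signal about where it went wrong
print(process_reward(step_labels))                 # [1.0, 0.0, 0.0] -> error localized to step 2
```

With per-step rewards, the training signal pinpoints the step where the reasoning broke down instead of only penalizing the final answer.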

OpenAI introduced reinforcement learning from human feedback (RLHF) into ChatGPT to generate answers tailored to the user's intent. In RLHF, the LLM is given a prompt and generates several outputs, and a human rater ranks the generated texts from best to worst. A reward model is then trained to predict these scores from the LLM's text.
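
A minimal sketch of that reward-modelling step might look like the following. The tiny scoring network, the random feature tensors, and the pairwise Bradley-Terry-style loss are stand-ins chosen for illustration; a real pipeline would score (prompt, completion) text with an LLM-based model.

```python
# Minimal sketch of training a reward model from human rankings (pairwise form).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy scorer standing in for an LLM-based reward model."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)  # one scalar reward per example

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each pair: features of the completion a human ranked higher vs. the one ranked lower.
preferred = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for _ in range(100):
    r_pref, r_rej = model(preferred), model(rejected)
    # Pairwise loss: push the preferred completion's reward above the rejected one's.
    loss = -F.logsigmoid(r_pref - r_rej).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```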

The new process-supervision approach applies this RLHF-style reward feedback to each individual step of multi-step reasoning, rather than only to the final output.
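
One way such step-level rewards could be used at inference time is to score every step of several candidate solutions and keep the chain whose weakest step looks strongest, as in the hypothetical sketch below. The dummy scorer and the minimum-score aggregation are illustrative assumptions, not details from OpenAI's paper.

```python
# Sketch of using step-level (process) rewards to pick the best of several candidate solutions.

def score_step(prompt: str, previous_steps: list[str], step: str) -> float:
    """Placeholder for a trained process reward model scoring one reasoning step."""
    return 1.0 if step.strip() else 0.0  # dummy heuristic, not a real model

def score_solution(prompt: str, steps: list[str]) -> float:
    # Score every step given the steps before it, then let the weakest step decide.
    step_scores = [score_step(prompt, steps[:i], s) for i, s in enumerate(steps)]
    return min(step_scores) if step_scores else 0.0

def best_candidate(prompt: str, candidates: list[list[str]]) -> list[str]:
    # Prefer the multi-step solution whose weakest step is judged strongest.
    return max(candidates, key=lambda steps: score_solution(prompt, steps))

print(best_candidate("What is 12 * 4 + 7?",
                     [["12 * 4 = 48", "48 + 7 = 55", "answer: 55"],
                      ["12 * 4 = 48", "", "answer: 54"]]))
```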

OpenAI explained that this approach can also lead to more 'explainable AI', since it encourages AI models to follow a thought process similar to that of humans.

There are also skeptical views. Ben Winters, senior counsel at the Electronic Privacy Information Center, noted that this alone is unlikely to ease concerns about misinformation and the harmful consequences of AI.

Suresh Venkatasubramanian, director of Brown University's Center for Technology Responsibility, pointed out that the way large language models work is generally unstable, so what works in one setting may not work in another.

AI Times Reporter Chan Park cpark@aitimes.com
