Avoiding ‘hallucinations’… AI chatbot with improved reliability appears


(Photo = shutterstock)

An artificial intelligence (AI) chatbot focused on accuracy has been developed. It is a model designed to compensate for a shortcoming of AI chatbots such as ‘ChatGPT’: giving plausible answers that differ from the facts.

TechCrunch reported on the 8th (local time) that the US startup Forethought has developed ‘SupportGPT’, based on OpenAI’s GPT series of large language models, and has entered beta testing.

AI chatbots, including ‘ChatGPT’, exhibit a ‘hallucination’ phenomenon in which they convincingly give inaccurate answers despite their impressive performance. ‘SupportGPT’ is designed to draw on a narrow set of answers to avoid this.

Deon Nicolas, CEO of Forethought, said hallucinations occur when the AI goes off target, and expressed hope that they can be reduced by limiting the set of answers the model can access.

For instance, if a user discussing a support topic asks an out-of-scope question, such as about the weather, the tool prompts them to return to the context of the original conversation while showing the types of questions that can be answered. This is a way of limiting the range of answers so that the chatbot does not give an incorrect one.
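The mechanism described above can be illustrated with a minimal sketch. This is not Forethought’s actual implementation; the keyword match below is a hypothetical stand-in for a real intent classifier, and the topics and canned answers are invented for illustration.

```python
# Sketch of the answer-set-limiting idea: answer only in-scope support
# questions from a fixed set, and redirect everything else back to the
# supported topics. All topics/answers here are hypothetical examples.

ALLOWED_TOPICS = {
    "billing": "You can update your payment method under Settings > Billing.",
    "password": "Use the 'Forgot password' link on the sign-in page to reset it.",
}

def answer(query: str) -> str:
    """Return a canned in-scope answer, or a redirect for out-of-scope queries."""
    for topic, canned_answer in ALLOWED_TOPICS.items():
        if topic in query.lower():
            return canned_answer
    # Out-of-scope query (e.g. asking about the weather): steer the user
    # back and list the kinds of questions that can be answered.
    topics = ", ".join(ALLOWED_TOPICS)
    return f"I can only help with support questions about: {topics}."
```

Constraining the model to a known answer set trades coverage for reliability: the chatbot refuses questions it cannot ground rather than improvising a plausible but wrong reply.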

In this way, users get more focused answers to their questions, CEO Nicolas explained. Forethought also announced a beta version of ‘SupportGPT Playground’, which allows companies to experiment with ‘SupportGPT’ for business use by applying their own data.

Forethought has been developing solutions for integrating generative AI into enterprise operations and has raised $92 million to date.

Reporter Jeong Byeong-il jbi@aitimes.com

