LLM | Hallucination | Memory
A hallucination is a fact, not an error; what’s erroneous is a judgment based upon it. — Bertrand Russell
Large language models (LLMs) have shown remarkable performance, but they are still prone to hallucinations. For sensitive applications in particular, this is no small problem, and several mitigation strategies have been studied; while these have helped reduce hallucinations, the issue persists.
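As a minimal sketch of one studied mitigation, self-consistency checking samples the model several times and treats disagreement across samples as a hallucination-risk signal, letting the caller abstain rather than trust a single answer. The function names (self_consistency_check, flaky_model) and the min_agreement threshold below are illustrative assumptions, not a reference implementation; the generate callable stands in for any real LLM call.

from collections import Counter
from typing import Callable, List, Tuple

def self_consistency_check(
    generate: Callable[[str], str],
    prompt: str,
    n_samples: int = 5,
    min_agreement: float = 0.6,
) -> Tuple[str, bool]:
    """Sample the model several times and measure answer agreement.

    Low agreement across samples serves as a proxy for hallucination
    risk; the caller can then abstain or escalate instead of answering.
    """
    answers: List[str] = [generate(prompt).strip().lower() for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    return top_answer, agreement >= min_agreement

if __name__ == "__main__":
    import random

    # Toy stand-in for a real LLM call, for illustration only.
    def flaky_model(prompt: str) -> str:
        return random.choice(["Paris", "Paris", "Paris", "Lyon"])

    answer, trusted = self_consistency_check(flaky_model, "Capital of France?")
    print(answer, "-> trusted" if trusted else "-> flag as possible hallucination")

The design choice here is deliberate: agreement does not prove correctness, since a model can be consistently wrong, but low agreement is a cheap, model-agnostic warning sign that requires no retraining.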
Why hallucinations arise remains an open question, although there are several theories about what…