
Hallucination

A hallucination is a plausible-sounding but factually wrong answer from a language model — e.g. a made-up quote or an invented software feature.


In detail

Hallucinations happen because LLMs are trained to predict the most probable next token, not to verify facts. We reduce them to <2% with these countermeasures:

  • RAG: answers are generated only from your real documents
  • Source enforcement: every statement must cite a source ID, otherwise the answer is rejected (see the first sketch after this list)
  • Confidence thresholds: low confidence triggers a human handoff instead of a guess (also covered in the first sketch)
  • Eval suites: regression tests built from real customer questions that every new version must pass (see the second sketch)
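
The second and third countermeasures combine naturally into a single gate. Below is a minimal Python sketch, assuming answers embed citations as "[src:<id>]" tags; the tag format, the 0.8 cutoff, and every name here are illustrative assumptions, not a fixed API:

  import re

  # Assumed citation format: a "[src:<id>]" tag attached to each statement.
  SOURCE_ID_PATTERN = re.compile(r"\[src:([A-Za-z0-9_-]+)\]")
  CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff; tune against your eval suite

  def validate_answer(answer: str, confidence: float, known_ids: set[str]) -> str | None:
      """Return the answer if it passes both gates, or None to trigger human handoff."""
      # Confidence threshold: low confidence -> human handoff instead of guessing.
      if confidence < CONFIDENCE_THRESHOLD:
          return None
      # Source enforcement: every sentence must cite at least one known source ID.
      for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
          cited = SOURCE_ID_PATTERN.findall(sentence)
          if not cited or any(cid not in known_ids for cid in cited):
              return None  # unsupported or unknown citation -> reject the answer
      return answer

  known = {"doc-42", "doc-7"}
  validate_answer("Refunds take 5 business days [src:doc-42].", 0.93, known)  # passes
  validate_answer("We also support crypto payouts.", 0.91, known)  # rejected: no source

Rejecting the whole answer rather than silently dropping the unsupported sentence keeps the failure mode safe: the system says nothing it cannot back up.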
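The eval suite then acts as a release gate. A minimal sketch, assuming a hypothetical answer_fn(question) -> str client and a tiny illustrative golden set; real suites are much larger:

  # Golden questions paired with a fact the answer must contain (illustrative data).
  GOLDEN_CASES = [
      ("How long do refunds take?", "5 business days"),
      ("Which plan includes SSO?", "Business plan"),
  ]

  def eval_pass_rate(answer_fn) -> float:
      """Fraction of golden questions whose answer contains the expected fact."""
      hits = sum(expected in answer_fn(question) for question, expected in GOLDEN_CASES)
      return hits / len(GOLDEN_CASES)

  # A new model version ships only if every case passes.
  assert eval_pass_rate(lambda q: "Refunds take 5 business days. "
                                  "The Business plan includes SSO.") == 1.0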
