Hallucination
A hallucination is a plausible-sounding but factually wrong answer from a language model — e.g. a made-up quote or an invented software feature.
In detail
Hallucinations happen because language models are trained to predict probable text, not to verify truth. We reduce them to under 2% with these countermeasures (a sketch of the source and confidence checks follows the list):
- RAG (retrieval-augmented generation): answer only on the basis of your real documents
- Source enforcement: every statement needs a source ID, otherwise the answer is rejected
- Confidence thresholds: low confidence → human handoff instead of guessing
- Eval suites of real customer questions that every new version must pass
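To make the source-enforcement and confidence-threshold checks concrete, here is a minimal Python sketch of how an answer could be gated before it reaches the user. Every name in it (`Answer`, `SOURCE_PATTERN`, `CONFIDENCE_THRESHOLD`, the `[src:...]` citation format, the example thresholds) is an illustrative assumption, not part of a specific product or library; a real pipeline would tune the threshold against its eval suite.

```python
import re
from dataclasses import dataclass

# Assumed citation format: each statement must carry a marker like "[src:pricing-3]".
SOURCE_PATTERN = re.compile(r"\[src:[A-Za-z0-9_-]+\]")
# Assumed cut-off; below it we hand off to a human instead of guessing.
CONFIDENCE_THRESHOLD = 0.75

@dataclass
class Answer:
    text: str
    confidence: float  # score in [0, 1], e.g. from the model or a reranker

def validate(answer: Answer) -> str:
    # Confidence threshold: escalate instead of guessing.
    if answer.confidence < CONFIDENCE_THRESHOLD:
        return "handoff"
    # Source enforcement: every sentence must cite at least one source ID.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.text.strip()) if s]
    if not sentences or not all(SOURCE_PATTERN.search(s) for s in sentences):
        return "rejected"
    return "accepted"

print(validate(Answer("Single sign-on is included in the Team plan [src:pricing-3].", 0.92)))  # accepted
print(validate(Answer("Single sign-on is included in the Team plan.", 0.92)))                  # rejected (no source)
print(validate(Answer("It might support SSO [src:faq-7].", 0.41)))                             # handoff (low confidence)
```

The design choice here is to fail closed: an unsourced or low-confidence answer is never shown, which trades a little coverage for a much lower hallucination rate.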
Related terms
- AI agent: a program built on a language model that completes tasks on its own. It understands a request, plans steps, calls tools, and responds with a result instead of just text.
- Prompt: the instruction a language model receives. It defines role, tone, and allowed actions, and provides the context for the response.