What is AI Hallucination?

AI hallucination occurs when an AI model generates factually incorrect information with apparent confidence.

Definition

AI hallucination occurs when a language model generates plausible-sounding but factually incorrect information, presenting invented facts, non-existent citations, wrong dates, or fabricated details as if they were true. Hallucinations arise because LLMs generate text by predicting statistically likely tokens rather than by retrieving verified facts. Retrieval-augmented generation (RAG) and tool use are the primary techniques for grounding AI outputs in verified sources.

Example

An LLM asked 'Who founded Stripe?' might confidently name the wrong co-founder or state an incorrect founding date. An AI agent with web search capability would look the answer up and respond accurately, grounded in retrieved data rather than statistical generation. The sketch below makes this pattern concrete.
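Here is a minimal Python sketch of the grounding pattern described above. All names in it (tiny_corpus, retrieve, answer) are illustrative assumptions, not a real LLM or search API: tiny_corpus stands in for a document store or web search, and retrieve for a real retrieval step. The key idea is that the answer comes only from retrieved text, and the system abstains when nothing is retrieved instead of inventing facts.

```python
from typing import Optional

# Hypothetical in-memory corpus standing in for a real document store
# or web-search backend; this is a sketch of the grounding pattern,
# not a production RAG implementation.
tiny_corpus = {
    "stripe": "Stripe was founded in 2010 by Patrick and John Collison.",
    "python": "Python was created by Guido van Rossum and first released in 1991.",
}

def retrieve(question: str) -> Optional[str]:
    """Naive keyword lookup standing in for a real search or vector-retrieval step."""
    for keyword, passage in tiny_corpus.items():
        if keyword in question.lower():
            return passage
    return None

def answer(question: str) -> str:
    """Answer only from retrieved text; abstain rather than guess (hallucinate)."""
    passage = retrieve(question)
    if passage is None:
        return "I don't have a verified source for that."
    return f"{passage} (source: retrieved passage)"

print(answer("Who founded Stripe?"))
# Stripe was founded in 2010 by Patrick and John Collison. (source: retrieved passage)
```

Production RAG systems replace the keyword lookup with embedding-based search and pass the retrieved passages to the model as context, but the discipline is the same: cite a source or abstain.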

AI Hallucination vs. Bias: What's the difference?

AI Hallucination

Hallucination is factual incorrectness presented with apparent confidence: the model invents facts, citations, or details that are not true.

Bias

Bias is a systematic skew toward certain viewpoints, groups, or outcomes, typically inherited from training data. A biased answer can be factually accurate yet slanted; a hallucinated answer is simply wrong.
