What is Hallucination?
In AI, hallucination occurs when a language model generates confident, plausible-sounding output that is factually incorrect or fabricated, making it a fundamental challenge for agent reliability.
WHY IT MATTERS
Hallucination is the Achilles' heel of LLM-powered systems. A model can state incorrect facts with complete confidence, invent citations that don't exist, or generate code that looks correct but contains subtle bugs.
For AI agents, hallucination risks compound. An agent might hallucinate a wallet address, fabricate a token price, or invent a protocol that doesn't exist, and then act on that hallucinated information. In financial contexts, this can mean sending funds to the wrong address or executing trades based on phantom data.
Mitigations include retrieval-augmented generation (RAG), chain-of-thought reasoning, output verification, and external validation layers. No single technique eliminates hallucination; defense in depth is required.
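As a rough illustration of the defense-in-depth point, the sketch below layers two independent checks on a model's claimed fact: a cross-check against a trusted data source and a cheap structural sanity check. The `trusted_prices` store and all function names are hypothetical stand-ins, not any specific library's API.

```python
# Minimal sketch of layered output verification (defense in depth).
# `trusted_prices` stands in for an authoritative source (e.g. an oracle
# or a RAG index); every name here is illustrative, not a real API.

trusted_prices = {"ETH": 3100.0, "BTC": 64000.0}  # hypothetical ground truth

def check_against_source(token: str, claimed_price: float,
                         tolerance: float = 0.05) -> bool:
    """Layer 1: cross-check the model's claim against a trusted source."""
    actual = trusted_prices.get(token)
    if actual is None:
        return False  # the model may have invented the token entirely
    return abs(claimed_price - actual) / actual <= tolerance

def check_plausibility(claimed_price: float) -> bool:
    """Layer 2: structural sanity check, independent of layer 1."""
    return claimed_price > 0

def verify_claim(token: str, claimed_price: float) -> bool:
    # Every layer must pass; any single failure rejects the output.
    return check_plausibility(claimed_price) and check_against_source(token, claimed_price)

# A hallucinated price fails verification even though it looks plausible.
print(verify_claim("ETH", 3150.0))    # True  (within tolerance of source)
print(verify_claim("ETH", 9999.0))    # False (contradicts trusted source)
print(verify_claim("FAKECOIN", 1.0))  # False (token not in trusted source)
```

The point of the two layers is that they fail independently: a fabricated token name is caught by the source lookup even when the number itself passes the sanity check.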
HOW POLICYLAYER USES THIS
PolicyLayer acts as a hallucination safety net for financial agents. Even if an agent hallucinates a transaction target or amount, PolicyLayer validates every action against whitelists, spending limits, and allowed recipients — catching invalid transactions before they reach the blockchain.
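To make the pattern concrete, here is a minimal sketch of a pre-execution policy gate of the kind described above: a proposed transaction is rejected unless its recipient is whitelisted and its amount is within a spending limit. This illustrates the general pattern only; it is not PolicyLayer's actual API, and the `Policy` and `Transaction` names and fields are hypothetical.

```python
# Hypothetical sketch of a policy gate in front of transaction execution.
# Not PolicyLayer's real API; types and field names are illustrative.
from dataclasses import dataclass

@dataclass
class Transaction:
    recipient: str   # destination wallet address
    amount: float    # amount in some base unit

@dataclass
class Policy:
    allowed_recipients: set[str]  # whitelist of known-good addresses
    spending_limit: float         # maximum amount per transaction

    def validate(self, tx: Transaction) -> tuple[bool, str]:
        # Reject anything the agent may have hallucinated: an unknown
        # recipient or an out-of-bounds amount never reaches the chain.
        if tx.recipient not in self.allowed_recipients:
            return False, f"recipient {tx.recipient} is not whitelisted"
        if tx.amount > self.spending_limit:
            return False, f"amount {tx.amount} exceeds limit {self.spending_limit}"
        return True, "ok"

# Shortened placeholder addresses for illustration only.
policy = Policy(allowed_recipients={"0xa11ce001", "0xb0b00002"},
                spending_limit=500.0)

# An agent-proposed transaction with a hallucinated address is blocked
# before execution, regardless of how confident the model was.
ok, reason = policy.validate(Transaction(recipient="0xdeadbeef", amount=100.0))
print(ok, reason)  # False recipient 0xdeadbeef is not whitelisted
```

The design choice worth noting is that the gate validates the action, not the model's reasoning: even a perfectly convincing hallucination fails a deterministic whitelist check.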