What is Grounding?
Grounding in AI refers to techniques that anchor a language model's outputs to verifiable, real-world data sources — reducing hallucination and improving factual accuracy.
WHY IT MATTERS
An ungrounded LLM is writing fiction that sounds like fact. Grounding connects the model to reality: real databases, live APIs, verified documents, and authoritative sources.
Grounding techniques include retrieval-augmented generation (RAG), tool use (calling APIs for live data), web search integration, and citation enforcement. The goal is to ensure every claim can be traced to a verifiable source.
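The retrieval-and-citation pattern can be sketched in a few lines. This is a minimal illustration, not a production system: the corpus, the document ids, and the keyword-overlap scoring are all hypothetical stand-ins for a real vector store and retriever. The point is the shape of the technique — retrieve first, then constrain the model to answer only from the retrieved sources and cite them.

```python
import re

# Hypothetical in-memory corpus; a real system would use a vector store.
DOCS = {
    "doc-1": "ETH mainnet switched to proof-of-stake in September 2022.",
    "doc-2": "RAG retrieves supporting documents before the model answers.",
}

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query
    (a stand-in for embedding similarity search)."""
    q = _tokens(query)
    scored = sorted(
        DOCS.items(),
        key=lambda kv: len(q & _tokens(kv[1])),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query: str) -> str:
    """Build a prompt that restricts the model to retrieved sources
    and requires citations — the citation-enforcement step."""
    sources = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (
        "Answer using ONLY the sources below and cite their ids.\n"
        f"{context}\n"
        f"Question: {query}"
    )

print(grounded_prompt("What is RAG?"))
```

The prompt that reaches the model now carries the evidence inline, so every claim in the answer can be traced back to a document id rather than to the model's parametric memory.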
For financial agents, grounding means checking real balances, real prices, and real protocol states — not relying on the model's memory of what ETH was worth during training.
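As a sketch of that idea: the functions below are hypothetical stand-ins for a real price oracle and an on-chain balance query (the names `fetch_eth_price`, `fetch_balance`, and all values are invented for illustration). What matters is the pattern — the agent's decision reads live state at decision time instead of trusting a number the model remembers from training.

```python
def fetch_eth_price() -> float:
    """Stand-in for a live price-oracle or exchange API call."""
    return 2500.0  # hypothetical live quote in USD

def fetch_balance(address: str) -> float:
    """Stand-in for an on-chain balance query."""
    return 1.2  # hypothetical ETH balance

def can_afford(address: str, usd_amount: float) -> bool:
    """Ground the affordability check in live data: fetch the current
    price and balance rather than asking the model to recall them."""
    balance_usd = fetch_balance(address) * fetch_eth_price()
    return balance_usd >= usd_amount

print(can_afford("0xabc...", 2000.0))
```

In a deployed agent, the two fetch functions would be real tool calls; the key design choice is that the model never supplies the price or balance itself — it only reasons over values the tools return.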