What is Function Calling?
Function calling is the capability of a large language model to generate structured output that specifies which external function to invoke and with what arguments. It enables LLMs to interact with APIs, databases, and real-world systems.
WHY IT MATTERS
LLMs are text-in, text-out systems. They cannot natively send emails, query databases, or modify files. Function calling bridges this gap — the model outputs structured JSON specifying a function name and parameters, and the application executes it.
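The loop above can be sketched in a few lines. The function name, handler, and exact JSON wrapper are illustrative (the real wrapper format varies by provider), but the shape is the same: the model emits JSON naming a function and its arguments, and the application parses it and dispatches to real code.

```python
import json

# Hypothetical handler the application exposes to the model.
def send_email(to: str, subject: str) -> str:
    return f"email to {to}: {subject}"

# Registry mapping function names to local handlers.
HANDLERS = {"send_email": send_email}

# Structured output from the model: a function name plus arguments.
model_output = '{"name": "send_email", "arguments": {"to": "ops@example.com", "subject": "Disk alert"}}'

call = json.loads(model_output)
result = HANDLERS[call["name"]](**call["arguments"])
print(result)  # email to ops@example.com: Disk alert
```

The model never executes anything itself; it only produces the JSON. Execution, and therefore responsibility for side effects, stays in the application.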
OpenAI popularised the pattern in 2023, and it is now standard across all major LLM providers. You define available functions (with names, descriptions, and parameter schemas), the model decides when and how to call them, and your code handles execution.
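A function definition in the OpenAI-style "tools" format looks like the sketch below: a name, a description the model uses to decide when to call it, and a JSON Schema describing the parameters. The `get_weather` function itself is a made-up example.

```python
# Illustrative tool definition: name, description, and a JSON Schema
# for the parameters. The model reads this to decide when and how to call.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}
```

A list of such definitions is passed alongside the conversation; the model may respond with ordinary text or with a call to one of the declared functions.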
Function calling is what makes AI agents possible. Without it, an LLM can only suggest actions. With it, the LLM can trigger real-world operations — file writes, shell commands, API calls, infrastructure changes. This is precisely why policy enforcement on function calls matters: every call is a potential side effect that needs governance.
HOW POLICYLAYER USES THIS
Intercept enforces YAML-defined policies on function calls made over the Model Context Protocol (MCP). When an agent uses function calling to invoke an MCP tool, Intercept evaluates the call (checking the function name against allow/deny lists, validating arguments against constraints, and enforcing rate limits) before forwarding it to the server. Denied calls are blocked and logged.
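The evaluation step can be illustrated with a minimal default-deny check. The policy shape, field names, and tool names below are hypothetical, not Intercept's actual schema; the point is the order of checks: deny list first, then the allow list, then per-function argument constraints.

```python
# Illustrative policy: explicit deny wins, anything not allowed is denied,
# and allowed functions can carry extra argument constraints.
policy = {
    "allow": ["read_file", "search_docs"],
    "deny": ["delete_file"],
    "constraints": {
        # Hypothetical constraint: only paths under /workspace/ are readable.
        "read_file": lambda args: args.get("path", "").startswith("/workspace/"),
    },
}

def evaluate(name: str, args: dict) -> bool:
    """Return True if the function call passes the policy."""
    if name in policy["deny"]:
        return False
    if name not in policy["allow"]:
        return False  # default-deny anything not explicitly allowed
    check = policy["constraints"].get(name)
    return check(args) if check else True

print(evaluate("read_file", {"path": "/workspace/notes.txt"}))  # True
print(evaluate("read_file", {"path": "/etc/passwd"}))           # False
print(evaluate("delete_file", {"path": "/workspace/tmp"}))      # False
```

A real enforcement point would also log each decision and apply rate limits, but the allow/deny/constraint ordering is the core of the gate.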