What is Function Calling?


Function calling is the capability of large language models to generate structured output that specifies which external function to invoke and with what arguments, enabling LLMs to interact with APIs, databases, and real-world systems.

WHY IT MATTERS

LLMs are text-in, text-out systems. They cannot natively send emails, query databases, or modify files. Function calling bridges this gap — the model outputs structured JSON specifying a function name and parameters, and the application executes it.

OpenAI popularised the pattern in 2023, and it is now standard across all major LLM providers. You define available functions (with names, descriptions, and parameter schemas), the model decides when and how to call them, and your code handles execution.
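The loop described above can be sketched in a few lines of Python. The tool definition follows the JSON Schema style most providers use; the names (`get_weather`, `city`) and the simulated model output are illustrative, not taken from any real provider's API.

```python
import json

# Hypothetical tool definition in the JSON Schema style most providers use.
# The names ("get_weather", "city") are illustrative.
weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Stand-in for a real weather API call.
    return f"Sunny in {city}"

# Simulated model output: structured JSON naming a function and its arguments.
model_output = '{"name": "get_weather", "arguments": {"city": "Lisbon"}}'

# The application -- not the model -- parses the JSON and executes the call.
call = json.loads(model_output)
handlers = {"get_weather": get_weather}
result = handlers[call["name"]](**call["arguments"])
print(result)  # Sunny in Lisbon
```

Note the division of labour: the model only emits the JSON; your code owns parsing, dispatch, and execution. That hand-off point is exactly where a policy check can sit.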

Function calling is what makes AI agents possible. Without it, an LLM can only suggest actions. With it, the LLM can trigger real-world operations — file writes, shell commands, API calls, infrastructure changes. This is precisely why policy enforcement on function calls matters: every call is a potential side effect that needs governance.

HOW POLICYLAYER USES THIS

Intercept enforces YAML-defined policies on function calls flowing through the MCP protocol. When an agent uses function calling to invoke an MCP tool, Intercept evaluates the call — checking the function name against allow/deny lists, validating arguments against constraints, and enforcing rate limits — before forwarding to the server. Denied calls are blocked and logged.
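To make the checks above concrete, here is a rough sketch of what such a YAML policy might express. The field names below are hypothetical, chosen only to illustrate allow/deny lists, argument constraints, and rate limits; consult Intercept's own documentation for the actual schema.

```yaml
# Hypothetical policy sketch -- field names are illustrative,
# not Intercept's actual schema.
policies:
  - name: restrict-file-tools
    allow:
      - read_file
    deny:
      - delete_file
    constraints:
      read_file:
        path: "^/workspace/.*"   # only permit reads under /workspace
    rate_limit:
      calls_per_minute: 30
```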

FREQUENTLY ASKED QUESTIONS

How is function calling different from tool use?
They are often used interchangeably. 'Function calling' typically refers to the LLM's ability to generate structured function invocations. 'Tool use' is the broader concept of agents interacting with external systems. In MCP, both map to tool calls that Intercept can enforce policies on.
Can the model call functions incorrectly?
Yes. Models can hallucinate function names, pass wrong argument types, or call functions at inappropriate times. Intercept provides an additional safety layer — even if the model generates an incorrect or dangerous call, the policy can catch it.
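The kind of check described above can be sketched as a small gate that runs before any model-generated call executes. The allowlist and function names here are illustrative, not Intercept's implementation.

```python
# Minimal sketch of a pre-execution policy check on a model-generated
# function call. Names are illustrative.
ALLOWED_TOOLS = {"read_file", "list_directory"}

def enforce(call: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed function call."""
    name = call.get("name")
    if name not in ALLOWED_TOOLS:
        return False, f"function '{name}' is not on the allowlist"
    args = call.get("arguments", {})
    if not isinstance(args, dict):
        return False, "arguments must be a JSON object"
    return True, "ok"

# A hallucinated function name is caught before anything executes.
print(enforce({"name": "drop_database", "arguments": {}}))
# A well-formed call on the allowlist passes.
print(enforce({"name": "read_file", "arguments": {"path": "notes.txt"}}))
```

Because the check runs outside the model, it holds even when the model is confidently wrong: a denied call never reaches the tool.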
Which LLMs support function calling?
All major providers: OpenAI (GPT-4o), Anthropic (Claude), Google (Gemini), Meta (Llama), and Mistral. The exact API format varies but the concept is universal — and MCP standardises the interface.


Enforce policies on every tool call

Intercept is the open-source MCP proxy that enforces YAML policies on AI agent tool calls. No code changes needed.

npx -y @policylayer/intercept
github.com/policylayer/intercept →