What is Tool Calling?


Tool calling is the mechanism by which a large language model generates structured requests to invoke external tools, APIs, or functions — enabling the model to take actions and retrieve information beyond its training data.

WHY IT MATTERS

Tool calling and function calling are largely synonymous — both refer to the LLM's ability to output structured JSON that triggers external operations. 'Tool calling' has become the more common term as the ecosystem has matured.

The flow is standardised: you provide the LLM with tool definitions (name, description, parameter schema). The model decides when to call a tool and generates the arguments. Your application executes the call and returns results to the model. This cycle can repeat multiple times in a single turn.
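The cycle above can be sketched in a few lines of Python. This is a provider-agnostic illustration, not any specific LLM SDK: the tool definition follows the common name/description/JSON-Schema convention, and the model response is a stand-in for what an LLM API would return.

```python
import json

# A tool definition as most LLM APIs accept it: name, description, and a
# JSON Schema for the parameters. Exact field names vary by provider.
weather_tool = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> dict:
    # Stand-in implementation; a real tool would hit an external API.
    return {"city": city, "temp_c": 18}

TOOLS = {"get_weather": get_weather}

# A tool call as the model emits it: the tool name plus JSON-encoded
# arguments that the application must parse and execute.
model_output = {"tool": "get_weather", "arguments": json.dumps({"city": "Oslo"})}

def execute_tool_call(call: dict) -> dict:
    fn = TOOLS[call["tool"]]
    args = json.loads(call["arguments"])
    return fn(**args)

result = execute_tool_call(model_output)
# The result is serialized and sent back to the model as the next message;
# the model may then answer, or request another tool call.
print(json.dumps(result))
```

The key point is that the model only ever produces structured text; the application layer is what actually executes the call, which is exactly where policy enforcement can sit.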

Every tool call is a potential side effect — writing a file, sending an email, executing code, modifying infrastructure. Without policy enforcement, the only thing standing between an LLM's decision and real-world consequences is the application code that executes the call.

HOW POLICYLAYER USES THIS

Intercept enforces YAML-defined policies on every tool call flowing through the MCP proxy. When an agent's LLM outputs a tool call, Intercept evaluates it against the policy — checking the tool name, argument values, and rate limits — before forwarding it to the server. Denied calls never reach the server. No changes to the agent or server code are needed.
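Conceptually, the checks described above (tool name, argument values, rate limits) reduce to a gate function that runs before any call is forwarded. The sketch below is illustrative only, assuming a hypothetical policy shape; Intercept's actual YAML schema may differ.

```python
import fnmatch
import time

# Hypothetical policy structure for illustration (not Intercept's real
# schema): an allowlist of tool-name patterns, per-tool argument deny
# patterns, and a global rate limit.
POLICY = {
    "allow_tools": ["read_file", "list_*"],
    "deny_args": {"read_file": {"path": "/etc/*"}},
    "max_calls_per_minute": 10,
}

_call_log: list = []  # timestamps of allowed calls, for rate limiting

def evaluate(tool: str, args: dict) -> bool:
    """Return True if the tool call may be forwarded to the server."""
    # 1. Tool name must match an allow pattern.
    if not any(fnmatch.fnmatch(tool, pat) for pat in POLICY["allow_tools"]):
        return False
    # 2. No argument value may match a deny pattern for this tool.
    for arg, pat in POLICY.get("deny_args", {}).get(tool, {}).items():
        if fnmatch.fnmatch(str(args.get(arg, "")), pat):
            return False
    # 3. Enforce the rate limit over a sliding one-minute window.
    now = time.monotonic()
    recent = [t for t in _call_log if now - t < 60]
    if len(recent) >= POLICY["max_calls_per_minute"]:
        return False
    _call_log.append(now)
    return True
```

A denied call simply returns False here; in a proxy setting, that translates to an error response back to the agent, and the underlying MCP server never sees the request.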

FREQUENTLY ASKED QUESTIONS

Is tool calling the same as function calling?
Effectively yes. OpenAI originally called it 'function calling' and later adopted 'tool calling.' Anthropic uses 'tool use.' They all describe the same capability — LLMs generating structured invocations of external operations.
Why enforce policies on tool calls?
Tool calls have real-world side effects — file writes, API requests, code execution. Without policy enforcement, a jailbroken or malfunctioning LLM can trigger any tool call it has access to. Intercept ensures only policy-compliant calls execute.
How does parallel tool calling work with Intercept?
When the model returns multiple tool calls in a single response, Intercept evaluates each one independently against the YAML policy. Some may be allowed while others are denied — each is evaluated on its own merits.
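Independent evaluation of a multi-call response can be pictured as mapping a decision function over the batch. The allowlist below is a hypothetical placeholder standing in for a full policy check.

```python
# Hypothetical allowlist for illustration; a real policy would also check
# argument values and rate limits per call.
ALLOWED = {"search_docs", "read_file"}

def evaluate_batch(calls: list) -> list:
    # Each call gets its own decision; one denial does not block the rest.
    return [
        {**call, "decision": "allow" if call["tool"] in ALLOWED else "deny"}
        for call in calls
    ]

batch = [
    {"tool": "search_docs", "arguments": {"query": "mcp"}},
    {"tool": "delete_file", "arguments": {"path": "notes.txt"}},
]
decisions = evaluate_batch(batch)
```

Here the first call would be forwarded while the second is rejected, matching the "evaluated on its own merits" behaviour described above.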


Enforce policies on every tool call

Intercept is the open-source MCP proxy that enforces YAML policies on AI agent tool calls. No code changes needed.

npx -y @policylayer/intercept
github.com/policylayer/intercept →