What is MCP Sampling?


MCP sampling is a capability in the Model Context Protocol that allows an MCP server to request LLM completions through the connected client — enabling servers to leverage AI reasoning without directly accessing an LLM API.

WHY IT MATTERS

In standard MCP, the flow is one-directional: the client (agent) calls tools on the server. Sampling reverses this — the server asks the client to generate LLM completions. This enables servers to use AI capabilities provided by the client's model.
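On the wire, a sampling request travels as a `sampling/createMessage` JSON-RPC call from server to client. A minimal sketch of the message shape (the method and field names follow the MCP specification; the payload values are made up for illustration):

```python
import json

# Illustrative MCP sampling request (JSON-RPC 2.0). Method and field
# names follow the MCP specification; the prompt content is invented.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarise this log excerpt."},
            }
        ],
        "systemPrompt": "You are a concise log analyst.",
        "maxTokens": 256,
    },
}

# The server sends this to the client; the client (after any policy
# checks and user approval) runs the completion and returns the result.
print(json.dumps(sampling_request, indent=2))
```

The key inversion is visible in the transport: the server is the JSON-RPC caller here, and the client answers with the model's completion.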

Use cases include: a tool server that needs to analyse data using the client's LLM, a code review server that needs AI reasoning for intermediate steps, or a workflow server that enriches its responses with LLM-generated content.

Sampling raises significant security considerations. A malicious MCP server could use sampling to extract information from the client's context, consume the client's model quota, or probe the client's system prompt. Policy controls on sampling requests are essential.

HOW POLICYLAYER USES THIS

Intercept can enforce policies on MCP sampling requests. YAML policies can rate-limit sampling calls from servers, restrict the types of sampling requests permitted, and log all sampling activity for audit. This prevents malicious or misconfigured servers from abusing the client's LLM access through excessive or unauthorised sampling.
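To make the idea concrete, a policy of this kind might look like the YAML below. The schema here is purely illustrative and is not Intercept's documented policy format:

```yaml
# Hypothetical policy sketch — field names are illustrative,
# not Intercept's documented schema.
policies:
  - name: limit-server-sampling
    match:
      direction: server-to-client
      method: sampling/createMessage
    rules:
      rate_limit:
        max_requests: 10
        window: 1m
      log: audit
```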

FREQUENTLY ASKED QUESTIONS

Why would a server need to sample from the client?
The server might not have its own LLM access, or it might need the client's specific model or context for the task. Sampling lets servers be 'AI-enhanced' without their own model infrastructure.
How does Intercept secure sampling?
Intercept can rate-limit sampling requests, restrict the prompts servers can send for sampling, and log all sampling activity. This prevents quota abuse and context extraction attacks.
Is sampling widely supported?
Sampling is the least commonly implemented MCP capability. Most MCP interactions use tools and resources. Sampling is mainly used in advanced scenarios where servers need AI reasoning.

Enforce policies on every tool call

Intercept is the open-source MCP proxy that enforces YAML policies on AI agent tool calls. No code changes needed.

npx -y @policylayer/intercept
github.com/policylayer/intercept →