46 tools. 18 can modify or destroy data without limits.
7 destructive tools with no built-in limits. Policy required.
Destructive tools (delete-connector, delete-flink-statements, delete-tableflow-catalog-integration) permanently delete resources. There is no undo. An agent calling these in a retry loop causes irreversible damage.
Write operations (add-tags-to-topic, alter-topic-config, create-connector) modify state. Without rate limits, an agent can make hundreds of changes in seconds — faster than any human can review or revert.
Execute tools (create-flink-statement) trigger processes with side effects. Builds, notifications, workflows — all fired without throttling.
Intercept sits between your agent and Confluent Kafka. Every tool call is checked against your policy before it executes, so your agent can do its job without breaking things.
npx -y @policylayer/intercept scan -- npx -y @confluentinc/mcp-confluent

```yaml
delete-connector:
  rules:
    - action: deny  # Destructive tools should never be available to autonomous agents without human approval.
add-tags-to-topic:
  rules:
    - rate_limit: 30/hour  # Prevents bulk unintended modifications from agents caught in loops.
check-flink-statement-health:
  rules:
    - rate_limit: 60/minute  # Controls API costs and prevents retry loops from exhausting upstream rate limits.
```
Yes. The Confluent Kafka server exposes 7 destructive tools including delete-connector, delete-flink-statements, delete-tableflow-catalog-integration. These permanently remove resources with no undo. Intercept blocks destructive tools by default so they never reach the upstream server.
The Confluent Kafka server has 10 write tools including add-tags-to-topic, alter-topic-config, create-connector. Set rate limits in your policy file -- for example, rate_limit: 10/hour prevents an agent from making more than 10 modifications per hour. Intercept enforces this at the transport layer.
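As a sketch, a policy file capping write tools might look like this (the rule syntax follows the examples above; the specific tools and limits are illustrative, not a recommendation):

```yaml
# Illustrative fragment: cap write-tool call rates.
alter-topic-config:
  rules:
    - rate_limit: 10/hour  # at most 10 config changes per hour
create-connector:
  rules:
    - rate_limit: 10/hour  # connectors are rarely created in bulk by design
```

Tighter limits mean an agent stuck in a loop hits the policy wall after a handful of calls instead of hundreds.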
46 tools across 4 categories: Destructive, Execute, Read, Write. 28 are read-only. 18 can modify, create, or delete data.
A one-line change. Instead of running the Confluent Kafka server directly, prefix it with Intercept: intercept -c confluent-kafka.yaml -- npx -y @confluentinc/mcp-confluent. Download a pre-built policy from policylayer.com/policies/confluent-kafka and adjust the limits to match your use case.
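In an MCP client configuration, the wrapping might look like the following sketch. It assumes the common JSON `mcpServers` config shape and that Intercept is launched via npx; the server name and policy filename are placeholders for your own setup:

```json
{
  "mcpServers": {
    "confluent-kafka": {
      "command": "npx",
      "args": [
        "-y", "@policylayer/intercept",
        "-c", "confluent-kafka.yaml",
        "--",
        "npx", "-y", "@confluentinc/mcp-confluent"
      ]
    }
  }
}
```

Everything after the `--` is the original server command, unchanged; Intercept proxies its transport and applies the policy in between.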
Starter policies available for each. Same risk classification, same one-command setup.
Set budgets, approvals, and hard limits across MCP servers.
npx -y @policylayer/intercept init