Confluent Kafka

46 tools. 18 can modify or destroy data without limits.

7 destructive tools with no built-in limits. Policy required.


18 can modify or destroy data
28 read-only
46 tools total
Read (28) · Write / Execute (11) · Destructive / Financial (7)

Destructive tools (delete-connector, delete-flink-statements, delete-tableflow-catalog-integration) permanently delete resources. There is no undo. An agent calling these in a retry loop causes irreversible damage.
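A policy that denies all three destructive tools might look like this (a sketch, assuming the same YAML rule format used in the policy examples on this page):

```yaml
# Deny every destructive tool outright; calls are blocked before they
# reach the upstream Confluent Kafka server.
delete-connector:
  rules:
    - action: deny
delete-flink-statements:
  rules:
    - action: deny
delete-tableflow-catalog-integration:
  rules:
    - action: deny
```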

Write operations (add-tags-to-topic, alter-topic-config, create-connector) modify state. Without rate limits, an agent can make hundreds of changes in seconds — faster than any human can review or revert.

Execute tools (create-flink-statement) trigger processes with side effects. Builds, notifications, workflows — all fired without throttling.
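The same rate-limit rule shown for write tools can throttle the execute tool; a sketch assuming the policy schema used elsewhere on this page (the 10/hour figure is illustrative):

```yaml
# Throttle statement creation so a looping agent can't fire off
# side effects faster than anyone can review them.
create-flink-statement:
  rules:
    - rate_limit: 10/hour
```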

One command. Full control.

Intercept sits between your agent and Confluent Kafka. Every tool call is checked against your policy before it executes, so your agent can do its job without breaking things.

npx -y @policylayer/intercept scan -- npx -y @confluentinc/mcp-confluent
Scans every tool. Generates a policy. Starts enforcing.
Works with Claude Code · Cursor · Claude Desktop · Windsurf · any MCP client
Deny destructive operations
delete-connector:
  rules:
    - action: deny

Destructive tools should never be available to autonomous agents without human approval.

Rate limit write operations
add-tags-to-topic:
  rules:
    - rate_limit: 30/hour

Prevents bulk unintended modifications from agents caught in loops.

Cap read operations
check-flink-statement-health:
  rules:
    - rate_limit: 60/minute

Controls API costs and prevents retry loops from exhausting upstream rate limits.

Can an AI agent delete data through the Confluent Kafka MCP server?

Yes. The Confluent Kafka server exposes 7 destructive tools, including delete-connector, delete-flink-statements, and delete-tableflow-catalog-integration. These permanently remove resources with no undo. Intercept blocks destructive tools by default so they never reach the upstream server.

How do I prevent bulk modifications through Confluent Kafka?

The Confluent Kafka server has 10 write tools, including add-tags-to-topic, alter-topic-config, and create-connector. Set rate limits in your policy file; for example, rate_limit: 10/hour prevents an agent from making more than 10 modifications per hour. Intercept enforces this at the transport layer.
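Such a limit might look like this in the policy file (a sketch; the tool names and the 10/hour figure come from the answer above, and the rule format follows the examples elsewhere on this page):

```yaml
# Cap how fast an agent can reconfigure topics or create connectors.
alter-topic-config:
  rules:
    - rate_limit: 10/hour
create-connector:
  rules:
    - rate_limit: 10/hour
```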

How many tools does the Confluent Kafka MCP server expose?

46 tools across 4 categories: Destructive, Execute, Read, Write. 28 are read-only. 18 can modify, create, or delete data.

How do I add Intercept to my Confluent Kafka setup?

A one-line change. Instead of running the Confluent Kafka server directly, prefix it with Intercept: intercept -c confluent-kafka.yaml -- npx -y @confluentinc/mcp-confluent. Download a pre-built policy from policylayer.com/policies/confluent-kafka and adjust the limits to match your use case.

Other MCP servers with similar tools.

Starter policies available for each. Same risk classification, same one-command setup.

policylayer/intercept

Control every MCP tool call your agent makes.

Set budgets, approvals, and hard limits across MCP servers.

npx -y @policylayer/intercept init
Protect your agent in 30 seconds. Scans your MCP config and generates enforcement policies for every server.