
Your AI Agent Can Run DROP TABLE on Production

Your AI agent just ran DELETE FROM users without a WHERE clause. It was trying to remove a single test account, hallucinated the query, and wiped your entire users table. No confirmation prompt, no rollback, no undo. Production is down and your backup is from last Tuesday.

This is not a contrived scenario. The PostgreSQL MCP server gives AI agents exactly one tool — and that one tool is enough to destroy everything.

One tool, infinite power

Most MCP servers expose a handful of scoped tools. The PostgreSQL server exposes just one: query. It executes raw SQL against your connected database. That means SELECT, INSERT, UPDATE, DELETE, DROP TABLE, ALTER, TRUNCATE — whatever the database user has permission to run.

The server’s README describes the tool as executing “read-only SQL queries.” In practice, nothing in the MCP layer enforces that. If the database connection has write permissions — and it usually does — the agent can write. Or delete. Or drop.
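Concretely, nothing about the tool's contract rejects a mutating statement. Each of the following is a perfectly valid input to query (the table names beyond users are illustrative):

```sql
-- All of these pass straight through the "read-only" query tool,
-- as long as the connection's database user has the privileges:
DELETE FROM users;            -- the hallucinated cleanup, minus its WHERE clause
TRUNCATE orders;              -- every row gone in one statement
DROP TABLE payments CASCADE;  -- schema gone, dependent objects included
```

The only thing standing between these statements and your data is the database user's permissions.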

A single hallucinated query in a tight agentic loop can fire dozens of destructive statements before you even notice. As we explored in What Happens When Your AI Agent Goes Rogue, the damage from an unconstrained agent compounds fast. And with raw SQL access, “fast” means milliseconds.

Rate limiting queries

You cannot stop the agent from generating bad SQL. But you can limit how many queries it fires per minute, buying you time to catch a runaway loop before it does real damage.

Intercept sits between your agent and the PostgreSQL MCP server. Every tools/call is evaluated against a YAML policy before it reaches the database. Here is the full policy:

version: "1"
description: "Policy for modelcontextprotocol/server-postgres"
default: "allow"
tools:
  query:
    rules:
      - name: "query-rate-limit"
        rate_limit: "30/minute"
        on_deny: "Rate limit: max 30 queries per minute — wait before retrying"

  "*":
    rules:
      - name: "global-rate-limit"
        rate_limit: "60/minute"
        on_deny: "Global rate limit: max 60 tool calls per minute across all PostgreSQL tools"

The query tool is capped at 30 calls per minute. A global rate limit of 60 per minute catches any future tools the server might add. When the agent hits either limit, it receives the on_deny message as the tool response — a clear signal to stop, not a silent failure.

Thirty queries per minute is generous for most agent workflows. A data analysis agent pulling reports will rarely hit it. But a malfunctioning agent stuck in a retry loop absolutely will — and that is the point. The rate limit acts as a circuit breaker, stopping the cascade before query 31 lands.

The rate_limit shorthand expands internally to a stateful counter that resets at the start of each calendar-aligned UTC window. No tokens to configure, no middleware to write. For the full mechanism, see Rate Limiting MCP Tool Calls.
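To make the fixed-window behavior concrete, here is a minimal sketch of that mechanism in Python. This is an illustration of calendar-aligned window counting, not Intercept's actual implementation; the class and parameter names are invented for the example:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Counts calls per (key, window) bucket; the count resets at each window boundary."""

    def __init__(self, limit, window_seconds, clock=time.time):
        self.limit = limit
        self.window = window_seconds
        self.clock = clock
        self.counts = defaultdict(int)

    def allow(self, key):
        # Integer division aligns every caller to the same window boundaries,
        # so the counter resets at the start of each calendar window rather
        # than N seconds after the first call (fixed window, not sliding).
        bucket = (key, int(self.clock() // self.window))
        if self.counts[bucket] >= self.limit:
            return False
        self.counts[bucket] += 1
        return True

# Simulate 31 calls inside one minute using a fake clock:
now = [0.0]
limiter = FixedWindowLimiter(limit=30, window_seconds=60, clock=lambda: now[0])
results = [limiter.allow("query") for _ in range(31)]
print(results.count(True))     # 30 — calls 1 through 30 allowed
print(results[-1])             # False — call 31 denied
now[0] = 61.0                  # next window: the counter starts fresh
print(limiter.allow("query"))  # True
```

The fixed-window approach is deliberately simple: one counter per window, no per-call timestamps to store, which is why it needs no extra configuration.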

Getting started

Install Intercept and point it at the PostgreSQL MCP server:

npm install -g @policylayer/intercept

Then run it with the policy:

intercept -c postgres.yaml -- npx -y @modelcontextprotocol/server-postgres $DATABASE_URL

Every query from your agent now passes through the policy engine. Query number 31 in a minute gets blocked. Your tables stay intact.

Rate limiting is a starting point, not a complete solution. Pair it with a read-only database user where possible, and keep your backups current. But when the agent inevitably hallucinates a destructive query, you will be glad the rate limit caught the loop on iteration 30 instead of iteration 3,000.
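That read-only user takes a few lines of standard PostgreSQL. A minimal sketch (the role, password, and database names are placeholders):

```sql
-- A login role that can read but never write:
CREATE ROLE agent_ro LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE app TO agent_ro;
GRANT USAGE ON SCHEMA public TO agent_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO agent_ro;
-- Tables created later inherit SELECT-only access too:
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO agent_ro;
```

Point $DATABASE_URL at this role and a hallucinated DELETE fails at the database, regardless of what the agent sends.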

Full PostgreSQL policy →

Protect your agent in 30 seconds

Scans your MCP config and generates enforcement policies for every server.

npx -y @policylayer/intercept init
github.com/policylayer/intercept →