Medium Risk

openai_chat

Send a prompt to OpenAI and get a response

Sends data to external AI API

Part of the OpenAI MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.

@openai-mcp-server Write Risk 3/5

AI agents use openai_chat to create or modify resources in OpenAI. Write operations carry medium risk because an autonomous agent could trigger bulk unintended modifications. Rate limits prevent a single agent session from making hundreds of changes in rapid succession. Argument validation ensures the agent passes expected values.

Without a policy, an AI agent could call openai_chat repeatedly, creating or modifying resources faster than any human could review. Intercept's rate limiting ensures write operations happen at a controlled pace, and argument validation catches malformed or unexpected inputs before they reach OpenAI.

Write tools can modify data. A rate limit prevents runaway bulk operations from AI agents.

openai.yaml
tools:
  openai_chat:
    rules:
      - action: allow
        rate_limit:
          max: 30
          window: 60

See the full OpenAI policy for all of this server's tools.

Tool Name openai_chat
Category Write
MCP Server OpenAI MCP Server
Risk Level Medium

Agents calling write-class tools like openai_chat have been implicated in documented attack patterns. Read the full case and prevention policy for each:

Browse the full MCP Attack Database →

Other tools in the Write risk category appear across the catalogue; the same policy patterns (rate-limit, validate) apply to each.

What does the openai_chat tool do?

Send a prompt to OpenAI and get a response. It is categorised as a Write tool in the OpenAI MCP Server, which means it can create or modify data. Consider rate limits to prevent runaway writes.

How do I enforce a policy on openai_chat?

Add a rule in your Intercept YAML policy under the tools section for openai_chat. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the OpenAI MCP server.

What risk level is openai_chat?

openai_chat is a Write tool with medium risk. Write tools should be rate-limited to prevent accidental bulk modifications.

Can I rate-limit openai_chat?

Yes. Add a rate_limit block to the openai_chat rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
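The example from this answer, written out as a full policy file in the same shape as the openai.yaml shown above (the limits of 10 calls per 60 seconds are illustrative):

```yaml
# openai.yaml — limit openai_chat to 10 calls per minute
tools:
  openai_chat:
    rules:
      - action: allow
        rate_limit:
          max: 10      # maximum calls allowed per window
          window: 60   # window length in seconds
```

Calls beyond the tenth within any 60-second window are rejected until the window resets.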

How do I block openai_chat completely?

Set action: deny in the Intercept policy for openai_chat. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
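A minimal sketch of a deny rule using the action and reason fields described in this answer (the reason text is illustrative):

```yaml
# openai.yaml — block openai_chat entirely
tools:
  openai_chat:
    rules:
      - action: deny
        reason: "Outbound calls to external AI APIs are not permitted"
```

With this rule in place, the agent receives a policy violation error carrying the reason instead of a tool response.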

What MCP server provides openai_chat?

openai_chat is provided by the OpenAI MCP server (@openai-mcp-server). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.

Enforce policies on OpenAI

Open source. One binary. Zero dependencies.

npx -y @policylayer/intercept
github.com/policylayer/intercept →
// GET IN TOUCH

Have a question or want to learn more? Send us a message.
