
gpu_run

Run any GPU-Bridge AI service. 30 services available: LLM inference (sub-second), image generation (FLUX, SD3.5), video generation, video enhancement (up to 4K), speech-to-text (Whisper, <1s), TTS (40+ voices), music generation, voice cloning, embeddings, document reranking (Jina), OCR, PDF/document parsing, NSFW detection, image captioning, visual Q&A, background removal, face restoration, upscaling, stickers, and more. Use gpu_catalog to see all available services.

Part of the Mcp Server MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.

mcp-server · Execute · Risk 3/5

AI agents invoke gpu_run to trigger processes or run actions in Mcp Server. Execute operations can have side effects beyond the immediate call: triggering builds, sending notifications, or starting workflows. Rate limits and argument validation are essential to prevent runaway execution.

gpu_run can trigger processes with real-world consequences. An uncontrolled agent might start dozens of builds, send mass notifications, or kick off expensive compute jobs. Intercept enforces rate limits and validates arguments to keep execution within safe bounds.

Execute tools trigger processes. Rate-limit and validate arguments to prevent unintended side effects.

mcp-server.yaml
tools:
  gpu_run:
    rules:
      - action: allow
        rate_limit:
          max: 10
          window: 60
        validate:
          required_args: true

See the full Mcp Server policy for all 5 tools.

Tool Name gpu_run
Category Execute
Risk Level High

Agents calling execute-class tools like gpu_run have been implicated in documented attack patterns. Read the full case and prevention policy for each:

Browse the full MCP Attack Database →

Other tools across the catalogue fall into the same Execute risk category. The same policy patterns (rate-limit, validate) apply to each.

gpu_run is one of the high-risk operations in Mcp Server. For the full severity-focused view — only the high-risk tools with their recommended policies — see the breakdown for this server, or browse all high-risk tools across every MCP server.

What does the gpu_run tool do?

Run any GPU-Bridge AI service. 30 services available: LLM inference (sub-second), image generation (FLUX, SD3.5), video generation, video enhancement (up to 4K), speech-to-text (Whisper, <1s), TTS (40+ voices), music generation, voice cloning, embeddings, document reranking (Jina), OCR, PDF/document parsing, NSFW detection, image captioning, visual Q&A, background removal, face restoration, upscaling, stickers, and more. Use gpu_catalog to see all available services. It is categorised as an Execute tool in the Mcp Server MCP server, which means it can trigger actions or run processes. Use rate limits and argument validation.

How do I enforce a policy on gpu_run?

Add a rule in your Intercept YAML policy under the tools section for gpu_run. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the Mcp Server MCP server.
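As a sketch, a rule combining the knobs mentioned above might look like the following. Only the fields shown in this page's policy excerpt (tools, rules, action, rate_limit, validate, reason) are assumed; check the Intercept documentation for the full schema.

```yaml
# mcp-server.yaml -- illustrative sketch, not a definitive schema
tools:
  gpu_run:
    rules:
      - action: allow          # or "deny" (optionally with a reason field)
        rate_limit:
          max: 10              # at most 10 calls...
          window: 60           # ...per 60-second window
        validate:
          required_args: true  # reject calls missing required arguments
```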

What risk level is gpu_run?

gpu_run is an Execute tool with high risk. Execute tools should be rate-limited and have argument validation enabled.

Can I rate-limit gpu_run?

Yes. Add a rate_limit block to the gpu_run rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
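The rate-limit rule described above, using the max/window fields from this page's policy excerpt, would look like:

```yaml
tools:
  gpu_run:
    rules:
      - action: allow
        rate_limit:
          max: 10    # 10 calls...
          window: 60 # ...per 60 seconds, tracked per agent session
```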

How do I block gpu_run completely?

Set action: deny in the Intercept policy for gpu_run. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
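A deny rule as described above might look like the following; the reason text here is a hypothetical example, not a required value.

```yaml
tools:
  gpu_run:
    rules:
      - action: deny
        reason: "GPU jobs must be started manually by an operator"  # example text
```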

What MCP server provides gpu_run?

gpu_run is provided by the Mcp Server MCP server (mcp-server). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.

Enforce policies on Mcp Server

Open source. One binary. Zero dependencies.

npx -y @policylayer/intercept
github.com/policylayer/intercept →