Low Risk

scan_response

Call this AFTER the LLM generates a response and BEFORE returning it to the user or downstream system.

Decision logic:
- If blocked=true: do NOT deliver the response. Regenerate with a modified prompt or return the user_message as a safe fallback.
- If blocked=false: the response is safe to deliver.

Handles credentials or secrets (pii_tokens[].token)

Part of the Shrike Security MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.

shrike-mcp · Read · Risk 2/5

AI agents call scan_response to retrieve information from Shrike Security without modifying any data. This is common in research, monitoring, and reporting workflows where the agent needs context before taking action. Because read operations don't change state, they are generally safe to allow without restrictions -- but you may still want rate limits to control API costs.

Even though scan_response only reads data, uncontrolled read access can leak sensitive information or rack up API costs. An agent caught in a retry loop could make thousands of calls per minute. A rate limit gives you a safety net without blocking legitimate use.

Read-only tools like this one are generally safe to allow by default. Add a rate limit only if you want to cap API costs or guard against runaway retry loops.

shrike-security.yaml
tools:
  scan_response:
    rules:
      - action: allow
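
If you do want a cost cap, the same allow rule can carry a rate limit. A minimal sketch using the max/window fields described in the FAQ on this page (10 calls per 60-second window, tracked per agent session; the exact block shape may vary by Intercept version):

```yaml
tools:
  scan_response:
    rules:
      - action: allow
        # Optional: cap call volume to control API costs
        rate_limit:
          max: 10     # maximum calls allowed...
          window: 60  # ...within each 60-second window
```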

See the full Shrike Security policy for all 12 tools.

Tool Name: scan_response
Category: Read
Risk Level: Low

View all 12 tools →

Agents calling read-class tools like scan_response have been implicated in documented attack patterns. Read the full case and prevention policy for each in the attack database.

Browse the full MCP Attack Database →

Other tools in the Read risk category across the catalogue follow the same policy patterns (rate-limit, allow), which apply to each of them.

What does the scan_response tool do?

Call this AFTER the LLM generates a response and BEFORE returning it to the user or downstream system.

Decision logic:
- If blocked=true: do NOT deliver the response. Regenerate with a modified prompt or return the user_message as a safe fallback.
- If blocked=false: the response is safe to deliver.

Detects in LLM output:
- System prompt leaks (the LLM revealing its instructions)
- Unexpected PII in output (PII not present in the original prompt)
- Toxic or hostile language in generated content
- Topic drift (response diverges from prompt intent)

Provide original_prompt for best results: it enables PII diff analysis and topic mismatch detection. When pii_tokens is provided (from scan_prompt with redact_pii=true), safe responses include rehydrated_response with PII tokens restored.

Enterprise context: paired with scan_prompt, this completes the inbound/outbound scan pattern that prevents data exfiltration through model outputs and ensures compliance with data handling policies.

Error handling: if this tool returns an error or is unavailable, default to BLOCKING the response. Do NOT deliver unscanned LLM output.

scan_response is categorised as a Read tool in the Shrike Security MCP Server, which means it retrieves data without modifying state.

How do I enforce a policy on scan_response?

Add a rule in your Intercept YAML policy under the tools section for scan_response. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the Shrike Security MCP server.

What risk level is scan_response?

scan_response is a Read tool with low risk. Read-only tools are generally safe to allow by default.

Can I rate-limit scan_response?

Yes. Add a rate_limit block to the scan_response rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.

How do I block scan_response completely?

Set action: deny in the Intercept policy for scan_response. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
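
A sketch of such a deny rule, using the action and reason fields described above (the reason text is illustrative, not a required value):

```yaml
tools:
  scan_response:
    rules:
      - action: deny
        # Surfaced to the agent as a policy violation error
        reason: "scan_response is disabled in this environment"
```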

What MCP server provides scan_response?

scan_response is provided by the Shrike Security MCP server (shrike-mcp). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.

Enforce policies on Shrike Security

Open source. One binary. Zero dependencies.

npx -y @policylayer/intercept
github.com/policylayer/intercept →
// GET IN TOUCH

Have a question or want to learn more? Send us a message.
