Low Risk

recommend_model

Get personalized model recommendations based on use case, budget, and requirements. Returns top 3 picks with reasoning (~350 tokens).

Part of the Llm Advisor MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.

AI agents call recommend_model to retrieve information from Llm Advisor without modifying any data. This is common in research, monitoring, and reporting workflows where the agent needs context before taking action. Because read operations don't change state, they are generally safe to allow without restrictions, though you may still want a rate limit to control API costs.

Even though recommend_model only reads data, uncontrolled read access can leak sensitive information or rack up API costs. An agent caught in a retry loop could make thousands of calls per minute. A rate limit gives you a safety net without blocking legitimate use.
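If you do want that safety net, the variant below is a minimal sketch. It assumes the rate_limit block nests under the rule with the max and window fields described in the FAQ further down; the exact nesting is an assumption, not documented syntax:

tools:
  recommend_model:
    rules:
      - action: allow
        rate_limit:
          max: 10      # at most 10 calls...
          window: 60   # ...per 60-second window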

Read-only tools are safe to allow by default. No rate limit needed unless you want to control costs.

io-github-daichi-kudo-llm-advisor.yaml
tools:
  recommend_model:
    rules:
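      # read-only tool: permit all calls, no restrictions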
      - action: allow

See the full Llm Advisor policy for all 4 tools.

Tool Name:  recommend_model
Category:   Read
Risk Level: Low

Agents calling read-class tools like recommend_model have been implicated in documented attack patterns, each with a full case write-up and prevention policy:

Browse the full MCP Attack Database →

Other tools in the Read risk category across the catalogue follow the same policy patterns (rate-limit, allow).

What does the recommend_model tool do?

recommend_model returns personalized model recommendations based on use case, budget, and requirements, with the top 3 picks and reasoning (~350 tokens). It is categorised as a Read tool in the Llm Advisor MCP server, which means it retrieves data without modifying state.

How do I enforce a policy on recommend_model?

Add a rule in your Intercept YAML policy under the tools section for recommend_model. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the Llm Advisor MCP server.
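The sketch below combines an allow rule with argument validation. Note that this page does not document Intercept's validation schema, so the validate block and every field name inside it are illustrative assumptions only:

tools:
  recommend_model:
    rules:
      - action: allow
        # hypothetical argument validation; these field names are
        # assumptions, not documented Intercept syntax
        validate:
          args:
            use_case:
              required: true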

What risk level is recommend_model?

recommend_model is a Read tool with low risk. Read-only tools are generally safe to allow by default.

Can I rate-limit recommend_model?

Yes. Add a rate_limit block to the recommend_model rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.

How do I block recommend_model completely?

Set action: deny in the Intercept policy for recommend_model. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
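A minimal sketch of such a deny rule, assuming the reason field sits alongside action as described above:

tools:
  recommend_model:
    rules:
      - action: deny
        # the reason is returned to the agent with the policy violation
        reason: "recommend_model is blocked pending security review"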

What MCP server provides recommend_model?

recommend_model is provided by the Llm Advisor MCP server (llm-advisor-mcp). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach it.

Enforce policies on Llm Advisor

Open source. One binary. Zero dependencies.

npx -y @policylayer/intercept
github.com/policylayer/intercept →