4 tools from the Llm Advisor MCP Server, categorised by risk level.
View the Llm Advisor policy →

compare_models: Compare 2-5 LLM/VLM models side by side: pricing, benchmarks, capabilities. Returns a compact Markdown comparison table (~400 tokens).

get_model_info: Get detailed information about a specific LLM/VLM model: pricing, benchmarks, capabilities, and a ready-to-use API code example. Returns structured M...

list_top_models: List top-ranked LLM/VLM models for a category. Categories: coding, math, vision, general, cost-effective, open-source, speed, context-window, reaso...

recommend_model: Get personalized model recommendations based on use case, budget, and requirements. Returns top 3 picks with reasoning (~350 tokens).

The Llm Advisor MCP server exposes 4 tools in a single category: Read.
Use Intercept, the open-source MCP proxy. Write YAML rules for each tool — rate limits, argument validation, or deny rules — then run Intercept in front of the Llm Advisor server.
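As a rough illustration of the approach, a per-tool rule file might look like the sketch below. The field names (`tool`, `action`, `rate_limit`, `validate_args`) are illustrative assumptions, not Intercept's actual schema; consult the Intercept documentation for the real rule syntax.

```yaml
# Hypothetical policy sketch for the Llm Advisor server.
# Field names are assumptions for illustration, not Intercept's schema.
# All four Llm Advisor tools are read-only, so a permissive
# allow-with-rate-limit default is a reasonable starting point.
rules:
  - tool: compare_models
    action: allow
    rate_limit: 30/minute        # cap bursty comparison calls

  - tool: recommend_model
    action: allow
    validate_args:
      budget:
        type: number
        min: 0                   # reject malformed budget values

  - tool: "*"                    # default-deny any tool not listed above
    action: deny
```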
All 4 Llm Advisor tools are categorised as Read. Each category has a recommended default policy.
Open source. One binary. Zero dependencies.
npx -y @policylayer/intercept