4 tools from the Metrillm MCP Server, categorised by risk level.

The Metrillm MCP server exposes 4 tools across 3 categories: Read, Write, Execute.

get_results: Retrieve previously saved benchmark results from ~/.metrillm/results/. Optionally filter by model name. Returns an array of full benchmark result o...

list_models: List all LLM models available locally on the inference runtime (e.g. Ollama). Returns the model name, size, parameter count, quantization, and family.
Use Intercept, the open-source MCP proxy. Write YAML rules for each tool — rate limits, argument validation, or deny rules — then run Intercept in front of the Metrillm server.
Metrillm tools are categorised as Read (2), Write (1), Execute (1). Each category has a recommended default policy.
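As a sketch of what those defaults could look like, the following Intercept rule file pairs each category with a conservative policy: allow and rate-limit Read tools, validate arguments on Write tools, and deny Execute tools unless explicitly approved. The rule schema and field names (rules, tool, category, action, rate_limit, validate_args) are illustrative assumptions, not the documented Intercept format, and the Write/Execute tool names are omitted because the source does not list them.

```yaml
# Hypothetical Intercept rule file. Field names and schema are
# illustrative assumptions, not the documented @policylayer/intercept
# format; adapt to the real schema before use.
rules:
  # Read tools: allow, but cap request rates.
  - tool: get_results
    action: allow
    rate_limit: 30/minute
  - tool: list_models
    action: allow
    rate_limit: 30/minute

  # Write tools (tool name not listed in the source): allow only
  # after argument validation.
  - category: write
    action: allow
    validate_args: true

  # Execute tools (tool name not listed in the source): deny by
  # default; opt in per tool once reviewed.
  - category: execute
    action: deny
```

Rules like these run in the proxy, so the policy is enforced before a request ever reaches the Metrillm server itself.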
Open source. One binary. Zero dependencies.
npx -y @policylayer/intercept