7 tools from the Tickerr Live Status MCP Server, categorised by risk level.
compare_pricing: Rank AI models by total cost for a given token workload. Useful for finding the cheapest model for your use case.
get_api_pricing: Get current API pricing (input/output cost per 1M tokens) for AI models tracked by tickerr.ai. Filter by model or provider name.
get_free_tier: Find the best free plans across AI tools, grouped by category (LLM APIs, coding assistants, image generation, etc.).
get_incidents: Get historical incidents (outages, degradations) for any AI tool from the last 90 days. Sourced from 26 official provider status pages.
get_rate_limits: Get rate limits and plan details for any AI tool: requests per minute, tokens per day, context window, and more, by plan tier.
get_tool_status: Get live operational status, uptime percentage, and response time for any AI tool. Checks every 5 minutes from independent infrastructure.
list_tools: List all 42+ AI tools monitored by tickerr.ai: ChatGPT, Claude, Gemini, Cursor, GitHub Copilot, Perplexity, DeepSeek, Groq, Fireworks AI, and more.
The Tickerr Live Status MCP server exposes 7 tools across one category: Read.
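An MCP client invokes any of these tools with a standard JSON-RPC `tools/call` request. As a sketch, a call to `get_tool_status` might look like the following; the request envelope follows the MCP specification, but the argument name (`tool`) is an assumption, since the server's input schema is not shown here.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_tool_status",
    "arguments": { "tool": "claude" }
  }
}
```

The server replies with a `result` containing the tool's output content, per the MCP spec.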
Use Intercept, the open-source MCP proxy. Write YAML rules for each tool — rate limits, argument validation, or deny rules — then run Intercept in front of the Tickerr Live Status server.
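As a sketch of what such a rule file could look like, the example below rate-limits the status check, constrains an argument, and denies everything unmatched. The field names are illustrative assumptions for this page, not Intercept's documented schema.

```yaml
# Hypothetical Intercept rule file -- field names are illustrative,
# not Intercept's documented configuration schema.
rules:
  - tool: get_tool_status
    action: allow
    rate_limit:
      requests_per_minute: 30      # cap polling of the live status check
  - tool: get_incidents
    action: allow
    validate:
      days:                        # assumed argument name
        type: integer
        max: 90                    # server only keeps 90 days of history
  - tool: "*"
    action: deny                   # default-deny anything not listed above
```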
All 7 Tickerr Live Status tools are categorised as Read. Each category has a recommended default policy.
Open source. One binary. Zero dependencies.
npx -y @policylayer/intercept