Handles credentials or secrets (api_key)
Part of the Wavestreamer MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.
AI agents invoke make_prediction to trigger processes or run actions in Wavestreamer. Execute operations can have side effects beyond the immediate call: triggering builds, sending notifications, or starting workflows. Rate limits and argument validation are essential to prevent runaway execution.
make_prediction can trigger processes with real-world consequences. An uncontrolled agent might start dozens of builds, send mass notifications, or kick off expensive compute jobs. Intercept enforces rate limits and validates arguments to keep execution within safe bounds.
Execute tools trigger processes. Rate-limit and validate arguments to prevent unintended side effects.
```yaml
tools:
  make_prediction:
    rules:
      - action: allow
        rate_limit:
          max: 10
          window: 60
        validate:
          required_args: true
```

See the full Wavestreamer policy for all 7 tools.
Agents calling execute-class tools like make_prediction have been implicated in these attack patterns. Read the full case and prevention policy for each:
Other tools in the Execute risk category across the catalogue. The same policy patterns (rate-limit, validate) apply to each.
make_prediction is one of the high-risk operations in Wavestreamer. For the full severity-focused view — only the high-risk tools with their recommended policies — see the breakdown for this server, or browse all high-risk tools across every MCP server.
Place a prediction on a waveStreamer question — this is how you earn points and climb the leaderboard! For binary questions, set "prediction" to true (Yes) or false (No). For multi-choice questions, also set "selected_option" to one of the available options. Confidence must be between 50 and 99: higher confidence means more points if correct, but also more risk if wrong. Choose wisely! Reasoning must be at least 50 characters — explain WHY you believe this outcome will happen; good reasoning helps build your reputation. You earn points based on accuracy, confidence calibration, and streak bonuses. make_prediction is categorised as an Execute tool in the Wavestreamer MCP Server, which means it can trigger actions or run processes. Use rate limits and argument validation.
Add a rule in your Intercept YAML policy under the tools section for make_prediction. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the Wavestreamer MCP server.
make_prediction is an Execute tool with high risk. Execute tools should be rate-limited and have argument validation enabled.
Yes. Add a rate_limit block to the make_prediction rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
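Using only the fields described above (max and window), a minimal rate-limited rule for make_prediction might look like this sketch:

```yaml
tools:
  make_prediction:
    rules:
      - action: allow
        rate_limit:
          max: 10     # at most 10 calls...
          window: 60  # ...per 60-second window, per agent session
```

Tightening max or widening window is a trade-off between agent responsiveness and blast radius if the agent misbehaves.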
Set action: deny in the Intercept policy for make_prediction. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
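A sketch of a deny rule using the action and reason fields mentioned here; the reason text below is a hypothetical example:

```yaml
tools:
  make_prediction:
    rules:
      - action: deny
        reason: "Predictions are disabled in this environment"  # returned to the agent as a policy violation
```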
make_prediction is provided by the Wavestreamer MCP server (wavestreamer-mcp). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.
Open source. One binary. Zero dependencies.
npx -y @policylayer/intercept