Wait for a job to reach a terminal state, emitting live MCP progress notifications. Preferred over manually looping get_job_status. The MCP host shows a live progress indicator while this tool holds the connection — no extra token cost.
Bulk/mass operation — affects multiple targets
Part of the Hyperplexity MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.
AI agents invoke wait_for_job to trigger processes or run actions in Hyperplexity. Execute operations can have side effects beyond the immediate call: triggering builds, sending notifications, or starting workflows. Rate limits and argument validation are essential to prevent runaway execution.
wait_for_job can trigger processes with real-world consequences. An uncontrolled agent might start dozens of builds, send mass notifications, or kick off expensive compute jobs. Intercept enforces rate limits and validates arguments to keep execution within safe bounds.
Execute tools trigger processes. Rate-limit and validate arguments to prevent unintended side effects.
tools:
  wait_for_job:
    rules:
      - action: allow
        rate_limit:
          max: 10
          window: 60
        validate:
          required_args: true
See the full Hyperplexity policy for all 16 tools.
Agents calling execute-class tools like wait_for_job have been implicated in these attack patterns. Read the full case and prevention policy for each:
Other tools in the Execute risk category across the catalogue. The same policy patterns (rate-limit, validate) apply to each.
wait_for_job is one of the high-risk operations in Hyperplexity. For the full severity-focused view — only the high-risk tools with their recommended policies — see the breakdown for this server, or browse all high-risk tools across every MCP server.
Wait for a job to reach a terminal state, emitting live MCP progress notifications. Preferred over manually looping get_job_status. The MCP host shows a live progress indicator while this tool holds the connection — no extra token cost.

Architecture
────────────
Every poll cycle does two things in sequence:

1. Fetch /messages → extract native progress %, emit MCP notification
2. Fetch /status → act on phase transitions or terminal states

This separation means messages drive the visual indicator (they are more real-time) while status is the authoritative source for workflow transitions. Neither endpoint is used as a shortcut to skip the other; both are polled every cycle so transient failures in one don't cause false terminations.

Progress is always monotonically non-decreasing. Within a phase, msg_progress can oscillate (e.g. QC triggers new row-discovery rounds in the table-maker), but the emitted value is clamped to last_emitted. Across phases a geometric slice scheme is used so the bar never goes backward regardless of how many phases occur.

Progress geometry (lazy split)
──────────────────────────────
Starts with the full 0–99 range so single-phase jobs (e.g. full validation after approve_validation) map their native 0–100% directly across the whole bar. On each intermediate phase transition, 80% of the current range is "spent" on the completed phase and the remaining 20% is handed to the next phase — keeping progress monotonic for any number of QC re-discovery rounds or pipeline stages. True terminal always emits exactly 100.

Terminal states: preview_complete, failed, completed-without-intermediate-step.
Intermediate: completed + current_step in (Config Generation, Table Making, Claim Extraction, …) — the tool advances the phase and keeps polling.

Returns the same payload shape as get_job_status, so downstream tools (approve_validation, get_results, etc.) apply directly.

job_id: the session_id value returned by upload_file / start_table_validation / start_table_maker.
"job_id" and "session_id" are the same string — every workflow uses session_id as its job identifier throughout the pipeline.

timeout_seconds: max wall time before returning the last known state (default 900). Upload-interview + config-gen phases and large table previews can take up to 15 minutes — set timeout_seconds=900 or higher for long-running jobs.

poll_interval: seconds between poll cycles (default 10).

warmup_seconds: when > 0, applies synthetic sqrt-curve progress from 0→70% over this many seconds during the pre-message phase (before the first progress message or intermediate step arrives). Use this when the pipeline has a silent setup phase (e.g. instructions= mode, where the backend runs an internal AI interview + config generation before preview messages begin). The warmup is automatically disabled once the first intermediate step completes (the phase-split takes over). For instructions= mode, pass 300.

wait_for_job is categorised as an Execute tool in the Hyperplexity MCP Server, which means it can trigger actions or run processes. Use rate limits and argument validation.
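The lazy-split geometry and warmup curve described above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: the class and function names are invented for this example, and the real tool emits MCP notifications rather than returning numbers.

```python
import math


class ProgressEmitter:
    """Sketch of the lazy-split progress scheme (names are illustrative)."""

    def __init__(self):
        self.lo, self.hi = 0.0, 99.0   # current phase's slice of the 0-99 bar
        self.last_emitted = 0.0        # monotonic clamp

    def emit(self, msg_progress):
        """Map a phase-local 0-100% value into the current slice,
        clamped so the bar never moves backward."""
        value = self.lo + (self.hi - self.lo) * (msg_progress / 100.0)
        self.last_emitted = max(self.last_emitted, value)
        return self.last_emitted

    def next_phase(self):
        """Spend 80% of the current range on the finished phase and
        hand the remaining 20% to the next phase."""
        self.lo = self.lo + 0.8 * (self.hi - self.lo)

    def terminal(self):
        """A true terminal state always emits exactly 100."""
        self.last_emitted = 100.0
        return self.last_emitted


def warmup_progress(elapsed, warmup_seconds):
    """Synthetic sqrt-curve progress from 0 to 70% during a silent setup phase."""
    frac = min(1.0, elapsed / warmup_seconds)
    return 70.0 * math.sqrt(frac)
```

Because next_phase only ever raises the lower bound, any number of QC re-discovery rounds still produces a non-decreasing bar; the sqrt warmup rises fast early and flattens out, which suits a setup phase of unknown length.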
Add a rule in your Intercept YAML policy under the tools section for wait_for_job. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the Hyperplexity MCP server.
wait_for_job is an Execute tool with high risk. Execute tools should be rate-limited and have argument validation enabled.
Yes. Add a rate_limit block to the wait_for_job rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
Set action: deny in the Intercept policy for wait_for_job. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
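Assuming the same policy schema as the allow rule shown earlier on this page, a deny rule with a reason might look like this (the reason text is illustrative):

```yaml
tools:
  wait_for_job:
    rules:
      - action: deny
        reason: "Long-running job waits are disabled in this environment"
```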
wait_for_job is provided by the Hyperplexity MCP server (hyperplexity/hyperplexity). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.
Open source. One binary. Zero dependencies.
npx -y @policylayer/intercept