Wait for a conversation turn to complete, emitting live synthetic progress. Preferred over manually polling get_conversation. Since conversation processing has no native progress signal, this tool emits time-based synthetic progress — advancing quickly at first, then slowing as it approaches expected_seconds.
Part of the Hyperplexity MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.
AI agents invoke wait_for_conversation to trigger processes or run actions in Hyperplexity. Execute operations can have side effects beyond the immediate call -- triggering builds, sending notifications, or starting workflows. Rate limits and argument validation are essential to prevent runaway execution.
wait_for_conversation can trigger processes with real-world consequences. An uncontrolled agent might start dozens of builds, send mass notifications, or kick off expensive compute jobs. Intercept enforces rate limits and validates arguments to keep execution within safe bounds.
Execute tools trigger processes. Rate-limit and validate arguments to prevent unintended side effects.
tools:
  wait_for_conversation:
    rules:
      - action: allow
        rate_limit:
          max: 10
          window: 60
        validate:
          required_args: true

See the full Hyperplexity policy for all 16 tools.
Agents calling execute-class tools like wait_for_conversation have been implicated in documented attack patterns. Read the full case study and prevention policy for each:
Other tools in the Execute risk category across the catalogue. The same policy patterns (rate-limit, validate) apply to each.
wait_for_conversation is one of the high-risk operations in Hyperplexity. For the full severity-focused view — only the high-risk tools with their recommended policies — see the breakdown for this server, or browse all high-risk tools across every MCP server.
Wait for a conversation turn to complete, emitting live synthetic progress. Preferred over manually polling get_conversation. Since conversation processing has no native progress signal, this tool emits time-based synthetic progress — advancing quickly at first, then slowing as it approaches expected_seconds — so the MCP host shows a "still thinking" indicator rather than a frozen bar.

Returns when any of these conditions are met:
- user_reply_needed=True → the AI asked a question; call send_conversation_reply
- trigger_execution=True → the AI approved execution; the preview is auto-queued, switch to wait_for_job(session_id)
- Non-processing status → unexpected terminal state (inspect the status field)
- Timeout → returns the last known state with a _wait_timeout note

Applies to all conversation types: upload interview, table-maker interview, config refinement.

Parameters:
- expected_seconds: typical AI response time for this turn (default 120). First table-maker turn (research + planning): ~120–180s. Upload interview first turn (CSV analysis + plan): ~90–150s. Follow-up confirmations ("yes, proceed"): ~30–60s.
- poll_interval: seconds between status checks (default 8).
- timeout_seconds: max wall time before returning (default 900). Upload interview turns can take up to 15 minutes — set accordingly.

It is categorised as an Execute tool in the Hyperplexity MCP Server, which means it can trigger actions or run processes. Use rate limits and argument validation.
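The "advancing quickly at first, then slowing" behaviour described above can be sketched with an asymptotic curve. The exact formula Hyperplexity uses is not documented here; this is a hypothetical illustration assuming an exponential-saturation shape, with the time constant (half of expected_seconds) chosen purely for the example:

```python
import math

def synthetic_progress(elapsed: float, expected_seconds: float = 120.0) -> float:
    """Time-based synthetic progress: rises fast early, then flattens
    as elapsed approaches expected_seconds, capped below 100% so the
    bar never claims completion before the turn actually finishes."""
    if elapsed <= 0:
        return 0.0
    tau = expected_seconds / 2.0  # hypothetical time constant
    return min(0.99, 1.0 - math.exp(-elapsed / tau))

# With the default expected_seconds=120 (tau=60):
# synthetic_progress(10)  -> ~0.15  (fast early gains)
# synthetic_progress(60)  -> ~0.63
# synthetic_progress(120) -> ~0.86  (slowing near expected_seconds)
```

Any monotone curve that saturates near (but below) 100% would serve the same purpose; the key property is that the indicator keeps moving without ever signalling false completion.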
Add a rule in your Intercept YAML policy under the tools section for wait_for_conversation. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the Hyperplexity MCP server.
wait_for_conversation is an Execute tool with high risk. Execute tools should be rate-limited and have argument validation enabled.
Yes. Add a rate_limit block to the wait_for_conversation rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
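A minimal sketch of such a rule, following the policy schema shown earlier on this page:

```yaml
tools:
  wait_for_conversation:
    rules:
      - action: allow
        rate_limit:
          max: 10     # at most 10 calls...
          window: 60  # ...per 60-second window, tracked per agent session
```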
Set action: deny in the Intercept policy for wait_for_conversation. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
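A minimal deny rule might look like the following, assuming the same schema as the allow example above; the reason string is illustrative:

```yaml
tools:
  wait_for_conversation:
    rules:
      - action: deny
        reason: "Conversation waits are disabled in this environment"
```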
wait_for_conversation is provided by the Hyperplexity MCP server (hyperplexity/hyperplexity). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.
Open source. One binary. Zero dependencies.
npx -y @policylayer/intercept