Starts an asynchronous crawl job on a website and extracts content from all pages. **Best for:** Extracting content from multiple related pages, when you need comprehensive coverage. **Not recommended for:** Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow).
Accepts URL/endpoint input (url); High parameter count (17 properties); Single-target operation
Part of the Firecrawl Web Scraping Server MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.
AI agents call firecrawl_crawl to retrieve information from Firecrawl Web Scraping Server without modifying any data. This is common in research, monitoring, and reporting workflows where the agent needs context before taking action. Because read operations don't change state, they are generally safe to allow without restrictions -- but you may still want rate limits to control API costs.
Even though firecrawl_crawl only reads data, uncontrolled read access can leak sensitive information or rack up API costs. An agent caught in a retry loop could make thousands of calls per minute. A rate limit gives you a safety net without blocking legitimate use.
Read-only tools are safe to allow by default. No rate limit needed unless you want to control costs.
```yaml
tools:
  firecrawl_crawl:
    rules:
      - action: allow
```

See the full Firecrawl Web Scraping Server policy for all 8 tools.
Agents calling read-class tools like firecrawl_crawl have been implicated in these attack patterns. Read the full case and prevention policy for each:
Other tools across the catalogue fall into the same Read risk category, and the same policy patterns (rate-limit, allow) apply to each.
Starts an asynchronous crawl job on a website and extracts content from all pages.

**Best for:** Extracting content from multiple related pages, when you need comprehensive coverage.

**Not recommended for:** Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow).

**Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.

**Common mistakes:** Setting limit or maxDepth too high (causes token overflow); using crawl for a single page (use scrape instead).

**Prompt Example:** "Get all blog posts from the first two levels of example.com/blog."

**Usage Example:**

```json
{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog/*",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false,
    "deduplicateSimilarURLs": true
  }
}
```

**Returns:** Operation ID for status checking; use firecrawl_check_crawl_status to check progress.

firecrawl_crawl is categorised as a Read tool in the Firecrawl Web Scraping Server MCP Server, which means it retrieves data without modifying state.
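The operation ID returned above is consumed by firecrawl_check_crawl_status. As a minimal sketch of that follow-up call, assuming the job ID is passed as an `id` argument (the exact parameter name may differ in your server version):

```json
{
  "name": "firecrawl_check_crawl_status",
  "arguments": {
    "id": "crawl-job-id-returned-by-firecrawl_crawl"
  }
}
```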
Add a rule in your Intercept YAML policy under the tools section for firecrawl_crawl. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the Firecrawl Web Scraping Server MCP server.
firecrawl_crawl is a Read tool with low risk. Read-only tools are generally safe to allow by default.
Yes. Add a rate_limit block to the firecrawl_crawl rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
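As a minimal sketch, assuming the rate_limit block nests under the rule entry alongside the action and uses the max and window fields described above (the exact schema may differ across Intercept versions):

```yaml
tools:
  firecrawl_crawl:
    rules:
      - action: allow
        rate_limit:
          max: 10     # at most 10 calls
          window: 60  # per 60-second window, tracked per agent session
```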
Set action: deny in the Intercept policy for firecrawl_crawl. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
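As a minimal sketch, assuming the reason field sits alongside the action in the rule entry (exact schema may vary):

```yaml
tools:
  firecrawl_crawl:
    rules:
      - action: deny
        reason: "Site-wide crawling is disabled for this agent; use firecrawl_scrape for single pages."
```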
firecrawl_crawl is provided by the Firecrawl Web Scraping Server MCP server (NYO2008/firecrawl-mcp-server). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.
Open source. One binary. Zero dependencies.
npx -y @policylayer/intercept