10 tools from the Firecrawl Web Scraping Server MCP server, categorised by risk level.
View the Firecrawl Web Scraping Server policy →

- firecrawl_batch_scrape: Scrape multiple URLs in batch mode. Returns a job ID that can be used to check status.
- firecrawl_check_batch_status: Check the status of a batch scraping job.
- firecrawl_check_crawl_status: Check the status of a crawl job.
- firecrawl_deep_research (risk 2/5): Conduct deep research on a query using web crawling, search, and AI analysis.
- firecrawl_extract: Extract structured information from web pages using LLM. Supports both cloud AI and self-hosted LLM extraction.
- firecrawl_map: Discover URLs from a starting point. Can use both sitemap.xml and HTML link discovery.
- firecrawl_search (risk 2/5): Search and retrieve content from web pages with optional scraping. Returns SERP results by default (url, title, description) or full page content w...
- firecrawl_crawl (risk 4/5): Start an asynchronous crawl of multiple pages from a starting URL. Supports depth control, path filtering, and webhook notifications.
- firecrawl_scrape (risk 4/5): Scrape a single webpage with advanced options for content extraction. Supports various formats including markdown, HTML, and screenshots. Can execu...

The Firecrawl Web Scraping Server MCP server exposes 10 tools across 3 categories: Read, Write, Execute.
Use Intercept, the open-source MCP proxy. Write YAML rules for each tool — rate limits, argument validation, or deny rules — then run Intercept in front of the Firecrawl Web Scraping Server MCP server.
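As a sketch of what such rules could look like, here is a hypothetical rule file. The schema shown (keys like `tool`, `action`, `rate_limit`, `when`) is illustrative only and is not taken from Intercept's documentation; consult the actual policy reference for the real format:

```yaml
# Hypothetical Intercept rule file — schema is illustrative, not official.

# Deny a high-risk tool outright.
- tool: firecrawl_crawl
  action: deny

# Rate-limit single-page scraping.
- tool: firecrawl_scrape
  action: allow
  rate_limit:
    max_calls: 10
    per_seconds: 60

# Argument validation: only allow short, plain-text search queries.
- tool: firecrawl_search
  action: allow
  when:
    args.query:
      matches: "^[\\w\\s-]{1,100}$"
```

The pattern — deny the riskiest tools, rate-limit the moderate ones, validate arguments on the rest — mirrors the Read/Write/Execute risk categories above.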
Firecrawl Web Scraping Server tools are categorised as Read (7), Write (1), Execute (2). Each category has a recommended default policy.
Open source. One binary. Zero dependencies.
npx -y @policylayer/intercept