The 8 tools from the Firecrawl Web Scraping Server MCP server, categorised by risk level.
**firecrawl_check_crawl_status**
Check the status of a crawl job.
**Usage Example:**
```json
{
  "name": "firecrawl_check_crawl_status",
  "arguments": {
    "id": "550e8400-e29b..."
  }
}
```

**firecrawl_crawl**
Starts an asynchronous crawl job on a website and extracts content from all pages.
**Best for:** Extracting content from multiple related pages, ...

**firecrawl_deep_research**
Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
**Best for:** Complex research questions requiring mul...
Risk: 2/5

**firecrawl_extract**
Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
**Best for:** Extrac...

**firecrawl_map**
Map a website to discover all indexed URLs on the site.
**Best for:** Discovering URLs on a website before deciding what to scrape; finding speci...

**firecrawl_scrape**
Scrape content from a single URL with advanced options.
**Best for:** Single page content extraction, when you know exactly which page contains t...
Risk: 2/5

**firecrawl_search**
Search the web and optionally extract content from search results.
**Best for:** Finding specific information across multiple websites, when you ...
Risk: 2/5

The Firecrawl Web Scraping Server MCP server exposes 8 tools across 2 categories: Read and Write.
Use Intercept, the open-source MCP proxy: write YAML rules for each tool (rate limits, argument validation, or deny rules), then run Intercept in front of the Firecrawl Web Scraping Server.
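A rule file for this server might look like the sketch below. The exact Intercept rule schema is not shown on this page, so every field name here (`rules`, `tool`, `action`, `rate_limit`, `match`) is an illustrative assumption, not the documented format; only the tool names come from the listing above.

```yaml
# Hypothetical Intercept policy sketch; field names are assumed, not taken from Intercept's docs.
rules:
  - tool: firecrawl_scrape
    action: allow
    rate_limit: 30/minute          # throttle single-page scrapes
  - tool: firecrawl_crawl
    action: allow
    match:
      arguments:
        url: "^https://docs\\.example\\.com/"   # only allow crawls of an approved domain
  - tool: firecrawl_extract
    action: deny                   # block LLM-based extraction entirely
```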
Firecrawl Web Scraping Server tools are categorised as Read (7) and Write (1). Each category has a recommended default policy.
Open source. One binary. Zero dependencies.
```shell
npx -y @policylayer/intercept
```