Extract conversation history from a NotebookLM notebook's chat interface. This tool uses browser automation to navigate to a notebook and extract all Q&A pairs from the chat UI. This is useful for:

- Recovering previous research conversations
- Auditing what queries were made in a notebook
- Understanding quota usage from direct NotebookLM browser usage
- Resuming context from previous sessions
Part of the Notebooklm MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.
AI agents call get_notebook_chat_history to retrieve information from Notebooklm without modifying any data. This is common in research, monitoring, and reporting workflows where the agent needs context before taking action. Because read operations don't change state, they are generally safe to allow without restrictions -- but you may still want rate limits to control API costs.
Even though get_notebook_chat_history only reads data, uncontrolled read access can leak sensitive information or rack up API costs. An agent caught in a retry loop could make thousands of calls per minute. A rate limit gives you a safety net without blocking legitimate use.
Read-only tools are safe to allow by default. No rate limit needed unless you want to control costs.
```yaml
tools:
  get_notebook_chat_history:
    rules:
      - action: allow
```

See the full Notebooklm policy for all 31 tools.
Agents calling read-class tools like get_notebook_chat_history have been implicated in these attack patterns. Read the full case and prevention policy for each:
The same policy patterns (rate-limit, allow) apply to the other tools in the Read risk category across the catalogue.
Extract conversation history from a NotebookLM notebook's chat interface. This tool uses browser automation to navigate to a notebook and extract all Q&A pairs from the chat UI. This is useful for:

- Recovering previous research conversations
- Auditing what queries were made in a notebook
- Understanding quota usage from direct NotebookLM browser usage
- Resuming context from previous sessions

## Context Management

Use `preview_only: true` to get a quick count before extracting full content. Use `output_file` to export to JSON instead of returning to context. Use `offset` with `limit` for pagination through large histories.

## Examples

Quick audit (preview only):

```json
{ "notebook_id": "my-research", "preview_only": true }
```

Export to file (avoids context overflow):

```json
{ "notebook_id": "my-research", "output_file": "/tmp/chat-history.json" }
```

Paginate through history:

```json
{ "notebook_id": "my-research", "limit": 20, "offset": 0 }
{ "notebook_id": "my-research", "limit": 20, "offset": 20 }
```

get_notebook_chat_history is categorised as a Read tool in the Notebooklm MCP Server, which means it retrieves data without modifying state.
Add a rule in your Intercept YAML policy under the tools section for get_notebook_chat_history. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the Notebooklm MCP server.
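As a minimal sketch, the simplest such rule is an unconditional allow (deny, rate-limit, and argument-validation rules follow the same structure; check your Intercept version's schema for the exact fields):

```yaml
tools:
  get_notebook_chat_history:
    rules:
      - action: allow   # permit this read-only tool without restrictions
```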
get_notebook_chat_history is a Read tool with low risk. Read-only tools are generally safe to allow by default.
Yes. Add a rate_limit block to the get_notebook_chat_history rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
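A sketch of that rule, using the `max` and `window` fields described above (field names may vary by Intercept version):

```yaml
tools:
  get_notebook_chat_history:
    rules:
      - action: allow
        rate_limit:
          max: 10     # at most 10 calls...
          window: 60  # ...per 60-second window, tracked per agent session
```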
Set action: deny in the Intercept policy for get_notebook_chat_history. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
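For example, a deny rule might look like the following (the `reason` text is purely illustrative):

```yaml
tools:
  get_notebook_chat_history:
    rules:
      - action: deny
        reason: "Chat history extraction is disabled in this environment"
```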
get_notebook_chat_history is provided by the Notebooklm MCP server (@pan-sec/notebooklm-mcp). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.
Open source. One binary. Zero dependencies.
npx -y @policylayer/intercept