NotebookLM is great at turning a pile of sources into answers, study material, and "studio" artifacts (audio overviews, slides, etc.). The catch is that it's mostly a UI-first product.
notebooklm-mcp is a community-built bridge that exposes NotebookLM operations as MCP tools, so Claude Code / IDE agents / MCP-compatible clients can drive NotebookLM programmatically.
You can create notebooks, add sources (URLs / YouTube / Drive / pasted text), run research, sync stale Drive sources, and generate studio artifacts.
Let's walk through installation, authentication, wiring it into your MCP client, running real workflows, troubleshooting, and (if you want) extending the server.

What you're building with notebooklm-mcp
- Host/client: Claude Code, Claude Desktop, Cursor, VS Code, Gemini CLI, etc.
- Server: notebooklm-mcp running locally
- Tools: NotebookLM actions (notebooks, sources, research, studio generation, auth refresh, …)
Important disclaimer
This project uses undocumented internal NotebookLM APIs and requires authentication via browser cookies. That means it can break if Google changes things, and you should treat it as personal/experimental tooling.
Step 0: Prereqs
- A NotebookLM account (free/pro)
- Python environment tooling (uv, pip, or pipx)
- Chrome for auth flow
Step 1: Install the MCP server
Pick one:
# Recommended (uv)
uv tool install notebooklm-mcp-server
# pip
pip install notebooklm-mcp-server
# pipx
pipx install notebooklm-mcp-server

Or install from source:
git clone https://github.com/jacob-bd/notebooklm-mcp.git
cd notebooklm-mcp
uv tool install .

Upgrading later:
uv tool upgrade notebooklm-mcp-server
# or: pip install --upgrade notebooklm-mcp-server
# or: pipx upgrade notebooklm-mcp-server

After upgrading, restart your MCP client so it reconnects to the updated server.
Step 2: Authenticate
Run:
# Recommended: Auto mode
notebooklm-mcp-auth

What happens:
- It launches a dedicated Chrome profile
- You log in to Google
- The tool extracts cookies automatically
- Your login persists for future refreshes
Alternative (manual):
notebooklm-mcp-auth --file

This mode walks you through extracting cookies via Chrome DevTools and saving them to a file.
No environment variables are required; the MCP server reads cached tokens from ~/.notebooklm-mcp/auth.json.
Step 3: Connect your MCP client
⚠️ Context warning: this server provides a large toolset and can eat into your context window; the README recommends disabling it when you're not actively using NotebookLM (Claude Code can toggle with @notebooklm-mcp or /mcp).
Claude Code
# Add for all projects (recommended)
claude mcp add --scope user notebooklm-mcp notebooklm-mcp
# Or current project only
claude mcp add notebooklm-mcp notebooklm-mcp
# Verify
claude mcp list

Manual config (if needed):
{
"mcpServers": {
"notebooklm-mcp": {
"command": "notebooklm-mcp"
}
}
}

Cursor, VS Code, Claude Desktop (JSON config)
Find the installed binary path:
which notebooklm-mcp

Put the full path in your tool's MCP config file (examples in the README: Cursor ~/.cursor/mcp.json, VS Code ~/.vscode/mcp.json, Claude Desktop config on macOS).
Example config:
{
"mcpServers": {
"notebooklm-mcp": {
"command": "/Users/<YOUR_USERNAME>/.local/bin/notebooklm-mcp"
}
}
}

Other MCP-compatible tools
If your tool has an "mcp add" command, then:
<your-tool> mcp add notebooklm-mcp notebooklm-mcp

Otherwise, use the JSON config approach with the full binary path.
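The two manual steps above (find the binary, paste its absolute path into JSON) can be scripted. A small sketch, assuming only that the binary is named notebooklm-mcp and the config stanza looks like the examples above:

```python
import json
import shutil

def mcp_config_snippet(server_name: str = "notebooklm-mcp") -> str:
    """Build the mcpServers JSON stanza, preferring the resolved absolute binary path."""
    # shutil.which mirrors the `which notebooklm-mcp` step; fall back to the
    # bare name if the binary isn't on PATH (the client may still resolve it).
    command = shutil.which(server_name) or server_name
    config = {"mcpServers": {server_name: {"command": command}}}
    return json.dumps(config, indent=2)

print(mcp_config_snippet())
```

Paste the printed stanza into (or merge it with) your tool's existing MCP config file rather than overwriting it.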
Step 4: Your first "agent-driven NotebookLM" workflow
Once your client connects, you can talk to NotebookLM through tools.
Here are some of the tools you have access to:
- Notebooks: notebook_list, notebook_create, notebook_get, notebook_rename, notebook_delete
- Sources: notebook_add_url, notebook_add_text, notebook_add_drive, source_delete, source_get_content, source_describe
- Drive freshness: source_list_drive, source_sync_drive
- Research: research_start, research_status, research_import
- Studio generation: audio_overview_create, video_overview_create, infographic_create, slide_deck_create, studio_status, studio_delete
- Chat tuning & auth: chat_configure, refresh_auth, save_auth_tokens
Many destructive or billable operations require confirmation (delete, sync, generation).
Example prompts that map cleanly to tools
These are written the way you'd type them into Claude Code / an IDE agent:
- "Create a notebook called 'ACME Market Research', then add these sources: [URL1], [URL2], and my Drive doc 'Q4 Plan'."
- "Run web research on 'agentic RAG evaluation techniques', import the top 10 sources, then summarize the notebook."
- "Generate an audio overview in a 'podcast' tone, then poll until it's ready."
Step 5: Make it feel "developer-grade"
1) Configure chat output
chat_configure lets you set chat "goal/style" and response length (useful when you want NotebookLM answers to be more concise vs. more report-like).
2) Use source_get_content when you need raw exports
If your goal is to export notebook content (e.g., dump sources to markdown, feed into another pipeline), source_get_content is explicitly "raw text content… no AI processing."
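If you're building that export pipeline, the last mile is just writing the raw text to disk. A sketch, assuming you've already pulled content via source_get_content and collected it as a title-to-text dict (that dict shape is this sketch's convention, not the tool's output format):

```python
from pathlib import Path

def dump_sources_to_markdown(sources, out_dir):
    """Write {source_title: raw_text} pairs to individual .md files and return the paths."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for title, text in sources.items():
        # Sanitize the title into a filesystem-safe filename.
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title).strip()
        path = out / f"{safe or 'untitled'}.md"
        path.write_text(f"# {title}\n\n{text}\n")
        written.append(path)
    return written
```

From there the files can feed any downstream pipeline (embedding, indexing, another notebook).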
3) Keep Drive sources fresh
If you're using Drive docs/slides as living sources, the "freshness" pair is:
- source_list_drive → see which sources are stale vs fresh
- source_sync_drive → refresh the stale ones (confirmation required)
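The list-then-sync loop is easy to get wrong (syncing everything, or skipping the confirmation). A sketch of the intended flow, where list_drive and sync_drive stand in for the source_list_drive and source_sync_drive tools; the record shape ({"id": ..., "stale": ...}) is an assumption of this sketch, and confirm() models the confirmation the server requires:

```python
def sync_stale_sources(list_drive, sync_drive, confirm):
    """Sync only the stale Drive sources, and only after an explicit confirmation."""
    stale = [s for s in list_drive() if s.get("stale")]
    if not stale:
        return []  # nothing to do; avoid a pointless confirmation prompt
    if not confirm(f"Sync {len(stale)} stale Drive source(s)?"):
        return []  # user declined; touch nothing
    return [sync_drive(s["id"]) for s in stale]
```

Wiring this to a scheduler gives you the "living sources" setup mentioned later: docs stay editable in Drive, and the notebook catches up on a cadence you choose.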
Troubleshooting cheatsheet
"Tools aren't showing up / context feels cramped"
Disable the MCP server when you're not using NotebookLM (@notebooklm-mcp toggle or /mcp in Claude Code).
"Auth expired / 401 / things randomly fail after a while"
- Re-run notebooklm-mcp-auth, or use the refresh_auth tool, depending on your client workflow.
- Newer versions added multi-layer auth recovery (CSRF refresh → disk reload → optional headless Chrome auth).
"Adding a big Google Slides deck times out"
A recent fix extends the Drive source timeout (30s → 120s) and returns status: timeout, hinting that the operation may still have succeeded server-side.
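That "may still have succeeded" wording matters: a blind retry after a timeout can add the same deck twice. A verify-before-retry sketch, where add_source and list_sources stand in for notebook_add_drive and a source-listing call; the {"status": ...} response shape mirrors the status: timeout behavior described above but is otherwise an assumption:

```python
def add_drive_source_safely(add_source, list_sources, doc_id):
    """Add a Drive source; on a reported timeout, verify before retrying to avoid duplicates."""
    result = add_source(doc_id)
    if result.get("status") != "timeout":
        return result
    # The operation may have succeeded server-side despite the timeout:
    # check whether the doc already appears in the notebook's sources.
    if any(s.get("doc_id") == doc_id for s in list_sources()):
        return {"status": "ok", "note": "succeeded despite timeout"}
    return add_source(doc_id)  # genuinely missing; retry once
```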
Under the hood
Based on the README + release notes + maintainer commentary:
- It reverse-engineers NotebookLM internal HTTP/RPC calls and runs fully on HTTP requests (not UI automation).
- It mapped "about 31 tools" and warns about context usage.
- Internally, the project has an api_client.py and uses constants/code mappings to validate tool inputs and keep output human-readable (see release notes about constants.py and the CodeMapper refactor).
Extending the server (adding a new tool)
If you want to add NotebookLM capabilities that aren't exposed yet, the maintainer lays out a simple loop:
- Use Chrome DevTools to perform the action and inspect network calls
- Reproduce the action using a Python test script
- Add the tool to the MCP server
- Run end-to-end tests
A practical pattern to follow
When you add tools, keep two rules from the existing design:
- Validate inputs via a shared mapping layer
- Require confirmations for destructive actions (delete/sync) and for studio generation operations.
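The second rule is easy to enforce uniformly with a decorator rather than per-tool if-checks. A sketch of the idea (the decorator and exception here are illustrative, not the project's actual mechanism):

```python
import functools

class ConfirmationRequired(Exception):
    """Raised when a gated tool is invoked without confirm=True."""

def destructive(func):
    """Gate a tool behind an explicit confirm flag before it runs."""
    @functools.wraps(func)
    def wrapper(*args, confirm=False, **kwargs):
        if not confirm:
            raise ConfirmationRequired(f"{func.__name__} requires confirm=True")
        return func(*args, **kwargs)
    return wrapper

@destructive
def source_delete(source_id):
    # Hypothetical tool body; the real implementation would call the API client.
    return f"deleted {source_id}"
```

Applying the same decorator to delete, sync, and studio-generation tools keeps the confirmation contract consistent and impossible to forget on new tools.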
Once all these are wired up, you can build workflows that feel like an agentic research IDE:
- Repo onboarding: paste a repo README + docs into NotebookLM, then have your coding agent query it while implementing changes.
- Living specs: Drive docs as sources + scheduled sync + "briefing" style NotebookLM outputs.
- Research pipelines: web research -> import sources -> create slide deck + audio overview for stakeholders.
You can see further details in the repository.
Happy building!