
    Google Researcher MCP

    Power your AI agents with Google Search–enhanced research via an open-source MCP server. Includes tools for Google Search, YouTube/web scraping, LLM-driven synthesis, persistent caching, and dual transport (STDIO + HTTP SSE) for efficient, flexible integration.

    6 stars
    TypeScript
    Updated Sep 25, 2025

    Table of Contents

    • Features
    • Use Cases
    • Comparison with Alternatives
    • Quick Start
    • Claude Desktop (macOS)
    • Claude Desktop (Windows)
    • One-Click Install (MCPB)
    • Claude Code
    • Cline / Roo Code
    • Local Development
    • Verify It Works
    • For AI Assistants (LLMs)
    • Recommended Tool Selection
    • Example Tool Calls
    • Key Behaviors
    • Table of Contents
    • Available Tools
    • When to Use Each Tool
    • Tool Reference
    • search_and_scrape (Recommended for research)
    • google_search
    • google_image_search
    • google_news_search
    • scrape_page
    • sequential_search
    • academic_search
    • patent_search
    • Features
    • Core Capabilities
    • MCP Protocol Support
    • Production Ready
    • System Architecture
    • Getting Started
    • Prerequisites
    • Installation & Setup
    • Running the Server
    • Running with Docker
    • Usage
    • Choosing a Transport
    • Client Integration
    • STDIO Client (Local Process)
    • HTTP+SSE Client (Web Application)
    • Management API
    • Security
    • OAuth 2.1 Authorization
    • Available Scopes
    • MCP Resources
    • MCP Prompts
    • Basic Research Prompts
    • Advanced Research Prompts
    • Testing
    • Development Tools
    • MCP Inspector
    • Troubleshooting
    • Roadmap
    • Contributing
    • License


    Documentation

    Google Researcher MCP Server


    Professional research tools for AI assistants - Google Search, web scraping, academic papers, patents, and more

    Features

    | Tool | Description |
    | --- | --- |
    | `google_search` | Web search with site, date, and language filters |
    | `google_news_search` | News search with freshness controls |
    | `google_image_search` | Image search with type, size, color filters |
    | `scrape_page` | Extract content from web pages, PDFs, DOCX |
    | `search_and_scrape` | Combined search + content extraction |
    | `academic_search` | Papers from arXiv, PubMed, IEEE, Springer |
    | `patent_search` | Patent search with assignee/inventor filters |
    | YouTube | Automatic transcript extraction |
    | `sequential_search` | Multi-step research tracking |

    Use Cases

    • Research Assistants: Enable Claude to search the web and synthesize information
    • Content Creation: Gather sources, citations, and supporting evidence
    • Academic Research: Find papers, extract citations, track research progress
    • Competitive Intelligence: Patent searches, company research, market analysis
    • News Monitoring: Track breaking news, industry updates, specific sources
    • Technical Documentation: Extract content from docs, tutorials, and references

    Comparison with Alternatives

    | Feature | Google Researcher | Basic Web Search | Manual Research |
    | --- | --- | --- | --- |
    | Web Search | Yes | Yes | No |
    | News Search | Yes | No | No |
    | Image Search | Yes | No | No |
    | Academic Papers | Yes | No | Yes |
    | Patent Search | Yes | No | Yes |
    | YouTube Transcripts | Yes | No | No |
    | PDF Extraction | Yes | No | No |
    | Citation Generation | Yes | No | Yes |
    | Response Caching | Yes (30min) | No | N/A |
    | Rate Limiting | Yes | No | N/A |

    ---

    This is a Model Context Protocol (MCP) server that enables AI assistants like Claude, GPT, and other LLMs to:

    • Search the web via Google (general, images, news)
    • Read any webpage including JavaScript-rendered sites
    • Extract YouTube transcripts automatically
    • Parse documents (PDF, DOCX, PPTX)

    Built for production use with caching, quality scoring, and enterprise security.

    Quick Start

    Claude Desktop (macOS)

    Add to ~/Library/Application Support/Claude/claude_desktop_config.json:

    ```json
    {
      "mcpServers": {
        "google-researcher": {
          "command": "npx",
          "args": ["-y", "google-researcher-mcp"],
          "env": {
            "GOOGLE_CUSTOM_SEARCH_API_KEY": "YOUR_API_KEY_HERE",
            "GOOGLE_CUSTOM_SEARCH_ID": "YOUR_SEARCH_ID_HERE"
          }
        }
      }
    }
    ```

    Claude Desktop (Windows)

    Add to %APPDATA%\Claude\claude_desktop_config.json:

    ```json
    {
      "mcpServers": {
        "google-researcher": {
          "command": "npx",
          "args": ["-y", "google-researcher-mcp"],
          "env": {
            "GOOGLE_CUSTOM_SEARCH_API_KEY": "YOUR_API_KEY_HERE",
            "GOOGLE_CUSTOM_SEARCH_ID": "YOUR_SEARCH_ID_HERE"
          }
        }
      }
    }
    ```

    One-Click Install (MCPB)

    Download the latest .mcpb bundle from GitHub Releases and double-click to install in Claude Desktop. You'll be prompted to enter your Google API credentials.

    Claude Code

    Add to ~/.claude.json:

    ```json
    {
      "mcpServers": {
        "google-researcher": {
          "command": "npx",
          "args": ["-y", "google-researcher-mcp"],
          "env": {
            "GOOGLE_CUSTOM_SEARCH_API_KEY": "YOUR_API_KEY_HERE",
            "GOOGLE_CUSTOM_SEARCH_ID": "YOUR_SEARCH_ID_HERE"
          }
        }
      }
    }
    ```

    Cline / Roo Code

    Use the same JSON configuration above in your MCP settings.

    Need API keys? See the API Setup Guide for step-by-step instructions to get your Google API credentials.

    Local Development

    ```bash
    git clone https://github.com/zoharbabin/google-researcher-mcp.git && cd google-researcher-mcp
    npm install && npx playwright install chromium
    cp .env.example .env   # Then add your Google API keys to .env
    npm run dev            # Server is now running on STDIO transport
    ```

    Note: This starts the server in STDIO mode, which is all you need for local AI assistant integrations. HTTP transport with OAuth is only required for web-based or multi-client setups — see Choosing a Transport.

    Verify It Works

    Once configured, ask your AI assistant:

    "Search for the latest news about AI regulations"

    The assistant will use the google_news_search tool and return current articles. If you see search results, the server is working correctly.

    ---

    For AI Assistants (LLMs)

    If you are an AI assistant reading this, here's how to use this MCP server:

    Recommended Tool Selection

    | Your Task | Use This Tool |
    | --- | --- |
    | Research a topic, answer a question | `search_and_scrape` — searches AND retrieves content in one call (recommended) |
    | Complex multi-step investigation | `sequential_search` — tracks progress across 3+ searches, supports branching |
    | Find academic papers | `academic_search` — searches arXiv, PubMed, IEEE with citations (APA, MLA, BibTeX) |
    | Search patents | `patent_search` — Google Patents for prior art, FTO analysis |
    | Find recent news | `google_news_search` — with freshness filtering and date sorting |
    | Find images | `google_image_search` — with size/type/color filtering |
    | Get a list of URLs only | `google_search` — when you need URLs but will process pages yourself |
    | Read a specific URL | `scrape_page` — also extracts YouTube transcripts and parses PDF/DOCX/PPTX |

    Example Tool Calls

    ```json
    // Research a topic (RECOMMENDED for most queries)
    { "name": "search_and_scrape", "arguments": { "query": "climate change effects 2024", "num_results": 5 } }

    // Multi-step research with tracking (for complex investigations)
    { "name": "sequential_search", "arguments": { "searchStep": "Starting research on quantum computing", "stepNumber": 1, "totalStepsEstimate": 4, "nextStepNeeded": true } }

    // Find academic papers (peer-reviewed sources with citations)
    { "name": "academic_search", "arguments": { "query": "transformer neural networks", "num_results": 5 } }

    // Search patents (prior art, FTO analysis)
    { "name": "patent_search", "arguments": { "query": "machine learning optimization", "search_type": "prior_art" } }

    // Get recent news
    { "name": "google_news_search", "arguments": { "query": "AI regulations", "freshness": "week" } }

    // Find images
    { "name": "google_image_search", "arguments": { "query": "solar panel installation", "type": "photo" } }

    // Read a specific page
    { "name": "scrape_page", "arguments": { "url": "https://example.com/article" } }

    // Get YouTube transcript
    { "name": "scrape_page", "arguments": { "url": "https://www.youtube.com/watch?v=VIDEO_ID" } }
    ```

    Key Behaviors

    • Caching: Results are cached (30 min for search, 1 hour for scrape). Repeated queries are fast.
    • Quality Scoring: search_and_scrape ranks sources by relevance, freshness, authority, and content quality.
    • Graceful Failures: If some sources fail, you still get results from successful ones.
    • Document Support: scrape_page auto-detects PDFs, DOCX, PPTX and extracts text.
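The caching behavior above can be sketched as a minimal per-namespace TTL cache. This is illustrative only: the real server uses a two-layer memory + disk cache, and the injectable `now` clock here is a testing convenience, not part of the server's API.

```javascript
// Minimal TTL cache sketch mirroring the documented windows:
// 30 min for search results, 1 hour for scraped pages.
const TTL_MS = { search: 30 * 60 * 1000, scrape: 60 * 60 * 1000 };

function createTtlCache(now = () => Date.now()) {
  const store = new Map();
  return {
    set(namespace, key, value) {
      store.set(`${namespace}:${key}`, { value, expiresAt: now() + TTL_MS[namespace] });
    },
    get(namespace, key) {
      const entry = store.get(`${namespace}:${key}`);
      if (!entry) return undefined;
      if (now() > entry.expiresAt) {
        store.delete(`${namespace}:${key}`); // stale: evict and report a miss
        return undefined;
      }
      return entry.value;
    },
  };
}
```

A repeated query within the window hits the cache and skips the Google API call, which is where the cost savings come from.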

    ---

    Table of Contents

    • For AI Assistants (LLMs)
    • Available Tools
    • Features
    • System Architecture
    • Getting Started
    • Prerequisites
    • Installation & Setup
    • Running the Server
    • Running with Docker
    • Usage
    • Choosing a Transport
    • Client Integration
    • Management API
    • Security
    • OAuth 2.1 Authorization
    • Available Scopes
    • MCP Resources
    • MCP Prompts
    • Testing
    • Development Tools
    • MCP Inspector
    • Troubleshooting
    • Roadmap
    • Contributing
    • License

    Available Tools

    When to Use Each Tool

    | Tool | Best For | Use When... |
    | --- | --- | --- |
    | **search_and_scrape** | Research (recommended) | You need to answer a question using web sources. Most efficient — searches AND retrieves content in one call. Sources are quality-scored. |
    | **sequential_search** | Complex investigations | 3+ searches needed with different angles, or research you might abandon early. Tracks progress, supports branching. You reason; it tracks state. |
    | **academic_search** | Peer-reviewed papers | Research requiring authoritative academic sources. Returns papers with citations (APA, MLA, BibTeX), abstracts, and PDF links. |
    | **patent_search** | Patent research | Prior art search, freedom to operate (FTO) analysis, patent landscaping. Returns patents with numbers, assignees, inventors, and PDF links. |
    | **google_search** | Finding URLs only | You only need a list of URLs (not their content), or want to process pages yourself with custom logic. |
    | **google_image_search** | Finding images | You need visual content — photos, illustrations, graphics. For text research, use search_and_scrape. |
    | **google_news_search** | Current news | You need recent news articles. Use scrape_page on results to read full articles. |
    | **scrape_page** | Reading a specific URL | You have a URL and need its content. Auto-handles YouTube transcripts and documents (PDF, DOCX, PPTX). |

    Tool Reference

    search_and_scrape (Recommended for research)

    Searches Google and retrieves content from top results in one call. Returns quality-scored, deduplicated text with source attribution. Includes size metadata (estimatedTokens, sizeCategory, truncated) in response.

    | Parameter | Type | Default | Description |
    | --- | --- | --- | --- |
    | `query` | string | required | Search query (1-500 chars) |
    | `num_results` | number | 3 | Number of results (1-10) |
    | `include_sources` | boolean | true | Append source URLs |
    | `deduplicate` | boolean | true | Remove duplicate content |
    | `max_length_per_source` | number | 50KB | Max content per source in chars |
    | `total_max_length` | number | 300KB | Max total combined content in chars |
    | `filter_by_query` | boolean | false | Filter to only paragraphs containing query keywords |
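The `filter_by_query` option can be pictured as a simple keyword filter over paragraphs. This is a sketch of the documented behavior, not the server's actual implementation; the whitespace tokenization and paragraph splitting here are assumptions.

```javascript
// Keep only paragraphs containing at least one query keyword
// (naive lowercase substring matching, for illustration).
function filterByQuery(text, query) {
  const keywords = query.toLowerCase().split(/\s+/).filter(Boolean);
  return text
    .split(/\n{2,}/) // split into paragraphs on blank lines
    .filter((paragraph) => {
      const lower = paragraph.toLowerCase();
      return keywords.some((keyword) => lower.includes(keyword));
    })
    .join("\n\n");
}
```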

    google_search

    Returns ranked URLs from Google. Use when you only need links, not content.

    | Parameter | Type | Default | Description |
    | --- | --- | --- | --- |
    | `query` | string | required | Search query (1-500 chars) |
    | `num_results` | number | 5 | Number of results (1-10) |
    | `time_range` | string | - | day, week, month, year |
    | `site_search` | string | - | Limit to domain |
    | `exact_terms` | string | - | Required phrase |
    | `exclude_terms` | string | - | Exclude words |

    google_image_search

    Searches Google Images with filtering options.

    | Parameter | Type | Default | Description |
    | --- | --- | --- | --- |
    | `query` | string | required | Search query (1-500 chars) |
    | `num_results` | number | 5 | Number of results (1-10) |
    | `size` | string | - | huge, large, medium, small |
    | `type` | string | - | clipart, face, lineart, photo, animated |
    | `color_type` | string | - | color, gray, mono, trans |
    | `file_type` | string | - | jpg, gif, png, bmp, svg, webp |

    google_news_search

    Searches Google News with freshness and date sorting.

    | Parameter | Type | Default | Description |
    | --- | --- | --- | --- |
    | `query` | string | required | Search query (1-500 chars) |
    | `num_results` | number | 5 | Number of results (1-10) |
    | `freshness` | string | week | hour, day, week, month, year |
    | `sort_by` | string | relevance | relevance, date |
    | `news_source` | string | - | Filter to specific source |

    scrape_page

    Extracts text from any URL. Auto-detects: web pages (static/JS), YouTube (transcript), documents (PDF/DOCX/PPTX).

    | Parameter | Type | Default | Description |
    | --- | --- | --- | --- |
    | `url` | string | required | URL to scrape (max 2048 chars) |
    | `max_length` | number | 50KB | Maximum content length in chars. Content exceeding this is truncated at natural breakpoints. |
    | `mode` | string | full | `full` returns content, `preview` returns metadata + structure only (useful to check size before fetching) |
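To make "truncated at natural breakpoints" concrete, here is one plausible sketch: cut at the last paragraph break, falling back to the last sentence boundary. The server's exact breakpoint rules are not specified in this README, so treat this as an assumption.

```javascript
// Truncate to maxLength, preferring a paragraph break, then a
// sentence end, over a mid-word cut (illustrative sketch only).
function truncateAtBreakpoint(text, maxLength) {
  if (text.length <= maxLength) return text;
  const slice = text.slice(0, maxLength);
  // lastIndexOf returns -1 when absent; "+ 1" keeps the period itself.
  const cut = Math.max(slice.lastIndexOf("\n\n"), slice.lastIndexOf(". ") + 1);
  return cut > 0 ? slice.slice(0, cut) : slice;
}
```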

    sequential_search

    Tracks multi-step research state. Following the sequential_thinking pattern: you do the reasoning, the tool tracks state.

    | Parameter | Type | Default | Description |
    | --- | --- | --- | --- |
    | `searchStep` | string | required | Description of current step (1-2000 chars) |
    | `stepNumber` | number | required | Current step number (starts at 1) |
    | `totalStepsEstimate` | number | 5 | Estimated total steps (1-50) |
    | `nextStepNeeded` | boolean | required | `true` if more steps needed, `false` when done |
    | `source` | object | - | Source found: `{ url, summary, qualityScore? }` |
    | `knowledgeGap` | string | - | Gap identified — what's still missing |
    | `isRevision` | boolean | - | `true` if revising a previous step |
    | `revisesStep` | number | - | Step number being revised |
    | `branchId` | string | - | Identifier for branching research |
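A hypothetical three-step research trail, expressed as the argument objects a client would pass on successive calls. The parameter names come from the table above; the topic, URL, and summaries are made up for illustration.

```javascript
// Three successive sequential_search argument objects (illustrative).
const steps = [
  {
    searchStep: "Survey quantum error correction basics",
    stepNumber: 1,
    totalStepsEstimate: 3,
    nextStepNeeded: true,
  },
  {
    searchStep: "Drill into surface codes",
    stepNumber: 2,
    totalStepsEstimate: 3,
    nextStepNeeded: true,
    // A source found during this step, plus a gap to chase next.
    source: { url: "https://example.com/surface-codes", summary: "Overview of surface codes" },
    knowledgeGap: "Real-hardware error rates still unclear",
  },
  {
    searchStep: "Summarize findings",
    stepNumber: 3,
    totalStepsEstimate: 3,
    nextStepNeeded: false, // done: signals the end of the trail
  },
];
```

The tool only tracks this state; the reasoning between steps stays with the caller.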

    academic_search

    Searches academic papers via Google Custom Search API, filtered to academic sources (arXiv, PubMed, IEEE, Nature, Springer, etc.). Returns papers with pre-formatted citations.

    | Parameter | Type | Default | Description |
    | --- | --- | --- | --- |
    | `query` | string | required | Search query (1-500 chars) |
    | `num_results` | number | 5 | Number of papers (1-10) |
    | `year_from` | number | - | Filter by min publication year |
    | `year_to` | number | - | Filter by max publication year |
    | `source` | string | all | all, arxiv, pubmed, ieee, nature, springer |
    | `pdf_only` | boolean | false | Only return results with PDF links |
    | `sort_by` | string | relevance | relevance, date |

    patent_search

    Searches Google Patents for prior art, freedom to operate (FTO) analysis, and patent landscaping. Returns patents with numbers, assignees, inventors, and PDF links.

    | Parameter | Type | Default | Description |
    | --- | --- | --- | --- |
    | `query` | string | required | Search query (1-500 chars) |
    | `num_results` | number | 5 | Number of results (1-10) |
    | `search_type` | string | prior_art | prior_art, specific, landscape |
    | `patent_office` | string | all | all, US, EP, WO, JP, CN, KR |
    | `assignee` | string | - | Filter by assignee/company |
    | `inventor` | string | - | Filter by inventor name |
    | `cpc_code` | string | - | Filter by CPC classification code |
    | `year_from` | number | - | Filter by min year |
    | `year_to` | number | - | Filter by max year |

    Features

    Core Capabilities

    | Feature | Description |
    | --- | --- |
    | Web Scraping | Fast static HTML + automatic Playwright fallback for JavaScript-rendered pages |
    | YouTube Transcripts | Robust extraction with retry logic and 10 classified error types |
    | Document Parsing | Auto-detects and extracts text from PDF, DOCX, PPTX |
    | Quality Scoring | Sources ranked by relevance (35%), freshness (20%), authority (25%), content quality (20%) |
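The quality-scoring weights above compose into a single score per source. A minimal sketch, assuming each component has already been scored into [0, 1] (how the server computes the individual components is not described here):

```javascript
// Composite quality score using the documented weights:
// relevance 35%, freshness 20%, authority 25%, content quality 20%.
const WEIGHTS = { relevance: 0.35, freshness: 0.2, authority: 0.25, quality: 0.2 };

function compositeScore(components) {
  // components: { relevance, freshness, authority, quality }, each in [0, 1];
  // missing components count as 0.
  return Object.entries(WEIGHTS).reduce(
    (sum, [name, weight]) => sum + weight * (components[name] ?? 0),
    0
  );
}
```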

    MCP Protocol Support

    | Feature | Description |
    | --- | --- |
    | Tools | 8 tools: search_and_scrape, google_search, google_image_search, google_news_search, scrape_page, sequential_search, academic_search, patent_search |
    | Resources | Expose server state: stats://tools (per-tool metrics), stats://cache, search://recent, config://server |
    | Prompts | Pre-built templates: comprehensive-research, fact-check, summarize-url, news-briefing |
    | Annotations | Content tagged with audience, priority, and timestamps |

    Production Ready

    | Feature | Description |
    | --- | --- |
    | Caching | Two-layer (memory + disk) with per-tool namespaces, reduces API costs |
    | Dual Transport | STDIO for local clients, HTTP+SSE for web apps |
    | Security | OAuth 2.1, SSRF protection, granular scopes |
    | Resilience | Circuit breaker, timeouts, graceful degradation |
    | Monitoring | Admin endpoints for cache stats, event store, health checks |

    For detailed documentation: YouTube Transcripts · Architecture · Testing

    System Architecture

    ```mermaid
    graph TD
        A[MCP Client] -->|local process| B[STDIO Transport]
        A -->|network| C[HTTP+SSE Transport]

        C --> L[OAuth 2.1 + Rate Limiter]
        L --> D
        C -.->|session replay| K[Event Store]
        B --> D[McpServer<br/>MCP SDK routing + dispatch]

        D --> F[google_search]
        D --> G[scrape_page]
        D --> I[search_and_scrape]
        D --> IMG[google_image_search]
        D --> NEWS[google_news_search]
        I -.->|delegates| F
        I -.->|delegates| G
        I --> Q[Quality Scoring]

        G --> N[SSRF Validator]
        N --> S1[CheerioCrawler<br/>static HTML]
        S1 -.->|insufficient content| S2[Playwright<br/>JS rendering]
        G --> YT[YouTube Transcript Extractor]

        F & G & IMG & NEWS --> J[Persistent Cache<br/>memory + disk]

        D -.-> R[MCP Resources]
        D -.-> P[MCP Prompts]

        style J fill:#f9f,stroke:#333,stroke-width:2px
        style K fill:#ccf,stroke:#333,stroke-width:2px
        style L fill:#f99,stroke:#333,stroke-width:2px
        style N fill:#ff9,stroke:#333,stroke-width:2px
        style Q fill:#9f9,stroke:#333,stroke-width:2px
    ```

    For a detailed explanation, see the Architecture Guide.

    Getting Started

    Prerequisites

    • Node.js 20.0.0 or higher
    • Google API Keys:
      • Custom Search API Key
      • Custom Search Engine ID
    • Chromium (for JavaScript rendering): Installed automatically via npx playwright install chromium
    • OAuth 2.1 Provider (HTTP transport only): An external authorization server (e.g., Auth0, Okta) to issue JWTs. Not needed for STDIO.

    Installation & Setup

    1. Clone the Repository:

    ```bash
    git clone https://github.com/zoharbabin/google-researcher-mcp.git
    cd google-researcher-mcp
    ```

    2. Install Dependencies:

    ```bash
    npm install
    npx playwright install chromium
    ```

    3. Configure Environment Variables:

    ```bash
    cp .env.example .env
    ```

    Open .env and add your Google API keys. All other variables are optional — see the comments in .env.example for detailed explanations.

    Running the Server

    • Development (auto-reload on file changes):

      ```bash
      npm run dev
      ```

    • Production:

      ```bash
      npm run build
      npm start
      ```

    Running with Docker

    ```bash
    # Build the image
    docker build -t google-researcher-mcp .

    # Run in STDIO mode (default, for MCP clients)
    docker run -i --rm --env-file .env google-researcher-mcp

    # Run with HTTP transport on port 3000
    # (MCP_TEST_MODE= overrides the Dockerfile default of "stdio" to enable HTTP)
    docker run -d --rm --env-file .env -e MCP_TEST_MODE= -p 3000:3000 google-researcher-mcp
    ```

    Docker Compose (quick HTTP transport setup):

    ```bash
    cp .env.example .env   # Fill in your API keys
    docker compose up --build
    curl http://localhost:3000/health
    ```

    Docker with Claude Code (~/.claude/claude_desktop_config.json):

    ```json
    {
      "mcpServers": {
        "google-researcher": {
          "command": "docker",
          "args": ["run", "-i", "--rm", "--env-file", "/path/to/.env", "google-researcher-mcp"]
        }
      }
    }
    ```

    Security note: Never bake secrets into the Docker image. Always pass them at runtime via --env-file or -e flags.

    Usage

    Choosing a Transport

    | | STDIO | HTTP+SSE |
    | --- | --- | --- |
    | Best for | Local MCP clients (Claude Code, Cline, Roo Code) | Web apps, multi-client setups, remote access |
    | Auth | None needed (process-level isolation) | OAuth 2.1 Bearer tokens required |
    | Setup | Zero config — just provide API keys | Requires OAuth provider (Auth0, Okta, etc.) |
    | Scaling | One server per client process | Single server, many concurrent clients |

    Recommendation: Use STDIO for local AI assistant integrations. Use HTTP+SSE only when you need a shared service or web application integration.

    Client Integration

    STDIO Client (Local Process)

    ```javascript
    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    const transport = new StdioClientTransport({
      command: "node",
      args: ["dist/server.js"]
    });
    const client = new Client({ name: "my-client", version: "1.0.0" });
    await client.connect(transport);

    // Search Google
    const searchResult = await client.callTool({
      name: "google_search",
      arguments: { query: "Model Context Protocol" }
    });
    console.log(searchResult.content[0].text);

    // Extract a YouTube transcript
    const transcript = await client.callTool({
      name: "scrape_page",
      arguments: { url: "https://www.youtube.com/watch?v=dQw4w9WgXcQ" }
    });
    console.log(transcript.content[0].text);
    ```

    HTTP+SSE Client (Web Application)

    Requires a valid OAuth 2.1 Bearer token from your configured authorization server.

    ```javascript
    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

    const transport = new StreamableHTTPClientTransport(
      new URL("http://localhost:3000/mcp"),
      {
        getAuthorization: async () => `Bearer YOUR_ACCESS_TOKEN`
      }
    );
    const client = new Client({ name: "my-client", version: "1.0.0" });
    await client.connect(transport);

    const result = await client.callTool({
      name: "search_and_scrape",
      arguments: { query: "Model Context Protocol", num_results: 3 }
    });
    console.log(result.content[0].text);
    ```

    Management API

    Administrative and monitoring endpoints (HTTP transport only):

    | Method | Endpoint | Description | Auth |
    | --- | --- | --- | --- |
    | GET | /health | Server health check (status, version, uptime) | Public |
    | GET | /version | Server version and runtime info | Public |
    | GET | /mcp/cache-stats | Cache performance statistics | mcp:admin:cache:read |
    | GET | /mcp/event-store-stats | Event store usage statistics | mcp:admin:event-store:read |
    | POST | /mcp/cache-invalidate | Clear specific cache entries | mcp:admin:cache:invalidate |
    | POST | /mcp/cache-persist | Force cache save to disk | mcp:admin:cache:persist |
    | GET | /mcp/oauth-config | Current OAuth configuration | mcp:admin:config:read |
    | GET | /mcp/oauth-scopes | OAuth scopes documentation | Public |
    | GET | /mcp/oauth-token-info | Token details | Authenticated |

    Security

    OAuth 2.1 Authorization

    All HTTP endpoints under /mcp/ (except public documentation) are protected by OAuth 2.1:

    • Token Validation: JWTs are validated against your authorization server's JWKS endpoint (${OAUTH_ISSUER_URL}/.well-known/jwks.json).
    • Scope Enforcement: Each tool and admin action requires a specific OAuth scope.

    Configure OAUTH_ISSUER_URL and OAUTH_AUDIENCE in .env. See .env.example for details.

    STDIO users: OAuth is not used for STDIO transport. You can skip all OAuth configuration.

    Available Scopes

    Tool Execution:

    • mcp:tool:google_search:execute
    • mcp:tool:google_image_search:execute
    • mcp:tool:google_news_search:execute
    • mcp:tool:scrape_page:execute
    • mcp:tool:search_and_scrape:execute

    Administration:

    • mcp:admin:cache:read
    • mcp:admin:cache:invalidate
    • mcp:admin:cache:persist
    • mcp:admin:event-store:read
    • mcp:admin:config:read

    MCP Resources

    The server exposes state via the MCP Resources protocol. Use resources/list to discover available resources and resources/read to retrieve them.

    | URI | Description |
    | --- | --- |
    | search://recent | Last 20 search queries with timestamps and result counts |
    | config://server | Server configuration (version, start time, transport mode) |
    | stats://cache | Cache statistics (hit rate, entry count, memory usage) |
    | stats://events | Event store statistics (event count, storage size) |

    Example (using MCP SDK):

    ```javascript
    const resources = await client.listResources();
    const recentSearches = await client.readResource({ uri: "search://recent" });
    ```

    MCP Prompts

    Pre-built research workflow templates are available via the MCP Prompts protocol. Use prompts/list to discover prompts and prompts/get to retrieve a prompt with arguments.

    Basic Research Prompts

    | Prompt | Arguments | Description |
    | --- | --- | --- |
    | comprehensive-research | topic, depth (quick/standard/deep) | Multi-source research on a topic |
    | fact-check | claim, sources (number) | Verify a claim against multiple sources |
    | summarize-url | url, format (brief/detailed/bullets) | Summarize content from a single URL |
    | news-briefing | topic, timeRange (day/week/month) | Get current news summary on a topic |

    Advanced Research Prompts

    | Prompt | Arguments | Description |
    | --- | --- | --- |
    | patent-portfolio-analysis | company, includeSubsidiaries | Analyze a company's patent holdings |
    | competitive-analysis | entities (comma-separated), aspects | Compare companies/products |
    | literature-review | topic, yearFrom, sources | Academic literature synthesis |
    | technical-deep-dive | technology, focusArea | In-depth technical investigation |

    **Focus areas for technical-deep-dive:** architecture, implementation, comparison, best-practices, troubleshooting

    Example (using MCP SDK):

    ```javascript
    const prompts = await client.listPrompts();

    // Basic research
    const research = await client.getPrompt({
      name: "comprehensive-research",
      arguments: { topic: "quantum computing", depth: "standard" }
    });

    // Advanced: Patent analysis
    const patents = await client.getPrompt({
      name: "patent-portfolio-analysis",
      arguments: { company: "Kaltura", includeSubsidiaries: true }
    });

    // Advanced: Competitive analysis
    const comparison = await client.getPrompt({
      name: "competitive-analysis",
      arguments: { entities: "React, Vue, Angular", aspects: "performance, learning curve, ecosystem" }
    });
    ```

    Testing

    | Script | Description |
    | --- | --- |
    | `npm test` | Run all unit/component tests (Jest) |
    | `npm run test:e2e` | Full end-to-end suite (STDIO + HTTP + YouTube) |
    | `npm run test:coverage` | Generate code coverage report |
    | `npm run test:e2e:stdio` | STDIO transport E2E only |
    | `npm run test:e2e:sse` | HTTP transport E2E only |
    | `npm run test:e2e:youtube` | YouTube transcript E2E only |

    All NPM scripts:

    | Script | Description |
    | --- | --- |
    | `npm start` | Run the built server (production) |
    | `npm run dev` | Start with live-reload (development) |
    | `npm run build` | Compile TypeScript to dist/ |
    | `npm run inspect` | Open MCP Inspector for interactive debugging |

    For testing philosophy and structure, see the Testing Guide.

    Development Tools

    MCP Inspector

    The MCP Inspector is a visual debugging tool for MCP servers. Use it to interactively test tools, browse resources, and verify prompts.

    Run the Inspector:

    ```bash
    npm run inspect
    ```

    This opens a browser interface at http://localhost:5173 connected to the server via STDIO.

    What to Expect:

    | Primitive | Count | Items |
    | --- | --- | --- |
    | Tools | 8 | google_search, google_image_search, google_news_search, scrape_page, search_and_scrape, sequential_search, academic_search, patent_search |
    | Resources | 6 | search://recent, config://server, stats://cache, stats://events, search://session/current, stats://resources |
    | Prompts | 8 | comprehensive-research, fact-check, summarize-url, news-briefing, patent-portfolio-analysis, competitive-analysis, literature-review, technical-deep-dive |

    Troubleshooting Inspector Issues:

    • "Cannot find module" error: Run npm run build first — Inspector requires compiled JavaScript.
    • Tool calls fail with API errors: Ensure GOOGLE_CUSTOM_SEARCH_API_KEY and GOOGLE_CUSTOM_SEARCH_ID are set in your .env file.
    • Port 5173 in use: The Inspector UI runs on port 5173. Stop other services using that port or check if another Inspector instance is running.
    • Server crashes on startup: Check that all dependencies are installed (npm install) and Playwright is set up (npx playwright install chromium).

    Troubleshooting

    • Server won't start: Ensure GOOGLE_CUSTOM_SEARCH_API_KEY and GOOGLE_CUSTOM_SEARCH_ID are set in .env. The server exits with a clear error if either is missing.
    • Empty scrape results: The persistent cache may contain stale entries. Delete storage/persistent_cache/namespaces/scrapePage/ and restart to force fresh scrapes.
    • Playwright/Chromium errors: Re-run npx playwright install chromium. On Linux, also run npx playwright install-deps chromium for system dependencies. In Docker, these are pre-installed.
    • Port 3000 in use: Stop the other process (lsof -ti:3000 | xargs kill) or set PORT=3001 npm start.
    • YouTube transcripts fail: Some videos have transcripts disabled by the owner. The error message includes the specific reason (e.g., TRANSCRIPT_DISABLED, VIDEO_UNAVAILABLE). See the YouTube Transcript Documentation for all error codes.
    • Cache issues: Use /mcp/cache-stats to inspect cache health, or /mcp/cache-persist to force a save. See the Management API.
    • OAuth errors: Verify OAUTH_ISSUER_URL and OAUTH_AUDIENCE in .env. Use /mcp/oauth-config to inspect current configuration.
    • Docker health check failing: The health check hits /health on port 3000, which requires HTTP transport. In STDIO mode (MCP_TEST_MODE=stdio), the health check will fail — this is expected.

    Roadmap

    Feature requests and improvements are tracked as GitHub Issues. Contributions welcome.

    Contributing

    We welcome contributions of all kinds! Please see the Contribution Guidelines for details.

    License

    This project is licensed under the MIT License. See the LICENSE file for details.

    Similar MCP

    Based on tags & features

    • Mcp Server Aws Sso (TypeScript · 6)
    • Mcp Ipfs (TypeScript · 11)
    • Liveblocks Mcp Server (TypeScript · 11)
    • Mcp Wave (TypeScript)

    Trending MCP

    Most active this week

    • Playwright Mcp (TypeScript · 22.1k)
    • Serena (Python · 14.5k)
    • Mcp Playwright (TypeScript · 4.9k)
    • Mcp Server Cloudflare (TypeScript · 3.0k)

    View All MCP Servers
