Track MCP

The world's largest repository of Model Context Protocol servers. Discover, explore, and submit MCP tools.


© 2026 TrackMCP. All rights reserved.

Built with ❤️ by Krishna Goyal

    Docs Mcp Server

    Enhance your AI coding assistant with up-to-date documentation via the Model Context Protocol. TypeScript-based implementation.

    698 stars
    TypeScript
    Updated Oct 19, 2025

    Table of Contents

    • ✨ Why Grounded Docs MCP Server?
    • 📄 Supported Formats
    • 🚀 Quick Start
    • CLI First
    • Output Behavior
    • Agent Skills
    • MCP Server
    • 🧠 Configure Embedding Model (Recommended)
    • 📚 Documentation
    • Getting Started
    • Key Concepts & Architecture
    • 🤝 Contributing
    • License


    Documentation

    Grounded Docs: Your AI's Up-to-Date Documentation Expert

    Docs MCP Server solves the problem of AI hallucinations and outdated knowledge by providing a personal, always-current documentation index for your AI coding assistant. It fetches official docs from websites, GitHub, npm, PyPI, and local files, allowing your AI to query the exact version you are using.

    Docs MCP Server Web Interface

    ✨ Why Grounded Docs MCP Server?

    The open-source alternative to Context7, Nia, and Ref.Tools.

    • ✅ Up-to-Date Context: Fetches documentation directly from official sources on demand.
    • 🎯 Version-Specific: Queries target the exact library versions in your project.
    • 💡 Reduces Hallucinations: Grounds LLMs in real documentation.
    • 🔒 Private & Local: Runs entirely on your machine; your code never leaves your network.
    • 🧩 Broad Compatibility: Works with any MCP-compatible client (Claude, Cline, etc.).
    • 📁 Multiple Sources: Index websites, GitHub repositories, local folders, and zip archives.
    • 📄 Rich File Support: Processes HTML, Markdown, PDF, Office documents (Word, Excel, PowerPoint), OpenDocument, RTF, EPUB, Jupyter Notebooks, and 90+ source code languages.

    ---

    📄 Supported Formats

    | Category | Formats |
    | --- | --- |
    | Documents | PDF, Word (.docx/.doc), Excel (.xlsx/.xls), PowerPoint (.pptx/.ppt), OpenDocument (.odt/.ods/.odp), RTF, EPUB, FictionBook, Jupyter Notebooks |
    | Archives | ZIP, TAR, gzipped TAR (contents are extracted and processed individually) |
    | Web | HTML, XHTML |
    | Markup | Markdown, MDX, reStructuredText, AsciiDoc, Org Mode, Textile, R Markdown |
    | Source Code | TypeScript, JavaScript, Python, Go, Rust, C/C++, Java, Kotlin, Ruby, PHP, Swift, C#, and many more |
    | Data | JSON, YAML, TOML, CSV, XML, SQL, GraphQL, Protocol Buffers |
    | Config | Dockerfile, Makefile, Terraform/HCL, INI, dotenv, Bazel |

    See **Supported Formats** for the complete reference including MIME types and processing details.

    ---

    🚀 Quick Start

    CLI First

    For agents and scripts, the CLI is usually the simplest way to use Grounded Docs.

    1. Index documentation (requires Node.js 22+):

    ```bash
    npx @arabold/docs-mcp-server@latest scrape react https://react.dev/reference/react
    ```

    2. Query the index:

    ```bash
    npx @arabold/docs-mcp-server@latest search react "useEffect cleanup" --output yaml
    ```

    3. Fetch a single page as Markdown:

    ```bash
    npx @arabold/docs-mcp-server@latest fetch-url https://react.dev/reference/react/useEffect
    ```

    Output Behavior

    • Structured commands default to clean JSON on stdout in non-interactive runs.
    • Use --output json|yaml|toon to pick a structured format.
    • Plain-text commands such as fetch-url keep their text payload on stdout.
    • Diagnostics go through the shared logger and are kept off stdout in non-interactive runs.
    • Use --quiet to suppress non-error diagnostics or --verbose to enable debug output.
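    Because structured commands emit clean JSON on stdout, the CLI composes well with scripts. The sketch below parses a *hypothetical* search payload; the field names (`results`, `url`, `score`) are illustrative assumptions, so run `search ... --output json` yourself to see the real shape.

    ```python
    # Sketch: consuming structured CLI output from a script.
    # The JSON below is a made-up example payload, not the server's actual schema.
    import json

    raw = '{"results": [{"url": "https://react.dev/reference/react/useEffect", "score": 0.92}]}'
    data = json.loads(raw)
    for hit in data["results"]:
        print(hit["score"], hit["url"])
    ```

    In a real pipeline you would capture the command's stdout (e.g. via `subprocess.run(..., capture_output=True)`) instead of a literal string; keeping diagnostics off stdout is what makes this parse safely.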

    Agent Skills

    The [skills/](skills/) directory contains Agent Skills that teach AI coding assistants how to use the CLI — covering documentation search, index management, and URL fetching.

    MCP Server

    If you want a long-running MCP endpoint for Claude, Cline, Copilot, Gemini CLI, or other MCP clients:

    1. Start the server:

    ```bash
    npx @arabold/docs-mcp-server@latest
    ```

    2. Open the Web UI at **http://localhost:6280** to add documentation.

    3. Connect your AI client by adding this to your MCP settings (e.g., claude_desktop_config.json):

    ```json
    {
      "mcpServers": {
        "docs-mcp-server": {
          "type": "sse",
          "url": "http://localhost:6280/sse"
        }
      }
    }
    ```

    See **Connecting Clients** for VS Code (Cline, Roo) and other setup options.

    Alternative: Run with Docker

    ```bash
    docker run --rm \
      -v docs-mcp-data:/data \
      -v docs-mcp-config:/config \
      -p 6280:6280 \
      ghcr.io/arabold/docs-mcp-server:latest \
      --protocol http --host 0.0.0.0 --port 6280
    ```
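    If you prefer a declarative setup, the `docker run` command above maps onto a Compose file roughly like this (a sketch; the service name is arbitrary, while the image, volumes, ports, and flags come from the command above):

    ```yaml
    # docker-compose.yml sketch equivalent to the docker run command above
    services:
      docs-mcp-server:
        image: ghcr.io/arabold/docs-mcp-server:latest
        command: --protocol http --host 0.0.0.0 --port 6280
        ports:
          - "6280:6280"
        volumes:
          - docs-mcp-data:/data
          - docs-mcp-config:/config

    volumes:
      docs-mcp-data:
      docs-mcp-config:
    ```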

    🧠 Configure Embedding Model (Recommended)

    Using an embedding model is optional but dramatically improves search quality by enabling semantic vector search.
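    To see why embeddings help, here is a toy illustration (not the server's code): semantic search embeds both query and documents as vectors and ranks by cosine similarity, so "how do effects clean up?" can match a `useEffect` page even without shared keywords. The 3-dimensional vectors are invented for the example; real models emit hundreds of dimensions.

    ```python
    # Toy sketch of semantic vector search via cosine similarity.
    import math

    def cosine(a, b):
        # Cosine similarity: dot product divided by the vector norms.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # Hypothetical embeddings for two indexed documentation chunks.
    docs = {
        "useEffect cleanup runs before unmount": [0.9, 0.2, 0.1],
        "CSS grid layout basics": [0.1, 0.8, 0.3],
    }
    query = [0.85, 0.25, 0.15]  # pretend embedding of "how do effects clean up?"

    best = max(docs, key=lambda d: cosine(docs[d], query))
    print(best)  # the effect-cleanup chunk scores highest
    ```

    Without embeddings the server falls back to keyword matching, which would miss this paraphrased query; that is the quality gap the recommendation above is about.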

    Example: Enable OpenAI Embeddings

    ```bash
    OPENAI_API_KEY="sk-proj-..." npx @arabold/docs-mcp-server@latest
    ```

    See **Embedding Models** for configuring Ollama, Gemini, Azure, and others.

    ---

    📚 Documentation

    Getting Started

    • **Installation**: Detailed setup guides for Docker, Node.js (npx), and Embedded mode.
    • **Connecting Clients**: How to connect Claude, VS Code (Cline/Roo), and other MCP clients.
    • **Basic Usage**: Using the Web UI, CLI, and scraping local files.
    • **Configuration**: Full reference for config files and environment variables.
    • **Supported Formats**: Complete file format and MIME type reference.
    • **Embedding Models**: Configure OpenAI, Ollama, Gemini, and other providers.

    Key Concepts & Architecture

    • **Deployment Modes**: Standalone vs. Distributed (Docker Compose).
    • **Authentication**: Securing your server with OAuth2/OIDC.
    • **Telemetry**: Privacy-first usage data collection.
    • **Architecture**: Deep dive into the system design.

    ---

    🤝 Contributing

    We welcome contributions! Please see **CONTRIBUTING.md** for development guidelines and setup instructions.

    License

    This project is licensed under the MIT License. See LICENSE for details.

    Similar MCP

    Based on tags & features

    • Mcp Server Kubernetes (TypeScript · 1.1k)
    • Mcp Wave (TypeScript · 0)
    • Glm Mcp Server (TypeScript · 3)
    • Openai Gpt Image Mcp (TypeScript · 75)

    Trending MCP

    Most active this week

    • Playwright Mcp (TypeScript · 22.1k)
    • Serena (Python · 14.5k)
    • Mcp Playwright (TypeScript · 4.9k)
    • Mcp Server Cloudflare (TypeScript · 3.0k)

    View All MCP Servers
