Track MCP

The world's largest repository of Model Context Protocol servers. Discover, explore, and submit MCP tools.



    Basic Memory

    AI conversations that actually remember. Never re-explain your project to Claude again. Local-first, integrates with Obsidian.

    1,950 stars
    Python
    Updated Oct 19, 2025
    ai
    claude
    knowledge-management
    knowlege-graph
    llm
    local-first
    markdown
    mcp
    obsidian
    obsidian-md
    open-source
    privacy-first
    privacy-first-ai
    productivity
    python

    Table of Contents

    • 🚀 Basic Memory Cloud is Live!
    • What's New in v0.19.0
    • Pick up your conversation right where you left off
    • Quick Start
    • Automatic Updates
    • Why Basic Memory?
    • How It Works in Practice
    • Observations
    • Relations
    • Technical Implementation
    • Frontmatter
    • Observations
    • Relations
    • Using with VS Code
    • Using with Claude Desktop
    • Further info
    • Telemetry
    • Logging
    • Environment Variables
    • Examples
    • Development
    • Running Tests
    • License
    • Star History

    Badges: Documentation · License: AGPL v3 · PyPI version · Python 3.12+ · Tests · Ruff · MCP Server · MCP Dev

    🚀 Basic Memory Cloud is Live!

    • Cross-device and multi-platform support is here. Your knowledge graph now works on desktop, web, and mobile.
    • Cloud is optional. The local-first open-source workflow continues as always.
    • OSS discount: use code BMFOSS for 20% off for 3 months.

    Sign up now → with a 7-day free trial

    Basic Memory

    Basic Memory lets you build persistent knowledge through natural conversations with Large Language Models (LLMs) like Claude, while keeping everything in simple Markdown files on your computer. It uses the Model Context Protocol (MCP) to enable any compatible LLM to read and write to your local knowledge base.

    What's New in v0.19.0

    • Semantic Vector Search — find notes by meaning, not just keywords. Combines full-text and vector similarity for hybrid search with FastEmbed embeddings.
    • Schema System — infer, validate, and diff the structure of your knowledge base with schema_infer, schema_validate, and schema_diff tools.
    • Per-Project Cloud Routing — route individual projects through the cloud while others stay local, using API key authentication (basic-memory project set-cloud).
    • FastMCP 3.0 — upgraded to FastMCP 3.0 with tool annotations for better client integration.
    • CLI Overhaul — JSON output mode (--json) for scripting, workspace-aware commands, and an htop-inspired project dashboard.
    • Smarter Editing — edit_note append/prepend auto-creates notes if they don't exist; write_note has an overwrite guard to prevent accidental data loss.
    • Richer Search Results — matched chunk text returned in search results for better context.

    See the full CHANGELOG for details.

    • Website: basicmemory.com
    • Documentation: docs.basicmemory.com
    • Community: Discord

    Pick up your conversation right where you left off

    • AI assistants can load context from local files in a new conversation
    • Notes are saved locally as Markdown files in real time
    • No project knowledge or special prompting required

    https://github.com/user-attachments/assets/a55d8238-8dd0-454a-be4c-8860dbbd0ddc

    Quick Start

    bash
    # Install with uv (recommended)
    uv tool install basic-memory
    
    # Configure Claude Desktop (edit ~/Library/Application Support/Claude/claude_desktop_config.json)
    # Add this to your config:
    {
      "mcpServers": {
        "basic-memory": {
          "command": "uvx",
          "args": [
            "basic-memory",
            "mcp"
          ]
        }
      }
    }
    # Now in Claude Desktop, you can:
    # - Write notes with "Create a note about coffee brewing methods"
    # - Read notes with "What do I know about pour over coffee?"
    # - Search with "Find information about Ethiopian beans"

    You can view shared context via files in ~/basic-memory (default directory location).

    Automatic Updates

    Basic Memory includes a default-on auto-update flow for CLI installs.

    • Auto-install supported: uv tool and Homebrew installs
    • Default check interval: every 24 hours (86400 seconds)
    • MCP-safe behavior: update checks run silently in basic-memory mcp mode
    • uvx behavior: skipped (runtime is ephemeral and managed by uvx)

    Manual update commands:

    bash
    # Check now and install if supported
    bm update
    
    # Check only, do not install
    bm update --check

    Config options in ~/.basic-memory/config.json:

    json
    {
      "auto_update": true,
      "update_check_interval": 86400
    }

    To disable automatic updates, set "auto_update": false.

    Why Basic Memory?

    Most LLM interactions are ephemeral: you ask a question, get an answer, and everything is forgotten. Each conversation starts fresh, without the context or knowledge from previous ones. Current workarounds have limitations:

    • Chat histories capture conversations but aren't structured knowledge
    • RAG systems can query documents but don't let LLMs write back
    • Vector databases require complex setups and often live in the cloud
    • Knowledge graphs typically need specialized tools to maintain

    Basic Memory addresses these problems with a simple approach: structured Markdown files that both humans and LLMs can read and write to. The key advantages:

    • Local-first: All knowledge stays in files you control
    • Bi-directional: Both you and the LLM read and write to the same files
    • Structured yet simple: Uses familiar Markdown with semantic patterns
    • Traversable knowledge graph: LLMs can follow links between topics
    • Standard formats: Works with existing editors like Obsidian
    • Lightweight infrastructure: Just local files indexed in a local SQLite database

    With Basic Memory, you can:

    • Have conversations that build on previous knowledge and remember what you've discussed before
    • Create structured notes during natural conversations
    • Navigate your knowledge graph semantically
    • Keep everything local and under your control
    • Use familiar tools like Obsidian to view and edit notes
    • Build a personal knowledge base that grows over time
    • Sync your knowledge to the cloud with bidirectional synchronization
    • Authenticate and manage cloud projects with subscription validation
    • Mount cloud storage for direct file access

    How It Works in Practice

    Let's say you're exploring coffee brewing methods and want to capture your knowledge. Here's how it works:

    1. Start by chatting normally:

    code
    I've been experimenting with different coffee brewing methods. Key things I've learned:
    
    - Pour over gives more clarity in flavor than French press
    - Water temperature is critical - around 205°F seems best
    - Freshly ground beans make a huge difference

    ... continue conversation.

    2. Ask the LLM to help structure this knowledge:

    code
    "Let's write a note about coffee brewing methods."

    LLM creates a new Markdown file on your system (which you can see instantly in Obsidian or your editor):

    markdown
    ---
    title: Coffee Brewing Methods
    permalink: coffee-brewing-methods
    tags:
    - coffee
    - brewing
    ---
    
    # Coffee Brewing Methods
    
    ## Observations
    
    - [method] Pour over provides more clarity and highlights subtle flavors
    - [technique] Water temperature at 205°F (96°C) extracts optimal compounds
    - [principle] Freshly ground beans preserve aromatics and flavor
    
    ## Relations
    
    - relates_to [[Coffee Bean Origins]]
    - requires [[Proper Grinding Technique]]
    - affects [[Flavor Extraction]]

    The note embeds semantic content and links to other topics via simple Markdown formatting.

    3. You see this file on your computer in real time in the current project directory (default: ~/basic-memory).

    • Real-time sync can be enabled by running basic-memory sync --watch

    4. In a chat with the LLM, you can reference a topic:

    code
    Look at `coffee-brewing-methods` for context about pour over coffee

    The LLM can now build rich context from the knowledge graph. For example:

    code
    Following relation 'relates_to [[Coffee Bean Origins]]':
    - Found information about Ethiopian Yirgacheffe
    - Notes on Colombian beans' nutty profile
    - Altitude effects on bean characteristics
    
    Following relation 'requires [[Proper Grinding Technique]]':
    - Burr vs. blade grinder comparisons
    - Grind size recommendations for different methods
    - Impact of consistent particle size on extraction

    Each related document can lead to more context, building a rich semantic understanding of your knowledge base.
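The relation-following shown above amounts to a small graph walk over [[wikilink]] targets. A minimal sketch with hypothetical helper names (not Basic Memory's actual implementation, which works through MCP tools and a SQLite index):

```python
import re

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def related_topics(note_text: str) -> list[str]:
    """Extract [[wikilink]] targets from a note body, in order of appearance."""
    return WIKILINK.findall(note_text)

def build_context(notes: dict[str, str], start: str, depth: int = 2) -> list[str]:
    """Breadth-first walk over relations, collecting reachable topic names."""
    seen, frontier = {start}, [start]
    for _ in range(depth):
        nxt = []
        for topic in frontier:
            for target in related_topics(notes.get(topic, "")):
                if target not in seen:
                    seen.add(target)
                    nxt.append(target)
        frontier = nxt
    return sorted(seen)

# Toy in-memory "knowledge base" standing in for Markdown files on disk
notes = {
    "Coffee Brewing Methods": "- relates_to [[Coffee Bean Origins]]\n"
                              "- requires [[Proper Grinding Technique]]",
    "Coffee Bean Origins": "- relates_to [[Flavor Extraction]]",
}
```

Starting from "Coffee Brewing Methods", two hops reach the bean-origin and grinding notes plus anything they link onward, which is exactly the context expansion described above.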

    This creates a two-way flow where:

    • Humans write and edit Markdown files
    • LLMs read and write through the MCP protocol
    • Sync keeps everything consistent
    • All knowledge stays in local files.

    Technical Implementation

    Under the hood, Basic Memory:

    1. Stores everything in Markdown files

    2. Uses a SQLite database for searching and indexing

    3. Extracts semantic meaning from simple Markdown patterns

    • Files become Entity objects
    • Each Entity can have Observations, or facts associated with it
    • Relations connect entities together to form the knowledge graph

    4. Maintains the local knowledge graph derived from the files

    5. Provides bidirectional synchronization between files and the knowledge graph

    6. Implements the Model Context Protocol (MCP) for AI integration

    7. Exposes tools that let AI assistants traverse and manipulate the knowledge graph

    8. Uses memory:// URLs to reference entities across tools and conversations
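As a rough illustration of step 3, the `- [category] fact` and `- relation_type [[Target]]` patterns can be pulled out with a couple of regexes (a sketch with illustrative names, not Basic Memory's real parser):

```python
import re

OBSERVATION = re.compile(r"^- \[(\w+)\] (.+)$")       # - [category] fact
RELATION = re.compile(r"^- (\w+) \[\[([^\]]+)\]\]$")  # - relation_type [[Target]]

def parse_entity(markdown: str) -> dict:
    """Turn a note body into an Entity dict of observations and relations."""
    entity = {"observations": [], "relations": []}
    for line in markdown.splitlines():
        line = line.strip()
        if m := OBSERVATION.match(line):
            entity["observations"].append({"category": m.group(1), "fact": m.group(2)})
        elif m := RELATION.match(line):
            entity["relations"].append({"type": m.group(1), "target": m.group(2)})
    return entity

# Two lines lifted from the coffee-brewing example above
note = ("- [method] Pour over provides more clarity\n"
        "- relates_to [[Coffee Bean Origins]]")
```

Everything the regexes extract here is exactly what gets indexed into SQLite: observations become facts attached to the entity, and relations become edges of the knowledge graph.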

    The file format is just Markdown with some simple markup:

    Each Markdown file has:

    Frontmatter

    markdown
    ---
    title: 
    type: (e.g. note)
    permalink: 
    ---
    
    Built with ♥️ by [Basic Machines](https://basicmachines.co?utm_source=github&utm_medium=referral&utm_campaign=readme)

    Similar MCP

    Based on tags & features

    • Fal Mcp Server (Python · 8)
    • Biomcp (Python · 327)
    • Anyquery (Go · 1.4k)
    • Aws Mcp Server (Python · 165)

    Trending MCP

    Most active this week

    • Playwright Mcp (TypeScript · 22.1k)
    • Serena (Python · 14.5k)
    • Mcp Playwright (TypeScript · 4.9k)
    • Mcp Server Cloudflare (TypeScript · 3.0k)

    View All MCP Servers
