
    In Memoria: Persistent Intelligence Infrastructure for AI Agents

    86 stars · Rust · Updated Nov 1, 2025

    Tags: ai-agents, ai-tools, codebase-intelligence, developer-tools, local-first, mcp-server, persistent-memory
    ## Table of Contents

    - Quick Demo
    - The Problem: Session Amnesia
    - The Solution: Persistent Intelligence
    - What It Does
    - Example Workflow
    - Quick Start
    - Installation
    - Connect to Your AI Tool
    - Learn Your Codebase
    - How It Works
    - Architecture
    - The Core Components
    - What Makes It Different
    - What's New in v0.5.x
    - 🗺️ Project Blueprints (Phase 1)
    - 💼 Work Context System (Phase 2)
    - 🧭 Smart File Routing (Phase 3)
    - ⚡ Smooth Progress Tracking (v0.5.3)
    - MCP Tools for AI Assistants
    - 🎯 Core Analysis (2 tools)
    - 🧠 Intelligence (7 tools)
    - 🤖 Automation (1 tool)
    - 📊 Monitoring (3 tools)
    - GitHub Copilot Integration
    - Setup
    - GitHub Copilot Integration (VS Code)
    - Requirements
    - MCP Server Setup (Required)
    - Copilot Instructions (Automatic)
    - Custom Agents (formerly “Chat Modes”)
    - Agent File Location
    - Using Agents in VS Code
    - Example Usage
    - Common Pitfalls
    - Notes on VS Code Terminology
    - Language Support
    - Status: Work in Progress
    - What Works Well
    - Known Limitations
    - We Need Your Help
    - Technical Comparison
    - Team Usage
    - Build from Source
    - FAQ
    - Roadmap
    - Community & Support
    - License


    # In Memoria

    npm version · npm downloads · License: MIT · Discord

    _Giving AI coding assistants a memory that actually persists._

    ## Quick Demo

    asciicast

    _Watch In Memoria in action: learning a codebase, providing instant context, and routing features to files._

    ---

    ## The Problem: Session Amnesia

    You know the drill. You fire up Claude, Copilot, or Cursor to help with your codebase. You explain your architecture. You describe your patterns. You outline your conventions. The AI gets it, helps you out, and everything's great.

    Then you close the window.

    Next session? Complete amnesia. You're explaining the same architectural decisions again. The same naming conventions. The same "no, we don't use classes here, we use functional composition" for the fifteenth time.

    Every AI coding session starts from scratch.

    This isn't just annoying; it's inefficient. These tools re-analyze your codebase on every interaction, burning tokens and time. They give generic suggestions that don't match your style. They have no memory of what worked last time, what you rejected, or why.

    ## The Solution: Persistent Intelligence

    In Memoria is an MCP server that learns from your actual codebase and remembers across sessions. It builds persistent intelligence about your code (patterns, architecture, conventions, decisions) that AI assistants can query through the Model Context Protocol.

    Think of it as giving your AI pair programmer a notepad that doesn't get wiped clean every time you restart the session.

    Current version: 0.6.0 - See what's changed

    ## What It Does

    - **Learns your patterns** - Analyzes your code to understand naming conventions, architectural choices, and structural preferences
    - **Instant project context** - Provides tech stack, entry points, and architecture in

    ## Example Workflow

    ```
    error handling convention..."

    Next session (days later):

    You: "Where did we put the password reset code?"

    AI: *queries In Memoria*
        "In src/auth/password-reset.ts, following the pattern we
         established in our last session..."
    ```

    No re-explaining. No generic suggestions. Just continuous, context-aware assistance.
    
    ## Quick Start
    
    ### Installation

    ```bash
    # Install globally
    npm install -g in-memoria

    # Or use directly with npx
    npx in-memoria --help
    ```
    ### Connect to Your AI Tool
    
    **Claude Desktop** - Add to your config (`~/Library/Application Support/Claude/claude_desktop_config.json`):

    ```json
    {
      "mcpServers": {
        "in-memoria": {
          "command": "npx",
          "args": ["in-memoria", "server"]
        }
      }
    }
    ```
    **Claude Code CLI**:

    ```bash
    claude mcp add in-memoria -- npx in-memoria server
    ```
    **GitHub Copilot** - See [Copilot Integration](#github-copilot-integration) section below
    
    ### Learn Your Codebase

    ```bash
    # Analyze and learn from your project
    npx in-memoria learn ./my-project

    # Or let AI agents trigger learning automatically
    # (just start the server and let auto_learn_if_needed handle it)
    npx in-memoria server
    ```
    ## How It Works
    
    In Memoria is built on Rust + TypeScript, using the Model Context Protocol to connect AI tools to persistent codebase intelligence.
    
    ### Architecture

    ```
    ┌─────────────────────┐    MCP     ┌──────────────────────┐   napi-rs   ┌─────────────────────┐
    │  AI Tool (Claude)   │◄──────────►│  TypeScript Server   │◄───────────►│      Rust Core      │
    └─────────────────────┘            └──────────┬───────────┘             │ • AST Parser        │
                                                  │                         │ • Pattern Learner   │
                                                  │                         │ • Semantic Engine   │
                                                  ▼                         │ • Blueprint System  │
                                       ┌──────────────────────┐             └─────────────────────┘
                                       │ SQLite (persistent)  │
                                       │ SurrealDB (in-mem)   │
                                       └──────────────────────┘
    ```
    ### The Core Components
    
    **Rust Layer** - Fast, native processing:
    
    - Tree-sitter AST parsing for 12 languages (TypeScript, JavaScript, Python, PHP, Rust, Go, Java, C/C++, C#, Svelte, SQL)
    - Blueprint analyzer (detects project structure, entry points, architecture patterns)
    - Pattern learner (statistical analysis of your coding style)
    - Semantic engine (understands code relationships and concepts)
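
    The "statistical analysis of your coding style" idea can be pictured with a small sketch. This is an illustrative TypeScript toy, not the real Rust learner (`classify` and `dominantConvention` are hypothetical names): tally identifier naming conventions and report the dominant one with a confidence score.

    ```typescript
    // Toy pattern learner: classify identifiers by naming convention and
    // report the dominant style with a frequency-based confidence score.
    type Convention = "camelCase" | "snake_case" | "PascalCase" | "other";

    function classify(name: string): Convention {
      if (/^[a-z][a-zA-Z0-9]*$/.test(name) && /[A-Z]/.test(name)) return "camelCase";
      if (/^[a-z][a-z0-9_]*$/.test(name) && name.includes("_")) return "snake_case";
      if (/^[A-Z][a-zA-Z0-9]*$/.test(name)) return "PascalCase";
      return "other";
    }

    function dominantConvention(identifiers: string[]): { convention: Convention; confidence: number } {
      const counts = new Map<Convention, number>();
      for (const id of identifiers) {
        const c = classify(id);
        counts.set(c, (counts.get(c) ?? 0) + 1);
      }
      let best: Convention = "other";
      let bestCount = 0;
      for (const [c, n] of counts) {
        if (n > bestCount) {
          best = c;
          bestCount = n;
        }
      }
      return { convention: best, confidence: bestCount / identifiers.length };
    }
    ```

    Feed it the identifiers from a parsed file: a codebase that is 90% camelCase yields a high-confidence camelCase preference, which an assistant can then follow.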
    
    **TypeScript Layer** - MCP server and orchestration:
    
    - 13 specialized tools for AI assistants (organized into 4 categories)
    - SQLite for structured data, SurrealDB with SurrealKV for persistent vector embeddings
    - File watching for incremental updates
    - Smart routing that maps features to files
    
    **Storage** - Local-first:
    
    - Everything stays on your machine
    - SQLite for patterns and metadata
    - SurrealDB with SurrealKV backend for persistent vector embeddings
    - Local transformers.js for embeddings (Xenova/all-MiniLM-L6-v2)
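
    Semantic search over those embeddings ultimately reduces to comparing vectors, and cosine similarity is the standard measure. A minimal, self-contained sketch (illustrative only; In Memoria's actual vector search happens inside the storage layer):

    ```typescript
    // Cosine similarity between two embedding vectors:
    // ~1.0 = same direction (semantically similar), ~0.0 = orthogonal (unrelated).
    function cosineSimilarity(a: number[], b: number[]): number {
      if (a.length !== b.length) throw new Error("dimension mismatch");
      let dot = 0;
      let normA = 0;
      let normB = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
      }
      return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
    ```

    A query like "where is the auth logic?" is embedded with the same model and scored against stored concept vectors; the highest-scoring concepts are returned to the assistant.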
    
    ### What Makes It Different
    
    This isn't just another RAG system or static rules engine:
    
    - **Learns from actual code** - Not manually-defined rules, but statistical patterns from your real codebase
    - **Predicts your approach** - Based on how you've solved similar problems before
    - **Token efficient** - Responses optimized to minimize LLM context usage

    **For AI agents**: See [`AGENT.md`](AGENT.md) for complete tool reference with usage patterns and decision trees.
    
    ## GitHub Copilot Integration
    
    In Memoria works with GitHub Copilot through custom instructions and chat modes.
    
    ### Setup
    
    This repository includes:
    
    - `.github/copilot-instructions.md` - Automatic guidance for Copilot
    - `.github/chatmodes/` - Three specialized chat modes:
      - 🔍 **inmemoria-explorer** - Intelligent codebase navigation
      - 🚀 **inmemoria-feature** - Feature implementation with patterns
      - 🔎 **inmemoria-review** - Code review with consistency checking
    
    ## GitHub Copilot Integration (VS Code)
    
    In Memoria integrates with **GitHub Copilot Chat** using **MCP + Custom Agents** (formerly called *Chat Modes*).
    These agents allow Copilot to query In Memoria’s persistent intelligence when working in **Agent mode**.
    
    > ⚠️ **Important**: Copilot will only call MCP tools when the chat is in **Agent** mode (not Ask or Edit).
    
    ### Requirements
    
    * **VS Code 1.106+**
    * **GitHub Copilot Chat** enabled
    * In Memoria MCP server configured and running
    
    ---
    
    ### MCP Server Setup (Required)
    
    Create or edit the following file in your workspace:
    
    **`.vscode/mcp.json`**

    ```json
    {
      "servers": {
        "in-memoria": {
          "command": "npx",
          "args": ["in-memoria", "server"]
        }
      }
    }
    ```
    Open this file in VS Code and click **Start** when prompted, or start it manually.
    
    ---
    
    ### Copilot Instructions (Automatic)
    
    This repository includes:
    
    * **`.github/copilot-instructions.md`**
    
    VS Code automatically loads this file and applies guidance to Copilot Chat.
    No additional setup is required.
    
    ---
    
    ### Custom Agents (formerly “Chat Modes”)
    
    This repository provides **three Custom Agents** for Copilot:
    
    | Agent                     | Purpose                                       |
    | ------------------------- | --------------------------------------------- |
    | 🔍 **inmemoria-explorer** | Intelligent codebase navigation               |
    | 🚀 **inmemoria-feature**  | Feature implementation using learned patterns |
    | 🔎 **inmemoria-review**   | Consistency & pattern-based code review       |
    
    #### Agent File Location
    
    > ⚠️ VS Code has renamed *Chat Modes* → **Custom Agents**
    
    To ensure compatibility with current VS Code versions:
    
    1. Create the folder:

       ```
       .github/agents/
       ```

    2. Move or copy files from `.github/chatmodes/` into `.github/agents/`.

    3. Rename each file:

       ```
       *.chatmode.md → *.agent.md
       ```

       Example:

       ```
       inmemoria-feature.chatmode.md → inmemoria-feature.agent.md
       ```
    ---
    
    ### Using Agents in VS Code
    
    1. Open **Copilot Chat**
    2. Select the custom agent (e.g., **inmemoria-feature**) directly from the dropdown.
    
    If agents do not appear:
    
    * Reload the window
    * Ensure files are in `.github/agents/`
    
    ---
    
    ### Example Usage

    ```
    Where is the authentication logic?
    ```

    → Copilot queries In Memoria’s semantic index

    ```
    Add password reset functionality
    ```

    → Copilot retrieves:

    * file routing
    * architectural patterns
    * prior auth decisions

    ```
    Review this code for consistency
    ```

    → Copilot compares against learned conventions
    
    ---
    
    ### Common Pitfalls
    
    * ❌ Using **Ask/Edit mode** → MCP tools are ignored
    * ❌ MCP server not running
    * ❌ Agent files left in `.github/chatmodes/` only
    * ❌ VS Code version < 1.106
    
    ---
    
    ### Notes on VS Code Terminology
    
    | Old Name                      | Current Name                    |
    | ----------------------------- | ------------------------------- |
    | Chat Modes                    | Custom Agents                   |
    | `Chat: Configure Chat Modes…` | `Chat: Configure Custom Agents` |
    | `.github/chatmodes/`          | `.github/agents/`               |
    
    VS Code still recognizes legacy files, but **`.github/agents/*.agent.md` is the recommended format going forward**.
    
    ## Language Support
    
    Native AST parsing via tree-sitter for:
    
    - TypeScript & JavaScript (including JSX/TSX)
    - Python
    - PHP
    - Rust
    - Go
    - Java
    - C & C++
    - C#
    - Svelte
    - SQL
    
    Build artifacts (`node_modules/`, `dist/`, `.next/`, etc.) are automatically filtered out.
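
    Conceptually, that filtering is a path check against a deny-list of directory segments. A hedged sketch (the directory names and the `isBuildArtifact` helper are illustrative; the real ignore logic in the core is more thorough):

    ```typescript
    // Sketch of build-artifact filtering: skip any path that contains a
    // deny-listed directory segment anywhere in it.
    const IGNORED_DIRS = new Set(["node_modules", "dist", ".next", "build", "target"]);

    function isBuildArtifact(filePath: string): boolean {
      return filePath.split(/[\\/]/).some((segment) => IGNORED_DIRS.has(segment));
    }
    ```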
    
    ## Status: Work in Progress
    
    **Let's be honest**: In Memoria is early-stage software. It works, but it's not perfect.
    
    ### What Works Well
    
    - ✅ Pattern learning from real codebases
    - ✅ Semantic search across concepts
    - ✅ Project blueprint generation
    - ✅ MCP integration with Claude Desktop/Code
    - ✅ Cross-platform support (Linux, macOS, Windows)
    - ✅ Token-efficient responses
    
    ### Known Limitations
    
    - ⚠️ Large codebases (100k+ files) can be slow on first analysis
    - ⚠️ Pattern accuracy improves with codebase consistency
    - ⚠️ Some languages have better tree-sitter support than others
    - ⚠️ Documentation could be more comprehensive
    
    ### We Need Your Help
    
    This is open-source infrastructure for AI-assisted development. Currently a solo project by [@pi22by7](https://github.com/pi22by7), but contributions are not just welcome, they're essential.
    
    **Before contributing code**, please:
    
    - Check the [GitHub Projects board](https://github.com/pi22by7/in-memoria/projects) to see what's planned
    - Join [Discord](https://discord.gg/6mGsM4qkYm) to discuss your ideas (@pi_22by7)
    - [Open an issue](https://github.com/pi22by7/in-memoria/issues) to discuss the feature/fix
    - Email me at [talk@pi22by7.me](mailto:talk@pi22by7.me) for larger contributions
    
    **Ways to contribute**:
    
    - 🐛 **Report bugs** - Found something broken? [Open an issue](https://github.com/pi22by7/in-memoria/issues)
    - 💡 **Suggest features** - Have ideas? Discuss on [Discord](https://discord.gg/6mGsM4qkYm) or [GitHub Discussions](https://github.com/pi22by7/in-memoria/discussions)
    - 🔧 **Submit PRs** - Code contributions are always appreciated (discuss first!)
    - 📖 **Improve docs** - Help make this easier to understand
    - 🧪 **Test on your codebase** - Try it out and tell us what breaks
    - 💬 **Join the community** - [Discord](https://discord.gg/6mGsM4qkYm) for real-time discussions
    
    See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup and guidelines.
    
    ## Technical Comparison
    
    **vs GitHub Copilot's memory**:
    
    - Copilot: Basic fact storage, no pattern learning
    - In Memoria: Semantic analysis + pattern learning + architectural intelligence + work context
    
    **vs Cursor's rules**:
    
    - Cursor: Static rules, manually defined
    - In Memoria: Dynamic learning from actual code + smart file routing + project blueprints
    
    **vs Custom RAG**:
    
    - RAG: Retrieves relevant code snippets
    - In Memoria: Understands patterns + predicts approaches + routes to files + tracks work context
    
    ## Team Usage
    
    In Memoria works for both individual developers and teams:
    
    **Individual**:
    
    - Learns your personal coding style
    - Remembers architectural decisions you've made
    - Provides context-aware suggestions
    
    **Team**:
    
    - Share `.in-memoria.db` files to distribute learned patterns
    - Onboard new developers with pre-learned codebase intelligence
    - Ensure consistent AI suggestions across the team
    
    ## Build from Source

    ```bash
    git clone https://github.com/pi22by7/in-memoria
    cd in-memoria
    npm install
    npm run build
    ```
    **Requirements**:
    
    - Node.js 18+
    - Rust 1.70+ (for building from source)
    - 2GB RAM minimum
    
    **Development**:

    ```bash
    npm run dev         # Start in development mode
    npm test            # Run test suite (98.3% pass rate)
    npm run build:rust  # Build Rust components
    ```
    **Quality metrics**:
    
    - 118/120 unit tests passing (98.3%)
    - 23/23 MCP integration tests passing (100%)
    - Zero clippy warnings in Rust code
    - Zero memory leaks verified
    
    ## FAQ
    
    **Q: Does this replace my AI coding assistant?**
    A: No, it enhances them. In Memoria provides the memory and context that tools like Claude, Copilot, and Cursor can use to give better suggestions.
    
    **Q: What data is collected?**
    A: Everything stays local. No telemetry, no phone-home. Your code never leaves your machine. All embeddings are generated locally using transformers.js models.
    
    **Q: How accurate is pattern learning?**
    A: It improves with codebase size and consistency. Projects with established patterns see better results than small or inconsistent codebases. The system learns from frequency and repetition.
    
    **Q: What's the performance impact?**
    A: Minimal. Initial learning takes time (proportional to codebase size), but subsequent queries are fast. File watching enables incremental updates. Smart filtering skips build artifacts automatically.
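
    One way to picture incremental updates: diff file modification times between runs and re-analyze only what changed. An illustrative sketch (the `changedFiles` helper is hypothetical, not the actual implementation):

    ```typescript
    // Compare the mtime snapshot from the last run against the current one;
    // only new or modified files need re-analysis.
    function changedFiles(
      previous: Map<string, number>, // path -> mtime recorded last run
      current: Map<string, number>   // path -> mtime observed now
    ): string[] {
      const changed: string[] = [];
      for (const [path, mtime] of current) {
        if (previous.get(path) !== mtime) changed.push(path);
      }
      return changed;
    }
    ```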
    
    **Q: What if analysis fails or produces weird results?**
    A: [Open an issue](https://github.com/pi22by7/in-memoria/issues) with details. Built-in timeouts and circuit breakers handle most edge cases, but real-world codebases are messy and we need your feedback to improve.
    
    **Q: Can I use this in production?**
    A: You _can_, but remember this is v0.6.x. Expect rough edges. Test thoroughly. Report issues. We're working toward stability but aren't there yet.
    
    **Q: Why Rust + TypeScript?**
    A: Rust for performance-critical AST parsing and pattern analysis. TypeScript for MCP server and orchestration. Best of both worlds: fast core, flexible integration layer.
    
    **Q: What about other AI tools (not Claude/Copilot)?**
    A: Any tool supporting MCP can use In Memoria. We've tested with Claude Desktop, Claude Code, and GitHub Copilot. Others should work but may need configuration.
    
    ## Roadmap
    
    We're following a phased approach:
    
    - ✅ **Phase 1**: Project Blueprint System (v0.5.0)
    - ✅ **Phase 2**: Work Context & Session Memory (v0.5.0)
    - ✅ **Phase 3**: Smart File Routing (v0.5.0)
    - ✅ **Phase 4**: Tool Consolidation (v0.5.0)
    - 🚧 **Phase 5**: Enhanced Vector Search & Embeddings
    - 📋 **Phase 6**: Multi-project Intelligence
    - 📋 **Phase 7**: Collaboration Features
    
    See [GitHub Projects](https://github.com/pi22by7/in-memoria/projects) for detailed tracking.
    
    ## Community & Support
    
    **Project maintained by**: [@pi22by7](https://github.com/pi22by7)
    
    - 💬 **Discord**: [discord.gg/6mGsM4qkYm](https://discord.gg/6mGsM4qkYm) - Join the community, ask questions, discuss improvements (ping @pi_22by7)
    - 📧 **Email**: [talk@pi22by7.me](mailto:talk@pi22by7.me) - For private inquiries or larger contribution discussions
    - 🐛 **Issues**: [GitHub Issues](https://github.com/pi22by7/in-memoria/issues) - Report bugs and request features
    - 💡 **Discussions**: [GitHub Discussions](https://github.com/pi22by7/in-memoria/discussions) - General discussions and Q&A
    - 📖 **Documentation**: See [AGENT.md](AGENT.md) for AI agent instructions
    - 🤝 **Contributing**: Check [CONTRIBUTING.md](CONTRIBUTING.md) for development guidelines
    
    **Before contributing**: Please discuss your ideas on Discord, via email, or in an issue before starting work on significant features. This helps ensure alignment with project direction and avoids duplicate efforts.
    
    ## License
    
    MIT - see [LICENSE](LICENSE)
    
    Built with ❤️ by [@pi22by7](https://github.com/pi22by7) for the AI-assisted development community.
    
    ---
    
    **Try it**: `npx in-memoria server`
    
    **Latest release**: [v0.6.0](CHANGELOG.md) - Smooth progress tracking and Phase 1-4 complete
    
    _In memoria: in memory. Because your AI assistant should remember._
    
    **Questions? Ideas?** Join us on [Discord](https://discord.gg/6mGsM4qkYm) or reach out at [talk@pi22by7.me](mailto:talk@pi22by7.me)
