# Knowledge Base MCP Server
This MCP server provides tools for listing and retrieving content from different knowledge bases.
## Demo
Live demo recording coming soon (tracking #40).
## Setup Instructions
These instructions assume you have Node.js (version 20 or higher) and npm installed on your system.
### Install (one command)
```bash
npx -y @jeanibarz/knowledge-base-mcp-server@latest
```

`npx` fetches the package from npm and launches the stdio server. Point your MCP client at `npx -y @jeanibarz/knowledge-base-mcp-server@latest` and configure the environment variables documented below. See docs/clients.md for copy-pasteable snippets (Claude Desktop, Codex CLI, Cursor, Continue, Cline).
**Pin `@latest`, not the unversioned spec.** `npx -y @jeanibarz/knowledge-base-mcp-server` (no version) caches the resolved version in `~/.npm/_npx/` indefinitely — subsequent client launches reuse that cached version even after a new release ships. The `@latest` form hashes to a different cache key and re-resolves on every launch, so new fixes arrive on the next client restart instead of requiring a manual `~/.npm/_npx/` clear. See RFC 012 §2.4.
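As one concrete (hypothetical) example of pointing a client at the pinned spec, a Claude Desktop-style `mcpServers` entry might look like the following. The exact key names vary by client, so treat docs/clients.md as authoritative; the env values here are placeholders:

```json
{
  "mcpServers": {
    "knowledge-base": {
      "command": "npx",
      "args": ["-y", "@jeanibarz/knowledge-base-mcp-server@latest"],
      "env": {
        "EMBEDDING_PROVIDER": "ollama",
        "KNOWLEDGE_BASES_ROOT_DIR": "/home/you/knowledge_bases"
      }
    }
  }
}
```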
### Install (CLI alongside the MCP server, RFC 012)
For an interactive shell or AI-agent shell-tool flow, install globally and use the `kb` bin directly. The OS resolves the binary on every invocation, so `npm i -g …@latest` is picked up without restarting any AI client that has the MCP server loaded:
```bash
npm install -g @jeanibarz/knowledge-base-mcp-server@latest
kb list                      # list available knowledge bases
kb search "your query"       # read-only search; cheap, fast (~0.6 s)
kb search "query" --refresh  # also re-scan KB files (write path)
kb --help
```

The `kb` bin shares the same env vars as the MCP server (`KNOWLEDGE_BASES_ROOT_DIR`, `FAISS_INDEX_PATH`, `EMBEDDING_PROVIDER`, `OLLAMA_*`, `OPENAI_*`, `HUGGINGFACE_*`). `kb search` defaults to read-only — it loads the existing FAISS index but does not re-scan KB files. Pass `--refresh` to re-index. Output includes a freshness footer indicating whether the index is up-to-date relative to KB file mtimes.
The MCP server (`knowledge-base-mcp-server` bin) is unchanged and still works with all the configurations in docs/clients.md. The CLI is additive.
### Comparing embedding models (RFC 013)

Once on 0.3.0, you can keep multiple embedding models side by side and query each by id. This is useful for retrieval-quality A/B testing without losing the previous model:
```bash
# List registered models. The * marks the active one.
kb models list

# Add a second model — embeds your KB once under the new model.
# For paid providers, prints an estimated cost and prompts before any HTTP traffic.
kb models add ollama nomic-embed-text            # local, free
kb models add openai text-embedding-3-small      # paid; estimate first
kb models add huggingface BAAI/bge-small-en-v1.5

# Query a specific model without changing the default.
kb search "your query" --model=openai__text-embedding-3-small

# Side-by-side comparison: unified rank/score table over both models' top-k.
kb compare "your query" ollama__nomic-embed-text-latest openai__text-embedding-3-small

# Switch the default model.
kb models set-active openai__text-embedding-3-small

# Remove a model (refuses to remove the active one).
kb models remove huggingface__BAAI-bge-small-en-v1.5
```

A model id is `<provider>__<model>`, derived deterministically from `(provider, model_name)` as typed (e.g. `OLLAMA_MODEL=nomic-embed-text:latest` → `ollama__nomic-embed-text-latest`). On-disk layout: each model lives at `${FAISS_INDEX_PATH}/models/<model_id>/`. The active model is recorded in `${FAISS_INDEX_PATH}/active.txt` and overridable per-process via `KB_ACTIVE_MODEL`. See [docs/rfcs/013-multimodel-support.md](docs/rfcs/013-multimodel-support.md) for the full design.
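The id derivation used in the commands above can be sketched in a few lines. This is a reconstruction from the documented examples, not the server's actual source; it assumes every character outside `[A-Za-z0-9._-]` (for example `/` and `:`) maps to a dash:

```python
import re

def model_id(provider: str, model_name: str) -> str:
    """Hypothetical sketch of the <provider>__<model> id derivation (RFC 013).

    Assumption: characters outside [A-Za-z0-9._-] become '-'; case is kept as typed.
    """
    slug = re.sub(r"[^A-Za-z0-9._-]", "-", model_name)
    return f"{provider}__{slug}"

print(model_id("ollama", "nomic-embed-text:latest"))      # ollama__nomic-embed-text-latest
print(model_id("huggingface", "BAAI/bge-small-en-v1.5"))  # huggingface__BAAI-bge-small-en-v1.5
```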
Migration from 0.2.x → 0.3.0 is automatic on first server (or `kb`) start: the existing single-model index is moved into `${FAISS_INDEX_PATH}/models/<model_id>/` and `active.txt` is written. The move is atomic (~12 ms measured). Before upgrading, fully exit any AI client (Claude Code, Cursor, Continue, Cline) that has the MCP server loaded — the migration acquires the single-instance PID advisory lock before any rename, so it cannot run while a 0.2.x MCP child is still using the directory. See the CHANGELOG for rollback recipes.
**MCP surface** — `retrieve_knowledge` gains an optional `model_name` argument; a new `list_models` tool returns the registered models. Tools that don't pass `model_name` keep working unchanged (wire format is byte-equal to 0.2.x).
### MCP error codes
Tool errors are returned with `isError: true` and a JSON text payload so MCP clients can branch without substring matching:

```json
{
  "error": {
    "code": "PROVIDER_AUTH",
    "message": "OPENAI_API_KEY environment variable is required when using OpenAI provider"
  }
}
```

| Code | Meaning | Typical client action |
|---|---|---|
| `INDEX_NOT_INITIALIZED` | A search ran before a FAISS index was available. | Retry after initialization or trigger a refresh. |
| `PROVIDER_UNAVAILABLE` | The embedding provider is temporarily unavailable. | Retry with backoff. |
| `PROVIDER_TIMEOUT` | The embedding provider timed out. | Retry with backoff. |
| `PROVIDER_AUTH` | Provider credentials are missing or invalid. | Ask the user to configure a valid API key. |
| `KB_NOT_FOUND` | The requested knowledge base does not exist. | Prompt for one of the listed knowledge bases. |
| `PERMISSION_DENIED` | The server cannot read or write a required local path. | Surface to the operator/admin. |
| `CORRUPT_INDEX` | The persisted FAISS index is corrupt or unreadable. | Rebuild or recover the index. |
| `VALIDATION` | A caller-supplied argument failed validation. | Fix the request before retrying. |
| `INTERNAL` | An unclassified server error occurred. | Surface the message and logs for investigation. |
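Because the code is machine-readable, a client can branch on it instead of matching message substrings. A minimal sketch, assuming the tool result's text content carries the JSON payload shown above (the retry/escalate grouping is this example's own, not part of the protocol):

```python
import json

# Our own coarse grouping of the documented codes (an illustration, not the spec).
RETRYABLE = {"INDEX_NOT_INITIALIZED", "PROVIDER_UNAVAILABLE", "PROVIDER_TIMEOUT"}

def classify(payload_text: str) -> str:
    """Map an error payload to a coarse client action."""
    code = json.loads(payload_text)["error"]["code"]
    if code in RETRYABLE:
        return "retry"          # retry with backoff (or after a refresh)
    if code in {"PROVIDER_AUTH", "PERMISSION_DENIED"}:
        return "escalate"       # needs user/operator intervention
    if code in {"VALIDATION", "KB_NOT_FOUND"}:
        return "fix-request"    # correct the arguments before retrying
    return "investigate"        # CORRUPT_INDEX, INTERNAL, unknown codes

payload = '{"error": {"code": "PROVIDER_TIMEOUT", "message": "embedding request timed out"}}'
print(classify(payload))  # retry
```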
### Install via Smithery
To install Knowledge Base Server for Claude Desktop automatically via Smithery:
```bash
npx -y @smithery/cli install @jeanibarz/knowledge-base-mcp-server --client claude
```

### Install from source
Use this path if you want to develop against the repo or pin an unreleased commit.
#### Prerequisites
1. Clone the repository:

```bash
git clone https://github.com/jeanibarz/knowledge-base-mcp-server.git
cd knowledge-base-mcp-server
```

2. Install dependencies:

```bash
npm install
```

3. Configure environment variables:

This server supports three embedding providers: Ollama (recommended for reliability), OpenAI, and HuggingFace (fallback option).
### Option 1: Ollama Configuration (Recommended)
- Set `EMBEDDING_PROVIDER=ollama` to use local Ollama embeddings
- Install Ollama and pull an embedding model:

```bash
ollama pull dengcao/Qwen3-Embedding-0.6B:Q8_0
```

- Configure the following environment variables:

```bash
EMBEDDING_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434          # Default Ollama URL
OLLAMA_MODEL=dengcao/Qwen3-Embedding-0.6B:Q8_0  # Default embedding model
KNOWLEDGE_BASES_ROOT_DIR=$HOME/knowledge_bases
```

- Minimum context window: the embedding model must accept at least ~500 tokens of input. The default chunker emits ~1000-character chunks which commonly tokenize past 256 tokens, so models like `all-minilm` (256 ctx) will reject every request. Use `nomic-embed-text` (8192 ctx), `dengcao/Qwen3-Embedding-0.6B:Q8_0` (32K ctx), or any model with ≥512 ctx instead.
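To sanity-check a candidate model before pulling it, a rough chars-per-token heuristic is enough. This is an approximation (we assume ~3.5 characters per token, a pessimistic figure for markdown), not the server's tokenizer:

```python
def est_tokens(chunk_chars: int, chars_per_token: float = 3.5) -> int:
    """Pessimistic token estimate; markdown and code tokenize more densely than prose."""
    return int(chunk_chars / chars_per_token) + 1

# The default chunker emits ~1000-character chunks:
print(est_tokens(1000))          # 286
print(est_tokens(1000) <= 256)   # False: a 256-ctx model rejects these chunks
print(est_tokens(1000) <= 8192)  # True: an 8192-ctx model is comfortably above
```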
### Option 2: OpenAI Configuration
- Set `EMBEDDING_PROVIDER=openai` to use the OpenAI API for embeddings
- Configure the following environment variables:

```bash
EMBEDDING_PROVIDER=openai
OPENAI_API_KEY=your_api_key_here
OPENAI_MODEL_NAME=text-embedding-3-small
KNOWLEDGE_BASES_ROOT_DIR=$HOME/knowledge_bases
```

- As of this release, the OpenAI default is `text-embedding-3-small` (up from `text-embedding-ada-002`). Both produce 1536-dim vectors, but the model name change will trigger a one-time FAISS index rebuild on the next query. Override with `OPENAI_MODEL_NAME=...` if you prefer the old default. See the CHANGELOG for details.
### Option 3: HuggingFace Configuration (Fallback)
- Set `EMBEDDING_PROVIDER=huggingface` or leave it unset (default)
- Obtain a free API key from HuggingFace
- Configure the following environment variables:

```bash
EMBEDDING_PROVIDER=huggingface                 # Optional, this is the default
HUGGINGFACE_API_KEY=your_api_key_here
HUGGINGFACE_MODEL_NAME=BAAI/bge-small-en-v1.5
HUGGINGFACE_PROVIDER=hf-inference              # Optional, router provider for serverless inference
KNOWLEDGE_BASES_ROOT_DIR=$HOME/knowledge_bases
```

- As of this release, the HuggingFace default is `BAAI/bge-small-en-v1.5` (up from `sentence-transformers/all-MiniLM-L6-v2`). Both produce 384-dim vectors, but the model name change will trigger a one-time FAISS index rebuild on the next query. Override with `HUGGINGFACE_MODEL_NAME=...` if you prefer the old default. See the CHANGELOG for details.
- HuggingFace retired the legacy `api-inference.huggingface.co/models/...` endpoint in 2025. Feature-extraction calls are now routed through the Inference Providers router at `https://router.huggingface.co/hf-inference/models/<model>/pipeline/feature-extraction` by default. Set `HUGGINGFACE_PROVIDER` to choose a different supported Inference Provider such as `together`, `replicate`, `fireworks-ai`, `sambanova`, `nebius`, or `novita`. The existing `HUGGINGFACE_API_KEY` value can be either a Hugging Face token or a compatible provider key, depending on how the request is authenticated upstream. To target a self-hosted or dedicated Inference Endpoint, set `HUGGINGFACE_ENDPOINT_URL` to the full POST URL; explicit endpoint URLs bypass router provider selection.
### Additional Configuration
- The server supports the `FAISS_INDEX_PATH` environment variable to specify the path to the FAISS index. If not set, it defaults to `$HOME/knowledge_bases/.faiss`.
- Logging can be routed to a file by setting `LOG_FILE=/path/to/logs/knowledge-base.log`. Log verbosity defaults to `info` and can be adjusted with `LOG_LEVEL=debug|info|warn|error`.
- Tailor tool descriptions per deployment. The `retrieve_knowledge` and `list_knowledge_bases` descriptions the agent reads when picking tools can be overridden via `RETRIEVE_KNOWLEDGE_DESCRIPTION` and `LIST_KNOWLEDGE_BASES_DESCRIPTION`. Unset or empty falls back to the built-in defaults. Example:

```bash
RETRIEVE_KNOWLEDGE_DESCRIPTION="Search engineering runbooks, RFCs, and postmortems."
LIST_KNOWLEDGE_BASES_DESCRIPTION="List available engineering knowledge bases."
```

- Ingest filter overrides (RFC 011 M1). The server embeds only files whose extension is in `{.md, .markdown, .txt, .rst}` and excludes workflow sidecars (`_seen.jsonl`, `_index.jsonl`), log / staging subtrees (`logs/`, `tmp/`, `_tmp/`), and OS turds (`.DS_Store`, `Thumbs.db`, `desktop.ini`). To extend the allowlist or add more exclusions:

```bash
# Comma-separated extensions (case-insensitive; leading dot optional).
INGEST_EXTRA_EXTENSIONS=".json,.yaml"
# Comma-separated minimatch globs relative to the KB root.
INGEST_EXCLUDE_PATHS="drafts/**,scratch.md"
```

  Extensionless files (e.g. `README`, `LICENSE`, `Makefile`) are not embedded by the default allowlist; rename them with a `.md` or `.txt` suffix if you want them indexed. The base exclusions are authoritative: operators can add more but cannot remove the built-ins.
- You can set these environment variables in your `.bashrc` or `.zshrc` file, or directly in the MCP settings.
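The ingest filter described above can be modeled in a few lines. This is a simplified reconstruction: it substitutes Python's `fnmatch` for minimatch and lists only a subset of the built-in exclusions, so treat it as an illustration of the allowlist-plus-exclusions logic rather than the server's exact behaviour:

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath

BASE_EXTS = {".md", ".markdown", ".txt", ".rst"}
# Subset of the built-in exclusions, for illustration.
BASE_EXCLUDES = ["logs/*", "tmp/*", "_tmp/*", "*_seen.jsonl", "*_index.jsonl"]

def should_ingest(rel_path: str, extra_exts=(), exclude_globs=()) -> bool:
    """True if a KB-root-relative path would be embedded under these approximate rules."""
    allowed = BASE_EXTS | {e.lower() for e in extra_exts}
    if PurePosixPath(rel_path).suffix.lower() not in allowed:
        return False  # extensionless files (README, Makefile) fall out here
    for pattern in list(BASE_EXCLUDES) + list(exclude_globs):
        if fnmatch(rel_path, pattern):
            return False  # base exclusions can be extended but never removed
    return True

print(should_ingest("runbooks/oncall.md"))                       # True
print(should_ingest("logs/2024-01-01.md"))                       # False
print(should_ingest("notes.json", extra_exts={".json"}))         # True
print(should_ingest("drafts/a.md", exclude_globs=["drafts/*"]))  # False
```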
4. Build the server:

```bash
npm run build
```

5. Add the server to your MCP client:
See docs/clients.md for copy-pasteable configuration snippets for Claude Desktop, Codex CLI, Cursor, Continue, and Cline.
6. Create knowledge base directories:
- Create subdirectories within the `KNOWLEDGE_BASES_ROOT_DIR` for each knowledge base (e.g., `company`, `it_support`, `onboarding`).
- Place text files (e.g., `.txt`, `.md`) containing the knowledge base content within these subdirectories.
- The server recursively reads all text files (e.g., `.txt`, `.md`) within the specified knowledge base subdirectories.
- The server skips hidden files and directories (those starting with a `.`).
- For each file, the server calculates the SHA256 hash and stores it in a file with the same name in a hidden `.index` subdirectory. This hash is used to determine if the file has been modified since the last indexing.
- File content is split into chunks before indexing: `.md` files use `MarkdownTextSplitter` (heading-aware), and every other text file uses `RecursiveCharacterTextSplitter`. Both splitters share the same `chunkSize: 1000, chunkOverlap: 200` defaults, so a large `.txt`, `.rst`, or source file produces many chunks rather than a single embedding.
- The content of each chunk is then added to a FAISS index, which is used for similarity search.
- The FAISS index is automatically initialized when the server starts. It checks for changes in the knowledge base files and updates the index accordingly.
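The hash-based change detection above can be sketched as follows. This is a simplified model with hypothetical helper names; the real server's `.index` layout and file format may differ:

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """SHA256 of a file's bytes, streamed so large KB files don't load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def needs_reindex(kb_file: Path) -> bool:
    """A file is (re)indexed when no stored hash exists or the stored hash differs."""
    stored = kb_file.parent / ".index" / kb_file.name
    return not stored.exists() or stored.read_text().strip() != file_sha256(kb_file)
```

Editing a file changes its hash, so only that file's chunks need re-embedding on the next startup or refresh.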
### Install (local development, live `kb` from your checkout)
Use this when you're actively developing on the repo and want your global `kb` and `knowledge-base-mcp-server` bins to always reflect the current state of `main` (or your feature branch) — without `npm publish` and without manual reinstalls after each `git pull`.
```bash
git clone https://github.com/jeanibarz/knowledge-base-mcp-server.git
cd knowledge-base-mcp-server
npm run dev:setup
```

`dev:setup` does three things, all idempotent:
1. **`npm install` + `npm run build`** — first build, so the bins exist before linking.
2. **`npm link`** — symlinks `kb` and `knowledge-base-mcp-server` into the global node prefix (printed during setup so you can verify it lands where you expect). From then on, every `npm run build` overwrites `build/` in place and the global bins pick up the new code on the next invocation. No re-link needed after rebuilds.
3. **`git config core.hooksPath .githooks`** — points git at the tracked [.githooks/](./.githooks) directory so the `post-merge` and `post-rewrite` hooks fire after every `git pull` (merge or rebase) and `git merge`. The hook re-runs `npm install` if `package.json` changed and `npm run build` if any source changed. It skips quietly when nothing relevant moved. The hook order puts this step last, so a failed install/build leaves the repo in its original state.
After setup, the daily loop is just:
```bash
git pull          # hook rebuilds automatically (merge or rebase)
kb search "..."   # uses the freshly-built bin from this checkout
```

Or, when editing locally:

```bash
# edit src/...
npm run build     # global `kb` immediately reflects your change
```

Switching back to the published npm release (e.g. to compare behaviour):
```bash
npm unlink -g @jeanibarz/knowledge-base-mcp-server
npm install -g @jeanibarz/knowledge-base-mcp-server@latest
```

**Why `npm link` instead of `npm install -g .`?** `npm link` creates a symlink, so `npm run build` is reflected without reinstalling. `npm install -g .` copies the build snapshot, so every change requires a re-install.
**Hook scope.** The hooks trigger on `git pull` / `git merge` / `git pull --rebase`, not on `git checkout` between branches. Run `npm run build` manually after a branch switch if needed. If a rebuild fails, the hook prints a warning and exits 0 so the pull itself isn't reported as failed — fix the build, then run `npm run build` manually.
## Usage
The server exposes two tools:
- `list_knowledge_bases`: Lists the available knowledge bases.
- `retrieve_knowledge`: Retrieves similar chunks from the knowledge bases based on a query. If a knowledge base is specified, only that one is searched; otherwise, all available knowledge bases are considered. By default, at most 10 document chunks are returned, each with a score below a threshold of 2. A different threshold can optionally be provided using the `threshold` parameter.
You can use these tools through the MCP interface.
The `retrieve_knowledge` tool performs a semantic search using a FAISS index. The index is automatically updated when the server starts or when a file in a knowledge base is modified.
The output of the `retrieve_knowledge` tool is a markdown formatted string with the following structure:
```markdown
## Semantic Search Results

**Result 1:**
[Content of the most similar chunk]

**Source:**
{
  "source": "[Path to the file containing the chunk]"
}

---

**Result 2:**
[Content of the second most similar chunk]

**Source:**
{
  "source": "[Path to the file containing the chunk]"
}

> **Disclaimer:** The provided results might not all be relevant. Please cross-check the relevance of the information.
```

Each result includes the content of the most similar chunk, the source file, and a similarity score.
## Remote transport (optional)
By default the server speaks MCP over stdio — every supported client (Claude Desktop, Codex, Cursor, Continue, Cline) launches it as a child process. Stage 1 of RFC 008 adds an opt-in SSE transport for browser-based clients, Smithery remote mode, and shared deployments. Stdio is unchanged unless you set MCP_TRANSPORT.
```bash
export MCP_TRANSPORT=sse
export MCP_AUTH_TOKEN="$(openssl rand -base64 32)"   # must be ≥32 characters; shorter tokens abort startup
export MCP_ALLOWED_ORIGINS="http://localhost:5173"   # comma-separated; leave unset to deny all browser origins
export MCP_PORT=8765                                 # default
export MCP_BIND_ADDR=127.0.0.1                       # default — loopback only
node build/index.js
```

Endpoints exposed in this mode:

- `GET /health` — unauthenticated liveness probe; returns `200 {"status":"ok"}` only. Per RFC 008 §6.8 it intentionally exposes no version, uptime, or filesystem fingerprint to anonymous callers.
- `GET /sse` — long-lived SSE stream. Requires `Authorization: Bearer <token>`.
- `POST /messages?sessionId=<id>` — JSON-RPC POST per session. Same bearer requirement.
Streamable-HTTP is not wired up in stage 1 — `MCP_TRANSPORT=http` is rejected at startup. See RFC 008 §9 for the full rollout plan.
Security defaults: the server refuses to start in SSE mode without `MCP_AUTH_TOKEN`, binds only to loopback, and uses a constant-time bearer comparison. Operators exposing the endpoint off-host should set `MCP_BIND_ADDR=0.0.0.0` *and* terminate TLS in a reverse proxy — TLS is out of scope for this server. Only one process per `FAISS_INDEX_PATH` is supported (see [docs/architecture/threat-model.md](./docs/architecture/threat-model.md)).
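For reference, the two token rules above (minimum length, constant-time comparison) are each one line. Here is a Python sketch of the idea; the server itself is TypeScript, where `crypto.timingSafeEqual` plays the same role as `hmac.compare_digest`:

```python
import hmac

def valid_config_token(token: str) -> bool:
    """Mirror the startup check: tokens shorter than 32 characters abort startup."""
    return len(token) >= 32

def token_ok(presented: str, expected: str) -> bool:
    """Constant-time comparison: run time does not leak how many leading bytes matched."""
    return hmac.compare_digest(presented.encode(), expected.encode())

print(token_ok("a" * 32, "a" * 32))  # True
print(token_ok("a" * 32, "b" * 32))  # False
```

A naive `presented == expected` short-circuits on the first mismatching byte, which is what makes byte-by-byte token guessing feasible over many requests.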
## Troubleshooting & Logging
- Set `LOG_FILE` to capture structured logs (JSON-RPC traffic continues to use stdout). This is especially helpful when diagnosing MCP handshake errors because all diagnostic messages are written to stderr and the optional log file.
- Permission errors when creating or updating the FAISS index are surfaced with explicit messages in both the console and the log file. Verify that the process can write to `FAISS_INDEX_PATH` and the `.index` directories inside each knowledge base.
- Run `npm test` to execute the Jest suite (serialised with `--runInBand`) that covers logger fallback behaviour and FAISS permission handling.
## Security
The server is designed to run as a local tool: one user, one machine, one trusted terminal. Two trust boundaries matter in practice. The `$FAISS_INDEX_PATH` directory is a code-execution boundary — `FaissStore.load` deserialises the docstore via `pickleparser`, so the directory must only contain files written by this server (no untrusted backups, no shared-write mounts). The `$KNOWLEDGE_BASES_ROOT_DIR` tree is a content boundary — its contents are embedded and returned verbatim to the MCP client, so markdown from untrusted sources is a prompt-injection risk for downstream agents. Additionally, only **one server process per `FAISS_INDEX_PATH`** is supported today; running multiple processes against the same index will corrupt it. Full discussion, including provider-key handling and the planned concurrency lockfile, is in [docs/architecture/threat-model.md](./docs/architecture/threat-model.md).