Claude Prompts MCP Server
Intelligent prompt engineering and management for MCP clients, implemented in TypeScript.
What your AI client gives you — and what this server adds
| Your client already does | This server adds |
|---|---|
| Run a prompt | Compose prompts with validation, reasoning guidance, and formatting in one expression |
| Single-shot skills | Multi-step workflows that thread context between steps |
| Execute subagents | Hand off mid-chain steps to agents with full workflow context |
| Client-native skill format | Author once as YAML, export to any client with skills:export |
| Manual prompt writing | Versioned templates with hot-reload, rollback, and history |
| Trust the output | Validate output between steps — self-evaluation and shell commands |
---
Quick Start
Claude Code (Recommended)
```shell
# Add marketplace (first time only)
/plugin marketplace add minipuft/minipuft-plugins

# Install
/plugin install claude-prompts@minipuft

# Try it
>>tech_evaluation_chain library:'zod' context:'API validation'
```

Development setup
Load plugin from local source for development:
```shell
git clone https://github.com/minipuft/claude-prompts ~/Applications/claude-prompts
cd ~/Applications/claude-prompts/server && npm install && npm run build
claude --plugin-dir ~/Applications/claude-prompts
```

Edit hooks/prompts → restart Claude Code. Edit TypeScript → rebuild first.
User Data: Custom prompts stored in ~/.local/share/claude-prompts/ persist across updates.
---
Claude Desktop
Option A: GitHub Release (recommended)
1. Download claude-prompts-{version}.mcpb from Releases
2. Drag into Claude Desktop Settings → MCP Servers
3. Done
The .mcpb bundle is self-contained (~5MB) — no npm required.
Option B: NPX (auto-updates)
Add to your config file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
```json
{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest", "--client", "claude-code"]
    }
  }
}
```

Restart Claude Desktop and test: >>research_chain topic:'remote team policies'
---
VS Code / Copilot
Click the badge above for one-click install, or add manually to .vscode/mcp.json:
```json
{
  "servers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest"]
    }
  }
}
```

Cursor
Click the badge above for one-click install, or add manually to ~/.cursor/mcp.json:
```json
{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest", "--client=cursor"]
    }
  }
}
```

OpenCode
Install the opencode-prompts plugin — it registers the MCP server and adds hooks for chain tracking, gate enforcement, and state preservation:
```shell
npm install -g opencode-prompts
opencode-prompts install
```

[!NOTE]
MCP server only (no hooks): Add to ~/.config/opencode/opencode.json with --client=opencode. You'll have MCP tools but no chain tracking, gate enforcement, or state preservation across compactions. See opencode-prompts for what hooks provide.
Gemini CLI
Install the gemini-prompts extension — it registers the MCP server and adds hooks for >> syntax detection, chain tracking, and gate reminders:
```shell
gemini extensions install https://github.com/minipuft/gemini-prompts
```

[!NOTE]
MCP server only (no hooks): Run npx -y claude-prompts@latest --client=gemini directly. You'll have MCP tools but no >> syntax detection, chain tracking, or gate reminders. See gemini-prompts for what hooks provide.
Other Clients (Codex, Windsurf, Zed)
Add to your MCP config file with a --client preset for deterministic handoff guidance:
| Client | Config Location | Recommended --client |
|---|---|---|
| Codex | ~/.codex/config.toml | codex |
| Windsurf | ~/.codeium/windsurf/mcp_config.json | cursor (experimental) |
| Zed | ~/.config/zed/settings.json → mcp key | unknown |
JSON-based configs (Windsurf/Zed):
```json
{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest", "--client=cursor"]
    }
  }
}
```

**Codex (~/.codex/config.toml):**

```toml
[mcp_servers.claude_prompts]
command = "npx"
args = ["-y", "claude-prompts@latest", "--client=codex"]
```

Supported presets: claude-code, codex, gemini, opencode, cursor, unknown.
For complete per-client setup and limitations, see the Client Integration Guide.
From Source (developers only)
```shell
git clone https://github.com/minipuft/claude-prompts.git
cd claude-prompts/server
npm install && npm run build && npm test
```

Point your MCP config to server/dist/index.js. The esbuild bundle is self-contained.
Transport options: --transport=stdio (default), --transport=streamable-http (HTTP clients).
Custom Resources
Use your own prompts without cloning. Add MCP_RESOURCES_PATH to any MCP config:
```json
{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest", "--client", "claude-code"],
      "env": {
        "MCP_RESOURCES_PATH": "/path/to/your/resources"
      }
    }
  }
}
```

Your resources directory can contain: prompts/, gates/, methodologies/, styles/.
See CLI Configuration for all options including fine-grained path overrides.
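A minimal resources directory might look like the sketch below. Only the four top-level folder names come from the docs above; the category subfolder and file names are illustrative assumptions:

```text
resources/                  # point MCP_RESOURCES_PATH here
├── prompts/
│   └── development/        # hypothetical category folder
│       └── code_review.yaml
├── gates/
├── methodologies/
└── styles/
```

Empty folders are fine to start with; add files per type as you author them.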
---
See the dashboard — system status overview
Loaded resources, active configuration, and server health at a glance
---
What You Get
Four resource types you author, version, and compose into workflows.
See the catalog — listing all available prompts
90 prompts across 11 categories — all hot-reloadable and versionable
Prompt Templates
Versioned YAML with hot-reload. Edit a template, test it immediately — or ask your AI to update it through MCP.
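On disk, a template is a YAML file. The sketch below is only an orientation aid; the field names are assumptions, so treat the Prompt YAML Schema reference as authoritative:

```yaml
# prompts/development/code_review.yaml — illustrative sketch, not the real schema
id: code_review
description: Review code for correctness and style
arguments:
  - name: target      # e.g. a path like src/auth/
    required: true
  - name: language
    required: false
template: |
  Review the code in {{target}} ({{language}}), flagging bugs before style issues.
```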
```
>>code_review target:'src/auth/' language:'typescript'
```

Validation Rules (Gates)
Criteria the AI checks its own output against. Blocking or advisory.
```
:: 'no false positives' :: 'cite sources with links'
```

Failed checks can retry automatically or pause for your decision.
[!TIP]
Define your own checks. See the Gates Guide for blocking vs advisory rules, retry behavior, and shell verification.
Reasoning Guidance (Methodologies)
Frameworks that shape how the AI thinks through a problem — not just what it outputs. 6 built-in, or create your own.
```
@CAGEERF   # Context → Analysis → Goals → Execution → Evaluation → Refinement
@ReACT     # Reason → Act → Observe loops
@5W1H      # Who, What, Where, When, Why, How
```

[!TIP]
Create your own framework. See the Methodologies Guide for built-in frameworks and custom authoring.
Styles
Response formatting and tone.
```
#analytical   # Structured, evidence-based output
#concise      # Brief, action-focused
```

All resources are hot-reloadable, versioned with rollback history, and managed through the resource_manager tool.
[!TIP]
Ready to build your own? Start with the Prompt Authoring Tutorial.
---
Compose Workflows
The operator syntax wires resources together — chain steps, add validation inline, hand off steps to agents.
```
>>review target:'src/auth/' @CAGEERF :: 'no false positives'
  --> security_scan :: verify:"npm test"
  --> recommendations :: 'actionable, with code'
  ==> implementation
```

See the chain — phases completing back-to-back
Phases compound reasoning across steps — each step builds on validated output from the previous one
See the output — tech evaluation chain with context7 research
Context7 fetches live library docs mid-chain — final output is a structured assessment with sources
What happened:
1. Loaded the review template with arguments
2. Injected CAGEERF reasoning guidance
3. Added a validation rule (AI self-evaluates against it)
4. Chained output to the next step
5. Ran a shell command for ground-truth validation
6. Handed the final step off to a client-native subagent
Verification Loops
Ground-truth validation via shell commands — the AI keeps iterating until tests pass:
```
>>implement-feature :: verify:"npm test" loop:true
```

Implements, runs the test, reads failures, fixes, retries. Spawns a fresh context after repeated failures to avoid context rot.
| Preset | Tries | Timeout | Use Case |
|---|---|---|---|
| :fast | 1 | 30s | Quick check |
| :full | 5 | 5 min | CI validation |
| :extended | 10 | 10 min | Large test suites |
[!TIP]
Autonomous test-fix cycles. See Ralph Loops for presets, timeout configuration, and context-rot prevention.
Judge Mode
Let the AI pick the right resources for the task:
```
%judge Help me refactor this authentication module
```

Analyzes available templates, reasoning frameworks, validation rules, and styles — applies the best combination automatically.
[!TIP]
How judge mode selects resources. See the Judge Mode Guide for scoring, overrides, and preview with %judge.
[!TIP]
Chains support conditional branching, context threading, and agent handoffs.
---
Run Anywhere
Author workflows as YAML templates. Export as native skills to your client.
```yaml
# skills-sync.yaml — choose what to export
registrations:
  claude-code:
    user:
      - prompt:development/review
      - prompt:development/validate_work
```

```shell
npm run skills:export
```

The review prompt becomes a /review Claude Code skill. validate_work becomes /validate_work. Same source, native experience — no MCP call required at runtime.

Compiles to Claude Code skills, Cursor rules, OpenCode commands, and more. npm run skills:diff flags when exports drift from source.
See the export — dry-run compile + skill preview
Dry-run compiles YAML templates into native client skills — review before writing
[!TIP]
The Skills Sync Guide covers configuration, supported clients, and drift detection.
---
With Hooks
Well-composed prompts carry their own structure. Hooks keep the experience consistent across models and long sessions.
What hooks do
Route operator syntax to the right tool automatically.
Track workflow progress across steps and long sessions.
Enforce validation rules and step handoffs between agents.
| Behavior | What happens |
|---|---|
| Prompt routing | >>analyze in conversation → correct MCP tool call |
| Chain continuity | Injects step progress and continuation between steps |
| Validation tracking | Tracks pass/fail verdicts across chain steps |
| Agent handoffs | Routes ==> steps to client-native subagents |
| Session persistence | Preserves workflow state through context compaction |
Hooks ship with the plugin install. Available for Claude Code (full), OpenCode (full), Gemini CLI (partial). Other clients: MCP tools only.
---
Syntax Reference
| Symbol | Name | What It Does | Example |
|---|---|---|---|
| >> | Prompt | Execute template | >>code_review |
| --> | Chain | Pipe to next step | step1 --> step2 |
| ==> | Handoff | Route step to agent | step1 ==> agent_step |
| * | Repeat | Run prompt N times | >>brainstorm * 5 |
| @ | Framework | Inject reasoning guidance | @CAGEERF |
| :: | Gate | Add validation criteria | :: 'cite sources' |
| % | Modifier | Toggle behavior | %clean, %judge |
| # | Style | Apply formatting | #analytical |
Modifiers:
- %clean — No framework/gate injection
- %lean — Gates only, skip framework
- %guided — Force framework injection
- %judge — AI selects best resources
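A modifier prefixes a normal invocation. The placement below mirrors the %judge example earlier in this README, but combining %clean with a prompt call this way is otherwise an assumption:

```
%clean >>code_review target:'src/auth/'
```

Here the template runs with no framework or gate injection, useful for a quick raw render.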
→ MCP Tools Reference for full command documentation.
The Three Tools
| Tool | Purpose |
|---|---|
| prompt_engine | Execute prompts with frameworks and validation |
| resource_manager | Create, update, version, and export resources |
| system_control | Status, analytics, framework switching |
```
prompt_engine(command:"@CAGEERF >>analysis topic:'AI safety'")
resource_manager(resource_type:"prompt", action:"list")
system_control(action:"status")
```

---
How It Works
```mermaid
%%{init: {'theme': 'neutral', 'themeVariables': {'background':'#0b1224','primaryColor':'#e2e8f0','primaryBorderColor':'#1f2937','primaryTextColor':'#0f172a','lineColor':'#94a3b8','fontFamily':'"DM Sans","Segoe UI",sans-serif','fontSize':'14px','edgeLabelBackground':'#0b1224'}}}%%
flowchart TB
    classDef actor fill:#0f172a,stroke:#cbd5e1,stroke-width:1.5px,color:#f8fafc;
    classDef server fill:#111827,stroke:#fbbf24,stroke-width:1.8px,color:#f8fafc;
    classDef process fill:#e2e8f0,stroke:#1f2937,stroke-width:1.6px,color:#0f172a;
    classDef client fill:#f4d0ff,stroke:#a855f7,stroke-width:1.6px,color:#2e1065;
    classDef clientbg fill:#1a0a24,stroke:#a855f7,stroke-width:1.8px,color:#f8fafc;
    classDef decision fill:#fef3c7,stroke:#f59e0b,stroke-width:1.6px,color:#78350f;
    linkStyle default stroke:#94a3b8,stroke-width:2px

    User["1. User sends command"]:::actor
    Example[">>analyze @CAGEERF :: 'cite sources'"]:::actor
    User --> Example --> Parse

    subgraph Server["MCP Server"]
        direction TB
        Parse["2. Parse operators"]:::process
        Inject["3. Inject framework + gates"]:::process
        Render["4. Render prompt"]:::process
        Decide{"6. Route verdict"}:::decision
        Parse --> Inject --> Render
    end
    Server:::server

    subgraph Client["Claude (Client)"]
        direction TB
        Execute["5. Run prompt + check gates"]:::client
    end
    Client:::clientbg

    Render -->|"Prompt with gate criteria"| Execute
    Execute -->|"Verdict + output"| Decide
    Decide -->|"PASS → render next step"| Render
    Decide -->|"FAIL → render retry prompt"| Render
    Decide -->|"Done"| Result["7. Return to user"]:::actor
```

Command with operators → server parses and injects resources → client executes and self-evaluates → route: next step (pass), retry (fail), or return result (done).
---
Documentation
| I want to... | Go here |
|---|---|
| Build my first prompt | Prompt Authoring Tutorial |
| Chain multi-step workflows | Chains Lifecycle |
| Add validation to workflows | Gates Guide |
| Use or create reasoning frameworks | Methodologies Guide |
| Use autonomous verification loops | Ralph Loops |
| Configure per-client MCP installs and --client presets | Client Integration Guide |
| Compare client profile mapping and limitations | Client Capabilities Reference |
| Export skills to other clients | Skills Sync |
| Configure the server | CLI & Configuration |
| Let the AI pick resources automatically | Judge Mode Guide |
| Look up MCP tool parameters | MCP Tools Reference |
| Look up prompt YAML fields | Prompt YAML Schema |
| Understand the architecture | Architecture Overview |
| Fix common issues | Troubleshooting |
---
Contributing
```shell
cd server
npm install
npm run build        # esbuild bundles to dist/index.js
npm test             # Run test suite
npm run validate:all # Full CI validation
```

The build produces a self-contained bundle. server/dist/ is gitignored — CI builds fresh from source.
See CONTRIBUTING.md for workflow details.
---
License