MCP Tools Quick Start Guide
CODITECT provides 9 MCP (Model Context Protocol) tools that give Claude Code access to organizational memory, code intelligence, multi-LLM orchestration, and backup operations. This guide shows how to enable all tools in under 5 minutes and use them effectively.
Tool Ecosystem Overview
CODITECT's 9 MCP tools fall into 4 categories. Together they expose 74+ callable tools through the MCP protocol.
Search & Memory (2 tools)
| Tool | MCP Tools | What It Does |
|---|---|---|
| mcp-semantic-search | 8 | Hybrid search combining FTS5 keyword + vector similarity (all-MiniLM-L6-v2) with RRF fusion across 143K+ embeddings, 1,856 decisions, 475 error solutions, and knowledge graph |
| mcp-context-graph | 17 | Knowledge graph navigation with task-specific subgraph building, agent context injection, and cross-session workflow state management |
Code Intelligence (2 tools)
| Tool | MCP Tools | What It Does |
|---|---|---|
| mcp-call-graph | 7 | AST-based call graph indexing (Python, JS, TS) with memory-linked search connecting functions to session history and decisions |
| mcp-impact-analysis | 5 | Decision-aware risk scoring combining blast radius (0-40), ADR constraints (0-35), and historical issues (0-25) into a 0-100 risk score |
Orchestration (3 tools)
| Tool | MCP Tools | What It Does |
|---|---|---|
| mcp-cross-llm-bridge | 15 | Multi-LLM orchestration with semantic command processing, intelligent task routing across 5 providers, skill translation, and token cost tracking |
| mcp-skill-server | 18 | Progressive disclosure skill access (94% context reduction) with category-based loading, semantic search, and CEF experience pack activation |
| mcp-unified-gateway | 74 (aggregated) | Single-connection gateway aggregating all 8 backend servers with lazy loading (20MB vs 250MB), O(1) tool routing, and graceful degradation |
Operations (2 tools)
| Tool | MCP Tools | What It Does |
|---|---|---|
| mcp-backup | 5 | Backup/restore of CODITECT databases, Claude config, session logs, and hooks to GCS with validation and dry-run support |
| transcript-normalization | CLI | Processing pipeline converting raw transcript TXT to structured Markdown with sentence splitting, speaker detection, and de-hyphenation |
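The normalization steps named above (sentence splitting, de-hyphenation) can be sketched in a few lines. The function names and regexes here are illustrative, not the pipeline's actual implementation:

```python
import re

def dehyphenate(text: str) -> str:
    """Rejoin words split across line breaks with a trailing hyphen.

    Minimal sketch: 'transcrip-\\ntion' becomes 'transcription'. The real
    pipeline may treat compound words and soft hyphens differently.
    """
    return re.sub(r"(\w)-\n(\w)", r"\1\2", text)

def split_sentences(text: str) -> list[str]:
    """Naive splitter on terminal punctuation followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
```

Speaker detection and Markdown emission would layer on top of these two passes.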
How They Relate
┌─────────────────────────┐
│ mcp-unified-gateway │ ← Use this OR individual servers
│ (74 tools, 1 process) │
└────────────┬────────────┘
│ routes to
┌──────────┬───────────┼───────────┬──────────┐
▼ ▼ ▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ...
│ semantic │ │call-graph│ │ context │ │cross-llm │
│ search │ │ │ │ graph │ │ bridge │
└────┬─────┘ └────┬─────┘ └────┬─────┘ └──────────┘
│ │ │
└────────────┴────────────┘
│
┌───────┴───────┐
│ org.db │ ← Shared organizational memory
│ sessions.db │
└───────────────┘
The unified gateway aggregates 8 backend servers behind one connection. You can use it instead of configuring each server individually. The transcript-normalization tool runs standalone (CLI only, not MCP).
1-2-3 Quick Start
Step 1: Verify Prerequisites
# Activate the CODITECT virtual environment
source ~/.coditect/.venv/bin/activate
# Check MCP SDK is installed
python3 -c "import mcp; print(f'MCP SDK v{mcp.__version__}')"
# Check databases exist
ls -la ~/PROJECTS/.coditect-data/context-storage/*.db
You need:
- Python 3.10+
- The mcp package installed
- CODITECT databases populated (run /orient if this is your first time)
Step 2: Configure MCP
Choose Option A (recommended) for a single gateway, or Option B for individual servers.
Option A: Unified Gateway (recommended)
Add to your project's .mcp.json:
{
"mcpServers": {
"coditect-unified": {
"command": "/Users/YOU/.coditect/.venv/bin/python3",
"args": [
"/Users/YOU/.coditect/tools/mcp-unified-gateway/server.py",
"mcp"
],
"env": {
"PYTHONPATH": "/Users/YOU/.coditect"
}
}
}
}
Replace /Users/YOU with your home directory path. This gives you all 74 tools through one process.
Option B: Individual Servers
Add each server you need to .mcp.json:
{
"mcpServers": {
"coditect-semantic-search": {
"command": "/Users/YOU/.coditect/.venv/bin/python3",
"args": ["/Users/YOU/.coditect/tools/mcp-semantic-search/server.py", "--mcp"],
"env": { "PYTHONPATH": "/Users/YOU/.coditect" }
},
"coditect-call-graph": {
"command": "/Users/YOU/.coditect/.venv/bin/python3",
"args": ["/Users/YOU/.coditect/tools/mcp-call-graph/server.py", "mcp"],
"env": { "PYTHONPATH": "/Users/YOU/.coditect" }
},
"coditect-impact-analysis": {
"command": "/Users/YOU/.coditect/.venv/bin/python3",
"args": ["/Users/YOU/.coditect/tools/mcp-impact-analysis/server.py", "mcp"],
"env": { "PYTHONPATH": "/Users/YOU/.coditect" }
},
"coditect-context-graph": {
"command": "/Users/YOU/.coditect/.venv/bin/python3",
"args": ["/Users/YOU/.coditect/tools/mcp-context-graph/server.py", "mcp"],
"env": { "PYTHONPATH": "/Users/YOU/.coditect" }
},
"coditect-cross-llm": {
"command": "/Users/YOU/.coditect/.venv/bin/python3",
"args": ["/Users/YOU/.coditect/tools/mcp-cross-llm-bridge/server.py", "mcp"],
"env": { "PYTHONPATH": "/Users/YOU/.coditect" }
},
"coditect-skill-server": {
"command": "/Users/YOU/.coditect/.venv/bin/python3",
"args": ["/Users/YOU/.coditect/tools/mcp-skill-server/server.py", "mcp"],
"env": { "PYTHONPATH": "/Users/YOU/.coditect" }
},
"coditect-backup": {
"command": "/Users/YOU/.coditect/.venv/bin/python3",
"args": ["/Users/YOU/.coditect/tools/mcp-backup/server.py"],
"env": { "PYTHONPATH": "/Users/YOU/.coditect" }
}
}
}
Step 3: Verify Tools Are Available
Restart Claude Code, then check the tools are loaded:
# In Claude Code, the MCP tools appear automatically
# You can verify by asking Claude to list available MCP tools
# Or check the gateway CLI:
python3 ~/.coditect/tools/mcp-unified-gateway/server.py list
python3 ~/.coditect/tools/mcp-unified-gateway/server.py stats
Expected output: 74 tools across 8 backends.
5 Common Workflows
Workflow 1: Code Change Impact Review
Use case: Before modifying a function, understand what will break and what decisions constrain the change.
Tools used: mcp-call-graph + mcp-impact-analysis
Step 1: Index the codebase (if not already indexed)
→ index_directory(dir_path="/path/to/project")
Step 2: Check who calls the function you want to change
→ get_callers(function_name="process_payment")
Step 3: Get full impact analysis with risk score
→ analyze_impact(function_name="process_payment", include_indirect=true)
Step 4: Find architectural decisions that constrain changes
→ find_decisions(target="process_payment")
What you learn: Blast radius (15 callers), risk score (67/100 = high), constraining ADRs ("Must use idempotent payment processing per ADR-042"), and past issues ("ValueError on null amounts fixed in Jan 2026").
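The three scoring bands (blast radius 0-40, ADR constraints 0-35, historical issues 0-25) can be sketched as a capped sum. The per-caller and per-ADR weights below are invented for illustration and do not reflect the scorer's real curve inside mcp-impact-analysis:

```python
def risk_score(callers: int, constraining_adrs: int, past_issues: int) -> int:
    """Combine three capped components into a 0-100 risk score.

    Band ceilings match the documented ranges; the multipliers
    inside each band are hypothetical.
    """
    blast = min(40, callers * 3)            # blast radius band, 0-40
    adr = min(35, constraining_adrs * 12)   # ADR constraint band, 0-35
    history = min(25, past_issues * 8)      # historical issue band, 0-25
    return blast + adr + history
```

With 15 callers, 2 constraining ADRs, and 1 past issue, these example weights land in the high-risk region, consistent with the workflow above.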
Workflow 2: Semantic Context Retrieval
Use case: Find past decisions, error solutions, and related context for a task you're working on.
Tools used: mcp-semantic-search + mcp-context-graph
Step 1: Search for relevant past context
→ hybrid_search(query="database migration strategy", limit=10)
Step 2: Search specifically for architectural decisions
→ search_decisions(query="database migration")
Step 3: Build a focused context graph for your task
→ build_context_graph(
task_description="plan database schema migration for v2",
strategy="policy_first",
token_budget=4000
)
What you learn: Related decisions (ADR-118 database architecture), past migration errors and their solutions, and a focused context graph with the most relevant knowledge nodes for your task.
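The RRF (Reciprocal Rank Fusion) step that hybrid_search performs after FTS5 keyword retrieval and vector retrieval can be sketched as follows; k=60 is the common default, though the server's actual constant may differ:

```python
def rrf_fuse(keyword_ranked: list[str], vector_ranked: list[str],
             k: int = 60) -> list[str]:
    """Fuse two ranked result lists with Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) over the lists it
    appears in; documents found by both retrievers rise to the top.
    """
    scores: dict[str, float] = {}
    for ranked in (keyword_ranked, vector_ranked):
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A document ranked #2 by keywords and #1 by vectors beats one ranked #1 by only a single retriever, which is why hybrid search surfaces decisions that either method alone would miss.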
Workflow 3: Cross-LLM Task Routing
Use case: Route different parts of a task to the best LLM for each subtask, tracking costs.
Tools used: mcp-cross-llm-bridge
Step 1: Process a natural language command
→ process_semantic_command(command="analyze security vulnerabilities in auth module")
Step 2: Route to optimal LLM based on task type
→ route_task_to_optimal_llm(
task_description="security code review",
priority="quality"
)
Step 3: Check token spending
→ get_spending_report(period="daily")
Step 4: Compare provider costs
→ compare_provider_costs(task_type="code_review")
What you learn: The optimal provider for each task type, estimated cost per provider, and your daily/weekly/monthly token spending vs budget.
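Under the hood, provider cost comparison reduces to per-token rate arithmetic. A sketch with made-up provider names and rates (the bridge's real registry tracks live rates per provider):

```python
# Hypothetical per-million-token rates in dollars; illustrative only.
RATES = {
    "provider_a": {"input": 3.00, "output": 15.00},
    "provider_b": {"input": 0.25, "output": 1.25},
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate task cost in dollars from per-million-token rates."""
    r = RATES[provider]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

def cheapest(input_tokens: int, output_tokens: int) -> str:
    """Pick the lowest-cost provider for a given token profile."""
    return min(RATES, key=lambda p: estimate_cost(p, input_tokens, output_tokens))
```

Routing on priority="quality" would override this pure-cost pick with a quality score per task type; the cost arithmetic stays the same.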
Workflow 4: Skill Discovery and Loading
Use case: Find the right skill for a task and load it with minimal context window usage.
Tools used: mcp-skill-server
Step 1: Search for relevant skills by intent
→ search_skills_semantic(query="deploy to kubernetes")
Step 2: List skills in a specific category
→ list_skills(category="devops")
Step 3: Load the skill (Level 2 - full instructions)
→ load_skill(skill_name="container-orchestration")
Step 4: Load a CEF experience pack for your role
→ activate_experience_pack(pack_name="devops-engineer")
What you learn: The 315+ available skills organized by category, with 94% less context usage than loading all skills at once. CEF experience packs pre-load the right skills for your role.
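The progressive disclosure idea behind that 94% figure can be sketched as a registry that keeps only one-line summaries resident (Level 1) and pulls full skill bodies into context on demand (Level 2). The class and catalog shape here are illustrative, not the skill server's actual data model:

```python
class SkillRegistry:
    """Sketch of progressive disclosure for skill loading."""

    def __init__(self, catalog: dict[str, tuple[str, str]]):
        # catalog maps skill name -> (one-line summary, full instructions)
        self._catalog = catalog
        self._loaded: dict[str, str] = {}

    def list_summaries(self) -> dict[str, str]:
        """Level 1: cheap, always-resident view of every skill."""
        return {name: meta[0] for name, meta in self._catalog.items()}

    def load_skill(self, name: str) -> str:
        """Level 2: pull one skill's full instructions into context."""
        body = self._catalog[name][1]
        self._loaded[name] = body
        return body

    def resident_chars(self) -> int:
        """Context cost: all summaries plus only the loaded bodies."""
        summaries = sum(len(s) for s, _ in self._catalog.values())
        return summaries + sum(len(b) for b in self._loaded.values())
```

With hundreds of skills, the resident cost stays near the summary total until a skill is actually needed, which is where the context reduction comes from.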
Workflow 5: Backup and Recovery
Use case: Back up databases before a risky operation, verify backup health.
Tools used: mcp-backup
Step 1: Check backup health
→ backup_status(mode="health")
Step 2: Validate databases before backup
→ backup_validate()
Step 3: Create a new backup (dry-run first)
→ backup_create(dry_run=true)
Step 4: Create actual backup
→ backup_create(dry_run=false)
Step 5: List available backups for verification
→ backup_list(limit=5)
What you learn: Last backup time, storage usage, database integrity status, and a verified new backup in GCS.
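The dry-run-first pattern in steps 3-4 can be sketched as planning the upload without performing it. The bucket name and result shape below are illustrative, not mcp-backup's real output:

```python
def backup_create(paths: list[str], dry_run: bool = True) -> dict:
    """Plan a backup, then perform it only when dry_run is False."""
    plan = [
        {"src": p, "dest": f"gs://backup-bucket/{p.lstrip('/')}"}
        for p in paths
    ]
    if dry_run:
        # Report exactly what would happen, touching nothing.
        return {"status": "dry_run", "would_upload": plan}
    # A real implementation would copy each file to GCS here.
    return {"status": "ok", "uploaded": plan}
```

Inspecting the dry-run plan before flipping dry_run=false is what makes step 4 safe before a risky operation.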
Tool-to-Use-Case Decision Matrix
Use this matrix to find the right tool for your task.
| I want to... | Use This Tool | Key Function |
|---|---|---|
| Search past conversations | mcp-semantic-search | hybrid_search |
| Find error solutions | mcp-semantic-search | search_errors |
| Find architectural decisions | mcp-semantic-search or mcp-impact-analysis | search_decisions / find_decisions |
| See who calls a function | mcp-call-graph | get_callers |
| See what a function calls | mcp-call-graph | get_callees |
| Trace call path A→B | mcp-call-graph | call_chain |
| Assess risk of changing code | mcp-impact-analysis | assess_risk |
| Get blast radius of a change | mcp-impact-analysis | analyze_impact |
| Build task-specific context | mcp-context-graph | build_context_graph |
| Inject context into an agent | mcp-context-graph | get_agent_context |
| Track multi-step workflow | mcp-context-graph | start_workflow / resume_workflow |
| Route task to best LLM | mcp-cross-llm-bridge | route_task_to_optimal_llm |
| Parse natural language command | mcp-cross-llm-bridge | process_semantic_command |
| Track LLM spending | mcp-cross-llm-bridge | get_spending_report |
| Find a skill for my task | mcp-skill-server | search_skills_semantic |
| Load a skill into context | mcp-skill-server | load_skill |
| Activate role-based skill pack | mcp-skill-server | activate_experience_pack |
| Back up databases | mcp-backup | backup_create |
| Restore from backup | mcp-backup | backup_restore |
| Check backup health | mcp-backup | backup_status |
| Normalize transcripts | transcript-normalization | CLI: python3 normalize.py |
| Use all tools in one connection | mcp-unified-gateway | All 74 tools via one MCP endpoint |
Gateway vs Individual Servers
| Consideration | Unified Gateway | Individual Servers |
|---|---|---|
| Configuration | 1 entry in .mcp.json | 7-8 entries |
| Startup memory | 20MB (lazy loading) | 250MB (all loaded) |
| Processes | 1 | 7-8 |
| Routing overhead | <2ms per call | None |
| Failure isolation | Graceful degradation per backend | Full isolation |
| Debugging | Check server.py stats | Check each server independently |
| Recommendation | Use for most users | Use when debugging a specific tool |
Recommendation: Start with the unified gateway. Switch to individual servers only if you need to debug a specific tool or want full process isolation.
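The gateway's lazy loading and O(1) routing can be sketched as a dict that maps each tool name to a backend factory, instantiating the backend only on first use. Names and shapes here are illustrative, not the gateway's real internals:

```python
class Gateway:
    """Sketch of one-connection routing with lazy backend startup."""

    def __init__(self, factories: dict[str, callable]):
        self._factories = factories      # tool name -> backend factory
        self._backends: dict[int, object] = {}

    def call(self, tool: str, *args, **kwargs):
        factory = self._factories[tool]  # O(1) dict lookup per call
        key = id(factory)
        if key not in self._backends:    # lazy: start backend on first use
            self._backends[key] = factory()
        return self._backends[key](tool, *args, **kwargs)
```

Because no backend starts until its first tool call, startup memory stays at the routing table rather than eight fully loaded servers, which is the 20MB-vs-250MB tradeoff in the table above.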
Troubleshooting
Tools Not Appearing in Claude Code
# Check MCP server starts without errors
python3 ~/.coditect/tools/mcp-unified-gateway/server.py stats
# Check PYTHONPATH includes coditect-core
echo $PYTHONPATH
# Verify .mcp.json syntax
python3 -c "import json; json.load(open('.mcp.json'))"
"Module not found" Errors
# Ensure you're using the CODITECT venv Python
which python3 # Should point to ~/.coditect/.venv/bin/python3
# Install missing dependencies
source ~/.coditect/.venv/bin/activate
pip install mcp sentence-transformers tree-sitter
Slow Vector Search
Vector search scans all embeddings (143K+). To keep queries under 50ms:
- Use hybrid_search (FTS5 pre-filters before vector ranking)
- Reduce the limit parameter
- Use keyword_search for exact term matching
Database Locked Errors
CODITECT databases use SQLite WAL mode. If you see lock errors:
# Check for stuck processes
lsof ~/PROJECTS/.coditect-data/context-storage/org.db
# WAL checkpoint (safe to run)
# WAL checkpoint (safe to run; note sqlite3 does not expand ~ itself)
python3 -c "
import os, sqlite3
conn = sqlite3.connect(os.path.expanduser('~/PROJECTS/.coditect-data/context-storage/org.db'))
conn.execute('PRAGMA wal_checkpoint(TRUNCATE)')
conn.close()
"
Further Reading
- MCP Server Development Skill — Build your own MCP tools
- Memory Management Guide — How CODITECT stores and retrieves context
- Context Graph Guide — Deep dive into knowledge graph features
- Component Reference — Full component inventory
- Individual tool READMEs in tools/mcp-*/README.md
Task ID: F.6.5 Author: Claude (Opus 4.6) Created: 2026-02-07