
MCP Tools Quick Start Guide

CODITECT provides 9 tools (8 MCP (Model Context Protocol) servers plus one CLI utility) that give Claude Code access to organizational memory, code intelligence, multi-LLM orchestration, and backup operations. This guide shows how to enable all tools in under 5 minutes and use them effectively.


Tool Ecosystem Overview

CODITECT's 9 tools fall into 4 categories. Together they expose 74+ callable tools through the MCP protocol.

Search & Memory (2 tools)

| Tool | MCP Tools | What It Does |
|------|-----------|--------------|
| mcp-semantic-search | 8 | Hybrid search combining FTS5 keyword + vector similarity (all-MiniLM-L6-v2) with RRF fusion across 143K+ embeddings, 1,856 decisions, 475 error solutions, and knowledge graph |
| mcp-context-graph | 17 | Knowledge graph navigation with task-specific subgraph building, agent context injection, and cross-session workflow state management |
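The RRF (Reciprocal Rank Fusion) step mentioned above can be sketched in a few lines. This is an illustrative sketch of the general technique, not mcp-semantic-search's internals; the document IDs and `rrf_fuse` helper are made up.

```python
# Sketch of Reciprocal Rank Fusion (RRF): merge the FTS5 keyword ranking
# and the vector-similarity ranking into one list. k=60 is the
# conventional RRF constant from the original formulation.

def rrf_fuse(rankings, k=60):
    """Score each document by sum of 1/(k + rank) over all rankings."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["adr-118", "sess-042", "err-007"]   # FTS5 order (illustrative)
vector_hits  = ["adr-118", "note-311", "sess-042"]  # embedding order (illustrative)
fused = rrf_fuse([keyword_hits, vector_hits])
# Documents ranked highly by both retrievers rise to the top.
```

Documents that appear near the top of both lists accumulate the highest fused score, which is why hybrid search beats either retriever alone on mixed keyword/semantic queries.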

Code Intelligence (2 tools)

| Tool | MCP Tools | What It Does |
|------|-----------|--------------|
| mcp-call-graph | 7 | AST-based call graph indexing (Python, JS, TS) with memory-linked search connecting functions to session history and decisions |
| mcp-impact-analysis | 5 | Decision-aware risk scoring combining blast radius (0-40), ADR constraints (0-35), and historical issues (0-25) into a 0-100 risk score |
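The three-component risk score above composes additively. The sketch below shows the composition; the individual scaling functions are assumptions for illustration, not mcp-impact-analysis's actual internals.

```python
# Sketch of a 0-100 risk score composed from the three weighted bands the
# table describes: blast radius (0-40) + ADR constraints (0-35) +
# historical issues (0-25). The per-band scaling is illustrative.

def risk_score(caller_count, adr_constraints, past_issues):
    blast = min(40, caller_count * 4)      # saturates at 10+ callers
    adrs = min(35, adr_constraints * 12)   # saturates at 3+ constraining ADRs
    history = min(25, past_issues * 8)     # saturates at ~3 past issues
    return blast + adrs + history

score = risk_score(caller_count=15, adr_constraints=2, past_issues=1)
print(score)  # 40 + 24 + 8 = 72
```

Capping each band keeps any single factor from dominating: a function with hundreds of callers but no constraining decisions or incident history still tops out at 40.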

Orchestration (3 tools)

| Tool | MCP Tools | What It Does |
|------|-----------|--------------|
| mcp-cross-llm-bridge | 15 | Multi-LLM orchestration with semantic command processing, intelligent task routing across 5 providers, skill translation, and token cost tracking |
| mcp-skill-server | 18 | Progressive disclosure skill access (94% context reduction) with category-based loading, semantic search, and CEF experience pack activation |
| mcp-unified-gateway | 74 (aggregated) | Single-connection gateway aggregating all 8 backend servers with lazy loading (20MB vs 250MB), O(1) tool routing, and graceful degradation |

Operations (2 tools)

| Tool | MCP Tools | What It Does |
|------|-----------|--------------|
| mcp-backup | 5 | Backup/restore of CODITECT databases, Claude config, session logs, and hooks to GCS with validation and dry-run support |
| transcript-normalization | CLI | Processing pipeline converting raw transcript TXT to structured Markdown with sentence splitting, speaker detection, and de-hyphenation |

How They Relate

                   ┌─────────────────────────┐
                   │   mcp-unified-gateway   │ ← Use this OR individual servers
                   │  (74 tools, 1 process)  │
                   └────────────┬────────────┘
                                │ routes to
       ┌───────────┬────────────┼────────────┬───────────┐
       ▼           ▼            ▼            ▼           ▼
 ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
 │ semantic │ │call-graph│ │ context  │ │cross-llm │   ...
 │  search  │ │          │ │  graph   │ │  bridge  │
 └────┬─────┘ └────┬─────┘ └────┬─────┘ └──────────┘
      │            │            │
      └────────────┴────────────┘
                   │
           ┌───────┴───────┐
           │    org.db     │ ← Shared organizational memory
           │  sessions.db  │
           └───────────────┘

The unified gateway aggregates 8 backend servers behind one connection. You can use it instead of configuring each server individually. The transcript-normalization tool runs standalone (CLI only, not MCP).


1-2-3 Quick Start

Step 1: Verify Prerequisites

# Activate the CODITECT virtual environment
source ~/.coditect/.venv/bin/activate

# Check MCP SDK is installed
python3 -c "import mcp, importlib.metadata; print('MCP SDK v' + importlib.metadata.version('mcp'))"

# Check databases exist
ls -la ~/PROJECTS/.coditect-data/context-storage/*.db

You need:

  • Python 3.10+
  • mcp package installed
  • CODITECT databases populated (run /orient if first time)

Step 2: Configure MCP

Choose Option A (recommended) for a single gateway, or Option B for individual servers.

Option A: Unified Gateway (recommended)

Add to your project's .mcp.json:

{
  "mcpServers": {
    "coditect-unified": {
      "command": "/Users/YOU/.coditect/.venv/bin/python3",
      "args": [
        "/Users/YOU/.coditect/tools/mcp-unified-gateway/server.py",
        "mcp"
      ],
      "env": {
        "PYTHONPATH": "/Users/YOU/.coditect"
      }
    }
  }
}

Replace /Users/YOU with your home directory path. This gives you all 74 tools through one process.
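If you prefer not to hand-edit the /Users/YOU placeholders, a short script can generate the entry with your home directory expanded. This helper is an optional convenience, not part of CODITECT; it assumes the install layout shown above.

```python
# Generate a .mcp.json gateway entry with the home directory expanded,
# matching the Option A configuration above. Run from your project root.
import json
from pathlib import Path

home = Path.home()
config = {
    "mcpServers": {
        "coditect-unified": {
            "command": str(home / ".coditect/.venv/bin/python3"),
            "args": [str(home / ".coditect/tools/mcp-unified-gateway/server.py"), "mcp"],
            "env": {"PYTHONPATH": str(home / ".coditect")},
        }
    }
}
Path(".mcp.json").write_text(json.dumps(config, indent=2) + "\n")
```

If your project already has a .mcp.json with other servers, merge the "coditect-unified" entry into the existing "mcpServers" object instead of overwriting the file.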

Option B: Individual Servers

Add each server you need to .mcp.json:

{
  "mcpServers": {
    "coditect-semantic-search": {
      "command": "/Users/YOU/.coditect/.venv/bin/python3",
      "args": ["/Users/YOU/.coditect/tools/mcp-semantic-search/server.py", "--mcp"],
      "env": { "PYTHONPATH": "/Users/YOU/.coditect" }
    },
    "coditect-call-graph": {
      "command": "/Users/YOU/.coditect/.venv/bin/python3",
      "args": ["/Users/YOU/.coditect/tools/mcp-call-graph/server.py", "mcp"],
      "env": { "PYTHONPATH": "/Users/YOU/.coditect" }
    },
    "coditect-impact-analysis": {
      "command": "/Users/YOU/.coditect/.venv/bin/python3",
      "args": ["/Users/YOU/.coditect/tools/mcp-impact-analysis/server.py", "mcp"],
      "env": { "PYTHONPATH": "/Users/YOU/.coditect" }
    },
    "coditect-context-graph": {
      "command": "/Users/YOU/.coditect/.venv/bin/python3",
      "args": ["/Users/YOU/.coditect/tools/mcp-context-graph/server.py", "mcp"],
      "env": { "PYTHONPATH": "/Users/YOU/.coditect" }
    },
    "coditect-cross-llm": {
      "command": "/Users/YOU/.coditect/.venv/bin/python3",
      "args": ["/Users/YOU/.coditect/tools/mcp-cross-llm-bridge/server.py", "mcp"],
      "env": { "PYTHONPATH": "/Users/YOU/.coditect" }
    },
    "coditect-skill-server": {
      "command": "/Users/YOU/.coditect/.venv/bin/python3",
      "args": ["/Users/YOU/.coditect/tools/mcp-skill-server/server.py", "mcp"],
      "env": { "PYTHONPATH": "/Users/YOU/.coditect" }
    },
    "coditect-backup": {
      "command": "/Users/YOU/.coditect/.venv/bin/python3",
      "args": ["/Users/YOU/.coditect/tools/mcp-backup/server.py"],
      "env": { "PYTHONPATH": "/Users/YOU/.coditect" }
    }
  }
}

Step 3: Verify Tools Are Available

Restart Claude Code, then check the tools are loaded:

# In Claude Code, the MCP tools appear automatically
# You can verify by asking Claude to list available MCP tools
# Or check the gateway CLI:
python3 ~/.coditect/tools/mcp-unified-gateway/server.py list
python3 ~/.coditect/tools/mcp-unified-gateway/server.py stats

Expected output: 74 tools across 8 backends.


5 Common Workflows

Workflow 1: Code Change Impact Review

Use case: Before modifying a function, understand what will break and what decisions constrain the change.

Tools used: mcp-call-graph + mcp-impact-analysis

Step 1: Index the codebase (if not already indexed)
→ index_directory(dir_path="/path/to/project")

Step 2: Check who calls the function you want to change
→ get_callers(function_name="process_payment")

Step 3: Get full impact analysis with risk score
→ analyze_impact(function_name="process_payment", include_indirect=true)

Step 4: Find architectural decisions that constrain changes
→ find_decisions(target="process_payment")

What you learn: Blast radius (15 callers), risk score (67/100 = high), constraining ADRs ("Must use idempotent payment processing per ADR-042"), and past issues ("ValueError on null amounts fixed in Jan 2026").
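The indirect-caller expansion behind include_indirect=true amounts to a breadth-first walk over a reversed call graph. The sketch below shows the idea with a hand-written graph; mcp-call-graph builds the real graph from AST indexing, and the function names here are illustrative.

```python
# Sketch of indirect blast-radius computation: BFS over a
# function -> direct-callers mapping until no new callers appear.
from collections import deque

callers = {  # illustrative call graph: function -> its direct callers
    "process_payment": ["checkout", "retry_payment"],
    "checkout": ["cart_api"],
    "retry_payment": ["payment_worker"],
}

def blast_radius(fn):
    seen, queue = set(), deque([fn])
    while queue:
        for caller in callers.get(queue.popleft(), []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)   # expand indirect callers too
    return seen

print(sorted(blast_radius("process_payment")))
# ['cart_api', 'checkout', 'payment_worker', 'retry_payment']
```

Direct callers alone would report 2 affected functions here; the transitive walk finds 4, which is why indirect analysis matters for risk scoring.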

Workflow 2: Semantic Context Retrieval

Use case: Find past decisions, error solutions, and related context for a task you're working on.

Tools used: mcp-semantic-search + mcp-context-graph

Step 1: Search for relevant past context
→ hybrid_search(query="database migration strategy", limit=10)

Step 2: Search specifically for architectural decisions
→ search_decisions(query="database migration")

Step 3: Build a focused context graph for your task
→ build_context_graph(
      task_description="plan database schema migration for v2",
      strategy="policy_first",
      token_budget=4000
  )

What you learn: Related decisions (ADR-118 database architecture), past migration errors and their solutions, and a focused context graph with the most relevant knowledge nodes for your task.
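One way a token_budget can bound a context graph is greedy selection: keep the highest-relevance nodes that still fit. The node data and selection strategy below are illustrative assumptions, not necessarily what build_context_graph does internally.

```python
# Sketch of budget-bounded context selection: rank candidate knowledge
# nodes by relevance and keep those that fit within the token budget.

nodes = [  # (node_id, relevance, token_cost) -- illustrative values
    ("ADR-118", 0.92, 1800),
    ("migration-error-log", 0.81, 1500),
    ("schema-v1-notes", 0.64, 1200),
    ("old-meeting-notes", 0.40, 900),
]

def select_nodes(nodes, token_budget):
    chosen, used = [], 0
    for node_id, _, cost in sorted(nodes, key=lambda n: n[1], reverse=True):
        if used + cost <= token_budget:   # skip nodes that would overflow
            chosen.append(node_id)
            used += cost
    return chosen, used

chosen, used = select_nodes(nodes, token_budget=4000)
print(chosen, used)  # ['ADR-118', 'migration-error-log'] 3300
```

The budget forces a trade-off: marginally relevant nodes are dropped so the most decision-critical context always fits in the agent's window.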

Workflow 3: Cross-LLM Task Routing

Use case: Route different parts of a task to the best LLM for each subtask, tracking costs.

Tools used: mcp-cross-llm-bridge

Step 1: Process a natural language command
→ process_semantic_command(command="analyze security vulnerabilities in auth module")

Step 2: Route to optimal LLM based on task type
→ route_task_to_optimal_llm(
      task_description="security code review",
      priority="quality"
  )

Step 3: Check token spending
→ get_spending_report(period="daily")

Step 4: Compare provider costs
→ compare_provider_costs(task_type="code_review")

What you learn: The optimal provider for each task type, estimated cost per provider, and your daily/weekly/monthly token spending vs budget.
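A compare_provider_costs-style estimate is just arithmetic over per-token prices. The sketch below uses made-up placeholder prices and provider names to show the calculation shape, not real pricing or the bridge's routing logic.

```python
# Sketch of per-task provider cost comparison: estimated cost =
# input tokens * input rate + output tokens * output rate, per 1K tokens.

prices = {  # provider -> ($/1K input tokens, $/1K output tokens) -- placeholders
    "provider_a": (0.0030, 0.0150),
    "provider_b": (0.0011, 0.0044),
    "provider_c": (0.0005, 0.0015),
}

def estimate(provider, in_tokens, out_tokens):
    pin, pout = prices[provider]
    return in_tokens / 1000 * pin + out_tokens / 1000 * pout

task = {"in_tokens": 12_000, "out_tokens": 3_000}
ranked = sorted(prices, key=lambda p: estimate(p, **task))
cheapest = ranked[0]
print(cheapest, round(estimate(cheapest, **task), 4))
```

A quality-priority router would not simply pick the cheapest provider; cost is one input alongside task-type fit, which is why the bridge exposes both routing and spending reports.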

Workflow 4: Skill Discovery and Loading

Use case: Find the right skill for a task and load it with minimal context window usage.

Tools used: mcp-skill-server

Step 1: Search for relevant skills by intent
→ search_skills_semantic(query="deploy to kubernetes")

Step 2: List skills in a specific category
→ list_skills(category="devops")

Step 3: Load the skill (Level 2 - full instructions)
→ load_skill(skill_name="container-orchestration")

Step 4: Load a CEF experience pack for your role
→ activate_experience_pack(pack_name="devops-engineer")

What you learn: The 315+ available skills organized by category, with 94% less context usage than loading all skills at once. CEF experience packs pre-load the right skills for your role.
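The context savings come from loading one category's skill bodies instead of the whole library. The sketch below illustrates the mechanism with toy data; the actual 94% figure depends on the real library's size, and the skill texts here are placeholders.

```python
# Sketch of progressive disclosure: measure context cost of loading one
# category versus the entire skill library.

library = {  # category -> {skill_name: instruction_text} -- toy data
    "devops": {"container-orchestration": "k8s deploy steps " * 50},
    "frontend": {"react-patterns": "component guidance " * 50},
    "data": {"etl-pipelines": "pipeline guidance " * 50},
}

def context_cost(skills):
    return sum(len(text) for text in skills.values())

full = sum(context_cost(skills) for skills in library.values())
loaded = context_cost(library["devops"])  # load only the matched category
savings = 1 - loaded / full
print(f"loaded {loaded} of {full} chars ({savings:.0%} saved)")
```

With 315+ real skills across many categories, the saved fraction grows toward the quoted 94%, since each query typically touches only one category.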

Workflow 5: Backup and Recovery

Use case: Back up databases before a risky operation, verify backup health.

Tools used: mcp-backup

Step 1: Check backup health
→ backup_status(mode="health")

Step 2: Validate databases before backup
→ backup_validate()

Step 3: Create a new backup (dry-run first)
→ backup_create(dry_run=true)

Step 4: Create actual backup
→ backup_create(dry_run=false)

Step 5: List available backups for verification
→ backup_list(limit=5)

What you learn: Last backup time, storage usage, database integrity status, and a verified new backup in GCS.
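The dry_run flow above follows a common pattern: build the full action plan either way, and only execute it when dry_run is false. This sketch shows the pattern generically; the function body and GCS paths are illustrative, not mcp-backup's implementation.

```python
# Sketch of the dry-run pattern: the plan is identical in both modes,
# so a dry run previews exactly what a real run would do.

def backup_create(databases, dry_run=True):
    plan = [f"upload {db} -> gcs://backups/{db}" for db in databases]
    if dry_run:
        return {"dry_run": True, "planned": plan, "uploaded": 0}
    # real uploads would happen here, one per planned action
    return {"dry_run": False, "planned": plan, "uploaded": len(plan)}

report = backup_create(["org.db", "sessions.db"], dry_run=True)
print(report["planned"])   # preview; nothing was uploaded
```

Because both branches share the same plan, the dry run is a faithful preview: anything missing from its output would also be missing from the real backup.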


Tool-to-Use-Case Decision Matrix

Use this matrix to find the right tool for your task.

| I want to... | Use This Tool | Key Function |
|--------------|---------------|--------------|
| Search past conversations | mcp-semantic-search | hybrid_search |
| Find error solutions | mcp-semantic-search | search_errors |
| Find architectural decisions | mcp-semantic-search or mcp-impact-analysis | search_decisions / find_decisions |
| See who calls a function | mcp-call-graph | get_callers |
| See what a function calls | mcp-call-graph | get_callees |
| Trace call path A→B | mcp-call-graph | call_chain |
| Assess risk of changing code | mcp-impact-analysis | assess_risk |
| Get blast radius of a change | mcp-impact-analysis | analyze_impact |
| Build task-specific context | mcp-context-graph | build_context_graph |
| Inject context into an agent | mcp-context-graph | get_agent_context |
| Track multi-step workflow | mcp-context-graph | start_workflow / resume_workflow |
| Route task to best LLM | mcp-cross-llm-bridge | route_task_to_optimal_llm |
| Parse natural language command | mcp-cross-llm-bridge | process_semantic_command |
| Track LLM spending | mcp-cross-llm-bridge | get_spending_report |
| Find a skill for my task | mcp-skill-server | search_skills_semantic |
| Load a skill into context | mcp-skill-server | load_skill |
| Activate role-based skill pack | mcp-skill-server | activate_experience_pack |
| Back up databases | mcp-backup | backup_create |
| Restore from backup | mcp-backup | backup_restore |
| Check backup health | mcp-backup | backup_status |
| Normalize transcripts | transcript-normalization | CLI: python3 normalize.py |
| Use all tools in one connection | mcp-unified-gateway | All 74 tools via one MCP endpoint |

Gateway vs Individual Servers

| Consideration | Unified Gateway | Individual Servers |
|---------------|-----------------|--------------------|
| Configuration | 1 entry in .mcp.json | 7-8 entries |
| Startup memory | 20MB (lazy loading) | 250MB (all loaded) |
| Processes | 1 | 7-8 |
| Routing overhead | <2ms per call | None |
| Failure isolation | Graceful degradation per backend | Full isolation |
| Debugging | Check server.py stats | Check each server independently |
| Recommendation | Use for most users | Use when debugging a specific tool |

Recommendation: Start with the unified gateway. Switch to individual servers only if you need to debug a specific tool or want full process isolation.


Troubleshooting

Tools Not Appearing in Claude Code

# Check MCP server starts without errors
python3 ~/.coditect/tools/mcp-unified-gateway/server.py stats

# Check PYTHONPATH includes coditect-core
echo $PYTHONPATH

# Verify .mcp.json syntax
python3 -c "import json; json.load(open('.mcp.json'))"

"Module not found" Errors

# Ensure you're using the CODITECT venv Python
which python3 # Should point to ~/.coditect/.venv/bin/python3

# Install missing dependencies
source ~/.coditect/.venv/bin/activate
pip install mcp sentence-transformers tree-sitter

Slow Search Queries

Vector search scans all embeddings (143K+). For queries under 50ms:

  • Use hybrid_search (FTS5 pre-filters before vector)
  • Reduce limit parameter
  • Use keyword_search for exact term matching
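The reason hybrid_search is faster is the two-stage shape: a cheap keyword pre-filter shrinks the candidate set before the expensive vector comparison runs. The sketch below illustrates that shape with a substring filter standing in for FTS5 and toy 2-d vectors standing in for all-MiniLM-L6-v2 embeddings.

```python
# Sketch of pre-filter-then-rerank: stage 1 keyword-filters the corpus,
# stage 2 runs vector scoring only over the surviving candidates.

docs = {  # doc_id -> (text, embedding) -- toy data
    "adr-118": ("database migration strategy", [0.9, 0.1]),
    "sess-042": ("migration rollback error", [0.8, 0.3]),
    "note-311": ("frontend styling notes", [0.1, 0.9]),
}

def hybrid_search(query, query_vec, limit=5):
    # Stage 1: keyword pre-filter (substring check stands in for FTS5)
    candidates = {k: vec for k, (text, vec) in docs.items() if query in text}
    # Stage 2: vector similarity only over the filtered candidates
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    ranked = sorted(candidates, key=lambda k: dot(candidates[k], query_vec), reverse=True)
    return ranked[:limit]

print(hybrid_search("migration", [1.0, 0.0]))  # ['adr-118', 'sess-042']
```

With 143K+ embeddings, scoring only the keyword-matched subset instead of the whole corpus is where the sub-50ms latency comes from.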

Database Locked Errors

CODITECT databases use SQLite WAL mode. If you see lock errors:

# Check for stuck processes
lsof ~/PROJECTS/.coditect-data/context-storage/org.db

# WAL checkpoint (safe to run)
python3 -c "
import os, sqlite3
conn = sqlite3.connect(os.path.expanduser('~/PROJECTS/.coditect-data/context-storage/org.db'))
conn.execute('PRAGMA wal_checkpoint(TRUNCATE)')
conn.close()
"



Task ID: F.6.5 Author: Claude (Opus 4.6) Created: 2026-02-07