/trajectory Command
View and analyze tool execution trajectories for debugging agent workflows. Data is stored in the tool_analytics table of sessions.db (ADR-118 Tier 3).
v2.0.0 Update: Trajectory data now stored in database (not files). See ADR-079.
Usage
/trajectory # Show recent tool calls
/trajectory --stats # Show trajectory statistics
/trajectory --session <id> # Filter by session
/trajectory --tool <name> # Filter by tool name
/trajectory --errors # Show failed tool calls
/trajectory --refresh # Regenerate data.json (--timeframe all) and launch dashboard
/trajectory --refresh --timeframe week # Regenerate with specific timeframe
/trajectory --server # Launch dashboard at http://localhost:5174/ (uses existing data.json)
Options
| Option | Description |
|---|---|
| --stats | Show aggregate statistics |
| --session <id> | Filter to specific session |
| --tool <name> | Filter to specific tool (Bash, Read, Edit, etc.) |
| --task <id> | Filter to specific task ID (A.1.1, etc.) |
| --errors | Show only failed tool calls |
| --limit N | Limit results (default: 20) |
| --today | Show only today's tool calls |
| --refresh | Regenerate data.json from sessions.db + org.db and launch dashboard |
| --timeframe <period> | Time range for --refresh: today, week, sprint, month, all (default: all) |
| --server | Launch the trajectory dashboard at http://localhost:5174/ (no data refresh) |
Examples
Show Recent Tool Calls
/trajectory --limit 10
SELECT
tool_name,
tool_category,
task_id,
status,
execution_time_ms,
created_at
FROM tool_analytics
ORDER BY created_at DESC
LIMIT 10;
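Assuming sessions.db is readable, the same query can be issued from Python via the standard sqlite3 module. The sketch below runs it against an in-memory stand-in table with sample rows; column names follow the tool_analytics schema documented under Data Storage, but the column types here are assumptions.

```python
import sqlite3

# In-memory stand-in for sessions.db; types are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tool_analytics (
        tool_name TEXT, tool_category TEXT, task_id TEXT,
        status TEXT, execution_time_ms INTEGER, created_at TEXT
    )
""")
rows = [
    ("Bash", "execution", "A.1.1", "success", 120, "2025-01-02 10:00:00"),
    ("Read", "file_ops",  "A.1.1", "success",  15, "2025-01-02 10:00:05"),
    ("Edit", "file_ops",  "A.1.2", "failed",   40, "2025-01-02 10:00:09"),
]
conn.executemany("INSERT INTO tool_analytics VALUES (?, ?, ?, ?, ?, ?)", rows)

recent = conn.execute("""
    SELECT tool_name, tool_category, task_id, status, execution_time_ms, created_at
    FROM tool_analytics
    ORDER BY created_at DESC
    LIMIT 10
""").fetchall()
print(recent[0][0])  # most recent call first: "Edit"
```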
Show Statistics
/trajectory --stats
-- Tool usage summary
SELECT
tool_name,
tool_category,
COUNT(*) as total_calls,
SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) as successes,
ROUND(100.0 * SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) / COUNT(*), 1) as success_rate,
ROUND(AVG(execution_time_ms), 0) as avg_time_ms
FROM tool_analytics
WHERE created_at > datetime('now', '-7 days')
GROUP BY tool_name, tool_category
ORDER BY total_calls DESC
LIMIT 20;
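The success-rate arithmetic (100.0 × successes ÷ total, rounded to one decimal) can be sanity-checked on sample data; rows are stamped with datetime('now') here so they fall inside the 7-day window.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tool_analytics (
        tool_name TEXT, tool_category TEXT, status TEXT,
        execution_time_ms INTEGER, created_at TEXT
    )
""")
# Three Bash calls (two successes) stamped now, inside the 7-day window.
conn.executemany(
    "INSERT INTO tool_analytics VALUES ('Bash', 'execution', ?, ?, datetime('now'))",
    [("success", 100), ("success", 200), ("failed", 300)],
)

stats = conn.execute("""
    SELECT tool_name,
           COUNT(*) AS total_calls,
           ROUND(100.0 * SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) / COUNT(*), 1)
               AS success_rate,
           ROUND(AVG(execution_time_ms), 0) AS avg_time_ms
    FROM tool_analytics
    WHERE created_at > datetime('now', '-7 days')
    GROUP BY tool_name
""").fetchone()
print(stats)  # ('Bash', 3, 66.7, 200.0)
```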
Filter by Session
/trajectory --session abc123
SELECT tool_name, task_id, status, execution_time_ms, created_at
FROM tool_analytics
WHERE session_id = 'abc123'
ORDER BY created_at;
Filter by Tool
/trajectory --tool Bash
SELECT session_id, task_id, status, execution_time_ms, created_at
FROM tool_analytics
WHERE tool_name = 'Bash'
ORDER BY created_at DESC
LIMIT 20;
Show Errors
/trajectory --errors
SELECT
tool_name,
error_type,
error_message,
task_id,
session_id,
created_at
FROM tool_analytics
WHERE status = 'failed'
ORDER BY created_at DESC
LIMIT 20;
Tool Sequences (What Follows What)
-- Common tool sequences within sessions
WITH tool_sequence AS (
SELECT
session_id,
tool_name,
LAG(tool_name) OVER (PARTITION BY session_id ORDER BY created_at) as prev_tool
FROM tool_analytics
WHERE trajectory_hash IS NOT NULL
)
SELECT
prev_tool || ' → ' || tool_name as sequence,
COUNT(*) as frequency
FROM tool_sequence
WHERE prev_tool IS NOT NULL
GROUP BY prev_tool, tool_name
HAVING COUNT(*) > 50
ORDER BY frequency DESC
LIMIT 20;
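SQLite has supported window functions such as LAG since version 3.25, so the sequence query above can be exercised on toy data; the HAVING threshold is lowered from 50 so the sample rows survive it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tool_analytics (
        session_id TEXT, tool_name TEXT, trajectory_hash TEXT, created_at TEXT
    )
""")
# One session that repeats the Read -> Edit pattern twice.
calls = [
    ("s1", "Read", "h1", "2025-01-02 10:00:00"),
    ("s1", "Edit", "h2", "2025-01-02 10:00:01"),
    ("s1", "Read", "h3", "2025-01-02 10:00:02"),
    ("s1", "Edit", "h4", "2025-01-02 10:00:03"),
]
conn.executemany("INSERT INTO tool_analytics VALUES (?, ?, ?, ?)", calls)

sequences = conn.execute("""
    WITH tool_sequence AS (
        SELECT session_id, tool_name,
               LAG(tool_name) OVER (PARTITION BY session_id ORDER BY created_at) AS prev_tool
        FROM tool_analytics
        WHERE trajectory_hash IS NOT NULL
    )
    SELECT prev_tool || ' → ' || tool_name AS sequence, COUNT(*) AS frequency
    FROM tool_sequence
    WHERE prev_tool IS NOT NULL
    GROUP BY prev_tool, tool_name
    HAVING COUNT(*) >= 2  -- lowered from > 50 for this toy dataset
    ORDER BY frequency DESC
""").fetchall()
print(sequences)  # [('Read → Edit', 2)]
```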
Refresh Data and Launch Dashboard
/trajectory --refresh
Regenerates data.json by running the full Python adapter pipeline against sessions.db and org.db, then launches the Vite dev server. This is the recommended way to get fresh data into the trajectory dashboard.
Equivalent to:
cd tools/trajectory-dashboard
python3 ../../scripts/trajectory/dashboard_data_adapter.py \
--timeframe all \
--output public/data.json \
--pretty
npm run dev -- --port 5174
open http://localhost:5174
With a specific timeframe:
/trajectory --refresh --timeframe week
What --refresh does:
- Runs dashboard_data_adapter.py with the selected --timeframe (default: all)
- Parses TRACK files for task descriptions and % complete
- Parses session logs for monthly digests
- Generates executive summary (overall progress, health score)
- Writes data.json to tools/trajectory-dashboard/public/
- Starts Vite dev server on port 5174
- Opens browser
Launch Dashboard Server (No Refresh)
/trajectory --server
Starts the trajectory dashboard on http://localhost:5174/ using existing data.json. Does NOT regenerate data. Use --refresh if data is stale.
Equivalent to:
cd tools/trajectory-dashboard && npm run dev
Dashboard panels: Timeline chart, agent activity graph, token usage bars, error heatmap, tool call waterfall, executive snapshot, track progress, session logs.
See /trajectory-dashboard for full dashboard options.
Implementation
import sqlite3
from pathlib import Path

CONTEXT_DB = Path.home() / ".coditect" / "context-storage" / "sessions.db"

def get_trajectory_stats(days=7):
    conn = sqlite3.connect(str(CONTEXT_DB))
    try:
        cursor = conn.execute("""
            SELECT
                tool_name,
                COUNT(*) as total,
                SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) as successes,
                ROUND(AVG(execution_time_ms), 0) as avg_ms
            FROM tool_analytics
            WHERE created_at > datetime('now', ?)
            GROUP BY tool_name
            ORDER BY total DESC
        """, (f'-{days} days',))
        return cursor.fetchall()
    finally:
        conn.close()

def get_recent_calls(limit=20, tool=None, session=None):
    conn = sqlite3.connect(str(CONTEXT_DB))
    try:
        query = "SELECT * FROM tool_analytics WHERE 1=1"
        params = []
        if tool:
            query += " AND tool_name = ?"
            params.append(tool)
        if session:
            query += " AND session_id = ?"
            params.append(session)
        query += " ORDER BY created_at DESC LIMIT ?"
        params.append(limit)
        cursor = conn.execute(query, params)
        return cursor.fetchall()
    finally:
        conn.close()
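The filter flags map onto the dynamic WHERE clause built in get_recent_calls, and the same pattern extends naturally to --errors and --task. A minimal sketch of that flag-to-query mapping (build_query is a hypothetical helper for illustration, not part of the shipped command):

```python
def build_query(tool=None, session=None, errors=False, task=None, limit=20):
    """Hypothetical helper mirroring get_recent_calls' dynamic WHERE clause."""
    query = "SELECT * FROM tool_analytics WHERE 1=1"
    params = []
    if tool:
        query += " AND tool_name = ?"
        params.append(tool)
    if session:
        query += " AND session_id = ?"
        params.append(session)
    if task:
        query += " AND task_id = ?"
        params.append(task)
    if errors:
        query += " AND status = 'failed'"
    query += " ORDER BY created_at DESC LIMIT ?"
    params.append(limit)
    return query, params

# /trajectory --tool Bash --errors --limit 5
query, params = build_query(tool="Bash", errors=True, limit=5)
print(params)  # ['Bash', 5]
```

Keeping user-supplied values in the params list (rather than interpolating them into the SQL string) preserves parameterized-query safety as filters are combined.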
Data Storage
Location: ~/PROJECTS/.coditect-data/context-storage/sessions.db (ADR-118 Tier 3)
Table: tool_analytics
| Column | Description |
|---|---|
| session_id | Claude Code session ID |
| tool_name | Tool name (Bash, Read, Edit, Task, etc.) |
| tool_category | Category (file_ops, search, execution, etc.) |
| task_id | Track nomenclature (A.1.1, etc.) |
| status | success, failed, timeout, interrupted |
| execution_time_ms | Tool execution duration |
| trajectory_hash | Unique hash for deduplication |
| created_at | Timestamp |
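A schema matching the documented columns might look like the DDL below. The exact column types, constraints, and index names live in DATABASE-SCHEMA.md; treat this as an illustration, not the authoritative schema.

```python
import sqlite3

# Illustrative DDL only -- types and constraints are assumptions,
# not the authoritative schema from DATABASE-SCHEMA.md.
DDL = """
CREATE TABLE IF NOT EXISTS tool_analytics (
    session_id        TEXT NOT NULL,
    tool_name         TEXT NOT NULL,
    tool_category     TEXT,
    task_id           TEXT,
    status            TEXT CHECK (status IN ('success', 'failed', 'timeout', 'interrupted')),
    execution_time_ms INTEGER,
    trajectory_hash   TEXT UNIQUE,
    created_at        TEXT DEFAULT (datetime('now'))
);
"""
conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
cols = [row[1] for row in conn.execute("PRAGMA table_info(tool_analytics)")]
print(cols)
```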
Populated by: /cx pipeline via trajectory_extractor.py
Migration from v1.0
The file-based trajectory system (~/.coditect/trajectories/*.jsonl) has been deprecated:
| v1.0 (Files) | v2.0 (Database) |
|---|---|
| 7,956 individual files | Single SQLite table |
| 36MB storage | ~50KB index overhead |
| Not queryable | Full SQL support |
| Per-session files | Unified storage |
All trajectory data is now extracted via /cx and stored in tool_analytics with hash-based deduplication.
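Hash-based deduplication can be implemented with a UNIQUE column plus INSERT OR IGNORE, so re-running /cx over the same session is idempotent. The hashing recipe below (SHA-256 over session, tool, and timestamp) is an illustration; the actual inputs trajectory_extractor.py hashes may differ.

```python
import hashlib
import sqlite3

def trajectory_hash(session_id, tool_name, created_at):
    """Illustrative hash recipe; the extractor's actual inputs may differ."""
    payload = f"{session_id}|{tool_name}|{created_at}".encode()
    return hashlib.sha256(payload).hexdigest()

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tool_analytics (
        session_id TEXT, tool_name TEXT, created_at TEXT,
        trajectory_hash TEXT UNIQUE
    )
""")

row = ("s1", "Bash", "2025-01-02 10:00:00")
h = trajectory_hash(*row)
# Re-extracting the same call is a no-op thanks to INSERT OR IGNORE.
for _ in range(2):
    conn.execute(
        "INSERT OR IGNORE INTO tool_analytics VALUES (?, ?, ?, ?)", (*row, h)
    )
count = conn.execute("SELECT COUNT(*) FROM tool_analytics").fetchone()[0]
print(count)  # 1
```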
Related
- Dashboard: /trajectory-dashboard — Full dashboard with data refresh options
- ADR: ADR-079
- ADR: ADR-101
- ADR: ADR-163
- Schema: DATABASE-SCHEMA.md
- Extractor: trajectory_extractor.py