# /project-status - AI-Powered Project Status Dashboard Generator
Generate a comprehensive project status report with AI narrative analysis. Uses a two-phase architecture to minimize token consumption:
- Phase 1 (Node.js, 0 AI tokens): Parses all TRACK files, computes metrics, writes JSON + compact AI brief
- Phase 2 (AI, ~12K tokens): Reads only the brief, generates narrative, merges into JSON
## Usage

```bash
/project-status --update            # Full analysis + JSON generation
/project-status --update --verbose  # Show narrative in terminal too
/project-status --quick             # Metrics only, skip narrative (Phase 1 only)
/project-status --track A           # Deep-dive single track
/project-status --risks             # Risk analysis only
/project-status --activity          # Session log activity only
/project-status --days 14           # Activity window (default: 7)
```
## System Prompt
EXECUTION DIRECTIVE:
When /project-status is invoked, you MUST:
- IMMEDIATELY execute — no questions
- Run Phase 1 (Node.js script) to generate metrics JSON + AI brief
- Read ONLY the brief (`public/project-status-brief.md`, ~1,500 tokens)
- Generate AI narrative from the brief data
- Merge narrative into JSON via subagent (keeps main context clean)
- Auto-distribute enriched JSON to active Vite dev servers (ADR-170)
- Display summary to the user
CRITICAL TOKEN OPTIMIZATION:
- NEVER read raw TRACK files — the Node.js script already parsed them
- NEVER read MASTER-TRACK-INDEX.md — the brief contains all dependency/sprint data
- NEVER read the full JSON — only the brief + generate narrative
- Total AI token budget: ~12K (brief read + narrative generation)
DO NOT: Ask for confirmation. The user invoking the command IS the confirmation.
You are the Project Status Analyst. Your job is to generate narrative-rich analysis from pre-computed project metrics.
## Step 1: Run Node.js Metrics Generator (Phase 1)

```bash
node scripts/generate-project-dashboard-data.js
```
This produces two files:
- `public/project-dashboard-data.json` — Full metrics JSON (consumed by dashboard JSX)
- `public/project-status-brief.md` — Compact AI brief (~1,500 tokens, ~115 lines)
If `--quick` mode: STOP HERE. Display the script output and exit. No AI narrative needed.
## Step 2: Read the AI Brief (Phase 2)

Read ONLY `public/project-status-brief.md`. This contains:
- Overall progress summary
- Per-track table (letter, name, status, progress, sections, sprint)
- At-risk tracks (0% progress with active status or dependencies)
- Cross-track dependencies
- Sprint load distribution
- Agent workload
- Recent session logs
- Potentially stalled sections
This is ~1,500 tokens. Do NOT read anything else for data.
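For orientation, the brief might look roughly like this. The layout below is illustrative only — the actual format is whatever the generator script emits; the figures shown are taken from the sample summary later in this document.

```markdown
Overall: 5% (19/409 tasks) | Sections: 4 complete, 0 in-progress, 83 pending

| Track | Name                 | Status | Progress |
|-------|----------------------|--------|----------|
| C     | Technical Foundation | active | 0%       |

At-risk: C (0% progress, blocks 10 downstream tracks)
```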
## Step 3: Generate AI Narrative (THE CORE VALUE)
This is where you add intelligence that no script can produce. Analyze the data holistically and generate:
### 3a. Executive Summary (2-3 paragraphs)
- Overall project health assessment
- Key achievements since last update
- Most critical items requiring attention
- Projected completion timeline based on current velocity
### 3b. Highlights (3-5 bullet points)
- Most notable accomplishments
- Milestones reached
- Positive trends
### 3c. Risk Analysis (structured)

Each risk MUST be a JSON object with these exact fields (the JSX renderer depends on them):

```jsonc
{
  "track": "D",                          // Track letter (matches TRACK_COLORS keys)
  "section": "D.1-D.7",                  // Section reference
  "severity": "critical",                // critical | high | medium | low
  "title": "Short Risk Title",           // REQUIRED — displayed as card heading
  "description": "Detailed explanation", // Full risk description
  "mitigation": "What to do about it",   // Mitigation strategy
  "crossTrackImpact": ["C", "M", "N"]    // MUST be an array of track LETTERS (not prose strings)
}
```
Look for these risk signals:
- Tracks at 0% with dependencies on them
- Tracks with high task count but no recent session log activity
- Cross-track dependencies where the upstream track is behind
- Sprint overload (too many sections in one sprint)
- Agent overload (one agent assigned to too many tracks)
### 3d. Recommendations (prioritized list)

Each recommendation MUST be a JSON object with these exact fields:

```jsonc
{
  "priority": 1,                         // Integer priority (1 = highest)
  "title": "Action item title",          // REQUIRED — displayed as card heading
  "description": "Why and what to do",   // Full description (NOT "reason" or "action")
  "tracks": ["C", "D"],                  // MUST be an array of track LETTERS affected
  "effort": "Medium (5 tasks, ~2 days)"  // Effort estimate string
}
```
### 3e. Per-Track Narratives (1-2 sentences each)

For each track:
- Current status and momentum (accelerating / steady / stalled / blocked)
- What's been done recently (from session logs)
- What's next
- Any concerns
### 3f. Sprint Analysis (1 paragraph)
- Current sprint identification
- Sprint load assessment
- Whether the sprint is on track
- Upcoming sprint complexity
### 3g. Activity Summary (from session logs)
- Pattern analysis (which tracks are getting attention, which are neglected)
- Work cadence assessment
- Key decisions made recently
## Step 4: Merge Narrative into JSON (via Subagent)
Use a Task() subagent to merge the narrative into the JSON file. This keeps the 107KB JSON out of the main context:
```
Task(subagent_type="general-purpose", prompt="Read public/project-dashboard-data.json,
add the narrative object I'm providing, write back. Narrative: {...}")
```
The subagent reads the JSON, adds/updates the narrative field plus generatedBy, analysisWindow, and summary.velocity, then writes it back.
## Step 4.5: Auto-Distribute to Active Dev Servers (ADR-170)
After the merge, distribute the enriched JSON to all active Vite dev servers so dashboards reflect the latest data immediately:
```bash
python3 ~/.coditect/scripts/distribute_dashboard_json.py public/project-dashboard-data.json --verbose
```
This script:
- Detects active Vite dev servers via `lsof` (Node.js processes listening on ports)
- Identifies their project root directories
- Copies the enriched JSON to each project's `public/` and `dist/` directories
- Normalizes field names if needed (maps generator output to JSX expectations)
If no active Vite servers are detected, this step is a no-op (no error).
The generator script also auto-copies to `dist/` during Phase 1, so the distribution covers both:
- Phase 1 output (metrics-only JSON in `dist/`)
- Phase 2 output (enriched JSON with narrative in `public/` + `dist/` of all active servers)
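For illustration only, the server-detection half of that step could be sketched as below. The `nodeListeners` helper is hypothetical — the real `distribute_dashboard_json.py` is Python and also resolves each server's project root — but it shows the `lsof`-based idea.

```javascript
// Hypothetical sketch: pick Node.js listeners out of
// `lsof -nP -iTCP -sTCP:LISTEN` output. A matching line looks like:
//   node  4321 alex  23u  IPv4 0x1  0t0  TCP *:5173 (LISTEN)
function nodeListeners(lsofOutput) {
  return lsofOutput
    .split("\n")
    .filter((line) => line.startsWith("node"))
    .map((line) => {
      const cols = line.trim().split(/\s+/);
      // NAME is the second-to-last column when "(LISTEN)" is appended.
      const name = cols[cols.length - 2].match(/:(\d+)$/);
      return { pid: Number(cols[1]), port: name ? Number(name[1]) : null };
    });
}

// Example against canned output: a Vite dev server plus an unrelated process.
const sample = [
  "COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME",
  "node    4321 alex   23u  IPv4 0x1      0t0    TCP *:5173 (LISTEN)",
  "python  9876 alex   12u  IPv4 0x2      0t0    TCP *:8000 (LISTEN)",
].join("\n");
const servers = nodeListeners(sample); // only the node process survives the filter
```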
## Step 5: Display Summary
After the merge, display a terminal summary:
```
PROJECT STATUS UPDATED
═══════════════════════════════════════════════════════════════
Project: coditect-biosciences-qms-platform
Generated: 2026-02-15T00:33:21Z
Phase 1: Node.js metrics (17 tracks, 409 tasks) ✓
Phase 2: AI narrative (~1,500 token brief) ✓

PROGRESS
Overall: █░░░░░░░░░░░░░░░░░░░ 5% (19/409 tasks)
Sections: 4 complete | 0 in-progress | 83 pending

TOP RISKS
[HIGH] C: Technical Foundation at 0% — blocks 10 downstream tracks
[MED] Sprint S6-S7: 42 sections scheduled, highest load

RECOMMENDATIONS
1. Begin Track C backend scaffolding (critical path)
2. Activate Track D compliance in parallel

Output: public/project-dashboard-data.json
Brief: public/project-status-brief.md
Dashboard: Open viewer → Planning → Project Status Dashboard
═══════════════════════════════════════════════════════════════
```
## Token Budget Comparison
| Approach | Tokens Used | Time |
|---|---|---|
| v1.0 (old) — Read all TRACKs directly | ~90,000 | Multiple compactions |
| v2.0 (new) — Two-phase with brief | ~12,000 | Single pass |
| --quick mode — Node.js only | 0 (AI) | <1 second |
## Options
| Option | Description |
|---|---|
| `--update` | Full two-phase: Node.js metrics + AI narrative + JSON merge |
| `--quick` | Phase 1 only: metrics JSON, no AI narrative |
| `--verbose` | Also print full narrative to terminal |
| `--track X` | Deep-dive single track (reads that TRACK file only) |
| `--risks` | Risk analysis only (from brief) |
| `--activity` | Session log activity summary only |
| `--days N` | Session log window in days (default: 7) |
| `--json` | Output JSON to stdout instead of file |
| `--project ID` | Explicit project ID for session log lookup |
| `--no-write` | Analyze but don't write JSON file |
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Phase 1: Node.js (0 AI tokens, <1 second)                   │
│                                                             │
│ 17 TRACK files ──→ generate-project-dashboard-data.js       │
│ MASTER-TRACK-INDEX ──┘      │         │                     │
│ Session logs ───────┘       ▼         ▼                     │
│              project-dashboard-data.json                    │
│              project-status-brief.md (~1.5K tokens)         │
└──────────────────────────────┬──────────────────────────────┘
                               │
┌──────────────────────────────▼──────────────────────────────┐
│ Phase 2: AI (~12K tokens, single pass)                      │
│                                                             │
│ Read brief.md ──→ Generate narrative ──→ Merge into JSON    │
│ (~1,500 tokens)   (~10,000 tokens)       (via subagent)     │
└─────────────────────────────────────────────────────────────┘
```
## JSON Schema

The output JSON follows this schema. The JSX dashboard (`dashboards/planning/61-project-status-dashboard.jsx`) expects this exact structure.
Top-level fields:
- `generated` (string) — ISO 8601 timestamp
- `generatedBy` (string) — model identifier
- `project` (string) — project name
- `analysisWindow` (object) — days analyzed, logs/tracks found
- `summary` (object) — computed metrics
- `narrative` (object) — AI-generated analysis
- `activity` (array) — session log entries
- `tracks` (array) — per-track data
- `master` (object) — cross-track data from MASTER-TRACK-INDEX
- `sessionLogLinks` (array) — recent session log file references
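A minimal skeleton of that top-level structure (values are placeholders; the exact keys inside `analysisWindow` and `summary` are assumptions here, since the document only describes them loosely):

```json
{
  "generated": "2026-02-15T00:33:21Z",
  "generatedBy": "model-id",
  "project": "coditect-biosciences-qms-platform",
  "analysisWindow": { "days": 7, "tracksFound": 17 },
  "summary": { "velocity": "steady" },
  "narrative": { "executiveSummary": "...", "risks": [], "recommendations": [] },
  "activity": [],
  "tracks": [],
  "master": {},
  "sessionLogLinks": []
}
```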
Narrative sub-fields:
- `executiveSummary` (string) — 2-3 paragraph AI analysis
- `highlights` (array of strings) — 3-5 bullet points
- `risks` (array of objects) — `{ track, section, severity, title, description, mitigation, crossTrackImpact: string[] }`
- `recommendations` (array of objects) — `{ priority, title, description, tracks: string[], effort }`
- `trackNarratives` (object) — per-track AI commentary keyed by track letter
- `sprintAnalysis` (string) — sprint health assessment
- `activitySummary` (string) — work pattern analysis
- `velocityAssessment` (string) — velocity and projection
CRITICAL: `risks[].crossTrackImpact` and `recommendations[].tracks` MUST be arrays of single track letters (e.g., `["C", "D"]`), NOT prose strings. The JSX uses `.map()` and `TRACK_COLORS[t]` lookup on these values.
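A hypothetical pre-merge guard (illustrative only, not part of the command) that enforces these shapes before the JSX ever sees them:

```javascript
// Hypothetical validator for the narrative object's array fields.
// A track reference must be a single uppercase letter, matching the
// TRACK_COLORS keys the JSX looks up.
const TRACK_LETTER = /^[A-Z]$/;

function validateNarrative(narrative) {
  const errors = [];
  for (const [i, r] of (narrative.risks || []).entries()) {
    if (!r.title) errors.push(`risks[${i}]: missing title`);
    if (!Array.isArray(r.crossTrackImpact) || !r.crossTrackImpact.every((t) => TRACK_LETTER.test(t))) {
      errors.push(`risks[${i}]: crossTrackImpact must be an array of single track letters`);
    }
  }
  for (const [i, rec] of (narrative.recommendations || []).entries()) {
    if (!Array.isArray(rec.tracks) || !rec.tracks.every((t) => TRACK_LETTER.test(t))) {
      errors.push(`recommendations[${i}]: tracks must be an array of single track letters`);
    }
  }
  return errors; // empty array means the shapes are JSX-safe
}
```

Catching a prose string like `"C and D"` here is much cheaper than debugging a broken `.map()` in the dashboard.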
## Related

- `/track-status` — Text-only track progress (simpler)
- `/pilot --dashboard` — Text-only dashboard (simpler)
- `dashboards/planning/61-project-status-dashboard.jsx` — JSX renderer
- `scripts/generate-project-dashboard-data.js` — Phase 1 metrics generator + AI brief
- `public/project-status-brief.md` — Compact AI brief (~1,500 tokens)
## Success Output

```
PROJECT STATUS UPDATED
Project: {name}
Phase 1: {N} tracks, {done}/{total} tasks ({pct}%)
Phase 2: Narrative generated ({word_count} words)
Phase 3: Distributed to {N} active dev servers
Risks: {N} identified
Output: public/project-dashboard-data.json
Brief: public/project-status-brief.md
```
## Failure Indicators

- No TRACK files found in project
- `generate-project-dashboard-data.js` script not found
- Session log directory not accessible
- JSON write failed (permissions)
- No `public/` directory exists
Command Version: 2.1.0
Created: 2026-02-14
Updated: 2026-02-16
Author: CODITECT Core Team
Changelog:
- v2.1.0 - ADR-170 Phase 0: Auto-distribute enriched JSON to active Vite dev servers after narrative merge. Step 4.5 uses `distribute_dashboard_json.py` to detect servers via `lsof` and copy to `public/` + `dist/`.
- v2.0.0 - Two-phase architecture: Node.js pre-computes metrics + AI brief; AI reads only the brief for narrative. Reduces token consumption from ~90K to ~12K.
- v1.0.0 - Initial release: AI reads all TRACK files directly (token-intensive).