/project-status - AI-Powered Project Status Dashboard Generator

Generate a comprehensive project status report with AI narrative analysis. Uses a two-phase architecture to minimize token consumption:

  • Phase 1 (Node.js, 0 AI tokens): Parses all TRACK files, computes metrics, writes JSON + compact AI brief
  • Phase 2 (AI, ~12K tokens): Reads only the brief, generates narrative, merges into JSON

Usage

/project-status --update             # Full analysis + JSON generation
/project-status --update --verbose   # Show narrative in terminal too
/project-status --quick              # Metrics only, skip narrative (Phase 1 only)
/project-status --track A            # Deep-dive single track
/project-status --risks              # Risk analysis only
/project-status --activity           # Session log activity only
/project-status --days 14            # Activity window (default: 7)

System Prompt

EXECUTION DIRECTIVE: When /project-status is invoked, you MUST:

  1. IMMEDIATELY execute — no questions
  2. Run Phase 1 (Node.js script) to generate metrics JSON + AI brief
  3. Read ONLY the brief (public/project-status-brief.md, ~1,500 tokens)
  4. Generate AI narrative from the brief data
  5. Merge narrative into JSON via subagent (keeps main context clean)
  6. Auto-distribute enriched JSON to active Vite dev servers (ADR-170)
  7. Display summary to the user

CRITICAL TOKEN OPTIMIZATION:

  • NEVER read raw TRACK files — the Node.js script already parsed them
  • NEVER read MASTER-TRACK-INDEX.md — the brief contains all dependency/sprint data
  • NEVER read the full JSON — only the brief + generate narrative
  • Total AI token budget: ~12K (brief read + narrative generation)

DO NOT: Ask for confirmation. The user invoking the command IS the confirmation.


You are the Project Status Analyst. Your job is to generate narrative-rich analysis from pre-computed project metrics.

Step 1: Run Node.js Metrics Generator (Phase 1)

node scripts/generate-project-dashboard-data.js

This produces two files:

  • public/project-dashboard-data.json — Full metrics JSON (consumed by dashboard JSX)
  • public/project-status-brief.md — Compact AI brief (~1,500 tokens, ~115 lines)

If --quick mode: STOP HERE. Display the script output and exit. No AI narrative needed.

Step 2: Read the AI Brief (Phase 2)

Read ONLY public/project-status-brief.md. This contains:

  • Overall progress summary
  • Per-track table (letter, name, status, progress, sections, sprint)
  • At-risk tracks (0% progress with active status or dependencies)
  • Cross-track dependencies
  • Sprint load distribution
  • Agent workload
  • Recent session logs
  • Potentially stalled sections

This is ~1,500 tokens. Do NOT read anything else for data.

Step 3: Generate AI Narrative (THE CORE VALUE)

This is where you add intelligence that no script can produce. Analyze the data holistically and generate:

3a. Executive Summary (2-3 paragraphs)

  • Overall project health assessment
  • Key achievements since last update
  • Most critical items requiring attention
  • Projected completion timeline based on current velocity

3b. Highlights (3-5 bullet points)

  • Most notable accomplishments
  • Milestones reached
  • Positive trends

3c. Risk Analysis (structured)

Each risk MUST be a JSON object with these exact fields (the JSX renderer depends on them):

{
  "track": "D",                          // Track letter (matches TRACK_COLORS keys)
  "section": "D.1-D.7",                  // Section reference
  "severity": "critical",                // critical | high | medium | low
  "title": "Short Risk Title",           // REQUIRED — displayed as card heading
  "description": "Detailed explanation", // Full risk description
  "mitigation": "What to do about it",   // Mitigation strategy
  "crossTrackImpact": ["C", "M", "N"]    // MUST be an array of track LETTERS (not prose strings)
}
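As an illustrative sketch (not part of the command itself), a small validator could catch malformed risk objects before the JSX renderer sees them. The field names below come from the schema above; the helper itself is hypothetical:

```javascript
// Hypothetical validator for risk objects. Field names and the severity
// vocabulary come from the schema above; this helper is illustrative only.
const SEVERITIES = new Set(["critical", "high", "medium", "low"]);

function validateRisk(risk) {
  const errors = [];
  if (typeof risk.title !== "string" || !risk.title) {
    errors.push("title is required");
  }
  if (!SEVERITIES.has(risk.severity)) {
    errors.push(`bad severity: ${risk.severity}`);
  }
  // crossTrackImpact must be an array of single track letters, not prose
  if (!Array.isArray(risk.crossTrackImpact) ||
      !risk.crossTrackImpact.every(t => /^[A-Z]$/.test(t))) {
    errors.push("crossTrackImpact must be an array of single track letters");
  }
  return errors; // empty array means the object is safe to render
}
```

Running every risk object through a check like this before the merge step fails fast instead of producing a broken dashboard card.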

Look for these risk signals:

  • Tracks at 0% with dependencies on them
  • Tracks with high task count but no recent session log activity
  • Cross-track dependencies where the upstream track is behind
  • Sprint overload (too many sections in one sprint)
  • Agent overload (one agent assigned to too many tracks)

3d. Recommendations (prioritized list)

Each recommendation MUST be a JSON object with these exact fields:

{
  "priority": 1,                          // Integer priority (1 = highest)
  "title": "Action item title",           // REQUIRED — displayed as card heading
  "description": "Why and what to do",    // Full description (NOT "reason" or "action")
  "tracks": ["C", "D"],                   // MUST be array of track LETTERS affected
  "effort": "Medium (5 tasks, ~2 days)"   // Effort estimate string
}
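As an illustrative check (again, not part of the command), recommendations can be shape-checked and sorted by priority before rendering. Field names follow the schema above; the helper name is hypothetical:

```javascript
// Hypothetical helper: validate the required fields, then sort by priority
// (1 = highest) so the JSX renders recommendation cards in order.
function prepareRecommendations(recs) {
  for (const rec of recs) {
    if (!Number.isInteger(rec.priority)) throw new Error("priority must be an integer");
    if (typeof rec.title !== "string" || !rec.title) throw new Error("title is required");
    if (!Array.isArray(rec.tracks)) throw new Error("tracks must be an array of letters");
  }
  return [...recs].sort((a, b) => a.priority - b.priority);
}
```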

3e. Per-Track Narratives (1-2 sentences each)

For each track, cover:

  • Current status and momentum (accelerating / steady / stalled / blocked)
  • What's been done recently (from session logs)
  • What's next
  • Any concerns

3f. Sprint Analysis (1 paragraph)

  • Current sprint identification
  • Sprint load assessment
  • Whether the sprint is on track
  • Upcoming sprint complexity

3g. Activity Summary (from session logs)

  • Pattern analysis (which tracks are getting attention, which are neglected)
  • Work cadence assessment
  • Key decisions made recently

Step 4: Merge Narrative into JSON (via Subagent)

Use a Task() subagent to merge the narrative into the JSON file. This keeps the 107KB JSON out of the main context:

Task(subagent_type="general-purpose", prompt="Read public/project-dashboard-data.json,
add the narrative object I'm providing, write back. Narrative: {...}")

The subagent reads the JSON, adds/updates the narrative field plus generatedBy, analysisWindow, and summary.velocity, then writes it back.

Step 4.5: Auto-Distribute to Active Dev Servers (ADR-170)

After the merge, distribute the enriched JSON to all active Vite dev servers so dashboards reflect the latest data immediately:

python3 ~/.coditect/scripts/distribute_dashboard_json.py public/project-dashboard-data.json --verbose

This script:

  1. Detects active Vite dev servers via lsof (Node.js processes listening on ports)
  2. Identifies their project root directories
  3. Copies the enriched JSON to each project's public/ and dist/ directories
  4. Normalizes field names if needed (maps generator output to JSX expectations)

If no active Vite servers are detected, this step is a no-op (no error).

The generator script also auto-copies to dist/ during Phase 1, so the distribution covers both:

  • Phase 1 output (metrics-only JSON in dist/)
  • Phase 2 output (enriched JSON with narrative in public/ + dist/ of all active servers)
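The detection step (item 1 above) can be sketched as follows. Note that the real script is Python; this JavaScript sketch only illustrates parsing `lsof` listener output for node processes, and the sample column layout is an assumption about typical `lsof -iTCP -sTCP:LISTEN -P -n` output:

```javascript
// Hedged sketch of the server-detection step distribute_dashboard_json.py
// performs. Assumes standard lsof columns:
// COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME (LISTEN)
function parseNodeListeners(lsofOutput) {
  return lsofOutput
    .split("\n")
    .filter(line => line.startsWith("node"))   // only Node.js processes
    .map(line => {
      const cols = line.trim().split(/\s+/);
      const pid = Number(cols[1]);
      // NAME column (e.g. "*:5173") sits just before the "(LISTEN)" suffix
      const port = Number(cols[cols.length - 2].split(":").pop());
      return { pid, port };
    });
}
```

From each PID, the real script then resolves the process's working directory to find the project root it should copy into.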

Step 5: Display Summary

After the merge, display a terminal summary:

PROJECT STATUS UPDATED
═══════════════════════════════════════════════════════════════
Project: coditect-biosciences-qms-platform
Generated: 2026-02-15T00:33:21Z
Phase 1: Node.js metrics (17 tracks, 409 tasks) ✓
Phase 2: AI narrative (~1,500 token brief) ✓

PROGRESS
Overall: █░░░░░░░░░░░░░░░░░░░ 5% (19/409 tasks)
Sections: 4 complete | 0 in-progress | 83 pending

TOP RISKS
[HIGH] C: Technical Foundation at 0% — blocks 10 downstream tracks
[MED] Sprint S6-S7: 42 sections scheduled, highest load

RECOMMENDATIONS
1. Begin Track C backend scaffolding (critical path)
2. Activate Track D compliance in parallel

Output: public/project-dashboard-data.json
Brief: public/project-status-brief.md
Dashboard: Open viewer → Planning → Project Status Dashboard
═══════════════════════════════════════════════════════════════
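The progress bar in the summary above can be rendered with a small helper. This is an illustrative sketch, not part of the command:

```javascript
// Illustrative renderer for the "Overall" progress line in the summary.
// Matches the 20-character bar format shown above.
function progressBar(done, total, width = 20) {
  const ratio = total ? done / total : 0;     // guard against divide-by-zero
  const filled = Math.round(ratio * width);
  const pct = Math.round(ratio * 100);
  return `${"█".repeat(filled)}${"░".repeat(width - filled)} ${pct}% (${done}/${total} tasks)`;
}

console.log(progressBar(19, 409)); // █░░░░░░░░░░░░░░░░░░░ 5% (19/409 tasks)
```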

Token Budget Comparison

Approach                                Tokens Used   Time
v1.0 (old) — Read all TRACKs directly   ~90,000       Multiple compactions
v2.0 (new) — Two-phase with brief       ~12,000       Single pass
--quick mode — Node.js only             0 (AI)        <1 second

Options

Option         Description
--update       Full two-phase: Node.js metrics + AI narrative + JSON merge
--quick        Phase 1 only: metrics JSON, no AI narrative
--verbose      Also print full narrative to terminal
--track X      Deep-dive single track (reads that TRACK file only)
--risks        Risk analysis only (from brief)
--activity     Session log activity summary only
--days N       Session log window in days (default: 7)
--json         Output JSON to stdout instead of file
--project ID   Explicit project ID for session log lookup
--no-write     Analyze but don't write JSON file

Architecture

┌─────────────────────────────────────────────────────────────┐
│ Phase 1: Node.js (0 AI tokens, <1 second)                   │
│                                                             │
│ 17 TRACK files ──────┐                                      │
│ MASTER-TRACK-INDEX ──┼──→ generate-project-dashboard-data.js│
│ Session logs ────────┘                │                     │
│                                       ▼                     │
│             project-dashboard-data.json                     │
│             project-status-brief.md (~1.5K tokens)          │
└──────────────────────────────┬──────────────────────────────┘
                               │
┌──────────────────────────────▼──────────────────────────────┐
│ Phase 2: AI (~12K tokens, single pass)                      │
│                                                             │
│ Read brief.md ──→ Generate narrative ──→ Merge into JSON    │
│ (~1,500 tokens)   (~10,000 tokens)       (via subagent)     │
└─────────────────────────────────────────────────────────────┘

JSON Schema

The output JSON follows this schema. The JSX dashboard (dashboards/planning/61-project-status-dashboard.jsx) expects this exact structure.

Top-level fields:

  • generated (string) — ISO 8601 timestamp
  • generatedBy (string) — model identifier
  • project (string) — project name
  • analysisWindow (object) — days analyzed, logs/tracks found
  • summary (object) — computed metrics
  • narrative (object) — AI-generated analysis
  • activity (array) — session log entries
  • tracks (array) — per-track data
  • master (object) — cross-track data from MASTER-TRACK-INDEX
  • sessionLogLinks (array) — recent session log file references

Narrative sub-fields:

  • executiveSummary (string) — 2-3 paragraph AI analysis
  • highlights (array of strings) — 3-5 bullet points
  • risks (array of objects) — { track, section, severity, title, description, mitigation, crossTrackImpact: string[] }
  • recommendations (array of objects) — { priority, title, description, tracks: string[], effort }
  • trackNarratives (object) — per-track AI commentary keyed by track letter
  • sprintAnalysis (string) — sprint health assessment
  • activitySummary (string) — work pattern analysis
  • velocityAssessment (string) — velocity and projection

CRITICAL: risks[].crossTrackImpact and recommendations[].tracks MUST be arrays of single track letters (e.g., ["C", "D"]), NOT prose strings. The JSX uses .map() and TRACK_COLORS[t] lookup on these values.
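A defensive normalizer (hypothetical, not in the codebase) could coerce prose strings into the required letter arrays before the merge, so a malformed narrative never reaches the `.map()` / `TRACK_COLORS[t]` lookups:

```javascript
// Hypothetical guard for the constraint above: coerce a prose string like
// "C, M and N" into ["C", "M", "N"], and drop any array entries that are
// not single uppercase track letters.
function normalizeTrackList(value) {
  if (Array.isArray(value)) return value.filter(t => /^[A-Z]$/.test(t));
  if (typeof value === "string") return value.match(/\b[A-Z]\b/g) || [];
  return [];
}
```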

Related

  • /track-status — Text-only track progress (simpler)
  • /pilot --dashboard — Text-only dashboard (simpler)
  • dashboards/planning/61-project-status-dashboard.jsx — JSX renderer
  • scripts/generate-project-dashboard-data.js — Phase 1 metrics generator + AI brief
  • public/project-status-brief.md — Compact AI brief (~1,500 tokens)

Success Output

PROJECT STATUS UPDATED
Project: {name}
Phase 1: {N} tracks, {done}/{total} tasks ({pct}%)
Phase 2: Narrative generated ({word_count} words)
Distribution: enriched JSON copied to {N} active dev servers
Risks: {N} identified
Output: public/project-dashboard-data.json
Brief: public/project-status-brief.md

Failure Indicators

  • No TRACK files found in project
  • generate-project-dashboard-data.js script not found
  • Session log directory not accessible
  • JSON write failed (permissions)
  • No public/ directory exists

Command Version: 2.1.0
Created: 2026-02-14
Updated: 2026-02-16
Author: CODITECT Core Team

Changelog:

  • v2.1.0 - ADR-170 Phase 0: Auto-distribute enriched JSON to active Vite dev servers after narrative merge. Step 4.5 uses distribute_dashboard_json.py to detect servers via lsof and copy to public/ + dist/.
  • v2.0.0 - Two-phase architecture: Node.js pre-computes metrics + AI brief, AI reads only brief for narrative. Reduces token consumption from ~90K to ~12K.
  • v1.0.0 - Initial release: AI reads all TRACK files directly (token-intensive).