
ADR-163: Agentic Trajectory Dashboard Architecture

Status

ACCEPTED (2026-02-08) | Build Status: Complete through Phase 10

Executive Summary

The Agentic Trajectory Dashboard is a local-first executive BI platform that transforms raw AI development session data into interactive, drillable visualizations. Built with React 18 + TypeScript + D3.js + Tailwind CSS, it renders 12 pages and 37 components from 14,987 lines of TypeScript, fed by a Python data adapter that queries 29M+ tool calls (sessions.db, 26GB), 18K+ knowledge graph nodes (org.db, 280MB), 36 TRACK markdown files, and 40+ session logs.

Design philosophy: Snapshots first, then drillable data with rich visualizations.


1. Problem Statement

1.1 The Visibility Gap

Enterprise organizations adopting AI-augmented development tools face a critical visibility gap: 95% of enterprise AI pilot programs fail to deliver measurable financial returns (PwC, 2026). The root cause is not technology failure but measurement failure -- executives cannot see what AI agents are doing, how effectively they work, or what value they produce.

CODITECT captures rich trajectory data across every development session -- tool calls, agent dispatches, token usage, error patterns, decision points, file modifications, and timing data -- stored in sessions.db and org.db (ADR-118). Before this dashboard, this data was accessible only through CLI commands (/trajectory, /session-log) producing text output. There was no interactive, visual, queryable interface.

1.2 The Market Gap

Web search and competitive analysis (February 2026) reveals a fragmented landscape with no integrated solution:

AI Agent Observability Platforms

| Platform | Traces | Visualizations | Generative UI | Interactive Q&A | Executive Reports |
|---|---|---|---|---|---|
| LangSmith | Yes | Static trace trees | No | No | No |
| LangFuse | Yes | Basic timeline | No | No | No |
| AgentOps | Yes | Session replay | No | No | No |
| Arize AI | Yes | Span waterfall | No | No | No |
| Weights & Biases | Yes | ML experiment charts | No | No | Limited |
| Helicone | Yes | Session replay | No | No | No |
| Datadog LLM | Yes | APM-style traces | No | No | Yes (static) |

Gap: All platforms provide developer-oriented trace inspection. None dynamically generate interactive visualizations from trajectory data. None support natural-language Q&A over agent sessions. None produce executive-ready reports.

Engineering Intelligence Platforms

| Platform | AI Agent Data | Dynamic Viz | Executive Dashboards | AI-Generated Insights |
|---|---|---|---|---|
| LinearB | No (Git metrics) | Static | Yes | Limited |
| Jellyfish | No (Jira/Git) | Static | Yes | Limited |
| Pluralsight Flow | No (Git) | Static | Yes | No |
| Swarmia | No (Git/CI) | Static | Yes | No |

Gap: These platforms track engineering productivity through git commits and ticket velocity. None capture or visualize AI agent behavior, tool usage, or decision-making patterns.

1.3 CODITECT's Unique Position

CODITECT is the only platform that:

  1. Owns the trajectory data -- sessions.db captures every agent dispatch, tool call, and decision
  2. Has an agent ecosystem -- 776 agents, 445 skills, 118 hooks, 377 commands
  3. Can dog-food -- CODITECT agents analyze CODITECT agent sessions
  4. Runs local-first -- no cloud dependency for sensitive enterprise data
  5. Multi-agent orchestration -- the dashboard itself is built by coordinating multiple specialized agents

No competitor combines: trajectory data + AI analysis + D3.js visualizations + data-driven narratives + executive snapshots + local dev server.


2. Decision

Implement the Agentic Trajectory Dashboard as a standalone React application within CODITECT (tools/trajectory-dashboard/) using a data pipeline architecture that transforms raw SQLite data into interactive executive BI dashboards.

2.1 Architecture Overview

```
Data Sources                        Pipeline                    Presentation
────────────                        ────────                    ────────────
sessions.db (26GB)             ┌─────────────────┐         ┌──────────────────┐
- tool_analytics (29.5M)       │                 │         │  React 18 + TS   │
- token_economics (52M)        │  Python Data    │  JSON   │  ──────────────  │
- agent_dispatches (183K)      │  Adapter        │ ──────► │  12 Pages        │
- session_events (449)         │  (811 lines)    │         │  37 Components   │
                               │                 │         │  14 D3 Viz       │
org.db (280MB)                 │  Aggregation    │         │  Tailwind CSS    │
- kg_nodes (18,323)            │  + Analysis     │         │  Dark Mode       │
- decisions (1,856)            │  + Patterns     │         │                  │
- skill_learnings (1,029)      └─────────────────┘         │  Vite 6 HMR      │
                                                           │  Port 5174       │
TRACK Files (36 .md files)                                 │                  │
Session Logs (40+ .md files)                               └──────────────────┘
projects.db (project registry)                                      │
                                                                    ▼
                                                         Browser (auto-launch)
```

2.2 Key Architecture Decisions

| # | Decision | Rationale | Alternatives Rejected |
|---|---|---|---|
| 1 | Vite 6 + React 18 + TypeScript strict | Industry-standard, fast HMR (<100ms), strong type safety, zero TS errors enforced | Next.js (too heavy for local tool), Svelte (team familiarity), plain HTML (insufficient interactivity) |
| 2 | D3.js for all custom visualizations | Full control over SVG rendering, animation, interactivity; supports force-directed graphs, sunbursts, treemaps, radial charts | Recharts (insufficient for force graphs, sunbursts), Chart.js (limited customization), Nivo (too opinionated) |
| 3 | Tailwind CSS for layout and styling | Utility-first enables rapid iteration, dark mode via `dark:` prefix, consistent design system, no CSS-in-JS runtime cost | Styled Components (runtime overhead), CSS Modules (slower iteration), Material UI (too opinionated) |
| 4 | Python data adapter (not API server) | Pre-computes all data into single JSON file; eliminates runtime SQLite queries, CORS config, and server management; instant page loads | Express API (complexity, CORS, process management), direct SQLite from browser (WASM bloat), streaming (premature) |
| 5 | JSON file as data transport | Simple, debuggable, works with Vite's static serving; data regenerated on demand via CLI | WebSocket (premature for v1), REST API (unnecessary process), GraphQL (over-engineered) |
| 6 | Local-first, no cloud dependency | Enterprise data sovereignty; sensitive session data never leaves the machine; works offline | Cloud dashboard (data sovereignty concerns), hybrid (complexity), SaaS (lock-in) |
| 7 | Snapshot-first progressive disclosure | Executives see KPIs instantly, then drill into details; reduces cognitive load; matches BI best practices | Detail-first (overwhelming), flat layout (no hierarchy), wizard (too linear) |
| 8 | Data-driven narrative generation | useMemo computes insights from raw data (hottest track, balance score, velocity trend); narratives generated client-side, not pre-baked | LLM-generated narratives (latency, cost), static text (stale), no narrative (raw data only) |
| 9 | Lucide React icons | Consistent, tree-shakeable, 1,000+ icons, MIT license, matches Tailwind aesthetic | Heroicons (fewer icons), Font Awesome (larger bundle), custom SVGs (maintenance burden) |
| 10 | React Router v6 with layout routes | AppShell wraps all pages via `<Outlet />`; deep linking works; sidebar nav auto-highlights active route | Hash router (ugly URLs), manual routing (no deep linking), tabs-only (no bookmarkable views) |

3. User Stories and Value Propositions

3.1 Primary Persona: Engineering Executive / VP of Engineering

"As a VP of Engineering, I want to see at a glance whether our AI-augmented development program is on track, so I can report confidently to the board and identify where to invest more."

Value delivered:

  • HomePage CompletionGauge shows overall project health (44.2% complete, 1,233/2,789 tasks) with health indicator (On Track / Attention / At Risk)
  • Track Progress Strip shows all 36 tracks sorted by % complete -- instantly reveals lagging areas
  • Animated metrics (sessions, tool calls, tokens, success rate) provide visceral sense of scale
  • One-click drill-down from any metric to its dedicated page

Design decisions supporting this story:

  • CompletionGauge uses D3 semicircular arc with gradient coloring (red → amber → green)
  • AnimatedCounter uses requestAnimationFrame with easeOutCubic for 1.2s count-up animation
  • Track Progress Strip bars are color-coded by completion percentage and clickable
  • StatusBadge grid shows Complete/In Progress/Deferred/Remaining at a glance
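The count-up behavior described above reduces to a small amount of pure math. A minimal sketch of the interpolation (function names `easeOutCubic` and `counterValue` are illustrative; the real AnimatedCounter wraps this in a requestAnimationFrame loop and applies K/M/T suffix formatting):

```typescript
// easeOutCubic maps normalized time t in [0, 1] to eased progress in [0, 1].
function easeOutCubic(t: number): number {
  return 1 - Math.pow(1 - t, 3);
}

// Value displayed `elapsedMs` into a 1.2s count-up from 0 to `target`.
// The elapsed time is clamped so the counter settles exactly on the target.
function counterValue(target: number, elapsedMs: number, durationMs = 1200): number {
  const t = Math.min(elapsedMs / durationMs, 1);
  return Math.round(target * easeOutCubic(t));
}
```

Ease-out means the counter moves fastest at the start and decelerates into the final value, which is why the animation reads as "settling" on the metric rather than racing past it.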

3.2 Secondary Persona: Engineering Manager

"As an Engineering Manager, I want to understand how AI agents are spending their time across my team's tracks, so I can identify bottlenecks and rebalance effort."

Value delivered:

  • ProjectsPage (4 tabs: Story, Velocity, Focus, Balance) provides deep project-level analysis
  • ProjectCharter generates a narrative "story" of the project using data-driven prose
  • ProjectVelocityChart (D3 stacked area) shows monthly activity trends per project
  • ProjectFocusSunburst (D3) reveals tool call concentration across tracks -- highlights over-investment
  • ProjectTrackRadar (D3) plots multi-axis track comparison for balance assessment
  • Focus Interpretation panel identifies the "hottest track" (highest tool call share) and gap tracks (<40% complete)
  • Balance Interpretation panel computes standard deviation of completion percentages and assigns a balance score (balanced/moderate/lopsided)

Design decisions supporting this story:

  • highlightProject prop scopes all 4 D3 charts to the selected project, dimming others
  • useMemo computes focusInsights and balanceInsights from track data
  • Narrative panels use conditional prose generation ("your highest-effort track absorbs X% of all activity")
  • Balance score uses standard deviation thresholds: ≤15% = balanced, ≤30% = moderate, >30% = lopsided
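The balance score above can be sketched as a pure function over per-track completion percentages (the function name is illustrative, and population standard deviation is assumed; the real logic lives in a ProjectsPage useMemo):

```typescript
// Bucket the spread of completion percentages using the thresholds
// stated above: std dev ≤15% = balanced, ≤30% = moderate, >30% = lopsided.
function balanceScore(completionPercents: number[]): "balanced" | "moderate" | "lopsided" {
  const n = completionPercents.length;
  const mean = completionPercents.reduce((s, p) => s + p, 0) / n;
  const variance = completionPercents.reduce((s, p) => s + (p - mean) ** 2, 0) / n;
  const stdDev = Math.sqrt(variance);
  if (stdDev <= 15) return "balanced";
  if (stdDev <= 30) return "moderate";
  return "lopsided";
}
```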

3.3 Tertiary Persona: Technical Lead / Staff Engineer

"As a Technical Lead, I want to explore error patterns, tool performance, and knowledge graph relationships to identify systemic issues and improvement opportunities."

Value delivered:

  • ErrorsPage shows error type distribution with severity indicators and recovery rates
  • ToolsPage shows all 34 tool types with success rates, call counts, and I/O metrics
  • KnowledgePage with force-directed D3 graph visualization of 18K+ nodes across 13 types
  • KnowledgeGraphViz supports toggleable node types, click-to-detail panels, and search
  • TracksPage with hierarchical task drill-down showing actual task descriptions from TRACK files
  • AgentsPage with inferred agent roles (DevOps, Code Author, Explorer) from tool usage patterns

Design decisions supporting this story:

  • KnowledgeGraphViz uses D3 force simulation with collision detection and node type coloring
  • PageTabs on every page include an "About This View" tab explaining data sources and interpretation
  • ChartPicker component allows switching between visualization types (bar, sunburst, treemap, bubbles, radial, heatmap)
  • ExplainerCard at the top of each page provides collapsible context about what the data means

3.4 Operations Persona: Development Session Historian

"As a development team member, I want to review historical session activity to understand what was accomplished, when, and track our development cadence."

Value delivered:

  • SessionLogsPage shows monthly digests first (newest on top), with entry counts and expandable daily entries
  • CalendarHeatmap (D3, GitHub-style) shows activity density by day with color intensity
  • SkillsPage tracks 758K+ skill learnings with effectiveness radar chart
  • SkillsCatalogPage shows 445 registered CODITECT skills searchable by track letter

3.5 Token Economics Persona: Cost-Conscious Engineering Leader

"As someone responsible for AI tool costs, I want to understand token consumption patterns by model and optimize our spend."

Value delivered:

  • TokensPage breaks down consumption across Opus 4.6, Sonnet 4.5, and Haiku 4.5
  • TokenSunburst (D3) shows hierarchical token distribution by model → prompt/completion
  • Per-model cost breakdown: input, output, cache read/write with USD totals
  • Sparkline components show 7-30 day trends inline with metric cards
  • Summary metrics show 4.8T+ total tokens across all sessions

4. Data Architecture

4.1 Data Sources

sessions.db (Tier 3 -- Regenerable)

| Table | Rows | Purpose | Dashboard Usage |
|---|---|---|---|
| tool_analytics | 29.5M+ | Every tool invocation: tool name, success, agent, task_id | Timeline, Tools, Agents, Tracks, Errors |
| token_economics | 52M+ | Token counts per request: input, output, cache, model, cost | Tokens, Summary metrics, Cost analysis |
| messages | 2.1M+ | Raw conversation messages with role, content, timestamps | Session count aggregation |
| agent_dispatches | 183K+ | Agent session dispatches with type, status, duration | Agents page, agent identity resolution |
| session_events | 449+ | Session-level metadata: start/end times, project, machine | Timeline, session count |

org.db (Tier 2 -- Irreplaceable)

| Table | Rows | Purpose | Dashboard Usage |
|---|---|---|---|
| kg_nodes | 18,323 | Knowledge graph: functions, components, decisions, ADRs, tracks | Knowledge page, graph viz |
| kg_edges | 11,756 | Relationships: CALLS, USES, DEPENDS_ON, SUPERSEDES, SOLVES | Knowledge graph edges |
| decisions | 1,856 | Architecture and design decisions with rationale | Decision activity timeline |
| skill_learnings | 1,029 | Learned skills with effectiveness scores | Skills page, effectiveness radar |
| error_solutions | 475 | Error-to-solution pairs for reuse | Error recovery analysis |

TRACK Files (36 markdown files)

  • Source: internal/project/plans/tracks/TRACK-*.md
  • Parsed by: parse_track_files() in data adapter
  • Extracts: task IDs, descriptions, completion status ([x]/[ ]), **Progress:** XX%
  • Provides: task descriptions, % complete, completed/total task counts
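The extraction rules above are simple enough to sketch. The real parser is the Python `parse_track_files()`; this TypeScript sketch mirrors only its checkbox and progress extraction, with assumed field names:

```typescript
// Per-track stats recovered from a TRACK-*.md file.
interface TrackStats {
  completedTasks: number;
  totalTasks: number;
  percentComplete: number; // explicit **Progress:** XX% wins, else derived from checkboxes
}

function parseTrackMarkdown(md: string): TrackStats {
  // Count markdown task checkboxes: "- [x]" (done) and "- [ ]" (open).
  const completed = (md.match(/^\s*[-*]\s*\[x\]/gim) ?? []).length;
  const open = (md.match(/^\s*[-*]\s*\[ \]/gim) ?? []).length;
  const total = completed + open;
  // Prefer an explicit "**Progress:** XX%" line when present.
  const progress = md.match(/\*\*Progress:\*\*\s*(\d+)%/);
  const percent = progress
    ? Number(progress[1])
    : total > 0 ? Math.round((completed / total) * 100) : 0;
  return { completedTasks: completed, totalTasks: total, percentComplete: percent };
}
```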

Session Logs (40+ markdown files)

  • Source: ~/.coditect-data/session-logs/projects/PILOT/*/SESSION-LOG-*.md
  • Parsed by: parse_session_logs() in data adapter
  • Extracts: date, entry count, topics, file sizes
  • Provides: monthly activity digest, daily entry details

projects.db (Project Registry)

  • Source: ~/.coditect-data/context-storage/projects.db
  • Provides: project metadata (name, type, path, framework, language)
  • Enriches: session log data with project association

4.2 Data Pipeline

```
┌──────────────────────────────────────────────────────────────────┐
│ dashboard_data_adapter.py (811 lines Python)                     │
│                                                                  │
│ CLI: python3 scripts/trajectory/dashboard_data_adapter.py        │
│        --timeframe all --pretty                                  │
│                                                                  │
│ Functions:                                                       │
│ ├── get_session_summary()        → SessionSummary                │
│ ├── get_timeline_data()          → TimelineDataPoint[]           │
│ ├── get_agent_activity()         → AgentActivityEntry[]          │
│ ├── get_token_usage()            → TokenUsageEntry[]             │
│ ├── get_error_distribution()     → ErrorEntry[]                  │
│ ├── get_tool_performance()       → ToolPerformanceEntry[]        │
│ ├── get_knowledge_base()         → KnowledgeBaseData             │
│ ├── get_track_activity()         → TrackEntry[]                  │
│ ├── get_skill_breakdown()        → SkillBreakdownEntry[]         │
│ ├── parse_track_files()          → TrackEntry[] (merged)         │
│ ├── parse_session_logs()         → SessionLogMonth[]             │
│ ├── get_projects()               → ProjectEntry[]                │
│ ├── generate_executive_summary() → ExecutiveSummary              │
│ └── detect_patterns()            → PatternDetection              │
│                                                                  │
│ Output: tools/trajectory-dashboard/src/generated/data.json       │
└──────────────────────────────────────────────────────────────────┘
```

4.3 TypeScript Type System

All data interfaces defined in src/types/dashboard.ts (319 lines, 28 interfaces):

| Interface | Properties | Purpose |
|---|---|---|
| DashboardData | 12 fields | Root container for all dashboard data |
| SessionSummary | 12 fields | Aggregate metrics (sessions, tokens, cost, success rate) |
| TimelineDataPoint | 6 fields | Per-day time series (date, sessions, tokens, cost, tool calls, errors) |
| AgentActivityEntry | 12 fields | Per-agent stats with inferred role and top tools |
| TokenUsageEntry | 10 fields | Per-model token breakdown with cost |
| ErrorEntry | 4 fields | Error type distribution |
| ToolPerformanceEntry | 12 fields | Tool duration, success, I/O metrics |
| PatternDetection | 3 fields | Bottlenecks, error clusters, token spend insights |
| KnowledgeBaseData | 6 fields | KG summary, distributions, decisions, skills, errors |
| TrackEntry | 11 fields | Track with % complete, tasks, domain, status |
| TaskEntry | 6 fields | Individual task with description and status |
| ProjectEntry | 13 fields | Project with metadata, activity, session log months |
| SessionLogMonth | 3 fields | Monthly session log grouping |
| ExecutiveSummary | 11 fields | Overall health, track status counts, top active tracks |
| ViewOptions | 5 fields | User preferences for chart engine, mode, toggles |
| FilterOptions | 6 fields | Data filtering (tools, agents, models, thresholds) |
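As an illustration, the TimelineDataPoint row above implies a shape like the following; the exact property names in src/types/dashboard.ts may differ:

```typescript
// Hypothetical reconstruction of TimelineDataPoint from its field list
// (date, sessions, tokens, cost, tool calls, errors).
interface TimelineDataPoint {
  date: string;      // ISO day, e.g. "2026-02-08"
  sessions: number;
  tokens: number;
  cost: number;      // USD
  toolCalls: number;
  errors: number;
}

const sample: TimelineDataPoint = {
  date: "2026-02-08",
  sessions: 3,
  tokens: 120_000,
  cost: 1.75,
  toolCalls: 42,
  errors: 1,
};
```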

5. Component Architecture

5.1 Design System

Visual Language:

  • Background: gray-50 (light) / gray-950 (dark)
  • Cards: white / gray-900 with gray-200 / gray-800 borders
  • Interactive: hover border color by semantic category (sky, blue, green, cyan, purple, red, indigo, orange, amber, teal, lime)
  • Typography: system font stack, text-sm body, text-xs labels, text-[10px] secondary
  • Icons: Lucide React, 4x4 standard size, semantic coloring per domain

Progressive Disclosure Pattern (3 levels):

  1. Snapshot -- HomePage with CompletionGauge, animated metrics, track progress strip
  2. Drill-down -- Dedicated pages with tabbed views, ChartPicker, filter panels
  3. Detail -- DrillDownModal, DetailSlideOver, click-to-expand task descriptions

Interactivity Principles:

  • Every metric animated on mount (AnimatedCounter, ProgressRing, CompletionGauge)
  • Every data point hoverable (D3 tooltips with formatted detail)
  • Every track/task clickable (navigation or modal expansion)
  • Keyboard accessible (Cmd+K search, Escape to close modals)

5.2 Page Inventory (12 pages, 3,524 lines)

| Page | Lines | Route | Tabs | Purpose | User Story |
|---|---|---|---|---|---|
| ProjectsPage | 870 | /projects | Story, Velocity, Focus, Balance, About | Executive project intelligence with 4 D3 story visualizations and data-driven narratives | 3.2 |
| TracksPage | 550 | /tracks | Executive, Cards, Tasks, Viz, About | Track management with % complete, task descriptions, sunburst + treemap visualizations | 3.2, 3.3 |
| HomePage | 319 | / | - | Executive snapshot: CompletionGauge, track strip, animated metrics, navigation cards | 3.1 |
| AgentsPage | 287 | /agents | Overview, Bubbles, About | Agent identity resolution, role inference, tool profiles, bubble chart | 3.3 |
| SkillsPage | 272 | /skills | Overview, Effectiveness, About | Skill learnings (758K+), effectiveness radar, per-skill breakdown | 3.4 |
| SessionLogsPage | 241 | /session-logs | Monthly, Heatmap, Full, About | Monthly digest, CalendarHeatmap, expandable daily entries | 3.4 |
| KnowledgePage | 225 | /knowledge | Graph, Inventory, About | Interactive D3 force graph (18K nodes), knowledge inventory | 3.3 |
| SkillsCatalogPage | 193 | /skills-catalog | Overview, About | 445 registered skills searchable by track letter | 3.4 |
| TimelinePage | 163 | /timeline | Chart, Heatmap, About | Session activity over time, heatmap calendar | 3.4 |
| TokensPage | 160 | /tokens | Overview, Sunburst, About | Token consumption by model, cost analysis | 3.5 |
| ToolsPage | 128 | /tools | Overview, About | Tool performance: success rates, call counts, I/O | 3.3 |
| ErrorsPage | 116 | /errors | Overview, About | Error type distribution, recovery rates, trends | 3.3 |

5.3 Component Inventory (37 components, 7,456 lines)

D3.js Visualizations (14 components)

| Component | Lines | D3 Pattern | Interactivity | Data Source |
|---|---|---|---|---|
| ProjectCharter | 586 | Multi-section narrative + D3 mini-charts | Tooltip, section expand | projects, tracks |
| KnowledgeGraphViz | 373 | Force-directed simulation | Node toggle, click-to-detail, drag, zoom | kg_nodes, kg_edges |
| ProjectVelocityChart | 312 | Stacked area + trend line | Hover tooltips, highlight scoping | sessionLogMonths |
| ProjectFocusSunburst | 306 | Partition sunburst | Hover expand, sector labels | trackActivity (tool calls) |
| ProjectTrackRadar | 288 | Radial/spider chart | Axis labels, hover values | track multi-axis metrics |
| HeatmapCalendar | 238 | Calendar heatmap (GitHub-style) | Cell hover, day click | timeline daily activity |
| SkillRadial | 218 | Radial bar chart | Hover detail per skill | skillBreakdown |
| TrackSunburst | 216 | Hierarchical sunburst | Click to zoom, breadcrumb | track → section → task |
| SessionTimelineD3 | 213 | Zoomable timeline with brush | Brush to zoom, pan | timeline data points |
| TokenSunburst | 189 | Sunburst partition | Hover to see model/type | token breakdown |
| TrackTreemap | 172 | Squarified treemap | Hover, click to navigate | track tool call volumes |
| AgentBubbles | 167 | Circle pack layout | Hover to see agent details | agent dispatches |
| CalendarHeatmap | 135 | Calendar grid with color scale | Day hover, cell click | session log entries |
| CompletionGauge | 112 | Semicircular arc gauge | Animated fill, center text | executive summary % |

Recharts Components (6 components)

| Component | Lines | Chart Type | Purpose |
|---|---|---|---|
| Dashboard | 560 | Composite layout | Legacy aggregated view with all chart types |
| TimelineChart | 96 | Area chart | Sessions, tokens, errors over time |
| ErrorHeatmap | 85 | List with severity indicators | Error types and recovery rates |
| TokenUsageBar | 78 | Stacked bar | Prompt vs completion tokens per model |
| ToolCallWaterfall | 116 | Horizontal bar | Tool performance comparison |
| AgentActivityGraph | 62 | Horizontal bar | Agent dispatch frequency |

Interactive UI Components (8 components)

| Component | Lines | Purpose | Design Decision |
|---|---|---|---|
| AnimatedCounter | 91 | Count-up animation from 0 → target | requestAnimationFrame + easeOutCubic over 1.2s; supports K/M/T suffixes, %, commas |
| ProgressRing | 86 | SVG animated arc for % complete | Auto-coloring: green ≥80%, amber ≥50%, red <50%; size variants sm/md/lg |
| Sparkline | 82 | Inline D3 mini trend line (60x24px) | Green = up trend, red = down; 7-30 data points; used inline in metric cards |
| DrillDownModal | 94 | Slide-over detail panel (right side) | 60% width, Escape to close, breadcrumb trail, overlay click to dismiss |
| DetailSlideOver | 209 | Extended detail panel | Full data tables, sub-charts, contextual information |
| FilterPanel | 252 | Multi-faceted data filtering | Tool, agent, model, error type filters with min thresholds |
| ViewToggle | 189 | Chart/table view switching | Render prop pattern for flexible chart swapping |
| ChartPicker | 56 | Visualization type selector | Pill buttons to switch between D3 chart types per page |

Shell & Navigation Components (6 components)

| Component | Lines | Purpose | Design Decision |
|---|---|---|---|
| AppShell | 340 | Layout: header + sidebar + footer | Collapsible sidebar (w-56 ↔ w-12), localStorage persistence, grouped nav sections |
| SearchModal | 297 | Global Cmd+K search overlay | Searches across all data dimensions, fuzzy matching, keyboard navigation |
| DateRangePicker | 361 | Two-month calendar with time inputs | Portal-rendered, custom date range with hour:minute precision, preset buttons |
| NotificationPanel | 220 | Alert/notification panel | Positioned relative to bell icon anchor, shows bottlenecks + error clusters |
| PageTabs | 106 | Tabbed container | Auto-generates "About This View" tab for every page; URL hash sync |
| MermaidDiagram | 162 | Mermaid.js renderer | Dynamic diagram generation from data, dark mode support |

Contextual Components (3 components, plus shared PageTabs)

| Component | Lines | Purpose | Design Decision |
|---|---|---|---|
| PatternInsights | 249 | AI pattern detection display | Bottlenecks, error clusters, token spend -- actionable insight cards |
| InfoTooltip | 72 | Contextual (i) icon tooltip | Appears on hover, positioned smartly, explains individual metrics |
| ExplainerCard | 68 | Collapsible explanation card | Top of every page, explains what data means and where it comes from |
| PageTabs (shared) | - | Built-in About tab | Every page auto-gets an "About This View" tab with data source explanation |

5.4 Library Layer (14 files, 4,007 lines)

| File | Lines | Purpose |
|---|---|---|
| DashboardContext.tsx | ~200 | React context: data, timeFrame, filters, patterns, data source detection |
| data-loader.ts | ~150 | Fetches data.json, detects live vs sample, validates structure |
| filters.ts | ~180 | Client-side data filtering engine (tool, agent, model, thresholds) |
| patterns.ts | ~200 | Pattern detection: bottlenecks, error clusters, token spend analysis |
| format.ts | ~60 | Number formatting: compact (1.2K, 4.8T), currency, percentage |
| utils.ts | ~100 | cn() utility, buildTrackProjectMap(), getTrackProject() |
| theme.tsx | ~80 | Dark mode context: system preference detection, toggle, localStorage |
| sample-data.ts | ~400 | Realistic development sample data for offline/demo usage |
| a2ui-tree.ts | ~120 | A2UI component tree generation from data |
| a2ui-codegen.ts | ~100 | A2UI → JSX code generation |
| a2ui-pipeline.ts | ~80 | Full A2UI generation pipeline orchestration |
| mermaid-generator.ts | ~100 | Dynamic Mermaid diagram generation from dashboard data |
| Tests (5 files) | ~600 | Unit tests: a2ui-codegen, a2ui-tree, data-loader, filters, patterns, performance |
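The compact formatter noted for format.ts can be sketched as follows. The function name `formatCompact` and the exact rounding rules are assumptions; the behavior is anchored to the examples above (1.2K, 4.8T) and the K/M/T suffixes AnimatedCounter supports:

```typescript
// Scale thresholds paired with their suffixes, largest first.
const UNITS: Array<[number, string]> = [
  [1e12, "T"],
  [1e6, "M"],
  [1e3, "K"],
];

// Format a count compactly: 1200 → "1.2K", 4.8e12 → "4.8T", 916 → "916".
function formatCompact(n: number): string {
  for (const [threshold, suffix] of UNITS) {
    if (Math.abs(n) >= threshold) {
      const scaled = n / threshold;
      // One decimal below 100, trimming a trailing ".0" (52.0M → 52M).
      const s = scaled >= 100
        ? Math.round(scaled).toString()
        : scaled.toFixed(1).replace(/\.0$/, "");
      return s + suffix;
    }
  }
  return n.toString();
}
```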

6. Navigation Architecture

6.1 Sidebar Navigation (Grouped)

```
Overview             [LayoutDashboard]
──────────────────────────────────────
PROJECT
  Projects           [FolderKanban]
  Tracks             [GitBranch]
  Session Logs       [FileText]
──────────────────────────────────────
ANALYTICS
  Timeline           [Activity]
  Agents             [Bot]
  Tools              [Wrench]
  Tokens             [Coins]
  Errors             [AlertTriangle]
──────────────────────────────────────
KNOWLEDGE
  Knowledge          [Brain]
  Skill Learnings    [Sparkles]
  Skills Catalog     [Library]
```

Design decisions:

  • 4 nav sections: a standalone Overview entry plus three semantically headed groups (Project, Analytics, Knowledge)
  • Section headings shown when sidebar expanded, divider lines when collapsed
  • Active route highlighted with blue accent (bg-blue-50 dark:bg-blue-900/20)
  • Collapsed mode shows icons only (w-12), expanded shows labels (w-56)
  • State persisted in localStorage for cross-session consistency

6.2 Header

  • Logo + title (hidden on mobile)
  • Search button (Cmd+K shortcut)
  • Time frame selector (Today, This Week, Sprint, This Month, All Time)
  • Custom date range picker (ChevronDown trigger)
  • Data source indicator (Live / Sample with colored dot)
  • Alert bell with count badge (bottlenecks + error clusters)
  • Dark/light mode toggle

6.3 HomePage Navigation Cards

11 cards in a responsive grid (grid-cols-2 sm:grid-cols-3 lg:grid-cols-4 xl:grid-cols-6) linking to all pages with:

  • Semantic color per domain (sky=projects, blue=timeline, green=agents, etc.)
  • Icon, title, description, live metric count
  • Hover border color accent

7. Design Principles and Patterns

7.1 Snapshot-First Information Hierarchy

Every page follows a 3-level disclosure pattern:

| Level | What the User Sees | Interaction |
|---|---|---|
| L1: Snapshot | Key metric, health indicator, trend direction | Visible on page load |
| L2: Breakdown | Tabbed views with charts, tables, distributions | Click tab or card |
| L3: Detail | Individual record, full context, related data | Click item → modal/panel |

Example flow (Tracks):

  1. L1: Track Progress Strip on HomePage shows all 36 tracks with % bars
  2. L2: Click track → TracksPage Executive View with task counts, domain, agent
  3. L3: Click task → DrillDownModal with description, tool calls, agents, sessions

7.2 Data-Driven Narrative Generation

Instead of static labels, components compute interpretive prose from data at render time:

```ts
// ProjectsPage FocusTab -- computed narrative
const focusInsights = useMemo(() => {
  const totalCalls = projectTracks.reduce((sum, t) => sum + t.toolCalls, 0);
  const hottestTrack = projectTracks.reduce((a, b) => (b.toolCalls > a.toolCalls ? b : a));
  const hottestShare = (hottestTrack.toolCalls / totalCalls) * 100;
  const gapTracks = projectTracks.filter(t => t.percentComplete < 40 && t.totalTasks > 0);
  return { hottestTrack, hottestShare, gapTracks };
}, [projectTracks]);

// Renders: "Track H (Framework Autonomy) absorbs 34% of all tool activity.
// 3 tracks below 40% completion may need attention: G, K, L"
```

Design decision: Client-side narrative generation (not LLM) ensures:

  • Zero latency (computed in useMemo)
  • Deterministic output (same data → same narrative)
  • No API cost per render
  • Works offline

7.3 D3.js Integration Pattern

All D3 visualizations follow a consistent pattern:

```tsx
export function SomeChart({ data, width, height }: Props) {
  const svgRef = useRef<SVGSVGElement>(null);

  useEffect(() => {
    if (!svgRef.current) return;
    const svg = d3.select(svgRef.current);
    svg.selectAll("*").remove(); // Clean slate
    // ... D3 rendering logic
    const tooltip = d3.select("body").append("div"); // Body-mounted tooltip (styling elided)
    return () => { tooltip.remove(); }; // Cleanup
  }, [data, width, height]); // Re-render on data/size change

  return <svg ref={svgRef} width={width} height={height} />;
}
```

Design decisions:

  • useRef for SVG element access (no D3 DOM creation)
  • Full clear + redraw on dependency change (simpler than D3 enter/update/exit)
  • Body-mounted tooltips (escape overflow:hidden containers)
  • Cleanup function removes tooltips on unmount

7.4 Animation System

| Component | Technique | Duration | Easing |
|---|---|---|---|
| AnimatedCounter | requestAnimationFrame | 1.2s | easeOutCubic |
| ProgressRing | SVG stroke-dasharray transition | 1s | ease-out |
| CompletionGauge | D3 arc interpolation | 1.5s | d3.easeCubicOut |
| Sparkline | D3 path animation | 0.8s | linear |
| Track bars | CSS transition-all duration-700 | 0.7s | default |

Design decision: Animations provide visceral feedback that data is real and dynamic. Count-up animations on every page load reinforce that metrics are computed fresh, not cached. Animation duration is fast enough to not delay comprehension (all <1.5s).

7.5 Dark Mode Implementation

  • System preference detection via matchMedia('(prefers-color-scheme: dark)')
  • Manual toggle persisted to localStorage
  • Tailwind dark: prefix on all color utilities
  • D3 charts: dark mode classes applied to SVG elements (dark:stroke-gray-700)
  • Consistent palette: gray-50 / gray-950 backgrounds, gray-200 / gray-800 borders (light / dark)

8. Analytics Dimensions

The dashboard enables exploration across eight analytical dimensions:

8.1 Time

  • Daily/weekly/monthly/sprint/all session activity trends
  • Heatmap calendar showing activity density by day-of-week and hour
  • Custom date range with hour:minute precision
  • Sprint-aligned (14-day) analysis windows
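The sprint-aligned windows above can be sketched as simple date bucketing: days fall into consecutive 14-day sprints counted from a fixed epoch. The function names and the epoch date are assumptions for illustration, not the dashboard's actual configuration:

```typescript
// Hypothetical sprint-zero start (a Monday); the real anchor date is not documented here.
const SPRINT_EPOCH_MS = Date.UTC(2026, 0, 5);
const MS_PER_DAY = 86_400_000;

// Which 14-day sprint an ISO day ("YYYY-MM-DD") falls into.
function sprintIndex(dateIso: string): number {
  const t = Date.parse(dateIso + "T00:00:00Z");
  return Math.floor((t - SPRINT_EPOCH_MS) / (14 * MS_PER_DAY));
}

// Inclusive ISO date range covered by a sprint.
function sprintRange(index: number): { start: string; end: string } {
  const startMs = SPRINT_EPOCH_MS + index * 14 * MS_PER_DAY;
  const endMs = startMs + 13 * MS_PER_DAY;
  const iso = (ms: number) => new Date(ms).toISOString().slice(0, 10);
  return { start: iso(startMs), end: iso(endMs) };
}
```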

8.2 Project

  • Multi-project comparison with association badges
  • Per-project narrative ("story") with charter, velocity, focus, balance
  • Session log months grouped by project
  • Project type, language, framework metadata

8.3 Track (37 tracks, A-AK)

  • Task-level drill-down with descriptions from TRACK markdown files
  • % complete from parsed [x]/[ ] checkboxes
  • Hierarchical task tree: Track → Section → Task → Subtask
  • Domain metadata from knowledge graph (Backend API, Frontend UI, DevOps, etc.)

8.4 Agent

  • Agent identity resolution from session hashes to inferred roles
  • Tool usage profiling: DevOps/Build, Code Author, Explorer/Researcher, Web Researcher
  • Success rate tracking per agent
  • "Unknown" agent attribution: 88.5% of tool calls predate agent tracking

8.5 Tool (34 tool types)

  • Tool call frequency and success rates
  • Tool category grouping (file ops, search, execution, AI)
  • I/O volume metrics (input bytes, output bytes)
  • Error rates per tool
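The per-tool success-rate aggregation behind these metrics is a straightforward group-and-count. A sketch, with an assumed record shape (the real aggregation runs over tool_analytics in the Python adapter):

```typescript
// Minimal slice of a tool_analytics row needed for success rates.
interface ToolCall {
  tool: string;
  success: boolean;
}

// Group calls by tool name and compute per-tool call counts and success rates.
function toolSuccessRates(
  calls: ToolCall[],
): Map<string, { calls: number; successRate: number }> {
  const acc = new Map<string, { calls: number; ok: number }>();
  for (const c of calls) {
    const e = acc.get(c.tool) ?? { calls: 0, ok: 0 };
    e.calls += 1;
    if (c.success) e.ok += 1;
    acc.set(c.tool, e);
  }
  const out = new Map<string, { calls: number; successRate: number }>();
  acc.forEach((e, tool) => {
    out.set(tool, { calls: e.calls, successRate: e.ok / e.calls });
  });
  return out;
}
```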

8.6 Token

  • Model-level consumption: Opus 4.6, Sonnet 4.5, Haiku 4.5
  • Prompt vs completion ratio analysis
  • Cache efficiency: read vs write token breakdown
  • Cost analysis per model with USD totals

8.7 Error

  • Error type distribution and frequency
  • Recovery rate tracking
  • Error clustering by time period and trend direction
  • Correlation with agent and tool usage

8.8 Knowledge

  • Interactive force-directed graph with 13 node types
  • Edge relationship visualization (7 edge types)
  • Node type toggling for focused exploration
  • Decision activity timeline and confidence scores

9. Consequences

9.1 Positive

  • Executive visibility: Directly addresses the "95% of AI pilots fail to show ROI" problem with at-a-glance project health
  • Dog-fooding showcase: CODITECT agents building CODITECT intelligence dashboards -- proves the platform's own value
  • Data sovereignty: Local-first architecture avoids cloud observability vendor lock-in; sensitive session data never leaves the machine
  • New product category: AI Development Intelligence (ADI) -- no direct competitor provides trajectory + visualization + narrative
  • Enterprise sales enabler: Executive dashboards are a procurement decision accelerator; visual proof of AI development ROI
  • Compliance/audit trail: Visual evidence of AI-assisted development decisions and knowledge growth
  • Zero-config deployment: Single npm run dev launches everything; auto-detected data sources

9.2 Negative

  • Bundle size: 916 KB JS (gzipped 265 KB) exceeds Vite's 500KB warning; D3.js is the primary contributor
  • Data staleness: JSON file must be regenerated manually (python3 adapter.py --timeframe all); not real-time
  • Python dependency: Data adapter requires Python 3 + SQLite; adds installation complexity
  • No authentication: Local-only tool; no multi-user access control
  • Limited mobile support: D3 visualizations not optimized for touch/small screens

9.3 Known Limitations

| Limitation | Impact | Status |
|---|---|---|
| Agent names are session hashes | No human-readable agent names | Known; requires agent naming protocol |
| 88.5% of tool calls lack agent attribution | Agent analytics incomplete | Known; pre-tracking historical data |
| Duration data unavailable in tool_analytics | No latency analysis | Known; column not instrumented |
| Skill learnings 98% needs_work | Skills page shows skewed effectiveness | Known; extraction calibration issue |
| Bundle exceeds 500KB | Vite build warning | Deferred to Phase 14 (code splitting) |
| No real-time updates | Data must be regenerated via CLI | Deferred to Phase 15 (WebSocket) |
| No ARIA labels or skip links | Screen readers cannot navigate D3 charts | Planned Phase 11 |
| No responsive breakpoints | D3 charts overflow on <1024px viewports | Planned Phase 11 |
| No prefers-reduced-motion | Animations run for motion-sensitive users | Planned Phase 11 |
| No design tokens file | Colors/spacing hardcoded in components | Planned Phase 14 |

9.4 Stakeholder Review Findings (2026-02-08)

Two structured evaluations were conducted against this ADR: a UI Designer review and a Product Manager review.

UI Designer Review: B+ (83/100)

| Dimension | Score | Key Findings |
|---|---|---|
| Visual Hierarchy | 4/5 | CompletionGauge + AnimatedCounter create strong executive focus |
| Design System | 3.5/5 | No centralized design tokens file; colors/spacing embedded in components |
| Interactivity | 4.5/5 | D3 transitions, drill-down modals, and hover details are best-in-class |
| Navigation | 3/5 | 12 pages in the sidebar risk cognitive overload; needs progressive disclosure grouping |
| Accessibility | 2/5 | CRITICAL -- no ARIA labels, no skip links, no prefers-reduced-motion, D3 charts are opaque to screen readers |
| Responsive | 2.5/5 | CRITICAL -- D3 charts use fixed viewBox; no mobile layouts; no touch targets |
| Data Visualization | 4/5 | 14 D3 + 5 Recharts covers all executive needs; colorblind mode missing |
| Completeness | 3.5/5 | Strong analytics coverage; export and feedback loops absent |

Priority recommendations: Accessibility + Responsive (P0), Design Tokens + Nav Hierarchy (P1), Colorblind Mode + Export (P2).

Product Manager Review: 4/5 Stars

| Dimension | Score | Key Findings |
|---|---|---|
| Persona Coverage | 3/5 | VP Eng, Manager, Tech Lead covered; missing Board Member, Customer Success, Sales Engineer, Platform Engineer |
| Value Proposition | 3.5/5 | Feature-focused ("14 D3 visualizations") rather than outcome-focused ("reduce AI pilot review time by 80%") |
| Success Metrics | 1/5 | CRITICAL -- no KPIs defined for user adoption, engagement, or business impact |
| Roadmap Prioritization | 2/5 | Technology-driven (bundle optimization before PDF export); should be value-driven |
| Competitive Positioning | 4/5 | ADI category creation is strong; needs sharper "so what" for procurement decisions |
| Risk Mitigation | 2.5/5 | User adoption (60%) and data quality (100%) risks are unmitigated |

Priority recommendations: Define success metrics (immediate), reorder roadmap to value-driven sequence, add PDF export before bundle optimization, add user feedback mechanism.

9.5 Success Metrics (Post-Review Addition)

Addressing the PM review's critical gap (1/5), the following KPIs define success for the dashboard:

Adoption Metrics

| Metric | Target (90 days) | Measurement |
|---|---|---|
| Weekly active users | 5+ (internal team) → 50+ (pilot customers) | Track npm run dev invocations or page loads |
| Pages viewed per session | 4+ (out of 12) | Client-side analytics event |
| Return rate | 60%+ users return within 7 days | Session cookie tracking |
| Time to first insight | <30 seconds from dashboard load | CompletionGauge + Track Strip render time |

Engagement Metrics

| Metric | Target | Measurement |
|---|---|---|
| Drill-down usage | 40%+ of sessions click through to detail views | Click event tracking on DrillDownModal |
| Search usage | 20%+ of sessions use Cmd+K or page filters | Search modal open events |
| Date range changes | 30%+ of sessions change from default 7-day window | DateRangePicker interaction events |
| D3 chart interactions | 2+ hover/click events per session per chart | D3 event listener callbacks |
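
These engagement metrics imply a lightweight client-side instrumentation layer (planned in Phase 13). A minimal sketch of such a tracker is below; the names (`AnalyticsEvent`, `EventTracker`, `sessionRate`) are illustrative assumptions, not the shipped codebase:

```typescript
// Hypothetical in-memory event buffer for the dashboard's engagement metrics.
type AnalyticsEvent = {
  name: string;       // e.g. "drilldown_open", "search_open", "page_view"
  sessionId: string;  // client session identifier
  timestamp: number;  // ms since epoch
};

class EventTracker {
  private events: AnalyticsEvent[] = [];

  track(name: string, sessionId: string): void {
    this.events.push({ name, sessionId, timestamp: Date.now() });
  }

  // Fraction of sessions containing at least one event with the given name,
  // e.g. sessionRate("drilldown_open") against the 40%+ drill-down target.
  sessionRate(name: string): number {
    const all = new Set(this.events.map((e) => e.sessionId));
    const hit = new Set(
      this.events.filter((e) => e.name === name).map((e) => e.sessionId)
    );
    return all.size === 0 ? 0 : hit.size / all.size;
  }
}
```

Keeping events local (flushed to disk rather than a cloud endpoint) would preserve the dashboard's local-first posture.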

Business Impact Metrics

| Metric | Target | Measurement |
|---|---|---|
| Executive review time reduction | 80% vs manual CLI queries | Before/after time study |
| Pilot-to-paid conversion lift | Attributable 10%+ improvement | A/B: customers with dashboard access vs without |
| Support ticket reduction | 30% fewer "what did the AI do?" questions | Support ticket categorization |
| Sales cycle impact | Dashboard demo reduces evaluation time by 2+ weeks | CRM deal stage duration |

Data Quality Metrics

| Metric | Target | Measurement |
|---|---|---|
| Agent attribution coverage | >50% (currently 11.5%) | agent_analytics table fill rate |
| Data freshness | <1 hour for active projects | Adapter last-run timestamp vs current time |
| Zero-value metric rate | <5% (currently addressed in Phase 8) | Null/zero audit on generated JSON |
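
The data freshness check reduces to a timestamp comparison against the one-hour target. A hedged sketch (function name and threshold constant are assumptions for illustration):

```typescript
// Flag the generated JSON as stale when the adapter's last run is more than
// one hour old -- the "<1 hour for active projects" target above.
const FRESHNESS_LIMIT_MS = 60 * 60 * 1000; // 1 hour

function isStale(lastRunMs: number, nowMs: number = Date.now()): boolean {
  return nowMs - lastRunMs > FRESHNESS_LIMIT_MS;
}
```

The dashboard could surface this as a banner prompting the user to rerun python3 adapter.py.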

10. Build Status

10.1 Metrics

| Metric | Value |
|---|---|
| TypeScript errors | 0 (strict mode) |
| Build time | ~2.3s |
| Bundle size (JS) | 916 KB (gzipped: 265 KB) |
| Bundle size (CSS) | 43 KB (gzipped: 7 KB) |
| Source lines | 14,987 |
| Pages | 12 |
| Components | 37 |
| D3 visualizations | 14 |
| Recharts charts | 5 |
| TypeScript interfaces | 28 |
| Library modules | 14 |
| Unit tests | 5 files |
| Data adapter | 811 lines Python |

10.2 Implementation Phases

| Phase | Task | Status | Deliverables |
|---|---|---|---|
| 1 | Core Infrastructure (J.29.1) | COMPLETE | Vite scaffold, slash command, data adapter, agent/skill definitions |
| 2 | AI Analysis Layer (J.29.2) | COMPLETE | Pattern detection, narrative generation, time-frame selector |
| 3 | Generative UI Engine (J.29.3) | COMPLETE | 27 initial components, D3 charts, dark mode, AppShell |
| 4 | Live Data Integration (J.29.4) | COMPLETE | token_economics fix, agent identity, track correlation |
| 5 | Deep Analytics (J.29.5) | COMPLETE | React Router, 9 pages, ChartPicker, KG viz, PageTabs |
| 6 | Documentation & Testing (J.29.6) | COMPLETE | CLAUDE.md, unit tests, performance benchmarks |
| 7 | Executive BI Dashboard (J.29.7) | COMPLETE | CompletionGauge, AnimatedCounter, track strip, session logs page, skills catalog |
| 8 | Data Adapter Enrichment (J.29.8) | COMPLETE | Real I/O, cost, category, status data; fixed all zero-value metrics |
| 9 | Projects Page (J.29.9) | COMPLETE | Project registry integration, multi-project awareness |
| 10 | Executive Story (J.29.10) | COMPLETE | 4 D3 story components, narratives, focus/balance insights, nav grouping |

10.3 Governing ADRs

| ADR | Relevance to Dashboard |
|---|---|
| ADR-054 | Track nomenclature -- task ID format (A.1.1.3) used in drill-down |
| ADR-116 | Track architecture -- 37 tracks across A-AK, domain classification |
| ADR-118 | Database architecture -- sessions.db and org.db as data sources |
| ADR-136 | Experience framework -- skill metadata and effectiveness tracking |
| ADR-155 | Session log management -- project-scoped log paths and format |
| ADR-162 | Progressive component disclosure -- snapshot-first design pattern |

11. Future Vision

Roadmap reordered 2026-02-08 based on UI Designer and PM stakeholder reviews. Value-driven prioritization: accessibility and user-facing value before internal optimization.

Phase 11: Accessibility + Responsive Design (P0 -- Next)

  • ARIA labels and roles on all D3 chart containers
  • Skip links for keyboard navigation between page sections
  • prefers-reduced-motion media query to disable AnimatedCounter/ProgressRing/CompletionGauge animations
  • Screen reader text alternatives for D3 visualizations (data table fallback)
  • Responsive breakpoints: 1440px (full), 1024px (collapsed sidebar), 768px (stacked layout), 480px (mobile)
  • Touch-friendly D3 interactions (44px minimum tap targets)
  • Colorblind-safe palette with pattern overlays for D3 charts
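
The prefers-reduced-motion item above can be handled with a pure helper that the animated components consult, so the media-query check stays in one place. A minimal sketch, assuming a hypothetical `animationDuration` helper (in the browser the flag would come from window.matchMedia("(prefers-reduced-motion: reduce)").matches):

```typescript
// Collapse AnimatedCounter/ProgressRing/CompletionGauge animations to an
// instant render when the user has opted out of motion. The 800ms default
// is an illustrative value, not the shipped timing.
function animationDuration(prefersReducedMotion: boolean, normalMs = 800): number {
  return prefersReducedMotion ? 0 : normalMs;
}
```

Components then pass the result to their D3 transition durations, which makes the behavior trivially testable without a DOM.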

Phase 12: Export & Reporting (Moved from Phase 15 -- PM Priority)

  • One-click PDF export of executive summary (CompletionGauge + Track Strip + key metrics)
  • CSV data export for any chart or table
  • Scheduled report generation (weekly digest email)
  • Board-ready presentation slide generation
  • Shareable dashboard snapshots (static HTML export)
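
The CSV export item is the simplest of these deliverables; a sketch of the core serializer is below. The function name `toCsv` is an assumption, and the quoting follows RFC 4180 conventions (double quotes around fields containing commas, quotes, or newlines):

```typescript
// Serialize chart/table rows to CSV, escaping fields that contain
// commas, double quotes, or newlines per RFC 4180.
function toCsv(headers: string[], rows: (string | number)[][]): string {
  const escape = (v: string | number): string => {
    const s = String(v);
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  return [headers, ...rows].map((r) => r.map(escape).join(",")).join("\n");
}
```

A download button would feed this string into a Blob URL; no server round trip is needed, which keeps export local-first.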

Phase 13: User Feedback + Analytics Instrumentation (PM Priority)

  • Client-side event tracking for success metrics (see Section 9.5)
  • In-dashboard feedback widget ("Was this insight helpful?")
  • Page load and interaction timing for performance monitoring
  • User session recording for UX optimization (privacy-safe, local-only)

Phase 14: Bundle Optimization + Design System (Deferred from Phase 11)

  • Code splitting with React.lazy() for page-level chunks
  • D3 tree-shaking (import only needed modules)
  • Target: <500KB main bundle (currently 916KB)
  • Centralized design tokens file (tokens.ts): colors, spacing, typography, shadows
  • Component style migration from inline Tailwind to token references
  • Navigation progressive disclosure (collapsible groups, recently visited)
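
To make the design tokens item concrete, here is a minimal sketch of what a tokens.ts could hold; the specific names, colors, and scales are assumptions, not the shipped design system:

```typescript
// Illustrative shape for a centralized tokens.ts; components would import
// from here instead of hardcoding values in inline Tailwind classes.
const tokens = {
  color: {
    primary: "#2563eb", // stand-in accent color
    success: "#16a34a",
    warning: "#d97706",
    surface: "#0f172a", // stand-in dark-mode background
  },
  spacing: { xs: 4, sm: 8, md: 16, lg: 24, xl: 40 }, // px scale
  font: { body: 14, heading: 20, display: 32 },      // px sizes
} as const;
```

The `as const` assertion gives each token a literal type, so TypeScript catches typos like tokens.spacing.mid at compile time.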

Phase 15: Predictive Analytics + Real-Time (Deferred)

  • Completion date forecasting from velocity trends
  • Anomaly detection on tool call patterns
  • Cost forecasting based on project track complexity
  • Burndown-style progress estimation
  • WebSocket connection to active Claude Code sessions
  • Live metric updates without page refresh
  • Multi-project comparison and tenant-level aggregate views
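
The completion-date forecasting idea above could start as simple linear extrapolation from recent velocity. A hedged sketch (the linear model and function name are assumptions; the real implementation might weight recent sessions or fit a trend line):

```typescript
// Extrapolate a completion timestamp from current progress and velocity
// (percentage points of progress per day). Returns null when velocity is
// zero or negative and the project is not yet complete.
function forecastCompletion(
  currentPct: number, // e.g. 62 (% complete)
  pctPerDay: number,  // recent velocity in %/day
  nowMs: number       // current time, ms since epoch
): number | null {
  if (currentPct >= 100) return nowMs;
  if (pctPerDay <= 0) return null;
  const daysLeft = (100 - currentPct) / pctPerDay;
  return nowMs + daysLeft * 24 * 60 * 60 * 1000;
}
```

Anomaly detection and cost forecasting would layer on the same velocity series once it is exposed by the data adapter.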

12. Competitive Positioning Statement

CODITECT Agentic Trajectory Dashboard reduces executive AI pilot review time by 80% -- from hours of CLI queries to 30 seconds of visual comprehension. It is the first AI Development Intelligence platform that answers the question every VP of Engineering asks: "What is the AI actually doing, and is it worth the investment?" While LangSmith shows developers traces and LinearB tracks git metrics, CODITECT transforms 29M+ tool calls into executive-first BI dashboards with 12 drillable pages, 14 D3.js visualizations, and data-driven narratives that explain what happened, why it matters, and what to do next. Snapshot first. Drill down second. Local-first. Zero cloud lock-in. The dashboard that turns AI pilot skeptics into advocates.


References

Market Research (February 2026)

Technical References

CODITECT Internal

  • ADR-118: Database Architecture (sessions.db, org.db)
  • ADR-155: Session Log Management
  • ADR-136: Experience Framework (CEF)
  • ADR-054: Track Nomenclature
  • ADR-116: Track Architecture
  • ADR-162: Progressive Component Disclosure
  • J.22: Trajectory Visualizer (predecessor tool)
  • J.29: Agentic Trajectory Dashboard (TRACK-J task specification)

Author: Claude (Opus 4.6) | Track: J (Memory Intelligence) | Task: J.29 | Updated: 2026-02-08 | Lines: 14,987 TypeScript | 811 Python | 12 pages | 37 components | 14 D3 visualizations