# Architecture Decision Records (ADRs)
Coditect Activity Dashboard
Document Version: 1.0
Date: November 27, 2025

## Index of ADRs
| ID | Title | Status | Date |
|---|---|---|---|
| ADR-001 | Task-Centric Dashboard Architecture | Accepted | 2025-11-27 |
| ADR-002 | Checkbox as Source of Truth for Progress | Accepted | 2025-11-27 |
| ADR-003 | Exception-Based Display Pattern | Accepted | 2025-11-27 |
| ADR-004 | Session-to-Task Linking Strategy | Accepted | 2025-11-27 |
| ADR-005 | Real-Time Update Architecture | Accepted | 2025-11-27 |
| ADR-006 | Three-Level Information Hierarchy | Accepted | 2025-11-27 |
| ADR-007 | Activity Feed Prioritization Algorithm | Accepted | 2025-11-27 |
| ADR-008 | Multi-Project Portfolio Architecture | Accepted | 2025-11-27 |
## ADR-001: Task-Centric Dashboard Architecture

### Status
Accepted

### Context
The initial Activity Dashboard implementation displayed metrics centered around LLM sessions and conversations: message counts, session durations, token usage, and QA scores. User research revealed that stakeholders found this information to be "engine diagnostics" rather than actionable intelligence.
Stakeholder quote: "The gold isn't in how is it doing it... the real stuff is features, deliverables, targets reached."
Users need to understand project progress in terms of deliverables (tasks/features) rather than the machinery used to produce them (sessions/messages).
### Decision
We will architect the dashboard around tasks as the primary entity, with sessions, commits, and other activities serving as supporting evidence linked to tasks.
The information hierarchy will be:
```
TASK (Primary)
├── Status (pending/in_progress/review/blocked/done)
├── Checkbox (checked/unchecked)
├── Linked Sessions (evidence)
└── Linked Commits (evidence)
```
All dashboard views will be organized by task status and progress, not by session or conversation metrics.
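The hierarchy above could be modeled roughly as follows. This is an illustrative sketch, not the production schema; class and field names are assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """Primary entity: sessions and commits are evidence linked to it."""
    id: str
    title: str
    status: str = "pending"       # pending/in_progress/review/blocked/done
    checked: bool = False         # authoritative completion flag (see ADR-002)
    session_ids: List[str] = field(default_factory=list)  # linked evidence
    commit_shas: List[str] = field(default_factory=list)  # linked evidence

# Example: a task accumulates evidence as work happens
task = Task(id="TASK-123", title="Add portfolio summary bar")
task.commit_shas.append("a1b2c3d")
```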
### Consequences
Positive:
- Dashboard directly answers "What has been accomplished?"
- Users can track deliverables without understanding session mechanics
- Progress metrics are meaningful and actionable
- Aligns with existing mental models (Kanban, JIRA, etc.)
Negative:
- Requires building task-linking infrastructure
- Session/message metrics are hidden (may frustrate some users)
- Additional complexity in data transformation layer
Risks:
- Linking accuracy may impact trust in the system
- Tasks without linked evidence may appear incomplete
### Alternatives Considered
1. **Session-centric with task tags**: Keep session focus, add task tags
   - Rejected: Still requires users to think in terms of sessions
2. **Dual view (sessions + tasks)**: Provide both perspectives
   - Rejected: Adds complexity, dilutes focus, violates "GPS not engine" principle
3. **Hybrid timeline**: Chronological view mixing tasks and sessions
   - Rejected: Doesn't provide clear progress indication
## ADR-002: Checkbox as Source of Truth for Progress

### Status
Accepted

### Context
Progress tracking can be derived from multiple signals:
- Task status field (pending/in_progress/etc.)
- Checkbox state (checked/unchecked)
- Linked commit count
- Session activity
- Time-based estimates
We need a single, unambiguous source of truth for "task completion" that is:
- Simple to understand
- Easy to update
- Binary (complete or not)
- User-controlled
### Decision
The `checked` boolean field on tasks will be the authoritative source for completion status.

Progress calculation: `progress_pct = checked_tasks / total_tasks * 100`

The `status` field provides additional workflow information but does not determine completion:
- A task can be `status='review'` but `checked=false` (not yet complete)
- A task with `checked=true` should have `status='done'` (enforced by application logic)
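The decision rule can be sketched in a few lines. The task shape (a dict with `id`, `checked`, and `status` fields) is an assumption for illustration:

```python
def progress_pct(tasks) -> float:
    """Completion is derived solely from the checkbox, never from status."""
    if not tasks:
        return 0.0
    checked = sum(1 for t in tasks if t["checked"])
    return checked / len(tasks) * 100

def validate(task: dict) -> None:
    """Enforce the invariant: checked=True implies status='done'."""
    if task["checked"] and task["status"] != "done":
        raise ValueError(f"{task['id']}: checked tasks must have status='done'")
```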
### Consequences
Positive:
- Unambiguous: task is either done (checked) or not done (unchecked)
- User has explicit control over completion state
- Familiar UX pattern (checkbox lists)
- Simple progress calculation
- Works offline (no need for complex calculations)
Negative:
- Binary view may not capture partial progress
- Status field becomes secondary, potentially confusing
- Requires discipline to check tasks appropriately
Mitigation:
- UI shows both checkbox and status for context
- In-progress tasks can show sub-progress indicators separately
- Validation ensures `checked=true` implies `status='done'`
### Alternatives Considered
1. **Status-based completion**: `status='done'` means complete
   - Rejected: Status is mutable for workflow; checkbox is definitive
2. **Points/effort-based**: Calculate from story points completed
   - Rejected: Requires estimation, adds complexity
3. **Commit-based**: Complete when PR merged
   - Rejected: Not all tasks have code; tight coupling to git
## ADR-003: Exception-Based Display Pattern

### Status
Accepted

### Context
Dashboard displays often show all available metrics regardless of their informational value. This leads to:
- Visual clutter
- Important information buried in noise
- User fatigue from scanning uniform data
Stakeholder quote: "If it's 40, 40, 40, 40, 40 every time, don't show it."
The principle of "assumptive baseline" suggests that uniform data provides no signal and should be hidden, with only deviations (exceptions) surfaced.
### Decision
We will implement exception-based display where uniform/expected values are hidden and only anomalies are shown.
Implementation rules:
| Metric | Baseline | Show If |
|---|---|---|
| QA Score | 40 (passing) | score < 40 (failing) |
| Task Status | pending/in_progress/done | status = 'blocked' |
| Version | Latest | version differs from expected |
| Build Status | Success | status = 'failed' |
The "Attention Required" panel will exclusively show exceptions:
- Blocked tasks
- Failed builds
- Low QA scores
- Overdue tasks
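The rules table can be expressed as a predicate registry. This is an illustrative sketch covering a subset of the metrics above; the registry shape and function names are assumptions:

```python
# Baselines per metric: a value is surfaced only when it deviates.
EXCEPTION_RULES = {
    "qa_score":     lambda v: v < 40,          # failing score
    "task_status":  lambda v: v == "blocked",  # blocked task
    "build_status": lambda v: v == "failed",   # failed build
}

def attention_items(metrics: dict) -> dict:
    """Return only the metrics that deviate from their baseline."""
    return {
        name: value
        for name, value in metrics.items()
        if name in EXCEPTION_RULES and EXCEPTION_RULES[name](value)
    }
```

A uniform passing state (e.g. `qa_score=40`) produces no output at all, which is exactly the "don't show it" behavior requested.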
### Consequences
Positive:
- Dramatically reduced visual noise
- Important issues immediately visible
- Faster scanning and decision-making
- Cleaner, more professional UI
Negative:
- Users may wonder "where is my data?"
- Need to define baselines for each metric
- Edge cases where uniform data is actually informative
Mitigation:
- Drill-down views show all data for those who want it
- Tooltips explain why data is hidden
- Configurable baselines per organization
### Alternatives Considered
1. **Show everything, highlight exceptions**: Display all, use color coding
   - Rejected: Still cluttered; highlighting loses impact
2. **Collapsible sections**: Hide by default, expand to see
   - Rejected: Adds interaction overhead; violates "non-scrolling" goal
3. **Threshold indicators**: Show sparklines/trends instead of values
   - Partially adopted: Trends shown for velocity, not raw counts
## ADR-004: Session-to-Task Linking Strategy

### Status
Accepted

### Context
LLM sessions contain conversations that work toward completing tasks, but there is no explicit link between them. To provide task-centric views, we need to infer which tasks a session was working on.
Linking strategies considered:
- Explicit user tagging
- Keyword extraction and matching
- NLP-based semantic similarity
- File path overlap with commits
### Decision
We will use a multi-strategy linking approach with confidence scores:
Link Strategies (in priority order):
1. Explicit references (confidence: 1.0)
- Task ID mentioned in session: "working on TASK-123"
2. Title similarity (confidence: 0.5-0.9)
- Session content matches task title (threshold: 0.7)
3. Keyword extraction (confidence: 0.3-0.8)
- NLP keywords match task description
- Minimum 3 keyword matches required
4. File overlap (confidence: 0.3-0.6)
- Session mentions files also in task-linked commits
Links with confidence < 0.3 are discarded. Multiple strategies can combine to increase confidence (capped at 0.95 for inferred links).
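A possible combination function is sketched below. The document specifies only the 1.0 explicit score, the 0.95 cap for inferred links, and the 0.3 discard threshold; the combination rule itself (a noisy-OR over independent signals) is an assumption:

```python
def combine_confidence(scores, explicit: bool = False) -> float:
    """Combine per-strategy confidences for one candidate session-task link.

    Assumed combination rule: noisy-OR of independent signals.
    Returns 0.0 for links below the discard threshold.
    """
    if explicit:                      # explicit task-ID reference
        return 1.0
    miss = 1.0
    for s in scores:
        miss *= (1.0 - s)             # probability all signals are wrong
    combined = min(1.0 - miss, 0.95)  # cap inferred links at 0.95
    return combined if combined >= 0.3 else 0.0  # discard weak links
```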
### Consequences
Positive:
- Automatic linking reduces user burden
- Confidence scores allow filtering unreliable links
- Multiple strategies improve coverage
- Explicit references provide escape hatch for users
Negative:
- Inference may produce false positives
- NLP processing adds latency
- Confidence calibration requires tuning
- Users may not understand why links appear
Mitigation:
- Show confidence indicators in drill-down views
- Allow users to confirm/reject inferred links
- Log linking decisions for debugging
- Periodic accuracy audits
### Alternatives Considered
1. **Explicit tagging only**: Require users to tag sessions
   - Rejected: High friction, users won't do it consistently
2. **LLM-based linking**: Use Claude to analyze and link
   - Rejected: Expensive, adds latency, recursive complexity
3. **No linking**: Show sessions separately from tasks
   - Rejected: Defeats the task-centric architecture goal
## ADR-005: Real-Time Update Architecture

### Status
Accepted

### Context
The dashboard displays time-sensitive information (task status, progress, blockers) that can change frequently. Users expect to see updates without manual refresh, especially in collaborative environments.
Options for real-time updates:
- Polling (client periodically fetches)
- Server-Sent Events (SSE)
- WebSockets
- GraphQL Subscriptions
### Decision
We will use WebSockets for real-time dashboard updates with a pub/sub model.
Architecture:

```
┌─────────────┐       ┌─────────────┐       ┌─────────────┐
│   Client    │──────►│  WebSocket  │──────►│  Event Bus  │
│             │       │   Manager   │       │  (Pub/Sub)  │
└─────────────┘       └─────────────┘       └─────────────┘
                                                   ▲
                                                   │
                     ┌──────────────┬──────────────┤
                     │              │              │
             ┌───────┴──────┐  ┌────┴───┐  ┌───────┴─────────┐
             │ Task Service │  │  Git   │  │ Session Service │
             │              │  │  Hook  │  │                 │
             └──────────────┘  └────────┘  └─────────────────┘
```
Event types:
- `task.updated`: Task status or checkbox changed
- `task.completed`: Task checked off
- `task.blocked`: Task became blocked
- `commit.new`: New commit received
- `session.ended`: Session completed
- `progress.changed`: Project progress recalculated
Clients subscribe to specific project IDs to receive relevant updates.
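A minimal in-process sketch of the pub/sub core, keyed by project ID. Class and method names are illustrative; the production path would run behind the WebSocket manager and use Redis pub/sub for horizontal scaling:

```python
from collections import defaultdict

class EventBus:
    """In-process pub/sub: clients subscribe per project ID and
    receive only events published for that project."""

    def __init__(self):
        self._subs = defaultdict(set)  # project_id -> client callbacks

    def subscribe(self, project_id: str, callback) -> None:
        self._subs[project_id].add(callback)

    def publish(self, project_id: str, event_type: str, payload: dict) -> None:
        # Fan out to subscribers of this project only.
        for cb in self._subs[project_id]:
            cb({"type": event_type, "project": project_id, **payload})
```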
### Consequences
Positive:
- Instant updates without polling overhead
- Efficient bandwidth usage (push-only)
- Supports collaborative workflows
- Scalable with connection pooling
Negative:
- More complex than polling
- Requires connection management
- Stateful connections harder to scale
- Fallback needed for WebSocket-blocked networks
Mitigation:
- Implement automatic reconnection with exponential backoff
- Fall back to polling if WebSocket fails
- Use Redis pub/sub for horizontal scaling
- Heartbeat to detect stale connections
### Alternatives Considered
1. **Polling every 30 seconds**: Simple, stateless
   - Rejected: Delays updates, wastes bandwidth on no-change
2. **SSE (Server-Sent Events)**: Simpler than WebSockets
   - Rejected: One-directional; we need client-to-server for subscriptions
3. **GraphQL Subscriptions**: Integrated with query layer
   - Rejected: Adds GraphQL dependency; overkill for our needs
## ADR-006: Three-Level Information Hierarchy

### Status
Accepted

### Context
Users have different information needs at different times:
- Quick status check: "Am I on track?"
- Working session: "What should I work on next?"
- Deep investigation: "Why did this task take so long?"
A single view cannot serve all these needs without becoming cluttered or requiring excessive scrolling.
Stakeholder quote: "We need actual information on a non-scrolling screen."
### Decision
We will implement a three-level information hierarchy with progressive disclosure:
```
LEVEL 1: Primary View (Non-scrolling)
├── Portfolio summary (all projects)
├── Task Kanban (selected project)
├── 5 activity highlights
├── Blocked tasks (exceptions only)
└── Work distribution chart

LEVEL 2: Drill-Down (On-demand)
├── Full task list with details
├── Commit timeline
├── Session summaries
├── Progress burndown
└── All activity (paginated)

LEVEL 3: Deep Inspection (Hidden)
├── Session message history
├── Commit diffs
├── Task change history
├── Raw activity logs
└── Token/cost analytics
```
Level 1 must fit on one screen without scrolling (approximately 800px viewport height).
### Consequences
Positive:
- Primary view provides immediate answers
- Detail available for those who need it
- Reduced cognitive load for daily use
- Cleaner visual design
Negative:
- Important details may be hidden
- More clicks to access deep information
- Need to carefully curate Level 1 content
- Responsive design challenges
Mitigation:
- User research to validate Level 1 content selection
- Keyboard shortcuts for quick drill-down
- Breadcrumb navigation between levels
- Mobile-specific layouts for Level 1
### Alternatives Considered
1. **Single scrolling view**: Everything on one page
   - Rejected: Violates "non-scrolling" requirement
2. **Tabbed interface**: Tabs for different aspects
   - Rejected: Hides information, adds navigation overhead
3. **Dashboard widgets**: Customizable widget layout
   - Future consideration: Too complex for MVP
## ADR-007: Activity Feed Prioritization Algorithm

### Status
Accepted

### Context
The activity feed shows recent events but is limited to 5 items (per ADR-006). With multiple activity sources (tasks, commits, sessions, status changes), we need a principled way to select which items appear.
Not all activities are equally important:
- Task completion = high value (deliverable achieved)
- Task blocked = high value (needs attention)
- Commit = medium value (work evidence)
- Session started = low value (just process)
### Decision
We will use weighted prioritization with recency as secondary sort:
```python
PRIORITY_WEIGHTS = {
    'task_completed': 100,  # Highest: deliverable achieved
    'task_blocked': 90,     # High: needs attention
    'commit_merged': 60,    # Medium-high: code shipped
    'commit_pushed': 50,    # Medium: work evidence
    'status_change': 40,    # Medium-low: workflow update
    'session_ended': 30,    # Low: just process
    'session_started': 10,  # Lowest: minimal value
}
```
Algorithm:
1. Collect activities from last 7 days
2. Assign priority score based on type
3. Sort by (priority DESC, timestamp DESC)
4. Take top 5
Additionally:
- Deduplicate: Only show latest event per task
- Cap commits: Max 2 commits in top 5
- Boost blockers: Always show if any exist (even if older)
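The steps above can be sketched as follows. The activity record shape (`type`, `timestamp`, optional `task_id`) is an assumption, and the stale-blocker boost (surfacing blockers older than 7 days) is omitted for brevity:

```python
from datetime import datetime, timedelta

PRIORITY_WEIGHTS = {
    'task_completed': 100, 'task_blocked': 90, 'commit_merged': 60,
    'commit_pushed': 50, 'status_change': 40, 'session_ended': 30,
    'session_started': 10,
}

def top_activities(activities, now=None, limit=5):
    """Select feed items: weighted priority, recency as secondary sort."""
    now = now or datetime.now()
    recent = [a for a in activities
              if a["timestamp"] >= now - timedelta(days=7)]
    # Deduplicate: keep only the latest event per task.
    recent.sort(key=lambda a: a["timestamp"], reverse=True)
    seen, deduped = set(), []
    for a in recent:
        task = a.get("task_id")
        if task is not None:
            if task in seen:
                continue
            seen.add(task)
        deduped.append(a)
    # Sort by (priority DESC, timestamp DESC).
    deduped.sort(key=lambda a: (PRIORITY_WEIGHTS.get(a["type"], 0),
                                a["timestamp"]), reverse=True)
    feed, commits = [], 0
    for a in deduped:
        if a["type"] in ("commit_merged", "commit_pushed"):
            if commits >= 2:  # cap commits at 2 of the top 5
                continue
            commits += 1
        feed.append(a)
        if len(feed) == limit:
            break
    return feed
```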
### Consequences
Positive:
- Important events reliably surface
- Balanced representation across activity types
- Blockers always visible (exception-based)
- Predictable, explainable algorithm
Negative:
- Lower-priority events may never appear
- Weight tuning may need iteration
- Edge cases (many completions at once)
Mitigation:
- "View all activity" link for full list
- Weights configurable per organization
- Monitoring for coverage metrics
### Alternatives Considered
1. **Pure recency**: Most recent 5 activities
   - Rejected: Low-value events dominate
2. **Per-type slots**: 2 tasks + 2 commits + 1 other
   - Rejected: Rigid; doesn't adapt to actual activity
3. **ML-based ranking**: Learn from user engagement
   - Future consideration: Insufficient data for MVP
## ADR-008: Multi-Project Portfolio Architecture

### Status
Accepted

### Context
Users work on multiple projects simultaneously and need a unified view across all of them. The dashboard must support:
- Overview of all projects at once
- Drill-down to single project
- Cross-project activity feed
- Cross-project blocker detection
### Decision
We will implement a portfolio model with project-level isolation and cross-project aggregation:
```
Portfolio (User/Org level)
├── Project A
│   ├── Git Repo(s)
│   ├── Project Plan
│   │   ├── Task Lists
│   │   └── Tasks
│   └── LLM Sessions
├── Project B
│   └── ...
└── Project N
    └── ...
```
Dashboard modes:
1. **Portfolio mode (default)**: All projects visible
   - Summary bar shows all projects
   - Activity feed aggregates across projects
   - Blockers shown from all projects
2. **Project mode (on selection)**: Single project focus
   - Kanban board for selected project
   - Activity filtered to project
   - Project-specific metrics
Data isolation:
- Tasks belong to exactly one project (via task_list → project_plan → project)
- Sessions belong to exactly one project
- Commits belong to repo → project (many repos per project possible)
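The ownership chain for tasks can be illustrated with a minimal lookup. The table and field names here are hypothetical, standing in for foreign keys in the real schema:

```python
# Hypothetical in-memory tables mirroring the chain
# task -> task_list -> project_plan -> project.
TASK_LISTS = {"list-1": {"project_plan_id": "plan-1"}}
PROJECT_PLANS = {"plan-1": {"project_id": "proj-A"}}

def project_of_task(task: dict) -> str:
    """Resolve the single project a task belongs to."""
    plan_id = TASK_LISTS[task["task_list_id"]]["project_plan_id"]
    return PROJECT_PLANS[plan_id]["project_id"]
```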
### Consequences
Positive:
- Clear organizational structure
- Supports both overview and detail needs
- Natural project-level access control
- Scalable to many projects
Negative:
- Cross-project tasks not supported
- Project switching adds interaction
- Portfolio view may be crowded with many projects
Mitigation:
- Project filtering/favorites for portfolio view
- Quick project switcher (keyboard shortcut)
- Consider "workspace" level above portfolio for enterprises
### Alternatives Considered
1. **Flat task list**: All tasks in one pool, tagged by project
   - Rejected: Harder to manage, loses project context
2. **Project silos**: Completely separate dashboards per project
   - Rejected: No unified view, high switching cost
3. **Hierarchical workspaces**: Org → Workspace → Project
   - Future consideration: Over-engineering for MVP
## Appendix: ADR Template
```markdown
## ADR-XXX: [Title]

### Status
[Proposed | Accepted | Deprecated | Superseded by ADR-YYY]

### Context
[Describe the issue or question being addressed. What problem are we solving?]

### Decision
[Describe the decision made and the reasoning behind it.]

### Consequences
**Positive:**
- [Benefit 1]
- [Benefit 2]

**Negative:**
- [Drawback 1]
- [Drawback 2]

**Mitigation:**
- [How we address the drawbacks]

### Alternatives Considered
1. **[Alternative 1]**: [Description]
   - Rejected: [Reason]
```
## Document History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-11-27 | Architecture Team | Initial 8 ADRs |