
Architecture Decision Records (ADRs)

Coditect Activity Dashboard

Document Version: 1.0
Date: November 27, 2025


Index of ADRs

| ID | Title | Status | Date |
|---|---|---|---|
| ADR-001 | Task-Centric Dashboard Architecture | Accepted | 2025-11-27 |
| ADR-002 | Checkbox as Source of Truth for Progress | Accepted | 2025-11-27 |
| ADR-003 | Exception-Based Display Pattern | Accepted | 2025-11-27 |
| ADR-004 | Session-to-Task Linking Strategy | Accepted | 2025-11-27 |
| ADR-005 | Real-Time Update Architecture | Accepted | 2025-11-27 |
| ADR-006 | Three-Level Information Hierarchy | Accepted | 2025-11-27 |
| ADR-007 | Activity Feed Prioritization Algorithm | Accepted | 2025-11-27 |
| ADR-008 | Multi-Project Portfolio Architecture | Accepted | 2025-11-27 |

ADR-001: Task-Centric Dashboard Architecture

Status

Accepted

Context

The initial Activity Dashboard implementation displayed metrics centered on LLM sessions and conversations: message counts, session durations, token usage, and QA scores. User research revealed that stakeholders regarded this information as "engine diagnostics" rather than actionable intelligence.

Stakeholder quote: "The gold isn't in how is it doing it... the real stuff is features, deliverables, targets reached."

Users need to understand project progress in terms of deliverables (tasks/features) rather than the machinery used to produce them (sessions/messages).

Decision

We will architect the dashboard around tasks as the primary entity, with sessions, commits, and other activities serving as supporting evidence linked to tasks.

The information hierarchy will be:

TASK (Primary)
├── Status (pending/in_progress/review/blocked/done)
├── Checkbox (checked/unchecked)
├── Linked Sessions (evidence)
└── Linked Commits (evidence)

All dashboard views will be organized by task status and progress, not by session or conversation metrics.

Consequences

Positive:

  • Dashboard directly answers "What has been accomplished?"
  • Users can track deliverables without understanding session mechanics
  • Progress metrics are meaningful and actionable
  • Aligns with existing mental models (Kanban, JIRA, etc.)

Negative:

  • Requires building task-linking infrastructure
  • Session/message metrics are hidden (may frustrate some users)
  • Additional complexity in data transformation layer

Risks:

  • Linking accuracy may impact trust in the system
  • Tasks without linked evidence may appear incomplete

Alternatives Considered

  1. Session-centric with task tags: Keep session focus, add task tags

    • Rejected: Still requires users to think in terms of sessions
  2. Dual view (sessions + tasks): Provide both perspectives

    • Rejected: Adds complexity, dilutes focus, violates "GPS not engine" principle
  3. Hybrid timeline: Chronological view mixing tasks and sessions

    • Rejected: Doesn't provide clear progress indication

ADR-002: Checkbox as Source of Truth for Progress

Status

Accepted

Context

Progress tracking can be derived from multiple signals:

  • Task status field (pending/in_progress/etc.)
  • Checkbox state (checked/unchecked)
  • Linked commit count
  • Session activity
  • Time-based estimates

We need a single, unambiguous source of truth for "task completion" that is:

  • Simple to understand
  • Easy to update
  • Binary (complete or not)
  • User-controlled

Decision

The checked boolean field on tasks will be the authoritative source for completion status.

Progress calculation: progress_pct = checked_tasks / total_tasks * 100

The status field provides additional workflow information but does not determine completion:

  • A task can be status='review' but checked=false (not yet complete)
  • A task with checked=true should have status='done' (enforced by application logic)
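A minimal sketch of the progress calculation and the checked-implies-done invariant described above (field and function names are illustrative, not the actual schema):

```python
def progress_pct(tasks: list) -> float:
    """Completion percentage derived solely from the `checked` field."""
    if not tasks:
        return 0.0
    checked = sum(1 for t in tasks if t["checked"])
    return checked / len(tasks) * 100


def enforce_checked_done(task: dict) -> dict:
    """Application-level invariant: checked=True implies status='done'."""
    if task["checked"] and task["status"] != "done":
        task["status"] = "done"
    return task
```

Note that the status field is never consulted when computing progress; it only carries workflow context.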

Consequences

Positive:

  • Unambiguous: task is either done (checked) or not done (unchecked)
  • User has explicit control over completion state
  • Familiar UX pattern (checkbox lists)
  • Simple progress calculation
  • Works offline (no need for complex calculations)

Negative:

  • Binary view may not capture partial progress
  • Status field becomes secondary, potentially confusing
  • Requires discipline to check tasks appropriately

Mitigation:

  • UI shows both checkbox and status for context
  • In-progress tasks can show sub-progress indicators separately
  • Validation ensures checked=true implies status='done'

Alternatives Considered

  1. Status-based completion: status='done' means complete

    • Rejected: Status is mutable for workflow; checkbox is definitive
  2. Points/effort-based: Calculate from story points completed

    • Rejected: Requires estimation, adds complexity
  3. Commit-based: Complete when PR merged

    • Rejected: Not all tasks have code; tight coupling to git

ADR-003: Exception-Based Display Pattern

Status

Accepted

Context

Dashboard displays often show all available metrics regardless of their informational value. This leads to:

  • Visual clutter
  • Important information buried in noise
  • User fatigue from scanning uniform data

Stakeholder quote: "If it's 40, 40, 40, 40, 40 every time, don't show it."

The principle of "assumptive baseline" suggests that uniform data provides no signal and should be hidden, with only deviations (exceptions) surfaced.

Decision

We will implement exception-based display where uniform/expected values are hidden and only anomalies are shown.

Implementation rules:

| Metric | Baseline | Show If |
|---|---|---|
| QA Score | 40 (passing) | score < 40 (failing) |
| Task Status | pending/in_progress/done | status = 'blocked' |
| Version | Latest | version differs from expected |
| Build Status | Success | status = 'failed' |

The "Attention Required" panel will exclusively show exceptions:

  • Blocked tasks
  • Failed builds
  • Low QA scores
  • Overdue tasks
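The filtering rules above can be sketched as a table of baseline predicates, where a record surfaces only if some monitored field deviates from its baseline (thresholds mirror the table; field names are illustrative):

```python
# Each predicate returns True when a value is an exception worth surfacing.
EXCEPTION_RULES = {
    "qa_score": lambda v: v < 40,             # failing QA only
    "task_status": lambda v: v == "blocked",  # blocked tasks only
    "build_status": lambda v: v == "failed",  # failed builds only
}


def attention_items(records: list) -> list:
    """Keep only records where at least one monitored field is an exception."""
    return [
        r for r in records
        if any(field in r and rule(r[field]) for field, rule in EXCEPTION_RULES.items())
    ]
```

A passing QA score of 40 or a task in 'in_progress' produces no output at all, which is the point: silence is the baseline.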

Consequences

Positive:

  • Dramatically reduced visual noise
  • Important issues immediately visible
  • Faster scanning and decision-making
  • Cleaner, more professional UI

Negative:

  • Users may wonder "where is my data?"
  • Need to define baselines for each metric
  • Edge cases where uniform data is actually informative

Mitigation:

  • Drill-down views show all data for those who want it
  • Tooltips explain why data is hidden
  • Configurable baselines per organization

Alternatives Considered

  1. Show everything, highlight exceptions: Display all, use color coding

    • Rejected: Still cluttered; highlighting loses impact
  2. Collapsible sections: Hide by default, expand to see

    • Rejected: Adds interaction overhead; violates "non-scrolling" goal
  3. Threshold indicators: Show sparklines/trends instead of values

    • Partially adopted: Trends shown for velocity, not raw counts

ADR-004: Session-to-Task Linking Strategy

Status

Accepted

Context

LLM sessions contain conversations that work toward completing tasks, but there is no explicit link between them. To provide task-centric views, we need to infer which tasks a session was working on.

Linking strategies considered:

  1. Explicit user tagging
  2. Keyword extraction and matching
  3. NLP-based semantic similarity
  4. File path overlap with commits

Decision

We will use a multi-strategy linking approach with confidence scores:

Link Strategies (in priority order):

1. Explicit references (confidence: 1.0)
   - Task ID mentioned in session: "working on TASK-123"
2. Title similarity (confidence: 0.5-0.9)
   - Session content matches task title (threshold: 0.7)
3. Keyword extraction (confidence: 0.3-0.8)
   - NLP keywords match task description
   - Minimum 3 keyword matches required
4. File overlap (confidence: 0.3-0.6)
   - Session mentions files also in task-linked commits

Links with confidence < 0.3 are discarded. Multiple strategies can combine to increase confidence (capped at 0.95 for inferred links).
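One way to implement the combination and capping rules is a noisy-OR over the per-strategy scores. The noisy-OR choice is an assumption for illustration; the ADR only requires that combining strategies increases confidence, that links below 0.3 are discarded, and that inferred links cap at 0.95:

```python
from typing import List, Optional


def combined_confidence(scores: List[float], explicit: bool = False) -> Optional[float]:
    """Combine per-strategy confidences; return None for links to discard."""
    if explicit:
        return 1.0  # explicit task-ID references are authoritative
    remaining_doubt = 1.0
    for s in scores:
        remaining_doubt *= (1.0 - s)  # noisy-OR: each strategy reduces doubt
    combined = 1.0 - remaining_doubt
    if combined < 0.3:
        return None                   # below threshold: discard the link
    return min(combined, 0.95)        # inferred links never reach certainty
```

For example, two independent 0.5-confidence signals combine to 0.75, while very strong combined evidence is still capped at 0.95.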

Consequences

Positive:

  • Automatic linking reduces user burden
  • Confidence scores allow filtering unreliable links
  • Multiple strategies improve coverage
  • Explicit references provide escape hatch for users

Negative:

  • Inference may produce false positives
  • NLP processing adds latency
  • Confidence calibration requires tuning
  • Users may not understand why links appear

Mitigation:

  • Show confidence indicators in drill-down views
  • Allow users to confirm/reject inferred links
  • Log linking decisions for debugging
  • Periodic accuracy audits

Alternatives Considered

  1. Explicit tagging only: Require users to tag sessions

    • Rejected: High friction, users won't do it consistently
  2. LLM-based linking: Use Claude to analyze and link

    • Rejected: Expensive, adds latency, recursive complexity
  3. No linking: Show sessions separately from tasks

    • Rejected: Defeats the task-centric architecture goal

ADR-005: Real-Time Update Architecture

Status

Accepted

Context

The dashboard displays time-sensitive information (task status, progress, blockers) that can change frequently. Users expect to see updates without manual refresh, especially in collaborative environments.

Options for real-time updates:

  1. Polling (client periodically fetches)
  2. Server-Sent Events (SSE)
  3. WebSockets
  4. GraphQL Subscriptions

Decision

We will use WebSockets for real-time dashboard updates with a pub/sub model.

Architecture:

┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Client    │◄────►│  WebSocket  │◄────►│  Event Bus  │
│             │      │   Manager   │      │  (Pub/Sub)  │
└─────────────┘      └─────────────┘      └─────────────┘
                                                 ▲
                                                 │
       ┌─────────────────────┬───────────────────┤
       │                     │                   │
┌──────┴──────┐          ┌───┴──┐       ┌────────┴────────┐
│Task Service │          │ Git  │       │ Session Service │
│             │          │ Hook │       │                 │
└─────────────┘          └──────┘       └─────────────────┘

Event types:

  • task.updated: Task status or checkbox changed
  • task.completed: Task checked off
  • task.blocked: Task became blocked
  • commit.new: New commit received
  • session.ended: Session completed
  • progress.changed: Project progress recalculated

Clients subscribe to specific project IDs to receive relevant updates.
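The project-scoped subscription model can be sketched as a minimal in-process pub/sub (an illustrative stand-in: in the real architecture this sits behind the WebSocket Manager, backed by Redis pub/sub for horizontal scaling):

```python
from collections import defaultdict


class EventBus:
    """Minimal sketch: clients subscribe to project IDs and receive
    only events published for those projects."""

    def __init__(self):
        # project_id -> list of handler callables
        self._subscribers = defaultdict(list)

    def subscribe(self, project_id, handler):
        self._subscribers[project_id].append(handler)

    def publish(self, project_id, event_type, payload):
        event = {"type": event_type, "project_id": project_id, **payload}
        for handler in self._subscribers[project_id]:
            handler(event)
```

A client subscribed to `proj-a` sees `task.completed` events for that project but nothing published for `proj-b`.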

Consequences

Positive:

  • Instant updates without polling overhead
  • Efficient bandwidth usage (push-only)
  • Supports collaborative workflows
  • Scalable with connection pooling

Negative:

  • More complex than polling
  • Requires connection management
  • Stateful connections harder to scale
  • Fallback needed for WebSocket-blocked networks

Mitigation:

  • Implement automatic reconnection with exponential backoff
  • Fall back to polling if WebSocket fails
  • Use Redis pub/sub for horizontal scaling
  • Heartbeat to detect stale connections
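The reconnection mitigation can be sketched as exponential backoff with full jitter (base delay, cap, and attempt count are illustrative defaults, not values specified in this ADR):

```python
import random


def backoff_delays(base: float = 1.0, cap: float = 60.0, attempts: int = 6):
    """Yield reconnection delays: exponential growth with full jitter, capped.
    Jitter spreads retries so many clients don't reconnect simultaneously."""
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        yield random.uniform(0, ceiling)
```

If all attempts are exhausted, the client would fall back to polling as described above.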

Alternatives Considered

  1. Polling every 30 seconds: Simple, stateless

    • Rejected: Delays updates, wastes bandwidth on no-change
  2. SSE (Server-Sent Events): Simpler than WebSockets

    • Rejected: One-directional; we need client-to-server for subscriptions
  3. GraphQL Subscriptions: Integrated with query layer

    • Rejected: Adds GraphQL dependency; overkill for our needs

ADR-006: Three-Level Information Hierarchy

Status

Accepted

Context

Users have different information needs at different times:

  • Quick status check: "Am I on track?"
  • Working session: "What should I work on next?"
  • Deep investigation: "Why did this task take so long?"

A single view cannot serve all these needs without becoming cluttered or requiring excessive scrolling.

Stakeholder quote: "We need actual information on a non-scrolling screen."

Decision

We will implement a three-level information hierarchy with progressive disclosure:

LEVEL 1: Primary View (Non-scrolling)
├── Portfolio summary (all projects)
├── Task Kanban (selected project)
├── 5 activity highlights
├── Blocked tasks (exceptions only)
└── Work distribution chart

LEVEL 2: Drill-Down (On-demand)
├── Full task list with details
├── Commit timeline
├── Session summaries
├── Progress burndown
└── All activity (paginated)

LEVEL 3: Deep Inspection (Hidden)
├── Session message history
├── Commit diffs
├── Task change history
├── Raw activity logs
└── Token/cost analytics

Level 1 must fit on one screen without scrolling (approximately 800px viewport height).

Consequences

Positive:

  • Primary view provides immediate answers
  • Detail available for those who need it
  • Reduced cognitive load for daily use
  • Cleaner visual design

Negative:

  • Important details may be hidden
  • More clicks to access deep information
  • Need to carefully curate Level 1 content
  • Responsive design challenges

Mitigation:

  • User research to validate Level 1 content selection
  • Keyboard shortcuts for quick drill-down
  • Breadcrumb navigation between levels
  • Mobile-specific layouts for Level 1

Alternatives Considered

  1. Single scrolling view: Everything on one page

    • Rejected: Violates "non-scrolling" requirement
  2. Tabbed interface: Tabs for different aspects

    • Rejected: Hides information, adds navigation overhead
  3. Dashboard widgets: Customizable widget layout

    • Future consideration: Too complex for MVP

ADR-007: Activity Feed Prioritization Algorithm

Status

Accepted

Context

The activity feed shows recent events but is limited to 5 items (per ADR-006). With multiple activity sources (tasks, commits, sessions, status changes), we need a principled way to select which items appear.

Not all activities are equally important:

  • Task completion = high value (deliverable achieved)
  • Task blocked = high value (needs attention)
  • Commit = medium value (work evidence)
  • Session started = low value (just process)

Decision

We will use weighted prioritization with recency as secondary sort:

PRIORITY_WEIGHTS = {
    'task_completed': 100,  # Highest: deliverable achieved
    'task_blocked': 90,     # High: needs attention
    'commit_merged': 60,    # Medium-high: code shipped
    'commit_pushed': 50,    # Medium: work evidence
    'status_change': 40,    # Medium-low: workflow update
    'session_ended': 30,    # Low: just process
    'session_started': 10,  # Lowest: minimal value
}

Algorithm:
1. Collect activities from last 7 days
2. Assign priority score based on type
3. Sort by (priority DESC, timestamp DESC)
4. Take top 5

Additionally:

  • Deduplicate: Only show latest event per task
  • Cap commits: Max 2 commits in top 5
  • Boost blockers: Always show if any exist (even if older)
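Putting the sort and the additional rules together, a sketch of the selection might look as follows. Where the rules interact (e.g. a blocked task that was deduplicated away), this sketch resolves them in one plausible way, not a mandated one:

```python
PRIORITY_WEIGHTS = {
    'task_completed': 100, 'task_blocked': 90, 'commit_merged': 60,
    'commit_pushed': 50, 'status_change': 40, 'session_ended': 30,
    'session_started': 10,
}


def top_activities(activities, limit=5, max_commits=2):
    """Each activity is a dict with 'type', 'timestamp', optional 'task_id'."""
    # Order by (priority DESC, recency DESC)
    scored = sorted(
        activities,
        key=lambda a: (PRIORITY_WEIGHTS.get(a["type"], 0), a["timestamp"]),
        reverse=True,
    )
    picked, seen_tasks, commits = [], set(), 0
    for a in scored:
        task_id = a.get("task_id")
        if task_id is not None and task_id in seen_tasks:
            continue                      # dedupe: one event per task
        if a["type"].startswith("commit"):
            if commits >= max_commits:
                continue                  # cap commit entries
            commits += 1
        if task_id is not None:
            seen_tasks.add(task_id)
        picked.append(a)
    result = picked[:limit]
    # Boost blockers: guarantee blocked-task events appear, even if older
    for b in (a for a in activities if a["type"] == "task_blocked"):
        if b not in result:
            for i in range(len(result) - 1, -1, -1):
                if result[i]["type"] != "task_blocked":
                    result[i] = b         # displace the lowest-priority item
                    break
    return result
```

With the weights above, a blocked task outranks everything except completions, so the explicit boost only matters in unusually busy windows.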

Consequences

Positive:

  • Important events reliably surface
  • Balanced representation across activity types
  • Blockers always visible (exception-based)
  • Predictable, explainable algorithm

Negative:

  • Lower-priority events may never appear
  • Weight tuning may need iteration
  • Edge cases (many completions at once)

Mitigation:

  • "View all activity" link for full list
  • Weights configurable per organization
  • Monitoring for coverage metrics

Alternatives Considered

  1. Pure recency: Most recent 5 activities

    • Rejected: Low-value events dominate
  2. Per-type slots: 2 tasks + 2 commits + 1 other

    • Rejected: Rigid; doesn't adapt to actual activity
  3. ML-based ranking: Learn from user engagement

    • Future consideration: Insufficient data for MVP

ADR-008: Multi-Project Portfolio Architecture

Status

Accepted

Context

Users work on multiple projects simultaneously and need a unified view across all of them. The dashboard must support:

  • Overview of all projects at once
  • Drill-down to single project
  • Cross-project activity feed
  • Cross-project blocker detection

Decision

We will implement a portfolio model with project-level isolation and cross-project aggregation:

Portfolio (User/Org level)
├── Project A
│   ├── Git Repo(s)
│   ├── Project Plan
│   │   └── Task Lists
│   │       └── Tasks
│   └── LLM Sessions
├── Project B
│   └── ...
└── Project N
    └── ...

Dashboard modes:

  1. Portfolio mode (default): All projects visible

    • Summary bar shows all projects
    • Activity feed aggregates across projects
    • Blockers shown from all projects
  2. Project mode (on selection): Single project focus

    • Kanban board for selected project
    • Activity filtered to project
    • Project-specific metrics

Data isolation:

  • Tasks belong to exactly one project (via task_list → project_plan → project)
  • Sessions belong to exactly one project
  • Commits belong to repo → project (many repos per project possible)
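The containment chain can be sketched with a minimal object model. Class and field names here are illustrative, not the actual schema; the point is that a task resolves to exactly one project by walking the chain:

```python
from dataclasses import dataclass


@dataclass
class ProjectPlan:
    project_id: str


@dataclass
class TaskList:
    plan: ProjectPlan


@dataclass
class Task:
    title: str
    task_list: TaskList


def project_of(task: Task) -> str:
    """Resolve a task's owning project via task_list -> plan -> project."""
    return task.task_list.plan.project_id
```

Because the chain is single-parented at every step, cross-project tasks are impossible by construction, which is exactly the negative consequence noted below.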

Consequences

Positive:

  • Clear organizational structure
  • Supports both overview and detail needs
  • Natural project-level access control
  • Scalable to many projects

Negative:

  • Cross-project tasks not supported
  • Project switching adds interaction
  • Portfolio view may be crowded with many projects

Mitigation:

  • Project filtering/favorites for portfolio view
  • Quick project switcher (keyboard shortcut)
  • Consider "workspace" level above portfolio for enterprises

Alternatives Considered

  1. Flat task list: All tasks in one pool, tagged by project

    • Rejected: Harder to manage, loses project context
  2. Project silos: Completely separate dashboards per project

    • Rejected: No unified view, high switching cost
  3. Hierarchical workspaces: Org → Workspace → Project

    • Future consideration: Over-engineering for MVP

Appendix: ADR Template

## ADR-XXX: [Title]

### Status
[Proposed | Accepted | Deprecated | Superseded by ADR-YYY]

### Context
[Describe the issue or question being addressed. What problem are we solving?]

### Decision
[Describe the decision made and the reasoning behind it.]

### Consequences
**Positive:**
- [Benefit 1]
- [Benefit 2]

**Negative:**
- [Drawback 1]
- [Drawback 2]

**Mitigation:**
- [How we address the drawbacks]

### Alternatives Considered
1. **[Alternative 1]**: [Description]
- Rejected: [Reason]

Document History

| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-11-27 | Architecture Team | Initial 8 ADRs |