Executive Briefing Schema Convention Validation

Task: J.19.4.1 | Date: 2026-02-17 | ADR: ADR-209

1. Summary

The init-database.sql schema (v3.0.0, 22 tables + 5 views + 14 indexes) is structurally sound and aligns with ADR-118 (4-tier DB) and ADR-119 (multi-tenant). The primary integration concern is a naming convention mismatch between SQL output (snake_case) and WPP JSON consumption (camelCase). This is a known, bounded problem that the data adapter (J.19.4.4) will resolve.

Verdict: Schema APPROVED with 4 action items (J.19.4.2-J.19.4.5).

2. Convention Analysis

2.1 SQL Schema Convention (init-database.sql)

All columns use snake_case consistently:

| Table/View | Example Columns |
|---|---|
| projects | project_uuid, project_type, created_at, tenant_id, sync_status |
| tasks | track_id, sprint_id, effort_hours, agent_hours, agentic_model |
| v_track_progress | done_tasks, total_tasks, completion_pct, total_traditional_hours, total_agent_hours |
| v_velocity_metrics | completed_tasks, velocity_ratio, traditional_cost_usd, agentic_cost_usd |
| v_sprint_summary | sprint_number, done_tasks, total_effort_hours, tracks_involved |
| v_decision_status | total_decisions, decided_count, high_pending, medium_pending |
| v_risk_summary | total_risks, open_risks, mitigated_risks, max_risk_score, avg_risk_score |

2.2 WPP JSON Convention (generate-project-dashboard-data.js)

All JSON keys use camelCase consistently:

| JSON Path | Example Keys |
|---|---|
| summary.* | overallPercent, doneTasks, totalTasks, totalSections, sectionStatusCounts, statusCounts, agentCounts |
| summary.velocity | { tasksPerDay, trend, remainingTasks, estimatedDays } |
| tracks[].progress | { percent, done, total } |
| activity[] | { date, taskIds, author, summary, tracks, sessionLog, filesModified } |
| master.* | { trackOverview, crossDependencies, sprints } |

2.3 JSX Consumer Convention (executive-landing.jsx)

JSX accesses camelCase keys from project-dashboard-data.json:

const completePct = s.overallProgress || 0;    // camelCase
const velocityRaw = s.velocity; // camelCase
const sprint = s.currentSprint || "---"; // camelCase
const trackCount = s.totalTracks || tracks.length; // camelCase

2.4 distribute_dashboard_json.py Normalization

The distribution script already bridges some naming mismatches:

# Maps aliases to canonical camelCase keys
s["overallPercent"] = s["overallProgress"] # camelCase -> camelCase
s["doneTasks"] = s["completedTasks"] # camelCase -> camelCase

Note: This normalizer handles camelCase-to-camelCase variations only, not snake_case-to-camelCase.
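To make the alias step concrete, here is a minimal sketch of camelCase-to-camelCase normalization. The names ALIASES and normalize_summary are illustrative; only the key pairs themselves come from the script excerpt above.

```python
# Illustrative alias map; keys taken from the distribute_dashboard_json.py
# excerpt, function structure assumed.
ALIASES = {
    "overallProgress": "overallPercent",
    "completedTasks": "doneTasks",
}

def normalize_summary(summary: dict) -> dict:
    out = dict(summary)
    for alias, canonical in ALIASES.items():
        if alias in out and canonical not in out:
            out[canonical] = out[alias]  # copy; keep the alias for old readers
    return out

s = normalize_summary({"overallProgress": 62, "completedTasks": 31})
# s carries both the alias and the canonical camelCase key
```

Note that this only copies between camelCase spellings; a snake_case input key such as done_tasks would pass through untouched, which is exactly the gap Section 3 addresses.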

3. Gap: snake_case SQL -> camelCase JSON

The data adapter (J.19.4.4) must bridge this gap. The mapping is deterministic:

| SQL Column (snake_case) | JSON Key (camelCase) | Source |
|---|---|---|
| done_tasks | doneTasks | v_track_progress, v_sprint_summary |
| total_tasks | totalTasks | v_track_progress, v_sprint_summary |
| completion_pct | completionPct | v_track_progress |
| total_traditional_hours | totalTraditionalHours | v_track_progress, v_velocity_metrics |
| total_agent_hours | totalAgentHours | v_track_progress, v_velocity_metrics |
| velocity_ratio | velocityRatio | v_velocity_metrics |
| traditional_cost_usd | traditionalCostUsd | v_velocity_metrics |
| agentic_cost_usd | agenticCostUsd | v_velocity_metrics |
| completed_tasks | completedTasks | v_velocity_metrics |
| total_decisions | totalDecisions | v_decision_status |
| decided_count | decidedCount | v_decision_status |
| high_pending | highPending | v_decision_status |
| medium_pending | mediumPending | v_decision_status |
| low_pending | lowPending | v_decision_status |
| total_risks | totalRisks | v_risk_summary |
| open_risks | openRisks | v_risk_summary |
| mitigated_risks | mitigatedRisks | v_risk_summary |
| max_risk_score | maxRiskScore | v_risk_summary |
| avg_risk_score | avgRiskScore | v_risk_summary |
| sprint_number | sprintNumber | v_sprint_summary |
| total_effort_hours | totalEffortHours | v_sprint_summary |
| total_agent_hours | totalAgentHours | v_sprint_summary |
| tracks_involved | tracksInvolved | v_sprint_summary |
| owner_role | ownerRole | v_track_progress |
| first_sprint | firstSprint | v_track_progress |
| last_sprint | lastSprint | v_track_progress |
| project_id | projectId | all views |

Adapter implementation: A single snake_to_camel() function applied to all dict keys from SQLite query results. Python implementation:

def snake_to_camel(name: str) -> str:
    """Convert a snake_case column name to camelCase."""
    components = name.split('_')
    return components[0] + ''.join(x.title() for x in components[1:])

def transform_row(row: dict) -> dict:
    """Rename every key of a query-result dict to camelCase."""
    return {snake_to_camel(k): v for k, v in row.items()}
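As a quick sanity check, the transform can be exercised end-to-end against sqlite3.Row results; v_demo below is an illustrative stand-in for the real views (v_track_progress and friends), not a table in the schema.

```python
import sqlite3

def snake_to_camel(name: str) -> str:
    components = name.split('_')
    return components[0] + ''.join(x.title() for x in components[1:])

# v_demo is a stand-in; the real queries target the v_* views.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows become name-addressable mappings
conn.execute("CREATE TABLE v_demo (done_tasks INTEGER, total_tasks INTEGER)")
conn.execute("INSERT INTO v_demo VALUES (7, 10)")

row = conn.execute("SELECT * FROM v_demo").fetchone()
camel = {snake_to_camel(k): v for k, v in dict(row).items()}
# camel == {"doneTasks": 7, "totalTasks": 10}
```

Setting row_factory = sqlite3.Row is what makes dict(row) possible; with the default tuple rows there are no key names to transform.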

4. Schema Quality Assessment

4.1 Structural Integrity: PASS

  • All 22 tables define a PRIMARY KEY
  • Foreign keys properly reference parent tables
  • CHECK constraints on status/type columns (4 tables)
  • NOT NULL on required fields
  • DEFAULT values on timestamps and status fields

4.2 ADR-119 Multi-Tenant Compliance: PASS

All core tables include the required columns:

  • tenant_id TEXT - Tenant isolation
  • user_id TEXT / team_id TEXT - User/team scoping (where applicable)
  • cloud_id TEXT - Cloud-assigned UUID
  • synced_at TEXT - Last sync timestamp
  • sync_status TEXT DEFAULT 'pending' - Sync state
  • scope TEXT DEFAULT 'global' CHECK(scope IN ('global','project','customer')) - Visibility scope

Cloud sync indexes present: idx_*_tenant, idx_*_sync (6 indexes)
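The compliance check above can be automated. A minimal sketch, assuming sqlite3's PRAGMA table_info and checking only the four columns that are universal across core tables (scope, user_id, and team_id vary by table); the projects DDL below is a stand-in for init-database.sql:

```python
import sqlite3

# Columns every core table must carry per ADR-119.
REQUIRED = {"tenant_id", "cloud_id", "synced_at", "sync_status"}

def missing_tenant_columns(conn: sqlite3.Connection, table: str) -> set:
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
    cols = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    return REQUIRED - cols

# Minimal stand-in table; the real DDL lives in init-database.sql.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE projects (
    project_uuid TEXT PRIMARY KEY,
    tenant_id TEXT, cloud_id TEXT,
    synced_at TEXT, sync_status TEXT DEFAULT 'pending')""")

missing = missing_tenant_columns(conn, "projects")  # empty set when compliant
```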

4.3 ADR-118 Database Tier Alignment: PASS

The briefing database maps to Tier 2 (org-level, project-scoped):

  • Self-contained per project (all tables have project_id FK)
  • Portable — can be bundled with project artifacts
  • Cache-friendly — narrative_cache with SHA-256 hash keying
  • Snapshot-capable — briefing_snapshots for temporal comparisons

4.4 WPP Integration Readiness: PASS with notes

| Criterion | Status | Notes |
|---|---|---|
| JSON output compatible | PASS | After snake_to_camel adapter |
| Separate from dashboard JSON | PASS | briefing-data.json distinct from project-dashboard-data.json |
| Zero key overlap | PASS | BriefingValidator enforces BRIEFING_KEYS vs DASHBOARD_KEYS |
| scaffold.sh compatible | NEEDS WORK | J.19.4.3 — add briefing.db init to scaffold |
| generate-briefing.py location | NEEDS WORK | J.19.4.2 — move to WPP scripts/ |
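The zero-key-overlap criterion reduces to an empty set intersection. A hedged sketch: the key values below are hypothetical placeholders, since the authoritative lists are the BRIEFING_KEYS / DASHBOARD_KEYS constants enforced by BriefingValidator.

```python
# Hypothetical top-level key sets for illustration only; the real lists
# live in BriefingValidator.
BRIEFING_KEYS = {"narratives", "decisions", "risks", "unitEconomics"}
DASHBOARD_KEYS = {"summary", "tracks", "activity", "master"}

overlap = BRIEFING_KEYS & DASHBOARD_KEYS
# An empty intersection guarantees briefing-data.json and
# project-dashboard-data.json can never clobber each other's keys.
```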

4.5 Performance Considerations

  • WAL mode enabled (concurrent reads during writes)
  • 14 indexes cover primary query patterns
  • narrative_cache UNIQUE constraint enables upsert pattern
  • Views are pre-computed aggregations (no runtime overhead beyond query)
  • No full-text search tables (not needed for structured metric queries)
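For reference, WAL mode is enabled (and verified) with a single pragma; note it only takes effect on file-backed databases, which is why the sketch below writes to a temporary file rather than :memory:.

```python
import os
import sqlite3
import tempfile

# WAL requires a file-backed database; an in-memory DB reports "memory".
path = os.path.join(tempfile.mkdtemp(), "briefing.db")
conn = sqlite3.connect(path)

mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
# mode is "wal": readers are no longer blocked while a writer is active
```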

5. Action Items

| Task | Action | Priority |
|---|---|---|
| J.19.4.2 | Move generate-briefing.py to scripts/generate_briefing_data.py + validate_dedup.py | High |
| J.19.4.3 | Add --dashboards flag to scaffold.sh, init briefing.db from init-briefing-db.sql | High |
| J.19.4.4 | Implement snake_to_camel() + transform_row() in BriefingDB.query() / query_one() | High |
| J.19.4.5 | Verify narrative_cache SHA-256 keying works with camelCase output keys | Medium |

6. Schema Diagram (Key Relationships)

projects ──< sessions ──< session_events
   ├──< tracks ──< tasks (FK: track_id, sprint_id)
   ├──< sprints
   ├──< decisions ──┬──< decision_options
   │                └──< decision_dependencies
   ├──< risks
   ├──< cost_inputs
   ├──< competitors
   ├──< market_data
   ├──< unit_economics
   ├──< gtm_readiness
   ├──< revenue_milestones
   ├──< architecture_axes
   ├──< methodology_sections
   ├──< references_table
   ├──< narrative_cache
   └──< briefing_snapshots

All tables radiate from projects as the root entity. This is correct for the project-scoped briefing use case.


7. J.19.4.5 Verification: narrative_cache SHA-256 with camelCase

Finding: The adapter is transparent to narrative caching. Analysis:

  1. _data_hash(data) hashes the input data dict passed to NarrativeGenerator.generate(). Since BriefingDB.query() now returns camelCase keys, the hash input changes from snake_case to camelCase.
  2. Cache miss on first run after adapter integration — expected and correct. Old snake_case-hashed entries won't match new camelCase-hashed lookups. This forces narrative regeneration, which is desired since the surrounding JSON context also changed to camelCase.
  3. No collision risk: snake_to_camel() is injective on the schema's snake_case column names, and json.dumps(sort_keys=True) ensures deterministic ordering regardless of key format.
  4. get_cached_narrative() returns row["narrative"] — the column name narrative has no underscore, so snake_to_camel("narrative") = "narrative". No breakage.
  5. cache_narrative() uses self.conn.execute() directly (INSERT), not query(), so writes are unaffected by the adapter.
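Points 1-3 above can be demonstrated with a deterministic hash in the spirit of _data_hash; this is a sketch under the assumption that the real implementation hashes json.dumps(data, sort_keys=True), which is not reproduced here.

```python
import hashlib
import json

# Sketch of a deterministic data hash; the actual NarrativeGenerator
# _data_hash implementation is assumed, not copied.
def data_hash(data: dict) -> str:
    payload = json.dumps(data, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

before = data_hash({"done_tasks": 7, "total_tasks": 10})  # pre-adapter keys
after = data_hash({"doneTasks": 7, "totalTasks": 10})     # post-adapter keys
# before != after, so the first post-adapter run misses old cache rows;
# identical camelCase inputs hash identically, so the cache then stabilizes.
```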

Verdict: PASS — no code changes needed. One-time cache rebuild on first run is the only side effect.


Conclusion: The schema is production-ready for WPP integration. The snake_case/camelCase gap is the single transformation needed, and it's a mechanical, lossless operation handled entirely in the data adapter layer.