Executive Briefing Schema Convention Validation
Task: J.19.4.1 | Date: 2026-02-17 | ADR: ADR-209
1. Summary
The init-database.sql schema (v3.0.0, 22 tables + 5 views + 14 indexes) is structurally sound and aligns with ADR-118 (4-tier DB) and ADR-119 (multi-tenant). The primary integration concern is a naming convention mismatch between SQL output (snake_case) and WPP JSON consumption (camelCase). This is a known, bounded problem that the data adapter (J.19.4.4) will resolve.
Verdict: Schema APPROVED with 4 action items spanning J.19.4.2-J.19.4.5.
2. Convention Analysis
2.1 SQL Schema Convention (init-database.sql)
All columns use snake_case consistently:
| Table/View | Example Columns |
|---|---|
| projects | project_uuid, project_type, created_at, tenant_id, sync_status |
| tasks | track_id, sprint_id, effort_hours, agent_hours, agentic_model |
| v_track_progress | done_tasks, total_tasks, completion_pct, total_traditional_hours, total_agent_hours |
| v_velocity_metrics | completed_tasks, velocity_ratio, traditional_cost_usd, agentic_cost_usd |
| v_sprint_summary | sprint_number, done_tasks, total_effort_hours, tracks_involved |
| v_decision_status | total_decisions, decided_count, high_pending, medium_pending |
| v_risk_summary | total_risks, open_risks, mitigated_risks, max_risk_score, avg_risk_score |
2.2 WPP JSON Convention (generate-project-dashboard-data.js)
All JSON keys use camelCase consistently:
| JSON Path | Example Keys |
|---|---|
| summary.* | overallPercent, doneTasks, totalTasks, totalSections, sectionStatusCounts, statusCounts, agentCounts |
| summary.velocity | { tasksPerDay, trend, remainingTasks, estimatedDays } |
| tracks[].progress | { percent, done, total } |
| activity[] | { date, taskIds, author, summary, tracks, sessionLog, filesModified } |
| master.* | { trackOverview, crossDependencies, sprints } |
2.3 JSX Consumer Convention (executive-landing.jsx)
JSX accesses camelCase keys from project-dashboard-data.json:
```js
const completePct = s.overallProgress || 0;        // camelCase
const velocityRaw = s.velocity;                    // camelCase
const sprint = s.currentSprint || "---";           // camelCase
const trackCount = s.totalTracks || tracks.length; // camelCase
```
2.4 distribute_dashboard_json.py Normalization
The distribution script already bridges some naming mismatches:
```python
# Maps aliases to canonical camelCase keys
s["overallPercent"] = s["overallProgress"]  # camelCase -> camelCase
s["doneTasks"] = s["completedTasks"]        # camelCase -> camelCase
```
Note: This normalizer handles camelCase-to-camelCase variations only, not snake_case-to-camelCase.
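The aliasing behavior can be sketched as a small function. The key names follow the excerpt above; the function itself and its structure are illustrative, not the actual distribute_dashboard_json.py code:

```python
def normalize_summary(s: dict) -> dict:
    """Map known camelCase aliases onto canonical camelCase keys.

    Mirrors the behavior quoted above: only camelCase variants are
    renamed; snake_case keys pass through untouched, which is why a
    separate snake_to_camel adapter is still required.
    """
    aliases = {
        "overallProgress": "overallPercent",
        "completedTasks": "doneTasks",
    }
    out = dict(s)
    for src, dst in aliases.items():
        if src in out and dst not in out:
            out[dst] = out[src]
    return out
```

Note that a snake_case key such as done_tasks is not in the alias map, so it survives normalization unchanged.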
3. Gap: snake_case SQL -> camelCase JSON
The data adapter (J.19.4.4) must bridge this gap. The mapping is deterministic:
| SQL Column (snake_case) | JSON Key (camelCase) | Source |
|---|---|---|
| done_tasks | doneTasks | v_track_progress, v_sprint_summary |
| total_tasks | totalTasks | v_track_progress, v_sprint_summary |
| completion_pct | completionPct | v_track_progress |
| total_traditional_hours | totalTraditionalHours | v_track_progress, v_velocity_metrics |
| total_agent_hours | totalAgentHours | v_track_progress, v_velocity_metrics |
| velocity_ratio | velocityRatio | v_velocity_metrics |
| traditional_cost_usd | traditionalCostUsd | v_velocity_metrics |
| agentic_cost_usd | agenticCostUsd | v_velocity_metrics |
| completed_tasks | completedTasks | v_velocity_metrics |
| total_decisions | totalDecisions | v_decision_status |
| decided_count | decidedCount | v_decision_status |
| high_pending | highPending | v_decision_status |
| medium_pending | mediumPending | v_decision_status |
| low_pending | lowPending | v_decision_status |
| total_risks | totalRisks | v_risk_summary |
| open_risks | openRisks | v_risk_summary |
| mitigated_risks | mitigatedRisks | v_risk_summary |
| max_risk_score | maxRiskScore | v_risk_summary |
| avg_risk_score | avgRiskScore | v_risk_summary |
| sprint_number | sprintNumber | v_sprint_summary |
| total_effort_hours | totalEffortHours | v_sprint_summary |
| total_agent_hours | totalAgentHours | v_sprint_summary |
| tracks_involved | tracksInvolved | v_sprint_summary |
| owner_role | ownerRole | v_track_progress |
| first_sprint | firstSprint | v_track_progress |
| last_sprint | lastSprint | v_track_progress |
| project_id | projectId | all views |
Adapter implementation: a single snake_to_camel() function applied to all dict keys from SQLite query results. Python implementation:

```python
def snake_to_camel(name: str) -> str:
    """Convert a snake_case column name to camelCase."""
    components = name.split('_')
    return components[0] + ''.join(x.title() for x in components[1:])

def transform_row(row: dict) -> dict:
    """Rename every key of a query-result row to camelCase."""
    return {snake_to_camel(k): v for k, v in row.items()}
```
4. Schema Quality Assessment
4.1 Structural Integrity: PASS
- All 22 tables have PRIMARY KEY
- Foreign keys properly reference parent tables
- CHECK constraints on status/type columns (4 tables)
- NOT NULL on required fields
- DEFAULT values on timestamps and status fields
4.2 ADR-119 Multi-Tenant Compliance: PASS
All core tables include the required columns:
- tenant_id TEXT: Tenant isolation
- user_id TEXT / team_id TEXT: User/team scoping (where applicable)
- cloud_id TEXT: Cloud-assigned UUID
- synced_at TEXT: Last sync timestamp
- sync_status TEXT DEFAULT 'pending': Sync state
- scope TEXT DEFAULT 'global' CHECK(scope IN ('global','project','customer')): Visibility scope
Cloud sync indexes present: idx_*_tenant, idx_*_sync (6 indexes)
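A reduced sketch of the idx_*_tenant / idx_*_sync pattern; the one-table DDL here is a stand-in for the real init-database.sql:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE projects (
        project_uuid TEXT PRIMARY KEY,
        tenant_id    TEXT,
        sync_status  TEXT DEFAULT 'pending'
    )
""")
# Index naming follows the idx_<table>_<purpose> convention from the schema
conn.execute("CREATE INDEX idx_projects_tenant ON projects(tenant_id)")
conn.execute("CREATE INDEX idx_projects_sync ON projects(sync_status)")

names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type = 'index' AND name LIKE 'idx_%' ORDER BY name")]
# names == ['idx_projects_sync', 'idx_projects_tenant']
```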
4.3 ADR-118 Database Tier Alignment: PASS
The briefing database maps to Tier 2 (org-level, project-scoped):
- Self-contained per project: all tables have a project_id FK
- Portable: can be bundled with project artifacts
- Cache-friendly: narrative_cache with SHA-256 hash keying
- Snapshot-capable: briefing_snapshots for temporal comparisons
4.4 WPP Integration Readiness: PASS with notes
| Criterion | Status | Notes |
|---|---|---|
| JSON output compatible | PASS | After snake_to_camel adapter |
| Separate from dashboard JSON | PASS | briefing-data.json distinct from project-dashboard-data.json |
| Zero key overlap | PASS | BriefingValidator enforces BRIEFING_KEYS vs DASHBOARD_KEYS |
| scaffold.sh compatible | NEEDS WORK | J.19.4.3 — add briefing.db init to scaffold |
| generate-briefing.py location | NEEDS WORK | J.19.4.2 — move to WPP scripts/ |
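The zero-key-overlap criterion reduces to a set-disjointness check. The key sets below are illustrative stand-ins; the real BRIEFING_KEYS and DASHBOARD_KEYS live in BriefingValidator:

```python
# Illustrative stand-ins for the real constants in BriefingValidator.
BRIEFING_KEYS = {"totalDecisions", "openRisks", "velocityRatio"}
DASHBOARD_KEYS = {"overallPercent", "doneTasks", "totalTasks"}

def validate_no_overlap(briefing: set, dashboard: set) -> None:
    """Raise if any key appears in both briefing and dashboard JSON."""
    overlap = briefing & dashboard
    if overlap:
        raise ValueError(f"briefing/dashboard key overlap: {sorted(overlap)}")

validate_no_overlap(BRIEFING_KEYS, DASHBOARD_KEYS)  # disjoint: no exception
```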
4.5 Performance Considerations
- WAL mode enabled (concurrent reads during writes)
- 14 indexes cover primary query patterns
- narrative_cache UNIQUE constraint enables upsert pattern
- Views are pre-computed aggregations (no runtime overhead beyond query)
- No full-text search tables (not needed for structured metric queries)
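The WAL and upsert properties are easy to demonstrate with sqlite3. The narrative_cache columns below are a guessed minimal shape, not the real DDL:

```python
import os
import sqlite3
import tempfile

# WAL requires a file-backed database (an in-memory DB reports 'memory').
path = os.path.join(tempfile.mkdtemp(), "briefing.db")
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # -> 'wal'

conn.execute("""
    CREATE TABLE narrative_cache (
        cache_key TEXT UNIQUE,   -- SHA-256 of the input data
        narrative TEXT
    )
""")
upsert = """
    INSERT INTO narrative_cache (cache_key, narrative) VALUES (?, ?)
    ON CONFLICT(cache_key) DO UPDATE SET narrative = excluded.narrative
"""
conn.execute(upsert, ("abc123", "first draft"))
conn.execute(upsert, ("abc123", "regenerated"))  # replaces, does not duplicate
rows = conn.execute("SELECT narrative FROM narrative_cache").fetchall()
# rows == [("regenerated",)]
```

The ON CONFLICT ... DO UPDATE form needs SQLite 3.24 or later.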
5. Action Items
| Task | Action | Priority |
|---|---|---|
| J.19.4.2 | Move generate-briefing.py to scripts/generate_briefing_data.py + validate_dedup.py | High |
| J.19.4.3 | Add --dashboards flag to scaffold.sh, init briefing.db from init-briefing-db.sql | High |
| J.19.4.4 | Implement snake_to_camel() + transform_row() in BriefingDB.query() / query_one() | High |
| J.19.4.5 | Verify narrative_cache SHA-256 keying works with camelCase output keys | Medium |
6. Schema Diagram (Key Relationships)
projects ──< sessions ──< session_events
│
├──< tracks ──< tasks (FK: track_id, sprint_id)
│
├──< sprints
│
├──< decisions ──< decision_options
│ │
│ └──< decision_dependencies
│
├──< risks
│
├──< cost_inputs
│
├──< competitors
│
├──< market_data
│
├──< unit_economics
│
├──< gtm_readiness
│
├──< revenue_milestones
│
├──< architecture_axes
│
├──< methodology_sections
│
├──< references_table
│
├──< narrative_cache
│
└──< briefing_snapshots
All tables radiate from projects as the root entity. This is correct for the project-scoped briefing use case.
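With foreign keys enabled, SQLite enforces this radial structure at insert time. The two-table DDL below is a reduced sketch of one spoke:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default
conn.execute("CREATE TABLE projects (project_id TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE risks (
        risk_id    TEXT PRIMARY KEY,
        project_id TEXT NOT NULL REFERENCES projects(project_id)
    )
""")
conn.execute("INSERT INTO projects VALUES ('p1')")
conn.execute("INSERT INTO risks VALUES ('r1', 'p1')")      # OK: parent exists
try:
    conn.execute("INSERT INTO risks VALUES ('r2', 'zz')")  # no such project
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True
# orphan_rejected is True: the root entity constraint holds
```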
7. J.19.4.5 Verification: narrative_cache SHA-256 with camelCase
Finding: The adapter is transparent to narrative caching. Analysis:
_data_hash(data)hashes the input data dict passed toNarrativeGenerator.generate(). SinceBriefingDB.query()now returns camelCase keys, the hash input changes from snake_case to camelCase.- Cache miss on first run after adapter integration — expected and correct. Old snake_case-hashed entries won't match new camelCase-hashed lookups. This forces narrative regeneration, which is desired since the surrounding JSON context also changed to camelCase.
- No collision risk —
snake_to_camel()is a bijective mapping;json.dumps(sort_keys=True)ensures deterministic ordering regardless of key format. get_cached_narrative()returnsrow["narrative"]— the column namenarrativehas no underscore, sosnake_to_camel("narrative")="narrative". No breakage.cache_narrative()usesself.conn.execute()directly (INSERT), notquery(), so writes are unaffected by the adapter.
Verdict: PASS — no code changes needed. One-time cache rebuild on first run is the only side effect.
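The one-time cache-miss behavior can be sketched with a stand-in for _data_hash (the real implementation may differ in detail, but the source confirms SHA-256 over json.dumps with sort_keys):

```python
import hashlib
import json

def data_hash(data: dict) -> str:
    # sort_keys makes the hash deterministic regardless of insertion order
    payload = json.dumps(data, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

snake = {"done_tasks": 7, "total_tasks": 10}
camel = {"doneTasks": 7, "totalTasks": 10}

# Same values, different key spellings: hashes differ, so the first
# post-adapter run misses every old snake_case cache entry.
miss = data_hash(snake) != data_hash(camel)

# Reordering a dict with identical camelCase keys hashes identically,
# so subsequent runs hit the rebuilt cache.
stable = data_hash({"totalTasks": 10, "doneTasks": 7}) == data_hash(camel)
```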
Conclusion: The schema is production-ready for WPP integration. The snake_case/camelCase gap is the single transformation needed, and it's a mechanical, lossless operation handled entirely in the data adapter layer.