# /quality-dashboard - DORA Metrics and Quality KPIs
Generate a comprehensive quality-metrics report covering DORA metrics (deployment frequency, lead time for changes, mean time to restore, change failure rate), code quality KPIs, defect density, and a quality scorecard with trend indicators.
## System Prompt
EXECUTION DIRECTIVE: When the user invokes this command, you MUST:
- Execute immediately; do not ask clarifying questions first
- Load the skill at skills/quality-metrics-dashboard/SKILL.md
- Query git history for deployment and change data
- Calculate DORA metrics for the specified period
- Extract code quality metrics from test results and linters
- Compute defect density from the issue tracker
- Generate a quality scorecard with a letter grade
- Show trends if the --trend flag is provided
- Export in the requested format (terminal/csv/json/html)
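The DORA calculations over deployment history can be sketched roughly as follows. This is a minimal illustration, not the skill's actual implementation: the record fields (`deployed_at`, `committed_at`, `failed`) are hypothetical stand-ins for whatever the skill extracts from git and deployment data, and MTTR is omitted because it requires incident timestamps.

```python
def dora_metrics(deployments, period_days=30):
    """Compute deploy frequency, mean lead time, and change failure rate
    from a list of deployment records covering the given period.

    Each record is assumed to carry 'deployed_at' and 'committed_at'
    datetimes plus a boolean 'failed' flag (hypothetical field names).
    """
    weeks = period_days / 7
    # Lead time: hours from oldest commit in the deploy to the deploy itself
    lead_times = [
        (d["deployed_at"] - d["committed_at"]).total_seconds() / 3600
        for d in deployments
    ]
    failures = sum(1 for d in deployments if d["failed"])
    return {
        "deploy_frequency_per_week": round(len(deployments) / weeks, 1),
        "lead_time_hours": round(sum(lead_times) / len(lead_times), 1),
        "change_failure_rate_pct": round(failures / len(deployments) * 100, 1),
    }
```

The failure flag would typically come from rollback commits or incident links; how that signal is derived is the harder part in practice.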
## Usage

```
/quality-dashboard
/quality-dashboard --period 30d
/quality-dashboard --team backend --dora-only
/quality-dashboard --export html --trend
/quality-dashboard --compare 30d,90d
```
## Options

| Option | Description |
|---|---|
| `--period <duration>` | Time period for metrics: `7d`, `30d`, `90d` (default: `30d`) |
| `--team <name>` | Filter metrics by team or component |
| `--dora-only` | Show only DORA metrics (deploy frequency, lead time, MTTR, CFR) |
| `--export <format>` | Export format: `csv`, `json`, `html` (default: terminal) |
| `--trend` | Show metric trends over time with sparklines |
| `--compare <periods>` | Compare multiple periods (e.g., `30d,90d`) |
| `--include-targets` | Show metric targets and variance |
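Parsing the duration options above could look like this minimal sketch. It accepts only the documented `Nd` form; supporting hours or weeks would be an extension beyond what the spec lists.

```python
def parse_period(spec: str) -> int:
    """Parse a --period value such as '7d', '30d', or '90d' into a
    number of days. Rejects anything outside the documented Nd form."""
    if not spec.endswith("d") or not spec[:-1].isdigit():
        raise ValueError(f"unsupported period: {spec!r} (expected e.g. '30d')")
    return int(spec[:-1])
```

A `--compare 30d,90d` value then splits naturally: `[parse_period(p) for p in spec.split(",")]`.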
## Related Commands

| Command | Relationship |
|---|---|
| `/release-gate` | Quality gate decisions based on dashboard metrics |
| `/regression-check` | Test metrics feed into the quality dashboard |
| `/triage` | Defect metrics sourced from triage data |
## Success Output

```
Quality Dashboard - Last 30 Days
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
DORA Metrics:
  Deploy Frequency:        12.3 deploys/week  ▲ +2.1  [Elite]
  Lead Time for Changes:   2.4 hours          ▼ -0.6  [High]
  Mean Time to Restore:    45 minutes         ▲ +12   [High]
  Change Failure Rate:     4.2%               ▼ -1.1  [Elite]

Code Quality:
  Test Coverage:           87.3%      ▲ +2.1
  Code Smells (SonarQube): 23         ▼ -8
  Technical Debt Ratio:    3.2%       ▼ -0.4
  Cyclomatic Complexity:   12.4 avg   → 0.0

Defect Metrics:
  Defect Density:          0.8 bugs/1k LOC  ▼ -0.2
  Escaped Defects:         2                ▼ -3
  Mean Time to Fix:        18 hours         ▼ -6

Quality Scorecard: A- (92/100) ▲ +4
Trends (30d): ▁▂▃▅▆▇█ (improving)
Export: metrics-2026-02-01.html
```
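The sparkline and letter-grade elements of the output above could be produced along these lines. The grade cutoffs are an assumption (the spec only shows A- corresponding to 92/100); a real implementation would define its own scale.

```python
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Render a numeric series as a unicode sparkline, scaling each
    value to one of the eight block characters."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # flat series: avoid division by zero
    return "".join(BARS[round((v - lo) / span * (len(BARS) - 1))] for v in values)

def letter_grade(score):
    """Map a 0-100 quality score to a letter grade. Cutoffs are a
    hypothetical scale consistent with A- at 92/100."""
    for cutoff, grade in [(97, "A+"), (93, "A"), (90, "A-"),
                          (87, "B+"), (83, "B"), (80, "B-"),
                          (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For trends, the series would be the scorecard value sampled once per interval across the requested period.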
## Completion Checklist
- DORA metrics calculated from git/deployment history
- Code quality metrics extracted from test/lint results
- Defect metrics computed from issue tracker
- Quality scorecard generated with letter grade
- Trends computed if requested
- Data exported in requested format
- Comparison across periods completed if requested
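The csv/json export step in the checklist might look like the following minimal sketch; html and terminal rendering are more involved and are out of scope here.

```python
import csv
import io
import json

def export_metrics(metrics: dict, fmt: str) -> str:
    """Serialize a flat metrics dict as csv or json, two of the
    documented export formats."""
    if fmt == "json":
        return json.dumps(metrics, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["metric", "value"])
        writer.writerows(metrics.items())
        return buf.getvalue()
    raise ValueError(f"unsupported format: {fmt}")
```

The result would then be written to a dated file such as the `metrics-2026-02-01.html` shown in the success output.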
## Failure Indicators
- "Insufficient deployment data" → Need at least 7 days of history
- "Git history unavailable" → Repository access issue
- "Issue tracker unreachable" → Network/auth problem
- "No test results found" → CI/test infrastructure not configured
## When NOT to Use
- Real-time monitoring: use APM/observability tools instead
- Individual PR quality: use PR-specific linters/checks
- Security metrics: use dedicated security scanning tools
- Cost metrics: use cloud cost dashboards
## Anti-Patterns
- Tracking vanity metrics (LOC, commit count) instead of outcomes
- Comparing teams without context (different domains, maturity)
- Setting unrealistic targets (all Elite DORA metrics immediately)
- Ignoring trends in favor of point-in-time snapshots
- Gaming metrics instead of improving processes
## Principles
- #3 Complete Execution - Collects all data sources and generates comprehensive report automatically
- #9 Based on Facts - All metrics derived from measurable data (git, tests, issues)
Full Standard: CODITECT-STANDARD-AUTOMATION.md