# /qa - Component Quality Assurance

Validate and review CODITECT components against established standards. Provides automated structural validation (Tier 1) and deep quality review (Tier 2) with scoring, grading, and actionable recommendations.
## System Prompt

⚠️ EXECUTION DIRECTIVE: When the user invokes this command, you MUST:
- IMMEDIATELY execute - no questions, no explanations first
- ALWAYS show full output from script/tool execution
- ALWAYS provide a summary after execution completes

DO NOT:
- Say "I don't need to take action" - you ALWAYS execute when invoked
- Ask for confirmation unless `requires_confirmation: true` is set in frontmatter
- Skip execution even if it seems redundant - run it anyway

The user invoking the command IS the confirmation.
## Usage

```bash
# Quick validation
/qa validate agents/new-agent.md
/qa validate skills/new-skill/SKILL.md
/qa validate commands/new-command.md
/qa validate scripts/new-script.py

# Validate by type
/qa validate --type agent
/qa validate --type skill
/qa validate --type command
/qa validate --type script

# Validate all
/qa validate --all
/qa validate --all --min-score 80

# Deep quality review
/qa review agents/new-agent.md
/qa review --all --release-check

# Auto-fix issues
/qa fix agents/new-agent.md
/qa fix agents/new-agent.md --dry-run

# Show standards
/qa standards
/qa standards --type agent
```
## Commands

### validate - Structural Validation (Tier 1)

Fast automated checks against component standards.

```bash
/qa validate [path] [options]
```

Options:

```text
--type TYPE     Validate all of type (agent|skill|command|script)
--all           Validate all components
--min-score N   Minimum passing score (default: 80)
--strict        Use release-level thresholds (90%)
--json          Output as JSON
--verbose       Show all check details
```

Examples:

```bash
# Single component
/qa validate agents/memory-context-agent.md

# All agents
/qa validate --type agent

# All with minimum score
/qa validate --all --min-score 85

# JSON output for CI
/qa validate agents/new-agent.md --json
```
### review - Quality Review (Tier 2)

Deep content quality assessment with recommendations.

```bash
/qa review [path] [options]
```

Options:

```text
--focus DIM       Focus on dimension (completeness|clarity|examples|integration|best_practices)
--compare         Compare against similar components
--compare-to P    Compare against specific component
--release-check   Strict release-level review
--browser URL     Add browser verification (Tier 3) against URL
--selectors S     CSS selectors for element checks (comma-separated, requires --browser)
--baseline ID     Visual baseline ID for regression check (requires --browser)
--json            Output as JSON
```

Examples:

```bash
# Full review
/qa review agents/memory-context-agent.md

# Focus on examples
/qa review agents/new-agent.md --focus examples

# Compare to similar
/qa review agents/new-agent.md --compare

# Release check
/qa review --all --release-check

# Review with browser verification
/qa review skills/dashboard/SKILL.md --browser http://localhost:3000/dashboard

# Browser verification with element checks
/qa review skills/login-flow/SKILL.md --browser http://localhost:3000/login \
  --selectors "#email,#password,button[type=submit]"

# Browser verification with visual regression
/qa review skills/dashboard/SKILL.md --browser http://localhost:3000/dashboard \
  --baseline abc123-def456
```
### fix - Auto-Fix Issues

Automatically fix common issues where possible.

```bash
/qa fix [path] [options]
```

Options:

```text
--dry-run      Show what would be fixed without changing files
--issue TYPE   Fix specific issue type
--all          Fix all fixable issues
```

Examples:

```bash
# Preview fixes
/qa fix agents/new-agent.md --dry-run

# Fix all auto-fixable issues
/qa fix agents/new-agent.md

# Fix specific issue
/qa fix agents/new-agent.md --issue naming
```
### standards - Show Standards

Display current standards for reference.

```bash
/qa standards [options]
```

Options:

```text
--type TYPE   Show standards for type
--checklist   Show as validation checklist
```

## Validation Checks

### Agent Checks
| Check | Required | Description |
|---|---|---|
| yaml_frontmatter | ✅ | Valid YAML frontmatter present |
| name_field | ✅ | name field present and valid |
| name_kebab_case | ✅ | Name follows kebab-case |
| name_matches_file | ✅ | Name matches filename |
| description | ✅ | Description 10-200 chars |
| tools_field | ✅ | Valid tools list |
| model_field | ✅ | Model is sonnet or haiku |
| opening_statement | ✅ | Starts with "You are a..." |
| core_responsibilities | ✅ | 2-5 numbered items |
| when_to_use | ⭕ | ✅/❌ sections present |
| example_usage | ⭕ | Task tool example |
### Skill Checks
| Check | Required | Description |
|---|---|---|
| skill_md_present | ✅ | SKILL.md exists |
| yaml_frontmatter | ✅ | Valid YAML frontmatter |
| name_field | ✅ | name matches directory |
| description | ✅ | Clear capability statement |
| license | ✅ | License field present |
| allowed_tools | ✅ | Tools list present |
| when_to_use | ✅ | ✅/❌ format |
| core_capabilities | ✅ | 2-4 numbered items |
| usage_pattern | ✅ | 2-4 steps |
### Command Checks
| Check | Required | Description |
|---|---|---|
| file_present | ✅ | File exists |
| filename_kebab | ✅ | Kebab-case filename |
| title_present | ✅ | Clear title |
| usage_section | ✅ | Usage syntax |
| examples_section | ✅ | 2+ examples |
| when_to_use | ⭕ | ✅/❌ sections |
### Script Checks
| Check | Required | Description |
|---|---|---|
| shebang | ✅ | Correct shebang line |
| docstring | ✅ | Module docstring |
| type_hints | ✅ | Type hints on functions |
| main_guard | ✅ | `if __name__ == "__main__"` guard present |
| exit_codes | ✅ | Exit codes documented |
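A minimal script skeleton that satisfies all five checks might look like this (the `greet` helper and its behavior are purely illustrative):

```python
#!/usr/bin/env python3
"""Illustrative module docstring describing what the script does.

Exit codes:
    0 - success
    1 - error
"""
import sys


def greet(name: str) -> str:
    """Type-hinted helper (satisfies the type_hints check)."""
    return f"Hello, {name}"


def main() -> int:
    """Entry point; returns an exit code."""
    print(greet("world"))
    return 0


if __name__ == "__main__":
    sys.exit(main())
```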
## Scoring

### Validation Score (Tier 1)

Score = (Passed Checks / Total Checks) × 100
| Score | Grade | Status |
|---|---|---|
| 90-100% | A | ✅ Excellent |
| 80-89% | B | ✅ Good |
| 70-79% | C | ⚠️ Acceptable |
| 60-69% | D | ❌ Poor |
| <60% | F | ❌ Failing |
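The formula and grade bands above can be sketched in Python (function names are illustrative, not taken from the validation script):

```python
def validation_score(passed: int, total: int) -> float:
    """Score = (passed checks / total checks) x 100."""
    return passed / total * 100


def grade(score: float) -> str:
    """Map a percentage score to the letter grades in the table above."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"
```

For example, 18 of 20 checks passing scores 90% (grade A), while the 88% in the sample validation report maps to grade B.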
### Quality Score (Tier 2)
| Dimension | Weight |
|---|---|
| Completeness | 30% |
| Clarity | 25% |
| Examples | 20% |
| Integration | 15% |
| Best Practices | 10% |
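A sketch of the weighted Tier 2 aggregation, assuming per-dimension scores on a 0-100 scale and simple rounding (both assumptions about the implementation):

```python
# Weights from the Quality Score table above.
WEIGHTS = {
    "completeness": 0.30,
    "clarity": 0.25,
    "examples": 0.20,
    "integration": 0.15,
    "best_practices": 0.10,
}


def quality_score(dimensions: dict[str, float]) -> int:
    """Weighted sum of per-dimension scores (each 0-100), rounded."""
    return round(sum(WEIGHTS[d] * s for d, s in dimensions.items()))
```

Feeding in the dimension scores from the sample review report (90, 88, 75, 85, 82) yields an overall 85, matching that report.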
## Browser Verification (Tier 3)

When `--browser URL` is passed to `/qa review`, live browser checks run via QAAgentBrowserTools (ADR-109). This adds runtime verification that a component's associated UI works as expected.
| Check | Description |
|---|---|
| page_load | URL loads successfully within timeout |
| element_{selector} | Expected DOM elements exist (per --selectors) |
| console_errors | No JavaScript errors in console |
| visual_regression | Screenshot matches baseline (per --baseline) |
Scoring: Browser checks produce a separate Tier 3 score (pass/fail per check, percentage overall). The review report shows the Tier 2 quality score plus the Tier 3 browser score when both are available.

Script: `python3 scripts/qa/browser_verify.py --url URL [--selectors S] [--baseline-id ID]`

Without `--browser`, Tier 3 is skipped entirely; no browser is needed for standard structural/quality reviews.
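The pass/fail-per-check aggregation described above can be sketched as follows (a sketch only, not the actual browser_verify.py implementation):

```python
def browser_score(results: dict[str, bool]) -> float:
    """Percentage of Tier 3 browser checks that passed."""
    if not results:
        return 0.0
    return round(sum(results.values()) / len(results) * 100, 1)
```

Three passing checks (page_load, element_#sidebar, console_errors) give the 100.0% shown in the sample browser verification report.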
## Output Examples

### Validation Report

```text
QA VALIDATION: agents/memory-context-agent.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Score: 88% (B) ✅ PASS

✅ PASSED (18 checks)
  • YAML frontmatter valid
  • Name follows kebab-case
  • Description present (186 chars)

⚠️ WARNINGS (2)
  • license field missing (recommended)
  • model is haiku, consider sonnet

❌ FAILURES (0)
```
### Review Report

```text
QUALITY REVIEW: agents/memory-context-agent.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Overall: 85% (B)

Dimensions:
  Completeness:    90% ████████████████████
  Clarity:         88% █████████████████▌
  Examples:        75% ███████████████
  Integration:     85% █████████████████
  Best Practices:  82% ████████████████▌

Recommendations:
  1. Add complete usage example with output
  2. Include license field in frontmatter
  3. Add token budgets documentation
```
### Browser Verification Report

```text
BROWSER VERIFICATION: http://localhost:3000/dashboard
==================================================
Score: 100.0% (A)
Checks: 3/3 passed

[PASS] page_load
[PASS] element_#sidebar
[PASS] console_errors
```
## Integration

### Pre-Commit Hook

```bash
# Automatically validate changed components
# See: hooks/pre-commit-qa.sh
```

### CI Pipeline

```bash
# Run on PR
/qa validate --all --min-score 80 --json
```
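A minimal CI gate over the `--json` output might look like the following; note that the top-level `score` field name is an assumption about the report schema, not documented behavior:

```python
import json
import sys


def ci_gate(report_json: str, min_score: int = 80) -> int:
    """Return 0 (pass) or 1 (fail) from a /qa validate --json report.

    ASSUMPTION: the report exposes a top-level "score" field; adjust
    this to the real schema emitted by /qa validate --json.
    """
    report = json.loads(report_json)
    return 0 if report.get("score", 0) >= min_score else 1


if __name__ == "__main__":
    # e.g. /qa validate --all --json | python3 ci_gate.py
    sys.exit(ci_gate(sys.stdin.read()))
```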
### Manual Review

```bash
# Before releasing new component
/qa validate agents/new-agent.md
/qa review agents/new-agent.md
```
## When to Use

Use `/qa validate` when:
- Creating new components
- Before committing changes
- Running CI checks
- Quick compliance check

Use `/qa review` when:
- Preparing for release
- Deep quality assessment needed
- Getting improvement recommendations
- Comparing component quality

Don't use `/qa` when:
- Component is draft/WIP (run with a relaxed threshold instead: `/qa validate --min-score 50`)
- Just exploring standards (use `/qa standards`)
## Related Commands

- `/qa validate` - Fast structural validation
- `/qa review` - Deep quality review
- `/qa fix` - Auto-fix issues
- `/qa standards` - View standards
## Related Components
- component-qa-validator - Validation agent
- component-qa-reviewer - Review agent
- validate-component.py - Implementation script
- STANDARDS-ENFORCEMENT.md - Enforcement framework
- browser_verify.py - Browser verification script (Tier 3)
- QAAgentBrowserTools - Browser automation (ADR-109)
## ADR-161 Grader Scripts

The `/qa` command is backed by type-specific grading scripts per ADR-161:

```bash
# Individual type graders
python3 scripts/qa/grade-agents.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-skills.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-commands.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-hooks.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-scripts.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-workflows.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-tools.py [path] [--json output.json] [--verbose]

# Unified orchestrator (all 7 types)
python3 scripts/qa/grade-all.py [--type TYPE] [--json output.json] [--report report.md] [--verbose]
```
- Shared library: `scripts/qa/qa_common.py`
- ADR: `internal/architecture/adrs/ADR-161-component-quality-assurance-framework.md`
- Skill: `skills/qa-grading-framework/SKILL.md`
## Success Output

When QA validation completes:

```text
✅ COMMAND COMPLETE: /qa
Mode: <validate|review|fix>
Target: <component-path>
Score: N% (Grade)
Passed: X/Y checks
Status: PASS|FAIL
```
## Completion Checklist

Before marking complete:
- Component identified
- Checks executed
- Score calculated
- Report generated
- Recommendations provided
## Failure Indicators

This command has FAILED if:
- ❌ Component not found
- ❌ Invalid component type
- ❌ Score below minimum
- ❌ No report generated
## When NOT to Use

Do NOT use when:
- Component is draft/WIP (use `--min-score 50`)
- Just viewing standards (use `/qa standards`)
- Non-CODITECT component
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Skip validation before commit | Quality issues | Run /qa validate |
| Ignore warnings | Technical debt | Fix warnings promptly |
| Override score thresholds | False confidence | Use standard thresholds |
## Principles

This command embodies:
- #8 Verification Required - Evidence-based quality
- #9 Based on Facts - Objective scoring
Full Standard: CODITECT-STANDARD-AUTOMATION.md
Agents: component-qa-validator, component-qa-reviewer
Scripts: scripts/validate-component.py, scripts/qa/browser_verify.py
Version: 1.1.0
Last Updated: 2026-02-17