/qa - Component Quality Assurance

Validate and review CODITECT components against established standards. Provides automated structural validation (Tier 1), deep quality review (Tier 2), and optional browser verification (Tier 3), with scoring, grading, and actionable recommendations.

System Prompt

⚠️ EXECUTION DIRECTIVE: When the user invokes this command, you MUST:

  1. IMMEDIATELY execute - no questions, no explanations first
  2. ALWAYS show full output from script/tool execution
  3. ALWAYS provide summary after execution completes

DO NOT:

  • Say "I don't need to take action" - you ALWAYS execute when invoked
  • Ask for confirmation unless requires_confirmation: true in frontmatter
  • Skip execution even if it seems redundant - run it anyway

The user invoking the command IS the confirmation.


Usage

# Quick validation
/qa validate agents/new-agent.md
/qa validate skills/new-skill/SKILL.md
/qa validate commands/new-command.md
/qa validate scripts/new-script.py

# Validate by type
/qa validate --type agent
/qa validate --type skill
/qa validate --type command
/qa validate --type script

# Validate all
/qa validate --all
/qa validate --all --min-score 80

# Deep quality review
/qa review agents/new-agent.md
/qa review --all --release-check

# Auto-fix issues
/qa fix agents/new-agent.md
/qa fix agents/new-agent.md --dry-run

# Show standards
/qa standards
/qa standards --type agent

Commands

validate - Structural Validation (Tier 1)

Fast automated checks against component standards.

/qa validate [path] [options]

Options:
--type TYPE Validate all of type (agent|skill|command|script)
--all Validate all components
--min-score N Minimum passing score (default: 80)
--strict Use release-level thresholds (90%)
--json Output as JSON
--verbose Show all check details

Examples:

# Single component
/qa validate agents/memory-context-agent.md

# All agents
/qa validate --type agent

# All with minimum score
/qa validate --all --min-score 85

# JSON output for CI
/qa validate agents/new-agent.md --json

review - Quality Review (Tier 2)

Deep content quality assessment with recommendations.

/qa review [path] [options]

Options:
--focus DIM Focus on dimension (completeness|clarity|examples|integration|best_practices)
--compare Compare against similar components
--compare-to P Compare against specific component
--release-check Strict release-level review
--browser URL Add browser verification (Tier 3) against URL
--selectors S CSS selectors for element checks (comma-separated, requires --browser)
--baseline ID Visual baseline ID for regression check (requires --browser)
--json Output as JSON

Examples:

# Full review
/qa review agents/memory-context-agent.md

# Focus on examples
/qa review agents/new-agent.md --focus examples

# Compare to similar
/qa review agents/new-agent.md --compare

# Release check
/qa review --all --release-check

# Review with browser verification
/qa review skills/dashboard/SKILL.md --browser http://localhost:3000/dashboard

# Browser verification with element checks
/qa review skills/login-flow/SKILL.md --browser http://localhost:3000/login \
--selectors "#email,#password,button[type=submit]"

# Browser verification with visual regression
/qa review skills/dashboard/SKILL.md --browser http://localhost:3000/dashboard \
--baseline abc123-def456

fix - Auto-Fix Issues

Automatically fix common issues where possible.

/qa fix [path] [options]

Options:
--dry-run Show what would be fixed without changing
--issue TYPE Fix specific issue type
--all Fix all fixable issues

Examples:

# Preview fixes
/qa fix agents/new-agent.md --dry-run

# Fix all auto-fixable
/qa fix agents/new-agent.md

# Fix specific issue
/qa fix agents/new-agent.md --issue naming

standards - Show Standards

Display current standards for reference.

/qa standards [options]

Options:
--type TYPE Show standards for type
--checklist Show as validation checklist

Validation Checks

Agent Checks

| Check | Description |
|-------|-------------|
| yaml_frontmatter | Valid YAML frontmatter present |
| name_field | name field present and valid |
| name_kebab_case | Name follows kebab-case |
| name_matches_file | Name matches filename |
| description | Description 10-200 chars |
| tools_field | Valid tools list |
| model_field | Model is sonnet or haiku |
| opening_statement | Starts with "You are a..." |
| core_responsibilities | 2-5 numbered items |
| when_to_use | ✅/❌ sections present |
| example_usage | Task tool example |
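
A few of these structural checks can be sketched in Python. These helper functions are illustrative only, not the actual validate-component.py implementation:

```python
import re
from pathlib import Path

# name_kebab_case: lowercase words joined by single hyphens
KEBAB_CASE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def check_name_kebab_case(name: str) -> bool:
    """name_kebab_case: name must be lowercase words joined by hyphens."""
    return bool(KEBAB_CASE.match(name))

def check_description(description: str) -> bool:
    """description: must be 10-200 characters."""
    return 10 <= len(description) <= 200

def check_name_matches_file(name: str, path: str) -> bool:
    """name_matches_file: frontmatter name must equal the filename stem."""
    return Path(path).stem == name
```

For example, `check_name_matches_file("new-agent", "agents/new-agent.md")` passes, while a name like `NewAgent` fails the kebab-case check.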

Skill Checks

| Check | Description |
|-------|-------------|
| skill_md_present | SKILL.md exists |
| yaml_frontmatter | Valid YAML frontmatter |
| name_field | name matches directory |
| description | Clear capability statement |
| license | License field present |
| allowed_tools | Tools list present |
| when_to_use | ✅/❌ format |
| core_capabilities | 2-4 numbered items |
| usage_pattern | 2-4 steps |
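
The frontmatter-driven checks above depend on reading the SKILL.md header. A minimal sketch of that step, assuming flat key: value fields (a real validator would presumably use a YAML parser):

```python
import re

# Matches a leading frontmatter block delimited by --- lines
FRONTMATTER = re.compile(r"\A---\n(.*?)\n---\n", re.DOTALL)

def extract_frontmatter(text: str) -> dict[str, str]:
    """Read flat key: value pairs from a SKILL.md frontmatter block."""
    match = FRONTMATTER.match(text)
    if not match:
        return {}
    fields: dict[str, str] = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return fields

def check_name_matches_dir(fields: dict[str, str], dir_name: str) -> bool:
    """name_field: frontmatter name must match the skill directory name."""
    return fields.get("name") == dir_name
```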

Command Checks

| Check | Description |
|-------|-------------|
| file_present | File exists |
| filename_kebab | Kebab-case filename |
| title_present | Clear title |
| usage_section | Usage syntax |
| examples_section | 2+ examples |
| when_to_use | ✅/❌ sections |

Script Checks

| Check | Description |
|-------|-------------|
| shebang | Correct shebang line |
| docstring | Module docstring |
| type_hints | Type hints on functions |
| main_guard | if __name__ == "__main__" guard |
| exit_codes | Exit codes documented |
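
Three of these script checks can be approximated directly from the source text and its AST. This is an illustrative sketch, not the grader's actual logic:

```python
import ast

def check_script(source: str) -> dict[str, bool]:
    """Approximate the shebang, docstring, and main_guard checks."""
    tree = ast.parse(source)
    # main_guard: a top-level `if __name__ == "__main__":` comparison
    has_main_guard = any(
        isinstance(node, ast.If)
        and isinstance(node.test, ast.Compare)
        and isinstance(node.test.left, ast.Name)
        and node.test.left.id == "__name__"
        for node in tree.body
    )
    return {
        "shebang": source.startswith("#!"),
        "docstring": ast.get_docstring(tree) is not None,
        "main_guard": has_main_guard,
    }
```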

Scoring

Validation Score (Tier 1)

Score = (Passed Checks / Total Checks) × 100

| Score | Grade | Status |
|-------|-------|--------|
| 90-100% | A | ✅ Excellent |
| 80-89% | B | ✅ Good |
| 70-79% | C | ⚠️ Acceptable |
| 60-69% | D | ❌ Poor |
| <60% | F | ❌ Failing |
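
The formula and grade bands can be sketched as a small helper (illustrative; the grader scripts may implement this differently):

```python
def validation_score(passed: int, total: int) -> tuple[float, str]:
    """Score = (passed checks / total checks) * 100, mapped to a letter grade."""
    score = passed / total * 100
    # Grade bands: A >= 90, B >= 80, C >= 70, D >= 60, else F
    for threshold, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= threshold:
            return score, grade
    return score, "F"
```

For instance, 22 of 25 checks passing scores 88%, which lands in the B band shown in the sample validation report.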

Quality Score (Tier 2)

| Dimension | Weight |
|-----------|--------|
| Completeness | 30% |
| Clarity | 25% |
| Examples | 20% |
| Integration | 15% |
| Best Practices | 10% |
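
Assuming the overall Tier 2 score is a simple weighted average of the five dimensions (an assumption; the reviewer agent may aggregate differently), the weights combine per-dimension scores like this:

```python
# Weights from the table above (assumed to sum to 1.0)
WEIGHTS = {
    "completeness": 0.30,
    "clarity": 0.25,
    "examples": 0.20,
    "integration": 0.15,
    "best_practices": 0.10,
}

def quality_score(dimensions: dict[str, float]) -> float:
    """Weighted average of per-dimension scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[name] * score for name, score in dimensions.items())
```

The dimension scores in the sample review report (90, 88, 75, 85, 82) combine to roughly 85%, matching its overall figure.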

Browser Verification (Tier 3)

When --browser URL is passed to /qa review, live browser checks run via QAAgentBrowserTools (ADR-109). This adds runtime verification that a component's associated UI works as expected.

| Check | Description |
|-------|-------------|
| page_load | URL loads successfully within timeout |
| element_{selector} | Expected DOM elements exist (per --selectors) |
| console_errors | No JavaScript errors in console |
| visual_regression | Screenshot matches baseline (per --baseline) |

Scoring: Browser checks produce a separate Tier 3 score (pass/fail per check, percentage overall). The review report shows Tier 2 quality score + Tier 3 browser score when both are available.

Script: python3 scripts/qa/browser_verify.py --url URL [--selectors S] [--baseline-id ID]

Without --browser: Tier 3 is skipped entirely. No browser needed for standard structural/quality reviews.
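
The pass/fail-per-check aggregation can be sketched as follows. The function name is illustrative and not part of browser_verify.py; the output format mirrors the sample browser verification report:

```python
def browser_report(checks: dict[str, bool]) -> str:
    """Aggregate Tier 3 browser checks into a score plus per-check results."""
    passed = sum(checks.values())
    score = passed / len(checks) * 100
    lines = [f"Score: {score:.1f}%", f"Checks: {passed}/{len(checks)} passed"]
    # One [PASS]/[FAIL] line per check, in check order
    lines += [f"[{'PASS' if ok else 'FAIL'}] {name}" for name, ok in checks.items()]
    return "\n".join(lines)
```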

Output Examples

Validation Report

QA VALIDATION: agents/memory-context-agent.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Score: 88% (B) ✅ PASS

✅ PASSED (18 checks)
• YAML frontmatter valid
• Name follows kebab-case
• Description present (186 chars)

⚠️ WARNINGS (2)
• license field missing (recommended)
• model is haiku, consider sonnet

❌ FAILURES (0)

Review Report

QUALITY REVIEW: agents/memory-context-agent.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Overall: 85% (B)

Dimensions:
Completeness: 90% ████████████████████
Clarity: 88% █████████████████▌
Examples: 75% ███████████████
Integration: 85% █████████████████
Best Practices: 82% ████████████████▌

Recommendations:
1. Add complete usage example with output
2. Include license field in frontmatter
3. Add token budgets documentation

Browser Verification Report

BROWSER VERIFICATION: http://localhost:3000/dashboard
==================================================
Score: 100.0% (A)
Checks: 3/3 passed

[PASS] page_load
[PASS] element_#sidebar
[PASS] console_errors

Integration

Pre-Commit Hook

# Automatically validate changed components
# See: hooks/pre-commit-qa.sh

CI Pipeline

# Run on PR
/qa validate --all --min-score 80 --json
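
In CI, the --json output can gate the build. A minimal sketch, assuming the report exposes a top-level "score" field (the field name is an assumption about the JSON shape, not a documented contract):

```python
import json

def ci_gate(report_json: str, min_score: int = 80) -> int:
    """Exit code for CI: 0 if the reported score meets the threshold, else 1."""
    report = json.loads(report_json)
    # Missing score is treated as a failure
    return 0 if report.get("score", 0) >= min_score else 1
```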

Manual Review

# Before releasing new component
/qa validate agents/new-agent.md
/qa review agents/new-agent.md

When to Use

Use /qa validate when:

  • Creating new components
  • Before committing changes
  • Running CI checks
  • Quick compliance check

Use /qa review when:

  • Preparing for release
  • Deep quality assessment needed
  • Getting improvement recommendations
  • Comparing component quality

Don't use /qa when:

  • Component is draft/WIP (use /qa validate --min-score 50)
  • Just exploring standards (use /qa standards)

Related

  • /qa validate - Fast structural validation
  • /qa review - Deep quality review
  • /qa fix - Auto-fix issues
  • /qa standards - View standards
  • component-qa-validator - Validation agent
  • component-qa-reviewer - Review agent
  • validate-component.py - Implementation script
  • STANDARDS-ENFORCEMENT.md - Enforcement framework
  • browser_verify.py - Browser verification script (Tier 3)
  • QAAgentBrowserTools - Browser automation (ADR-109)

ADR-161 Grader Scripts

The /qa command is backed by type-specific grading scripts per ADR-161:

# Individual type graders
python3 scripts/qa/grade-agents.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-skills.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-commands.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-hooks.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-scripts.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-workflows.py [path] [--json output.json] [--verbose]
python3 scripts/qa/grade-tools.py [path] [--json output.json] [--verbose]

# Unified orchestrator (all 7 types)
python3 scripts/qa/grade-all.py [--type TYPE] [--json output.json] [--report report.md] [--verbose]
  • Shared library: scripts/qa/qa_common.py
  • ADR: internal/architecture/adrs/ADR-161-component-quality-assurance-framework.md
  • Skill: skills/qa-grading-framework/SKILL.md

Success Output

When QA validation completes:

✅ COMMAND COMPLETE: /qa
Mode: <validate|review|fix>
Target: <component-path>
Score: N% (Grade)
Passed: X/Y checks
Status: PASS|FAIL

Completion Checklist

Before marking complete:

  • Component identified
  • Checks executed
  • Score calculated
  • Report generated
  • Recommendations provided

Failure Indicators

This command has FAILED if:

  • ❌ Component not found
  • ❌ Invalid component type
  • ❌ Score below minimum
  • ❌ No report generated

When NOT to Use

Do NOT use when:

  • Component is draft/WIP (use --min-score 50)
  • Just viewing standards (use /qa standards)
  • Non-CODITECT component

Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| Skip validation before commit | Quality issues | Run /qa validate |
| Ignore warnings | Technical debt | Fix warnings promptly |
| Override score thresholds | False confidence | Use standard thresholds |

Principles

This command embodies:

  • #8 Verification Required - Evidence-based quality
  • #9 Based on Facts - Objective scoring

Full Standard: CODITECT-STANDARD-AUTOMATION.md


Agents: component-qa-validator, component-qa-reviewer
Scripts: scripts/validate-component.py, scripts/qa/browser_verify.py
Version: 1.1.0
Last Updated: 2026-02-17