/coverage-gaps - Coverage Gap Detector

Identify untested code paths introduced by recent changes and generate focused test cases to close gaps. Cross-references diffs against coverage data to find precisely what needs testing.

System Prompt

EXECUTION DIRECTIVE: When the user invokes this command, you MUST:

  1. IMMEDIATELY execute - no questions first
  2. Load the agent coverage-gap-detector
  3. Detect changes via git diff against base
  4. Run coverage if available, or analyze test files statically
  5. Map gaps between changed code and test coverage
  6. Classify gaps by risk level
  7. Generate focused test cases for each gap
  8. Output gap analysis with generated tests
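Steps 3 and 5 above can be sketched as follows. This is a minimal illustration, not the agent's actual implementation: it assumes coverage data is already available as a mapping of file path to the set of covered line numbers, and both helper names are hypothetical.

```python
import re
import subprocess

def changed_lines(base="main"):
    """Parse a unified diff into {file: set of added/modified line numbers}."""
    diff = subprocess.run(
        ["git", "diff", "--unified=0", base, "--"],
        capture_output=True, text=True, check=True,
    ).stdout
    changes, current = {}, None
    for line in diff.splitlines():
        if line.startswith("+++ b/"):
            current = line[len("+++ b/"):]
        elif line.startswith("@@") and current:
            # Hunk header looks like: @@ -10,2 +12,3 @@ -- take the "+" side.
            m = re.search(r"\+(\d+)(?:,(\d+))?", line)
            start, count = int(m.group(1)), int(m.group(2) or 1)
            changes.setdefault(current, set()).update(range(start, start + count))
    return changes

def coverage_gaps(changes, covered):
    """Changed lines absent from the covered-line sets: the gaps to test."""
    return {
        f: sorted(lines - covered.get(f, set()))
        for f, lines in changes.items()
        if lines - covered.get(f, set())
    }
```

With `--unified=0`, every hunk contains only changed lines, so the hunk headers alone give the modified line ranges without tracking context lines.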

Usage

# Default: diff against main
/coverage-gaps

# Specific base
/coverage-gaps --base develop

# Only critical/high risk gaps
/coverage-gaps --min-risk high

# Generate tests and write to files
/coverage-gaps --output-dir tests/generated/

# Run generated tests to validate
/coverage-gaps --run-generated

# Specific test framework
/coverage-gaps --framework jest

Options

Option            Description                           Default
--base            Base reference for diff               main
--min-risk        Minimum risk level to report          medium
--generate-tests  Auto-generate test code               true
--run-generated   Run generated tests to validate       false
--output-dir      Write test files to directory         none
--framework       Test framework (pytest, jest, etc.)   auto-detect
Related Commands

  • /test-generate - Generate tests for specific functions
  • /test-coverage - View overall coverage metrics
  • /flaky-tests - Ensure generated tests are stable
  • /bug-scan - Identify which gaps are in bug-prone code

Success Output

COMMAND COMPLETE: /coverage-gaps
Base: main | Head: HEAD
Files Changed: 8 | Functions Modified: 12
Gaps Found: 14 (3 critical, 5 high, 4 medium, 2 low)
Tests Generated: 20
Coverage Impact: +7.6% (projected)
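The projected Coverage Impact figure can be derived from line counts alone. This is a naive estimate that assumes every generated test passes and covers exactly its target lines; the numbers below are invented to reproduce the sample output:

```python
def projected_impact(covered, total, newly_covered):
    """Percentage-point coverage gain if `newly_covered` gap lines get tested."""
    before = 100 * covered / total
    after = 100 * (covered + newly_covered) / total
    return round(after - before, 1)

# e.g. 1500 of 2000 lines covered today; generated tests would reach 152 more
print(projected_impact(1500, 2000, 152))  # 7.6
```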

Completion Checklist

  • Changed files detected
  • Coverage data collected or estimated
  • Gaps mapped to functions/branches
  • Gaps classified by risk
  • Tests generated for each gap
  • Coverage impact estimated
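The "classified by risk" step in the checklist above could be a simple heuristic over signals extracted per gap. The signal names and thresholds here are illustrative assumptions, not the agent's actual rubric:

```python
def classify_risk(gap):
    """gap: dict of hypothetical signals about one untested change."""
    high_impact = gap.get("touches_error_handling") or gap.get("public_api")
    if high_impact and gap.get("branch_gap"):
        return "critical"  # untested branch in high-impact code
    if high_impact:
        return "high"
    if gap.get("branch_gap"):
        return "medium"
    return "low"
```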

Failure Indicators

  • No changes detected against base
  • Cannot run coverage tool
  • Test framework not detected

When NOT to Use

  • Measuring overall coverage (use /test-coverage)
  • Writing tests from scratch (use /test-generate)
  • Fixing failing tests (use /ci-analyze)

Anti-Patterns

Anti-Pattern                 Problem              Solution
100% coverage target         Diminishing returns  Focus on critical paths
Test implementation details  Brittle tests        Test behavior
Generate without running     Broken tests         Validate generated tests

Principles

  • #3 Complete Execution - Detect, classify, generate, validate
  • #9 Based on Facts - Gaps from actual coverage data

Full Standard: CODITECT-STANDARD-AUTOMATION.md