Validate Implementation Plan

System Prompt

⚠️ EXECUTION DIRECTIVE: When the user invokes this command, you MUST:

  1. IMMEDIATELY execute - no questions, no explanations first
  2. ALWAYS show full output from script/tool execution
  3. ALWAYS provide summary after execution completes

DO NOT:

  • Say "I don't need to take action" - you ALWAYS execute when invoked
  • Ask for confirmation unless `requires_confirmation: true` is set in frontmatter
  • Skip execution even if it seems redundant - run it anyway

The user invoking the command IS the confirmation.


Usage

/validate-plan [path]

Validate plan execution: $ARGUMENTS

Arguments

$ARGUMENTS - Validation Context (optional)

Specify what to validate:

  • Plan path: "thoughts/shared/plans/2025-01-08-ENG-1234-feature.md"
  • Auto-detect: No arguments - validates most recent plan from git history
  • From session: "Validate work completed in this session"

Default Behavior

If no arguments:

  • Searches recent commits for plan references
  • Validates implementation against plan
  • Runs automated verification commands
  • Generates comprehensive validation report
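
The auto-detect step above could be sketched roughly as follows. The `thoughts/shared/plans/` layout is taken from the usage example in this document, and the snippet builds a throwaway directory so it runs anywhere; in practice you would point it at the real repository:

```shell
# Minimal sketch of plan auto-detection: pick the most recently
# modified plan file when no path argument is given.
# The directory layout is an assumption from the usage example.
dir=$(mktemp -d)
mkdir -p "$dir/thoughts/shared/plans"
touch "$dir/thoughts/shared/plans/2025-01-07-ENG-1200-old.md"
sleep 1   # ensure distinct modification times for the demo
touch "$dir/thoughts/shared/plans/2025-01-08-ENG-1234-feature.md"

# Newest plan wins (ls -t sorts by modification time, newest first):
plan=$(ls -t "$dir"/thoughts/shared/plans/*.md | head -n1)
echo "Auto-detected: $(basename "$plan")"
rm -rf "$dir"
```

A more robust variant would walk `git log --name-only` for plan paths touched by recent commits, but modification-time ordering is a reasonable fallback.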

Validate Plan

You are tasked with validating that an implementation plan was correctly executed, verifying all success criteria and identifying any deviations or issues.

Initial Setup

When invoked:

  1. Determine context - Are you in an existing conversation or starting fresh?

    • If existing: Review what was implemented in this session
    • If fresh: Discover what was done through git history and codebase analysis
  2. Locate the plan:

    • If plan path provided, use it
    • Otherwise, search recent commits for plan references or ask user
  3. Gather implementation evidence:

    # Check recent commits
    git log --oneline -n 20
    git diff HEAD~N..HEAD # Where N covers implementation commits

    # Run comprehensive checks
    cd $(git rev-parse --show-toplevel) && make check test

Validation Process

Step 1: Context Discovery

If starting fresh or need more context:

  1. Read the implementation plan completely

  2. Identify what should have changed:

    • List all files that should be modified
    • Note all success criteria (automated and manual)
    • Identify key functionality to verify
  3. Spawn parallel research tasks to discover implementation:

    Task 1 - Verify database changes:
    Research if migration [N] was added and schema changes match plan.
    Check: migration files, schema version, table structure
    Return: What was implemented vs what plan specified

    Task 2 - Verify code changes:
    Find all modified files related to [feature].
    Compare actual changes to plan specifications.
    Return: File-by-file comparison of planned vs actual

    Task 3 - Verify test coverage:
    Check if tests were added/modified as specified.
    Run test commands and capture results.
    Return: Test status and any missing coverage

Step 2: Systematic Validation

For each phase in the plan:

  1. Check completion status:

    • Look for checkmarks in the plan (- [x])
    • Verify the actual code matches claimed completion
  2. Run automated verification:

    • Execute each command from "Automated Verification"
    • Document pass/fail status
    • If failures, investigate root cause
  3. Assess manual criteria:

    • List what needs manual testing
    • Provide clear steps for user verification
  4. Think deeply about edge cases:

    • Were error conditions handled?
    • Are there missing validations?
    • Could the implementation break existing functionality?
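
The automated-verification step above can be sketched as a small runner that executes each command and tallies pass/fail. The command list here is purely illustrative; in practice it comes from the plan's "Automated Verification" section:

```shell
# Hypothetical sketch: run each verification command, record pass/fail.
# The three commands below are stand-ins for the plan's real checks
# (e.g. "make build", "make test", "make lint").
pass=0; fail=0
for cmd in "true" "echo build ok" "false"; do
  if sh -c "$cmd" >/dev/null 2>&1; then
    echo "PASS: $cmd"; pass=$((pass + 1))
  else
    echo "FAIL: $cmd"; fail=$((fail + 1))
  fi
done
echo "Automated checks: $pass passed, $fail failed"
```

Capturing counts this way feeds directly into the "Automated Checks: X/Y passed" line of the success output.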

Step 3: Generate Validation Report

Create comprehensive validation summary:

## Validation Report: [Plan Name]

### Implementation Status
✓ Phase 1: [Name] - Fully implemented
✓ Phase 2: [Name] - Fully implemented
⚠️ Phase 3: [Name] - Partially implemented (see issues)

### Automated Verification Results
✓ Build passes: `make build`
✓ Tests pass: `make test`
✗ Linting issues: `make lint` (3 warnings)

### Code Review Findings

#### Matches Plan:
- Database migration correctly adds [table]
- API endpoints implement specified methods
- Error handling follows plan

#### Deviations from Plan:
- Used different variable names in [file:line]
- Added extra validation in [file:line] (improvement)

#### Potential Issues:
- Missing index on foreign key could impact performance
- No rollback handling in migration

### Manual Testing Required:
1. UI functionality:
- [ ] Verify [feature] appears correctly
- [ ] Test error states with invalid input

2. Integration:
- [ ] Confirm works with existing [component]
- [ ] Check performance with large datasets

### Recommendations:
- Address linting warnings before merge
- Consider adding integration test for [scenario]
- Document new API endpoints

Working with Existing Context

If you were part of the implementation:

  • Review the conversation history
  • Check your todo list for what was completed
  • Focus validation on work done in this session
  • Be honest about any shortcuts or incomplete items

Important Guidelines

  1. Be thorough but practical - Focus on what matters
  2. Run all automated checks - Don't skip verification commands
  3. Document everything - Both successes and issues
  4. Think critically - Question if the implementation truly solves the problem
  5. Consider maintenance - Will this be maintainable long-term?

Validation Checklist

Always verify:

  • All phases marked complete are actually done
  • Automated tests pass
  • Code follows existing patterns
  • No regressions introduced
  • Error handling is robust
  • Documentation updated if needed
  • Manual test steps are clear
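
Checking that phases marked complete are actually done starts with tallying the plan's checkboxes. A minimal sketch, using a sample plan written to a temp file (point `plan` at the real file in practice):

```shell
# Hypothetical sketch: count completed vs total checkboxes in a plan.
# The sample plan below is illustrative only.
plan=$(mktemp)
cat > "$plan" <<'EOF'
- [x] Phase 1: Database migration
- [x] Phase 2: API endpoints
- [ ] Phase 3: Integration tests
EOF
total=$(grep -c '^- \[[ x]\]' "$plan")     # all checkboxes
checked=$(grep -c '^- \[x\]' "$plan")      # ticked checkboxes only
echo "$checked/$total phases checked off"
rm -f "$plan"
```

A ticked box is only a claim; each checked phase still needs its code verified against the plan, as described in Step 2.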

Relationship to Other Commands

Recommended workflow:

  1. /implement-plan - Execute the implementation
  2. /commit - Create atomic commits for changes
  3. /validate-plan - Verify implementation correctness
  4. /describe-pr - Generate PR description

The validation works best after commits are made, as it can analyze the git history to understand what was implemented.

Remember: Good validation catches issues before they reach production. Be constructive but thorough in identifying gaps or improvements.

Action Policy

<default_behavior> This command analyzes and recommends without making changes. Provides:

  • Detailed analysis of current state
  • Specific recommendations with justification
  • Prioritized action items
  • Risk assessment

User decides which recommendations to implement. </default_behavior>

After analysis, provide:

  • Analysis completeness (all aspects covered)
  • Recommendation confidence levels
  • Specific examples from codebase
  • Clear next steps for user

Success Output

When plan validation completes:

✅ COMMAND COMPLETE: /validate-plan
Plan: <plan-name>
Phases: N validated
Automated Checks: X/Y passed
Deviations: N found
Overall: PASSED|WARNINGS|FAILED

Completion Checklist

Before marking complete:

  • Plan located and read
  • All phases verified
  • Automated checks executed
  • Deviations documented
  • Validation report generated

Failure Indicators

This command has FAILED if:

  • ❌ Plan file not found
  • ❌ Automated checks not run
  • ❌ Critical failures not documented
  • ❌ No validation report produced

When NOT to Use

Do NOT use when:

  • Plan not yet implemented (use /implement-plan)
  • Need to create plan (use /create-plan)
  • Just want git status

Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
| --- | --- | --- |
| Skip automated checks | Issues missed | Run all verification commands |
| Ignore warnings | Tech debt | Address before merge |
| No git analysis | Incomplete validation | Check commits and diffs |

Principles

This command embodies:

  • #9 Based on Facts - Verify actual implementation
  • #3 Complete Execution - Run all checks

Full Standard: CODITECT-STANDARD-AUTOMATION.md