quality-gate-enforcer
Autonomous agent for enforcing quality gates on task completion, ensuring all requirements are met with evidence before tasks can be marked complete.
Capabilities
- Verify task completion evidence exists
- Run automated quality checks
- Block incomplete tasks from being marked done
- Generate verification reports
- Recommend remediation actions
Invocation
# Via /agent command
/agent quality-gate-enforcer "Verify task E.1.1 completion"
# Via Task tool
Task(subagent_type="general-purpose", prompt="Use quality-gate-enforcer agent to verify Track E completion")
# Via hook (automatic)
# Triggered by task-completion hook before marking tasks done
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
task_id | string | Yes* | Task identifier (e.g., E.1.1) |
track | string | Yes* | Track identifier (A-G) |
strict | boolean | No | Fail on any warning (default: false) |
fix | boolean | No | Attempt automatic fixes |
*Either task_id or track required
Quick Start Guide
5-Step Quality Gate Check:
1. IDENTIFY TASK → Get task ID from PILOT plan (e.g., E.1.1)
2. CHECK FILES → ls [expected-output-path] (verify files exist)
3. RUN GATE → /agent quality-gate-enforcer "Verify E.1.1"
4. FIX WARNINGS → Address any ⚠️ warnings before commit
5. COMMIT IF PASSED → git add && git commit -m "[Track E.1.1] ..."
Quick Decision: When to Run Gates
What's your task state?
├── Just finished implementing → RUN GATE (standard mode)
├── Ready to commit → RUN GATE (standard mode)
├── Critical/security task → RUN GATE --strict (fail on warnings)
├── Gate failed, issues fixed → RE-RUN GATE (verify fixes)
├── Exploratory/research task → SKIP (no deliverables to verify)
└── Already marked complete → SKIP (avoid redundant checks)
Gate Check Severity Guide:
| Result | Icon | Action | Can Mark Complete? |
|---|---|---|---|
| All Passed | ✅ | Proceed with commit | Yes |
| With Warnings | ⚠️ | Review warnings, commit if acceptable | Yes (with caution) |
| Required Failed | ❌ | Fix issues before marking complete | No - Blocked |
Minimum Evidence Checklist (Always Required):
-
file_exists- Implementation file at expected path -
tests_defined- At least 1 test method exists -
plan_updated- Task has [x] in PILOT plan -
log_entry- Entry in session log with timestamp
Quality Checks
Evidence Checks (Required)
| Check | Description | Pass Criteria |
|---|---|---|
file_exists | Implementation file created | File exists at expected path |
tests_defined | Tests implemented | ≥1 test method found |
plan_updated | PILOT plan checkbox | [x] marked in plan |
log_entry | Session log entry | Entry exists with timestamp |
Quality Checks (Recommended)
| Check | Description | Pass Criteria |
|---|---|---|
naming_convention | Code follows standards | Matches patterns |
coverage | Test coverage | ≥80% for new code |
lint_clean | No linting errors | 0 errors |
type_check | TypeScript/mypy clean | 0 errors |
Traceability Checks (Audit)
| Check | Description | Pass Criteria |
|---|---|---|
git_commit | Changes committed | Commit exists |
commit_format | Commit message format | [Track X.Y.Z] prefix |
agent_logged | Agent invocation logged | Entry in context DB |
Workflow
┌─────────────────────────────────────────────────────────────────┐
│ QUALITY GATE WORKFLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ INPUT │
│ ┌──────────┐ │
│ │ Task ID │ │
│ │ or Track │ │
│ └────┬─────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ RUN VERIFICATION CHECKS │ │
│ ├──────────┬──────────┬──────────┬──────────┬─────────┤ │
│ │ File │ Tests │ Plan │ Log │ Git │ │
│ │ Exists │ Defined │ Updated │ Entry │ Commit │ │
│ └────┬─────┴────┬─────┴────┬─────┴────┬─────┴────┬────┘ │
│ │ │ │ │ │ │
│ ▼ ▼ ▼ ▼ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ AGGREGATE RESULTS │ │
│ └────────────────────────┬─────────────────────────────┘ │
│ │ │
│ ┌───────────────────┼───────────────────┐ │
│ ▼ ▼ ▼ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ PASSED │ │ WARNED │ │ FAILED │ │
│ │ ✅ Allow │ │ ⚠️ Allow │ │ ❌ Block │ │
│ └──────────┘ └──────────┘ └──────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
Output Format
QUALITY GATE: E.1.1 - Full signup → payment → activation E2E test
═══════════════════════════════════════════════════════════════
Verification Checks
───────────────────────────────────────────────────────────────
✅ PASS File Exists
└─ tests/e2e/test_signup_activation_flow.py (450 lines)
✅ PASS Tests Defined
└─ 10 test methods in 3 classes
✅ PASS Test Naming Convention
└─ All tests follow test_* pattern
✅ PASS PILOT Plan Updated
└─ [x] E.1.1 marked complete in plan
✅ PASS Session Log Entry
└─ Entry found: 2026-01-01T23:00:00Z
⚠️ WARN Git Commit
└─ Not yet committed (pending)
Summary
───────────────────────────────────────────────────────────────
Passed: 6/7 checks
Warned: 1/7 checks
Failed: 0/7 checks
RESULT: ✅ QUALITY GATE PASSED (with warnings)
Recommendations
───────────────────────────────────────────────────────────────
1. Commit changes: git add tests/e2e/ && git commit -m "[Track E.1.1] ..."
Decision Logic
def evaluate_quality_gate(checks: List[Check], strict: bool = False) -> QualityGateResult:
"""Evaluate quality gate based on check results."""
passed = [c for c in checks if c.status == "PASS"]
warned = [c for c in checks if c.status == "WARN"]
failed = [c for c in checks if c.status == "FAIL"]
# Required checks must pass
required_failed = [c for c in failed if c.category == "required"]
if required_failed:
return QualityGateResult(status="FAILED", blocker=True)
# Strict mode fails on warnings
if strict and warned:
return QualityGateResult(status="FAILED", blocker=True)
# Warnings allowed in normal mode
if warned:
return QualityGateResult(status="PASSED_WITH_WARNINGS", blocker=False)
return QualityGateResult(status="PASSED", blocker=False)
Integration
With task-completion Hook
# hooks/task-completion.py
def on_task_complete(task_id: str) -> bool:
"""Block task completion if quality gate fails."""
result = quality_gate_enforcer.run(task_id=task_id)
if result.blocker:
print(f"❌ Quality gate failed: {result.failures}")
return False # Block completion
return True # Allow completion
With /pilot Command
# Verify before marking complete
/pilot --verify E.1.1
# Automatically runs quality gate
# Only marks complete if gate passes
Exit Codes
| Code | Meaning |
|---|---|
| 0 | All checks passed |
| 1 | Checks passed with warnings |
| 2 | Required checks failed |
| 3 | Invalid task/track specified |
Success Output
When all quality gates pass, the agent outputs:
✅ QUALITY GATE COMPLETE: quality-gate-enforcer
Verified:
- [x] All required checks passed (file_exists, tests_defined, plan_updated, log_entry)
- [x] Quality metrics within acceptable thresholds
- [x] Evidence documented and traceable
- [x] Task ready for completion approval
Result: PASSED
Exit Code: 0
Completion Checklist
Before marking quality gate verification complete, verify:
- All required evidence checks executed (file_exists, tests_defined, plan_updated, log_entry)
- Quality checks completed (naming_convention, coverage, lint_clean, type_check)
- Traceability checks validated (git_commit, commit_format, agent_logged)
- Verification report generated with detailed findings
- Task completion decision made (PASSED/WARNED/FAILED)
- Results documented in session log
Failure Indicators
This agent has FAILED if:
- ❌ Required evidence checks are missing or incomplete
- ❌ Cannot access task files or PILOT plan for verification
- ❌ Quality gate decision logic produces inconsistent results
- ❌ Verification report missing critical information
- ❌ Exit code does not match verification outcome
When NOT to Use
Do NOT use this agent when:
- Task is exploratory or research-based (no concrete deliverables to verify)
- Work is draft/prototype phase (quality gates not yet applicable)
- Task involves only documentation updates without code changes
- You need general QA review (use
testing-specialistorrust-qa-specialistinstead) - Task has already been verified and marked complete (avoid redundant checks)
Use alternative agents:
testing-specialist- For general QA and test strategyrust-qa-specialist- For Rust-specific code quality reviewaudit-trail-manager- For compliance and audit loggingsecurity-specialist- For security-focused reviews
Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Running gates before implementation complete | Premature verification, high failure rate | Only verify when task claims completion |
| Skipping evidence validation | False positives, incomplete verification | Always run all required evidence checks |
| Ignoring warnings in strict=false mode | Quality degradation over time | Address warnings promptly, use strict mode for critical tasks |
| Manual gate checking without agent | Inconsistent verification, human error | Always use agent for repeatable verification |
| Not documenting verification results | Lost audit trail, compliance gaps | Generate and archive verification reports |
Principles
This agent embodies:
- #3 Separation of Concerns - Dedicated quality gate verification, separated from implementation
- #5 Eliminate Ambiguity - Clear pass/fail criteria with evidence requirements
- #6 Clear, Understandable, Explainable - Transparent verification logic and detailed reports
- #8 No Assumptions - Evidence-based verification, no trust without proof
- #11 Accountability - Traceable decisions with documented justification
Full Standard: CODITECT-STANDARD-AUTOMATION.md
Related Components
- Command: /quality-gate
- Agent: audit-trail-manager
- Skill: task-accountability
- Hook: task-completion
Agent Version: 1.0.0 Created: 2026-01-02 Author: CODITECT Process Refinement
Core Responsibilities
- Analyze and assess testing-qa requirements within the Testing & QA domain
- Provide expert guidance on quality gate enforcer best practices and standards
- Generate actionable recommendations with implementation specifics
- Validate outputs against CODITECT quality standards and governance requirements
- Integrate findings with existing project plans and track-based task management
Invocation Examples
Direct Agent Call
Task(subagent_type="quality-gate-enforcer",
description="Brief task description",
prompt="Detailed instructions for the agent")
Via CODITECT Command
/agent quality-gate-enforcer "Your task description here"
Via MoE Routing
/which Autonomous agent for enforcing quality gates on task complet