Agent Skills Framework Extension
QA Validation Patterns Skill
When to Use This Skill
Use this skill when implementing QA validation patterns in your codebase.
How to Use This Skill
- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
This skill provides automated compliance checks, test coverage gates, quality threshold enforcement, and validation automation for CI/CD pipelines.
Core Capabilities
- Quality Gates - Automated pass/fail criteria enforcement
- Coverage Gates - Code coverage thresholds, branch coverage
- Compliance Validation - Standards compliance, policy enforcement
- Automated Checks - Linting, formatting, static analysis
- CI/CD Integration - Pipeline integration, automated blocking
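The gate definitions used throughout this skill can also be externalized instead of hard-coded; a minimal sketch of such a config file (the filename and key names are hypothetical, not a required layout):

```yaml
# quality-gates.yml (hypothetical config file, loaded into QualityGate objects)
gates:
  - name: code_coverage
    threshold: 80.0
    operator: ">="
    severity: blocking
  - name: complexity
    threshold: 15.0
    operator: "<="
    severity: warning
```

Externalizing thresholds lets teams adjust gates per environment without touching pipeline code.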
Quality Gate Framework
```python
# scripts/quality_gates.py
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List, Optional


class GateStatus(Enum):
    """Quality gate status."""
    PASSED = "passed"
    FAILED = "failed"
    WARNING = "warning"
    SKIPPED = "skipped"


@dataclass
class QualityGate:
    """Single quality gate definition."""
    name: str
    threshold: float
    operator: str  # '>=', '<=', '==', '!=', '>', '<'
    severity: str  # 'blocking', 'warning', 'info'
    enabled: bool = True


@dataclass
class GateResult:
    """Result of quality gate evaluation."""
    gate: QualityGate
    actual_value: float
    status: GateStatus
    message: str


class QualityGateEnforcer:
    """Enforce quality gates in a CI/CD pipeline."""

    DEFAULT_GATES = [
        QualityGate('code_coverage', 80.0, '>=', 'blocking'),
        QualityGate('branch_coverage', 75.0, '>=', 'blocking'),
        QualityGate('complexity', 15.0, '<=', 'warning'),
        QualityGate('duplication', 5.0, '<=', 'warning'),
        QualityGate('security_hotspots', 0.0, '==', 'blocking'),
        QualityGate('critical_violations', 0.0, '==', 'blocking'),
        QualityGate('tech_debt_ratio', 5.0, '<=', 'warning'),
        QualityGate('maintainability_rating', 3.0, '<=', 'blocking'),
    ]

    def __init__(self, custom_gates: Optional[List[QualityGate]] = None):
        self.gates = custom_gates or self.DEFAULT_GATES

    def evaluate(self, metrics: Dict[str, float]) -> List[GateResult]:
        """Evaluate all quality gates against provided metrics."""
        results = []
        for gate in self.gates:
            if not gate.enabled:
                results.append(GateResult(
                    gate=gate,
                    actual_value=0.0,
                    status=GateStatus.SKIPPED,
                    message=f'Gate {gate.name} is disabled'
                ))
                continue
            # Missing metrics default to 0.0, which fails '>=' gates loudly
            # rather than passing silently.
            actual = metrics.get(gate.name, 0.0)
            passed = self._evaluate_condition(actual, gate.threshold, gate.operator)
            if passed:
                status = GateStatus.PASSED
                message = f'{gate.name} passed: {actual} {gate.operator} {gate.threshold}'
            else:
                status = GateStatus.FAILED if gate.severity == 'blocking' else GateStatus.WARNING
                message = f'{gate.name} failed: {actual} {gate.operator} {gate.threshold}'
            results.append(GateResult(
                gate=gate,
                actual_value=actual,
                status=status,
                message=message
            ))
        return results

    def _evaluate_condition(self, actual: float, threshold: float, operator: str) -> bool:
        """Evaluate gate condition."""
        ops = {
            '>=': lambda a, t: a >= t,
            '<=': lambda a, t: a <= t,
            '>': lambda a, t: a > t,
            '<': lambda a, t: a < t,
            '==': lambda a, t: a == t,
            '!=': lambda a, t: a != t
        }
        if operator not in ops:
            raise ValueError(f'Unknown operator: {operator}')
        return ops[operator](actual, threshold)

    def should_block_deployment(self, results: List[GateResult]) -> bool:
        """Check if any blocking gate failed."""
        return any(
            r.status == GateStatus.FAILED and r.gate.severity == 'blocking'
            for r in results
        )

    def generate_report(self, results: List[GateResult]) -> str:
        """Generate quality gate report."""
        report = "# Quality Gate Results\n\n"
        passed = [r for r in results if r.status == GateStatus.PASSED]
        failed = [r for r in results if r.status == GateStatus.FAILED]
        warnings = [r for r in results if r.status == GateStatus.WARNING]
        report += "**Summary:**\n"
        report += f"- Passed: {len(passed)}\n"
        report += f"- Failed: {len(failed)}\n"
        report += f"- Warnings: {len(warnings)}\n\n"
        if failed:
            report += "## ❌ Failed Gates\n\n"
            for r in failed:
                report += f"- **{r.gate.name}**: {r.message}\n"
            report += "\n"
        if warnings:
            report += "## ⚠️ Warning Gates\n\n"
            for r in warnings:
                report += f"- **{r.gate.name}**: {r.message}\n"
            report += "\n"
        return report


# Usage
enforcer = QualityGateEnforcer()
metrics = {
    'code_coverage': 82.5,
    'branch_coverage': 73.2,
    'complexity': 12.4,
    'duplication': 3.8,
    'security_hotspots': 0,
    'critical_violations': 0,
    'tech_debt_ratio': 4.2,
    'maintainability_rating': 2.8
}
results = enforcer.evaluate(metrics)
should_block = enforcer.should_block_deployment(results)
print(f"Should block deployment: {should_block}")
print("\n" + enforcer.generate_report(results))
```
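The CI workflow in this skill invokes `scripts/quality_gates.py` with command-line flags and expects a `gates.json` containing `should_block` and `report` keys. A self-contained sketch of a compatible entry point (the flag names and JSON keys are assumptions matching that workflow; gates are inlined here for brevity, where a real script would delegate to `QualityGateEnforcer`):

```python
# Hypothetical standalone entry point producing gates.json (sketch).
import argparse
import json


def evaluate_cli(argv=None):
    parser = argparse.ArgumentParser(description="Evaluate quality gates")
    parser.add_argument("--coverage", type=float, default=0.0)
    parser.add_argument("--complexity", type=float, default=0.0)
    parser.add_argument("--security", default=None, help="path to scanner JSON (optional)")
    parser.add_argument("--output", default="gates.json")
    args = parser.parse_args(argv)

    # Two inline gates; swap in QualityGateEnforcer.evaluate() for the full set.
    failures, warnings = [], []
    if args.coverage < 80.0:
        failures.append(f"code_coverage failed: {args.coverage} >= 80.0")  # blocking
    if args.complexity > 15.0:
        warnings.append(f"complexity failed: {args.complexity} <= 15.0")   # warning only

    report = "# Quality Gate Results\n" + "\n".join(failures + warnings)
    payload = {"should_block": bool(failures), "report": report}
    with open(args.output, "w") as f:
        json.dump(payload, f, indent=2)
    return payload
```

Calling `evaluate_cli(["--coverage", "82.5", "--output", "gates.json"])` writes a file the later pipeline steps can read; only blocking failures flip `should_block`.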
CI/CD Pipeline Integration
```yaml
# .github/workflows/quality-gates.yml
name: Quality Gates

on:
  pull_request:
  push:
    branches: [main, develop]

jobs:
  quality-gates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run tests with coverage
        run: |
          pytest --cov=src --cov-report=json --cov-report=term
          echo "COVERAGE=$(python -c 'import json; print(json.load(open("coverage.json"))["totals"]["percent_covered"])')" >> "$GITHUB_ENV"

      - name: Run complexity analysis
        run: |
          radon cc src/ -a -j > complexity.json
          # radon's JSON maps each file to a list of blocks; average block complexity
          echo "COMPLEXITY=$(python -c 'import json; blocks=[b for v in json.load(open("complexity.json")).values() for b in v]; print(sum(b["complexity"] for b in blocks) / max(len(blocks), 1))')" >> "$GITHUB_ENV"

      - name: Run security scan
        uses: snyk/actions/python@master
        continue-on-error: true
        with:
          args: --json-file-output=snyk.json

      - name: Evaluate quality gates
        id: gates
        run: |
          python scripts/quality_gates.py \
            --coverage "${{ env.COVERAGE }}" \
            --complexity "${{ env.COMPLEXITY }}" \
            --security snyk.json \
            --output gates.json
          # Expose the verdict as a step output so later steps can gate on it
          echo "should_block=$(python -c 'import json; print(str(json.load(open("gates.json"))["should_block"]).lower())')" >> "$GITHUB_OUTPUT"

      - name: Post quality gate results
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const results = JSON.parse(fs.readFileSync('gates.json'));
            await github.rest.issues.createComment({
              ...context.repo,
              issue_number: context.payload.pull_request.number,
              body: results.report
            });
            if (results.should_block) {
              core.setFailed('Quality gates failed - blocking deployment');
            }

      - name: Block merge if gates fail
        if: steps.gates.outputs.should_block == 'true'
        run: exit 1
```
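The comment-posting step reads `results.report` and `results.should_block` from `gates.json`, so the file is assumed to look roughly like this (shape inferred from those reads; not a documented schema):

```json
{
  "should_block": false,
  "report": "# Quality Gate Results\n\n**Summary:**\n- Passed: 8\n- Failed: 0\n- Warnings: 0\n"
}
```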
Coverage Gate Validator
```python
# scripts/coverage_validator.py
import json
from typing import Dict, List


class CoverageValidator:
    """Validate code coverage thresholds."""

    def __init__(
        self,
        line_coverage: float = 80.0,
        branch_coverage: float = 75.0,
        function_coverage: float = 85.0
    ):
        self.thresholds = {
            'line': line_coverage,
            'branch': branch_coverage,
            'function': function_coverage
        }

    def validate(self, coverage_file: str) -> Dict:
        """Validate coverage report against thresholds."""
        with open(coverage_file, 'r') as f:
            coverage_data = json.load(f)
        totals = coverage_data.get('totals', {})
        # Note: branch/function percentage keys vary by coverage tool;
        # coverage.py's JSON report only guarantees 'percent_covered', so
        # absent metrics default to 0 and surface as violations.
        results = {
            'passed': True,
            'coverage': {
                'line': totals.get('percent_covered', 0),
                'branch': totals.get('percent_covered_branches', 0),
                'function': totals.get('percent_covered_functions', 0)
            },
            'violations': []
        }
        for metric, threshold in self.thresholds.items():
            actual = results['coverage'][metric]
            if actual < threshold:
                results['passed'] = False
                results['violations'].append({
                    'metric': f'{metric}_coverage',
                    'threshold': threshold,
                    'actual': actual,
                    'shortfall': threshold - actual
                })
        return results

    def find_uncovered_files(self, coverage_file: str) -> List[Dict]:
        """Find files with line coverage below the threshold, worst first."""
        with open(coverage_file, 'r') as f:
            coverage_data = json.load(f)
        uncovered = []
        for file_path, file_data in coverage_data.get('files', {}).items():
            coverage = file_data.get('summary', {}).get('percent_covered', 0)
            if coverage < self.thresholds['line']:
                uncovered.append({
                    'file': file_path,
                    'coverage': coverage,
                    'threshold': self.thresholds['line'],
                    'gap': self.thresholds['line'] - coverage
                })
        return sorted(uncovered, key=lambda x: x['gap'], reverse=True)


# Usage
validator = CoverageValidator(line_coverage=80, branch_coverage=75)
results = validator.validate('coverage.json')
if not results['passed']:
    print("Coverage validation failed:")
    for violation in results['violations']:
        print(f"  {violation['metric']}: {violation['actual']:.1f}% < {violation['threshold']:.1f}%")
```
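The validator can be exercised without a real test run by fabricating a minimal report in the coverage.py JSON shape (`totals`/`files` keys as used above). A sketch with the line-threshold and worst-first-gap logic inlined, so it stands alone; the file names and numbers are made up for illustration:

```python
# Sketch: synthetic coverage.json plus inlined line-coverage threshold check.
import json
from pathlib import Path

synthetic = {
    "totals": {"percent_covered": 72.0},
    "files": {
        "src/api.py": {"summary": {"percent_covered": 55.0}},
        "src/core.py": {"summary": {"percent_covered": 91.0}},
        "src/utils.py": {"summary": {"percent_covered": 70.0}},
    },
}
Path("coverage.json").write_text(json.dumps(synthetic))

data = json.loads(Path("coverage.json").read_text())
threshold = 80.0

# Files below the line threshold, sorted by gap (worst first),
# mirroring find_uncovered_files() above.
uncovered = sorted(
    (
        {"file": path,
         "coverage": f["summary"]["percent_covered"],
         "gap": threshold - f["summary"]["percent_covered"]}
        for path, f in data["files"].items()
        if f["summary"]["percent_covered"] < threshold
    ),
    key=lambda x: x["gap"],
    reverse=True,
)
passed = data["totals"]["percent_covered"] >= threshold
```

Here overall line coverage (72%) misses the 80% gate, and `uncovered` lists `src/api.py` first because it has the largest gap.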
Usage Examples
Enforce Quality Gates
Apply qa-validation-patterns skill to evaluate all quality gates and block deployment if any fail
Validate Coverage Thresholds
Apply qa-validation-patterns skill to check coverage meets 80% line, 75% branch, 85% function thresholds
CI/CD Integration
Apply qa-validation-patterns skill to integrate quality gates into GitHub Actions pipeline
Integration Points
- qa-review-methodology - Quality scoring and assessment
- code-review-patterns - Review automation
- comprehensive-review-patterns - Multi-dimensional validation
Success Output
When successful, this skill MUST output:
✅ SKILL COMPLETE: qa-validation-patterns
Completed:
- [x] Quality gates configured and evaluated
- [x] Coverage thresholds validated (line, branch, function)
- [x] CI/CD pipeline integration verified
- [x] Quality gate report generated
Outputs:
- gates.json (quality gate results)
- coverage-validation-report.md
- .github/workflows/quality-gates.yml
Metrics:
- Gates evaluated: X/X passed
- Coverage: line X%, branch X%, function X%
- Violations: X blocking, X warnings
Completion Checklist
Before marking this skill as complete, verify:
- QualityGateEnforcer class instantiated with thresholds
- All quality gates evaluated against metrics
- should_block_deployment() returns correct boolean
- Coverage validator checks line/branch/function thresholds
- CI/CD workflow integrates quality gate script
- Quality gate report generated and posted
- Blocking gates configured to fail pipeline
- Metrics JSON output contains all expected fields
Failure Indicators
This skill has FAILED if:
- ❌ Quality gate evaluation throws unhandled exceptions
- ❌ Coverage metrics not found in coverage.json
- ❌ CI/CD pipeline does not block on gate failures
- ❌ Quality gate report missing critical information
- ❌ Gate thresholds not applied correctly (wrong operator logic)
- ❌ No gates configured or all gates skipped
- ❌ Coverage validation reports false positives/negatives
When NOT to Use
Do NOT use this skill when:
- No automated tests exist (use testing-strategy-patterns first)
- Coverage reports unavailable (configure coverage generation first)
- Simple project with no quality requirements (overkill)
- Single-developer hobby project (manual review sufficient)
- Prototyping phase (premature enforcement)
- CI/CD pipeline not set up (use ci-cd-patterns first)
Use alternatives:
- manual-code-review - For small changes without automation
- testing-specialist agent - For test strategy before gates
- code-quality-tools - For configuring linters/formatters first
Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Setting unrealistic thresholds (95%+) | Team bypasses gates, creates friction | Start at 70-80%, gradually increase |
| Blocking on warnings | Slows development velocity unnecessarily | Only block on critical/high severity |
| No coverage baseline | Gates fail on legacy code | Set coverage relative to changed lines |
| Single gate failure blocks all | One flaky gate blocks entire pipeline | Use severity levels (blocking vs warning) |
| No exemption process | Team disables gates entirely | Allow documented exemptions for edge cases |
| Applying gates retroactively | Breaks existing CI/CD jobs | Phase in gates gradually, warn first |
| Ignoring context | Same thresholds for POC and production | Adjust thresholds per environment/stage |
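The "coverage relative to changed lines" remedy in the table can be approximated by intersecting the diff's line numbers with coverage.py's executed/missing line sets. A minimal sketch (in practice `changed_lines` would come from parsing `git diff`; here it is hard-coded for illustration):

```python
# Sketch: diff coverage — share of changed executable lines that are covered.
def diff_coverage(changed_lines, executed, missing):
    """changed_lines: {file: set(line_nos)} from the diff;
    executed/missing: per-file line-number sets from coverage data."""
    relevant = covered = 0
    for path, lines in changed_lines.items():
        for line in lines:
            if line in executed.get(path, set()):
                relevant += 1
                covered += 1
            elif line in missing.get(path, set()):
                relevant += 1  # executable but untested
            # lines in neither set (comments, blanks) are ignored
    return 100.0 if relevant == 0 else 100.0 * covered / relevant


# Hard-coded example: 3 of 4 changed executable lines are covered -> 75%.
changed = {"src/api.py": {10, 11, 12, 13, 14}}
executed = {"src/api.py": {10, 11, 12}}
missing = {"src/api.py": {13}}
pct = diff_coverage(changed, executed, missing)
```

Gating on `pct` for the PR's changed files lets legacy files below the baseline pass untouched while still holding new code to the threshold.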
Principles
This skill embodies:
- #1 Search Before Create - Uses existing coverage/analysis tools, doesn't reinvent
- #5 Eliminate Ambiguity - Clear pass/fail criteria, no subjective judgments
- #6 Clear, Understandable, Explainable - Reports explain exactly which gates failed and why
- #8 No Assumptions - Explicitly validates metrics exist before evaluating
- Automation - Fully automated enforcement, no manual review steps
- Progressive Enhancement - Start with basic gates, add advanced as team matures
Standard: CODITECT-STANDARD-AUTOMATION.md
Version: 1.1.0 | Updated: 2026-01-04 | Quality Standard Applied