# Codi QA Patterns
## When to Use This Skill

Use this skill when defining quality gates, automated QA checks, or regression test suites in your codebase.
## How to Use This Skill
- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
## Level 1: Quick Reference (Under 500 tokens)

### Quality Gate Criteria

```typescript
interface QualityGate {
  name: string;
  checks: Check[];
  threshold: 'all' | 'majority';
}

interface Check {
  name: string;
  type: 'coverage' | 'lint' | 'security' | 'performance';
  threshold: number;
  blocking: boolean;
}

const defaultGate: QualityGate = {
  name: 'Production Release',
  threshold: 'all',
  checks: [
    { name: 'Code Coverage', type: 'coverage', threshold: 80, blocking: true },
    { name: 'Lint Errors', type: 'lint', threshold: 0, blocking: true },
    { name: 'Security Issues', type: 'security', threshold: 0, blocking: true },
    { name: 'Performance Score', type: 'performance', threshold: 90, blocking: false },
  ],
};
```
### Quick Validation

```bash
# Run all quality checks
npm run lint && npm run test:coverage && npm run security:check
```
## Level 2: Implementation Details (Under 2000 tokens)

### Automated Quality Check

```typescript
interface CheckResult {
  check: Check;
  passed: boolean;
  value: number;
}

interface QualityReport {
  passed: boolean;
  checks: CheckResult[];
  blockers: string[];
  warnings: string[];
}

async function runQualityGate(gate: QualityGate): Promise<QualityReport> {
  const results: CheckResult[] = [];
  for (const check of gate.checks) {
    const result = await executeCheck(check);
    results.push(result);
  }

  const blockers = results
    .filter(r => !r.passed && r.check.blocking)
    .map(r => r.check.name);
  const warnings = results
    .filter(r => !r.passed && !r.check.blocking)
    .map(r => r.check.name);

  return {
    passed: blockers.length === 0,
    checks: results,
    blockers,
    warnings,
  };
}

async function executeCheck(check: Check): Promise<CheckResult> {
  switch (check.type) {
    case 'coverage': {
      const coverage = await getCoveragePercentage();
      return { check, passed: coverage >= check.threshold, value: coverage };
    }
    case 'lint': {
      const lintErrors = await getLintErrorCount();
      return { check, passed: lintErrors <= check.threshold, value: lintErrors };
    }
    // ... other check types
    default:
      throw new Error(`Unknown check type: ${check.type}`);
  }
}
```
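The gate logic above can be exercised end to end with stubbed measurements. This is a self-contained sketch, not the skill's canonical implementation: the `measured` values are hypothetical stand-ins for real coverage/lint/audit tooling, and the pass rule is simplified to "lower is better for counts, higher is better for scores".

```typescript
// Minimal self-contained sketch of the quality-gate flow.
// The `measured` values are hypothetical tool outputs, not real data.
interface Check {
  name: string;
  type: 'coverage' | 'lint' | 'security' | 'performance';
  threshold: number;
  blocking: boolean;
}

interface CheckResult { check: Check; passed: boolean; value: number; }

interface QualityReport {
  passed: boolean;
  checks: CheckResult[];
  blockers: string[];
  warnings: string[];
}

// Hypothetical measured values, as if reported by coverage/lint/audit tools.
const measured: Record<Check['type'], number> = {
  coverage: 84, lint: 0, security: 0, performance: 72,
};

function runGate(checks: Check[]): QualityReport {
  const results = checks.map(check => {
    const value = measured[check.type];
    // Count-style checks (lint, security): lower is better.
    // Score-style checks (coverage, performance): higher is better.
    const lowerIsBetter = check.type === 'lint' || check.type === 'security';
    const passed = lowerIsBetter ? value <= check.threshold : value >= check.threshold;
    return { check, passed, value };
  });
  return {
    // Only blocking failures fail the gate; non-blocking ones become warnings.
    passed: results.every(r => r.passed || !r.check.blocking),
    checks: results,
    blockers: results.filter(r => !r.passed && r.check.blocking).map(r => r.check.name),
    warnings: results.filter(r => !r.passed && !r.check.blocking).map(r => r.check.name),
  };
}

const report = runGate([
  { name: 'Code Coverage', type: 'coverage', threshold: 80, blocking: true },
  { name: 'Lint Errors', type: 'lint', threshold: 0, blocking: true },
  { name: 'Security Issues', type: 'security', threshold: 0, blocking: true },
  { name: 'Performance Score', type: 'performance', threshold: 90, blocking: false },
]);
```

With these stub values the performance score misses its threshold, but because that check is non-blocking the gate still passes with one warning.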
### CI Integration

```yaml
# .github/workflows/quality-gate.yml
name: Quality Gate

on: [pull_request]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Tests with Coverage
        run: npm run test:coverage
      - name: Upload Coverage
        uses: codecov/codecov-action@v3
        with:
          fail_ci_if_error: true
      - name: Run Lint
        run: npm run lint
      - name: Security Scan
        run: npm audit --production
```
## Level 3: Complete Reference (Full tokens)

### Regression Test Suite

```typescript
interface RegressionTest {
  id: string;
  category: 'smoke' | 'critical' | 'full';
  priority: number;
  estimatedDuration: number; // seconds
}

interface TestResults {
  total: number;
  passed: number;
  failed: number;
  duration: number;
}

class RegressionRunner {
  private tests: RegressionTest[] = [];

  async runSmoke(): Promise<TestResults> {
    const smokeTests = this.tests.filter(t => t.category === 'smoke');
    return this.execute(smokeTests);
  }

  async runCritical(): Promise<TestResults> {
    const criticalTests = this.tests
      .filter(t => ['smoke', 'critical'].includes(t.category))
      .sort((a, b) => a.priority - b.priority);
    return this.execute(criticalTests);
  }

  async runFull(): Promise<TestResults> {
    return this.execute(this.tests);
  }

  private async execute(tests: RegressionTest[]): Promise<TestResults> {
    const results = await Promise.all(tests.map(t => this.runTest(t)));
    return {
      total: tests.length,
      passed: results.filter(r => r.passed).length,
      failed: results.filter(r => !r.passed).length,
      duration: results.reduce((sum, r) => sum + r.duration, 0),
    };
  }

  // Executes a single test; the implementation depends on the test framework.
  private async runTest(test: RegressionTest): Promise<{ passed: boolean; duration: number }> {
    throw new Error('runTest not implemented');
  }
}
```
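The category-selection logic above can be illustrated in isolation. This sketch uses a hypothetical three-test suite to show what each run tier would select; the test IDs and durations are made up for illustration.

```typescript
// Self-contained sketch of selecting tests by category, mirroring the
// RegressionRunner tiers. Test data and durations are hypothetical.
interface RegressionTest {
  id: string;
  category: 'smoke' | 'critical' | 'full';
  priority: number;
  estimatedDuration: number; // seconds
}

const suite: RegressionTest[] = [
  { id: 'login-loads', category: 'smoke', priority: 1, estimatedDuration: 5 },
  { id: 'checkout-flow', category: 'critical', priority: 2, estimatedDuration: 30 },
  { id: 'report-export', category: 'full', priority: 3, estimatedDuration: 120 },
];

// Smoke run: only the fastest sanity checks.
const smoke = suite.filter(t => t.category === 'smoke');

// Critical run: smoke + critical, highest priority (lowest number) first.
const critical = suite
  .filter(t => ['smoke', 'critical'].includes(t.category))
  .sort((a, b) => a.priority - b.priority);

// Estimated wall time if the critical tier runs sequentially.
const criticalBudget = critical.reduce((sum, t) => sum + t.estimatedDuration, 0);
```

Here the smoke tier selects one test, the critical tier selects two (smoke tests are included, since they are the cheapest regressions), and the full tier would run everything.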
**Best Practices:**

- Define clear quality thresholds
- Automate all quality checks
- Block merges on critical failures
- Track quality metrics over time
- Run smoke tests on every commit
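The "track quality metrics over time" practice can be sketched as dated snapshots plus a trend calculation. The in-memory `history` array and its values are hypothetical; a real setup would persist snapshots to a database or metrics store.

```typescript
// Sketch of tracking quality metrics over time: keep dated snapshots and
// compare the newest against the oldest. Values here are hypothetical.
interface QualitySnapshot {
  date: string;      // ISO date
  coverage: number;  // percent
  lintErrors: number;
}

const history: QualitySnapshot[] = [
  { date: '2026-01-01', coverage: 78, lintErrors: 4 },
  { date: '2026-01-02', coverage: 81, lintErrors: 1 },
  { date: '2026-01-03', coverage: 83, lintErrors: 0 },
];

// Coverage delta between the newest and oldest snapshot.
function coverageTrend(snapshots: QualitySnapshot[]): number {
  const sorted = [...snapshots].sort((a, b) => a.date.localeCompare(b.date));
  return sorted[sorted.length - 1].coverage - sorted[0].coverage;
}

const trend = coverageTrend(history); // positive means coverage is improving
```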
## Success Output

When successful, this skill MUST output:

```text
✅ SKILL COMPLETE: codi-qa-patterns

Completed:
- [x] Quality gate defined with {count} checks
- [x] All checks executed: {passed}/{total}
- [x] Blockers identified: {blocker_count}
- [x] Warnings identified: {warning_count}
- [x] Overall gate status: {PASSED|FAILED}

Outputs:
- Gate Name: {gate_name}
- Total Checks: {total}
- Passed: {passed_count}
- Failed: {failed_count}
- Blockers: {blocker_list}
- Warnings: {warning_list}
- Quality Report: {report_path}
```
## Completion Checklist

Before marking this skill as complete, verify:
- Quality gate defined with clear thresholds (coverage ≥80%, lint errors = 0, security issues = 0)
- All check types executed (coverage, lint, security, performance)
- Each check result includes: passed boolean, actual value, threshold, blocking status
- Blocking failures identified and reported separately
- Non-blocking warnings documented
- Overall gate pass/fail status determined correctly
- Quality report generated with detailed breakdown
- CI integration configured (GitHub Actions workflow or equivalent)
## Failure Indicators

This skill has FAILED if:
- ❌ Quality gate threshold undefined or missing
- ❌ Any check type fails to execute
- ❌ Check result missing passed/value/threshold fields
- ❌ Blocking check failure not preventing merge/deployment
- ❌ Coverage percentage calculation returns NaN or invalid value
- ❌ Lint error count incorrect (false positives/negatives)
- ❌ Security scan skipped or incomplete
- ❌ Performance threshold ignored when exceeded
- ❌ Quality report not generated or empty
## When NOT to Use

Do NOT use this skill when:
- Prototype or proof-of-concept code (premature quality gates slow iteration)
- Documentation-only changes (no code quality checks needed)
- Emergency hotfix requiring immediate deployment (use post-deployment review)
- Experimental feature branch not intended for merge
- Third-party dependency updates (quality is external concern)
- Generated code or migrations (different quality standards apply)
- Local development builds (developer discretion on quality)
Use alternatives:
- Manual review: For documentation and non-code changes
- Post-deployment QA: For emergency hotfixes
- Specialized checks: For migrations (schema validation) or generated code
- Developer discretion: For local/experimental work
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Zero-error policies on all checks | Blocks all progress, ignored | Make performance/style non-blocking warnings |
| No differentiation between blockers/warnings | Everything blocks or nothing blocks | Set blocking: true only for critical (security, lint errors, coverage) |
| Fixed thresholds across all projects | Unrealistic for legacy code | Allow project-specific threshold configuration |
| Running all checks sequentially | Slow CI pipeline | Execute independent checks in parallel |
| Ignoring flaky tests | Unreliable quality signal | Track flake rate, quarantine flaky tests |
| No quality trend tracking | Can't measure improvement | Store quality metrics over time in database |
| Bypassing gates for "urgent" work | Degrades code quality | Enforce gates, use feature flags for urgent deploys |
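The "run checks in parallel" fix from the table can be sketched with `Promise.all`: launch every independent check at once and await them together, so wall time is the slowest check rather than the sum. The check names and simulated delays below are hypothetical.

```typescript
// Sketch of executing independent checks in parallel. The check functions
// are simulated stand-ins (hypothetical names and delays), not real tooling.
type CheckFn = () => Promise<{ name: string; passed: boolean }>;

const simulate = (name: string, passed: boolean, ms: number): CheckFn =>
  () => new Promise(resolve => setTimeout(() => resolve({ name, passed }), ms));

const checks: CheckFn[] = [
  simulate('coverage', true, 30),
  simulate('lint', true, 20),
  simulate('security', false, 10),
];

async function runParallel(fns: CheckFn[]) {
  // All checks start immediately; total time is ~max(30, 20, 10) ms,
  // not the 60 ms a sequential loop would take.
  const results = await Promise.all(fns.map(fn => fn()));
  return {
    passed: results.every(r => r.passed),
    failures: results.filter(r => !r.passed).map(r => r.name),
  };
}
```

Note that `Promise.all` rejects on the first thrown error; if each check should report its own failure independently, `Promise.allSettled` is the safer choice.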
## Principles

This skill embodies the following CODITECT principles:
- #5 Eliminate Ambiguity - Explicit pass/fail criteria with numeric thresholds
- #6 Clear, Understandable, Explainable - Detailed quality reports show exactly what failed
- #8 No Assumptions - All checks execute, no skipping based on assumptions
- Trust & Transparency - Quality metrics fully visible to all stakeholders
- Automation First - All quality checks automated in CI pipeline
- Separation of Concerns - Blockers (must fix) vs. warnings (should fix) clearly distinguished
Version: 1.1.0 | Created: 2025-12-22 | Updated: 2026-01-04 | Author: CODITECT Team