/api-compat - API Compatibility Analysis
Analyzes API compatibility by comparing current API surface against baseline to detect breaking changes in REST endpoints, GraphQL schemas, and protobuf definitions.
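A minimal sketch of the baseline comparison this command describes. The `{endpoint: set(required_params)}` shape is an illustrative assumption, not the command's actual data model; real REST/GraphQL/protobuf surfaces need richer diffing.

```python
def breaking_changes(baseline, current):
    """Compare two {endpoint: set(required_params)} maps and list breaking changes.
    The map shape is a hypothetical simplification for illustration."""
    issues = []
    for endpoint, required in baseline.items():
        if endpoint not in current:
            # an endpoint present in the baseline but gone now breaks callers
            issues.append(f"removed endpoint: {endpoint}")
            continue
        # parameters newly marked required also break existing callers
        added = current[endpoint] - required
        if added:
            issues.append(f"{endpoint}: new required params {sorted(added)}")
    return issues
```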
Analyze uncommitted or staged changes for breaking impact on currently deployed production code. Categorizes changes by risk level and identifies API surface violations, schema migrations, config changes, and behavioral shifts.
Compares canary instance metrics against baseline using statistical tests (Mann-Whitney U, Kolmogorov-Smirnov) to determine whether to advance, hold, or roll back a progressive deployment.
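A stdlib-only sketch of the Kolmogorov-Smirnov half of this comparison, using the large-sample critical value; the two-outcome decision rule (advance/rollback) is a simplification of the advance/hold/rollback policy described above.

```python
import math
from bisect import bisect_right

def ks_two_sample(baseline, canary, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test.
    Returns (D, reject): reject=True means the distributions likely differ."""
    a, b = sorted(baseline), sorted(canary)
    n, m = len(a), len(b)
    # D = sup over observed points of |F_a(x) - F_b(x)| (empirical CDF gap)
    d = 0.0
    for x in a + b:
        d = max(d, abs(bisect_right(a, x) / n - bisect_right(b, x) / m))
    # large-sample critical value: c(alpha) * sqrt((n+m)/(n*m)); c(0.05) ~= 1.358
    c_alpha = math.sqrt(-0.5 * math.log(alpha / 2))
    d_crit = c_alpha * math.sqrt((n + m) / (n * m))
    return d, d > d_crit

def canary_decision(baseline, canary, alpha=0.05):
    """Hypothetical decision rule: roll back on detected drift, else advance."""
    _, drift = ks_two_sample(baseline, canary, alpha)
    return "rollback" if drift else "advance"
```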
Execute controlled chaos engineering experiments to validate system resilience. Defines steady-state hypotheses, injects faults across network, resource, application, and infrastructure layers, monitors impact, and generates resilience reports.
Inventories all feature flags, identifies stale/abandoned flags past TTL, flags at 100% rollout needing cleanup, and generates cleanup recommendations with optional ticket creation.
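The staleness and cleanup rules above can be sketched as follows; the record fields (`name`, `created_at`, `rollout_pct`) and the 90-day TTL are assumptions for illustration.

```python
from datetime import datetime

def flag_report(flags, now, ttl_days=90):
    """flags: dicts with 'name', 'created_at' (datetime), 'rollout_pct' (0-100).
    Field names and the default TTL are hypothetical."""
    actions = []
    for f in flags:
        age_days = (now - f["created_at"]).days
        if f["rollout_pct"] == 100:
            # fully rolled out: the flag and its dead code path can go
            actions.append((f["name"], "at 100% rollout: remove flag and dead code path"))
        elif age_days > ttl_days:
            actions.append((f["name"], f"stale: {age_days}d old exceeds {ttl_days}d TTL"))
    return actions
```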
Generate structured, blameless incident postmortems. Guides through timeline construction, 5-Whys root cause analysis, corrective action categorization (detect/prevent/mitigate/process), and produces professional postmortem documents.
Generate comprehensive quality metrics report including DORA metrics (deploy frequency, lead time, MTTR, change failure rate), code quality KPIs, defect density, and quality scorecard with trends.
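A minimal sketch of computing the four DORA metrics named above from deployment and incident records; the record shapes are assumptions, not the report's actual schema.

```python
from statistics import mean

def dora_metrics(deploys, incidents, window_days=30):
    """deploys: dicts with 'committed_at', 'deployed_at' (datetime), 'failed' (bool);
    incidents: (start, resolved) datetime pairs. Field names are hypothetical."""
    hours = lambda delta: delta.total_seconds() / 3600
    return {
        # how often code reaches production over the reporting window
        "deploys_per_day": len(deploys) / window_days,
        # commit-to-production latency, averaged
        "lead_time_hours": mean(hours(d["deployed_at"] - d["committed_at"]) for d in deploys),
        # fraction of deploys that caused a failure
        "change_failure_rate": sum(d["failed"] for d in deploys) / len(deploys),
        # mean time to restore service after an incident
        "mttr_hours": mean(hours(end - start) for start, end in incidents),
    }
```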
Run quality checks to verify task completion with evidence, ensuring all requirements are met before marking tasks complete.
Analyzes code changes to determine which test suites to run, in what order, and flags flaky tests. Uses impact analysis to optimize test execution time while maintaining coverage.
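The selection-and-flagging behavior above can be sketched with a prefix-based impact map; real impact analysis uses dependency graphs and coverage data, so this mapping is a deliberate simplification.

```python
def select_tests(changed_files, impact_map, flaky):
    """impact_map: {source path prefix: [suite, ...]}; flaky: set of suite names.
    The prefix-based mapping is a hypothetical stand-in for real impact analysis."""
    selected, flagged = [], set()
    for path in changed_files:
        for prefix, suites in impact_map.items():
            if path.startswith(prefix):
                for suite in suites:
                    if suite not in selected:
                        selected.append(suite)  # preserves first-impact order
                    if suite in flaky:
                        flagged.add(suite)      # run, but quarantine the verdict
    return selected, flagged
```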
Aggregate all quality signals into a single GO/NO-GO release decision. Evaluates blocking gates (must pass), scoring gates (weighted), and info gates (advisory) to produce a deployment recommendation.
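The three gate classes above can be sketched as a single evaluation function; the gate names and the 0.8 passing threshold are illustrative assumptions.

```python
def evaluate_release(blocking, scoring, info=None, threshold=0.8):
    """blocking: {gate: bool} (must all pass); scoring: {gate: (score_0_to_1, weight)};
    info: advisory notes that never block. Threshold is a hypothetical default."""
    failed = sorted(gate for gate, ok in blocking.items() if not ok)
    if failed:
        # any blocking failure short-circuits to NO-GO
        return {"decision": "NO-GO", "failed_blocking": failed}
    total_weight = sum(w for _, w in scoring.values())
    score = sum(s * w for s, w in scoring.values()) / total_weight
    return {
        "decision": "GO" if score >= threshold else "NO-GO",
        "weighted_score": round(score, 3),
        "advisory": info or [],
    }
```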
Executes post-deployment smoke tests to verify critical paths, health endpoints, and essential functionality. Provides fast feedback on deployment success and can trigger automatic rollback on failure.
Systematic defect triage workflow. Classifies defects by severity (S1-S4), assigns priority using impact/urgency matrix, performs root cause taxonomy analysis, and generates structured triage reports.
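A sketch of the impact/urgency priority matrix described above. The specific cell assignments and the S1 floor rule are hypothetical; real teams tune both.

```python
# Hypothetical impact x urgency -> priority matrix; actual mappings vary by team.
PRIORITY_MATRIX = {
    ("high", "high"): "P1",
    ("high", "medium"): "P2", ("medium", "high"): "P2",
    ("high", "low"): "P3", ("medium", "medium"): "P3", ("low", "high"): "P3",
    ("medium", "low"): "P4", ("low", "medium"): "P4", ("low", "low"): "P4",
}

def triage(severity, impact, urgency):
    """severity: 'S1' (critical) .. 'S4' (minor). Returns the assigned priority."""
    priority = PRIORITY_MATRIX[(impact, urgency)]
    # floor rule (assumed): an S1 defect is never lower than P2
    if severity == "S1" and priority > "P2":  # string compare works: P1 < P2 < P3 < P4
        priority = "P2"
    return priority
```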
User acceptance testing workflow with stakeholder validation
Generate comprehensive documentation packages for institutional or program accreditation reviews and compliance
Status: Accepted
You are a Breaking Impact Analyst responsible for analyzing code changes for breaking impact on deployed production code, classifying risks, and generating migration plans. Your role ensures that changes reach production only with their breaking impact understood and a migration path in place.
You are a Chaos Engineering Specialist responsible for designing and executing controlled fault injection experiments to verify system resilience, planning GameDay exercises, and generating resilience reports.
Comprehensive code quality validation before merge/deploy
Generated ADR-161 QA Grading Framework | Total: 2,122 components
Purpose
You are a Defect Triage Specialist responsible for systematic defect classification, prioritization using severity and impact matrices, root cause taxonomy analysis, and QA escape point identification.
Coordinate peer review process for research papers including reviewer selection and management
Comprehensive workflow for ensuring task quality, evidence verification, and compliance with CODITECT standards.
Generate comprehensive quality metrics dashboard
Autonomous agent for enforcing quality gates on task completion, ensuring all requirements are met with evidence before tasks can be marked complete.
Full regression test suite execution with failure analysis
You are a Release Gate Evaluator responsible for aggregating all quality signals into a single GO/NO-GO release decision by evaluating blocking gates, calculating weighted scoring gates, and producing a final deployment recommendation.
Standardized workflow ensuring every task completion includes evidence collection, verification, and proper documentation.
TDD cycle: write test, implement feature, refactor