# Testing Documentation AI Agent Context

**CODITECT Testing Directory - AI Agent Context**

**Status:** Production | **Version:** 2.0.0 | **Last Updated:** December 22, 2025

## Essential Reading Order

**READ FIRST:**
- README.md - Framework overview and quick start
- TEST-CATEGORIES.md - 34 test categories reference
- TEST-AUTOMATION.md - CI/CD integration and automation
**For Specific Tasks:**
- Understanding test results: TEST-RESULTS-GUIDE.md
- Using testing components: TEST-COMPONENTS.md
- Writing unit tests: UNIT-TEST-GUIDE.md (to be created)
- Integration testing: INTEGRATION-TEST-GUIDE.md (to be created)
- E2E testing: E2E-TEST-GUIDE.md (to be created)
## Purpose
The internal/testing/ directory contains comprehensive documentation for the CODITECT testing framework, covering test automation, categories, components, and best practices for contributors and QA specialists.
**Framework Metrics (Production):**
- 34 test categories across core, infrastructure, and new tests
- 4,714+ individual tests with 100% pass rate
- ~17 second execution time for full test suite
- JSON + Markdown reporting for CI/CD integration
## Directory Structure

```
internal/testing/
├── CLAUDE.md                          # This AI context file
├── README.md                          # Framework overview
│
├── Core Documentation
│   ├── TEST-CATEGORIES.md             # 34 categories reference
│   ├── TEST-AUTOMATION.md             # CI/CD and automation
│   ├── TEST-COMPONENTS.md             # Agents, skills, commands
│   └── TEST-RESULTS-GUIDE.md          # Result interpretation
│
├── Testing Guides (Planned)
│   ├── UNIT-TEST-GUIDE.md             # pytest patterns (to be created)
│   ├── INTEGRATION-TEST-GUIDE.md      # Component interactions (to be created)
│   └── E2E-TEST-GUIDE.md              # Playwright/Cypress (to be created)
│
└── Analysis
    └── TESTING-CONSOLIDATION-ANALYSIS.md  # Gap analysis
```
## Quick Reference

### Running Tests

```bash
# Full test suite (from repo root)
python3 scripts/test-suite.py

# Specific category
python3 scripts/test-suite.py --category agents

# Multiple categories
python3 scripts/test-suite.py --category agents,commands,skills

# Verbose output
python3 scripts/test-suite.py --verbose

# Quick summary
python3 scripts/test-suite.py --quick
```
### Test Results Location

```bash
# JSON report
cat test-results/test-results.json

# Markdown summary
cat test-results/TEST-RESULTS.md

# Check pass rate
jq '.summary.pass_rate' test-results/test-results.json
```
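The `jq` query above assumes the report exposes a `summary.pass_rate` field; the same check can be sketched in Python. The report shape shown here is an assumption inferred from that `jq` path, not taken from the actual file:

```python
import json

def check_pass_rate(report: dict, threshold: float = 100.0) -> bool:
    """Return True when the reported pass rate meets the threshold."""
    return report["summary"]["pass_rate"] >= threshold

# Hypothetical report shape, inferred from the jq path above.
report = json.loads('{"summary": {"total": 4714, "passed": 4714, "pass_rate": 100.0}}')
print(check_pass_rate(report))  # True when the suite is fully green
```

In CI, the real report would be loaded with `json.load()` from `test-results/test-results.json` before applying the check.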
### CI/CD Integration

```yaml
# GitHub Actions example
- name: Run test suite
  run: python3 scripts/test-suite.py

- name: Upload results
  uses: actions/upload-artifact@v4
  with:
    name: test-results
    path: test-results/
```
## Test Categories Quick Reference

### Core Component Tests (14 categories)

| Category | Tests | Description |
|---|---|---|
| AGENTS | 107 | Agent definitions and metadata |
| COMMANDS | 105 | Slash command validation |
| SKILLS | 62 | Skill definitions and patterns |
| SCRIPTS | 150 | Python/shell script validation |
| HOOKS | 23 | Event-driven automation hooks |
| REGISTRY | 453 | Component registry integrity |
| XREF | 97 | Cross-reference validation |
| CONFIG | 300 | Configuration file validation |
| DOCS | 13 | Documentation structure |
| STRUCTURE | 10 | Directory organization |
| QUALITY | 274 | Content quality checks |
| SECURITY | 135 | Security best practices |
| CONSISTENCY | 245 | Cross-component consistency |
| ACTIVATION | 407 | Component activation workflow |
### Infrastructure Tests (6 categories)

| Category | Tests | Description |
|---|---|---|
| INTEGRATION | 810 | Component interactions |
| PERFORMANCE | 70 | Response time and efficiency |
| DEPENDENCIES | 655 | Import and module validation |
| SCHEMAS | 84 | JSON/YAML schema validation |
| DOC_COVERAGE | 288 | Documentation completeness |
| WORKFLOWS | 18 | CI/CD workflow validation |
### New Test Categories (14 categories)

| Category | Tests | Description |
|---|---|---|
| UNIT_TESTS | 18 | pytest unit test execution |
| IMPORTS | 130 | Python import validation |
| SYMLINKS | 29 | Symlink chain validation |
| GIT_INTEGRITY | 5 | Repository and submodule status |
| LINT | 5 | Code quality (ruff) |
| CLI | 82 | CLI argument validation |
| LINKS | 11 | Markdown link validation |
| PARSE | 101 | JSON/YAML parsing |
| DOCKER | 3 | Docker configuration |
| TYPES | 2 | Type checking (mypy) |
| E2E | 2 | End-to-end test setup |
| COVERAGE | 4 | Test coverage metrics |
| SMOKE | 13 | Critical path validation |
| API_CONTRACT | 3 | API schema validation |
**Total: 34 categories, 4,714+ tests**
## Testing Components Available

### Specialized Testing Agents (5 total)

- `codi-qa-specialist` - Quality assurance and test strategies
- `codi-test-engineer` - Test engineering and TDD
- `testing-specialist` - Quality gate enforcement
- `rust-qa-specialist` - Rust-specific QA
- `penetration-testing-agent` - Security testing
### Testing Skills (5 total)

- `e2e-testing` - Playwright/Cypress patterns
- `visual-regression` - Percy/Chromatic integration
- `contract-testing` - Pact consumer-driven contracts
- `load-testing` - k6, Artillery, Locust
- `security-audit` - OWASP Top 10 auditing
### Testing Commands (4 total)

- `/security-scan` - Comprehensive security scanning
- `/dependency-audit` - Vulnerability auditing
- `/perf-profile` - Performance profiling
- `/lint-docs` - Documentation quality checks
### Testing Hooks (2 total)

- `pre-commit-quality` - Quality checks before commits
- `pre-push-submodule-check` - Submodule validation
## Test Status Interpretation

| Status | Symbol | Meaning | CI Behavior |
|---|---|---|---|
| PASS | ✓ | Test passed successfully | Continue |
| FAIL | ✗ | Critical failure | Block release |
| WARN | ⚠ | Informational warning | Continue |
| SKIP | ○ | Not applicable | Continue |
**Exit Codes:**

- `0` - All tests passed (or warnings/skips only)
- `1` - One or more tests failed (blocks CI)
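A pipeline step can gate on that exit code directly. A minimal Python sketch of such a wrapper (the command path is taken from the examples above and should be adjusted to your checkout):

```python
import subprocess

def suite_passed(cmd=("python3", "scripts/test-suite.py")) -> bool:
    """Run the suite; True on exit code 0 (pass, or warnings/skips only)."""
    return subprocess.run(cmd).returncode == 0
```

A wrapper like this mirrors what CI already does: exit code `1` fails the job, exit code `0` lets it continue.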
## Common AI Agent Tasks

### Verify Test Suite Status

```bash
# Quick check
python3 scripts/test-suite.py --quick

# Expected output:
# Pass Rate: 100.0%
```
### Investigate Test Failures

```bash
# Run with verbose output
python3 scripts/test-suite.py --verbose

# Check a specific category
python3 scripts/test-suite.py --category agents --verbose

# Review the JSON report
jq '.failed_tests' test-results/test-results.json
```
### Add New Test Category

```python
# In scripts/test-suite.py:
def test_new_category(self):
    """Run new category tests."""
    print("\n🔍 NEW CATEGORY")
    print("=" * 60)

    # Add test results
    self.add_result(TestResult(
        name="New test: validation",
        category="new_category",
        subcategory="validation",
        status=TestStatus.PASS,
        message="Validation passed"
    ))
```
### Review Test Coverage

```bash
# Run tests with coverage
pytest --cov=. --cov-report=html tests/

# View the coverage report
open htmlcov/index.html
```
## Troubleshooting Quick Reference

### Test Suite Not Running

```bash
# Check Python version (3.10+ required)
python3 --version

# Ensure you are in the repo root
pwd  # Should be .../coditect-core

# Check that test-suite.py exists
ls -la scripts/test-suite.py
```
### pytest Not Found

```bash
# Activate the virtual environment
source .venv/bin/activate

# Install pytest
pip install pytest pytest-cov pytest-asyncio
```
### Permission Denied on Scripts

```bash
# Make scripts executable
chmod +x scripts/*.sh
```
### Import Errors in Tests

```bash
# Most scripts use stdlib only; for external deps:
pip install pyyaml
```
## Best Practices for AI Agents

### Before Making Code Changes

1. Run tests: `python3 scripts/test-suite.py`
2. Verify 100% pass rate - don't introduce failures
3. Review warnings - address any new warnings

### After Making Code Changes

1. Run the affected category: `python3 scripts/test-suite.py --category <category>`
2. Run the full suite: `python3 scripts/test-suite.py`
3. Check for new failures or warnings
4. Update test documentation if adding new patterns
### When Adding New Features

- Write tests first (TDD) if applicable
- Add test cases to relevant category
- Document test patterns in appropriate guide
- Ensure CI/CD integration works
## Quality Gates

| Gate | Threshold | Blocking |
|---|---|---|
| Pass Rate | 100% | Yes |
| Test Coverage | 80%+ | Yes |
| Security Issues | 0 Critical | Yes |
| Lint Errors | < 100 | No |
| Type Errors | < 50 | No |
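Applied programmatically, the blocking gates above reduce to threshold checks. A sketch of that logic (the metric key names are assumptions; only the thresholds come from the table, and the count-style gates such as "0 Critical" would need an inverted comparison):

```python
# (threshold, blocking) per percentage gate; only blocking gates fail a release.
GATES = {
    "pass_rate": (100.0, True),  # percent
    "coverage": (80.0, True),    # percent
}

def blocked_by(metrics: dict) -> list[str]:
    """Return the names of blocking gates the metrics fail to meet."""
    return [
        name
        for name, (threshold, blocking) in GATES.items()
        if blocking and metrics.get(name, 0.0) < threshold
    ]

print(blocked_by({"pass_rate": 100.0, "coverage": 85.0}))  # []
```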
## Related Documentation

### In This Directory

- README.md - Framework overview
- TEST-CATEGORIES.md - Category reference
- TEST-AUTOMATION.md - CI/CD guide
- TEST-COMPONENTS.md - Testing components
- TEST-RESULTS-GUIDE.md - Result interpretation
### In Parent Directories

- ../../scripts/CLAUDE.md - Scripts documentation
- ../../docs/reference/COMPONENT-REFERENCE.md - Component inventory
- ../../docs/guides/USER-TROUBLESHOOTING.md - General troubleshooting
### External Resources
## Documentation Gaps (To Be Filled)

The following guides are planned but not yet created:
- **UNIT-TEST-GUIDE.md** (~5,000 tokens)
  - pytest patterns and best practices
  - Fixture usage and parameterization
  - Mocking strategies
  - Coverage configuration

- **INTEGRATION-TEST-GUIDE.md** (~5,000 tokens)
  - Component interaction testing
  - API integration patterns
  - Database integration testing
  - Contract testing

- **E2E-TEST-GUIDE.md** (~5,000 tokens)
  - Playwright/Cypress setup
  - Page object patterns
  - Visual regression testing
  - CI/CD integration
See TESTING-CONSOLIDATION-ANALYSIS.md for gap analysis and implementation plan.
## Notes for AI Agents

**Test Suite Execution:**

- Always run from the repository root: `/path/to/coditect-core`
- Use `--verbose` for detailed output when debugging
- Use `--quick` for a fast summary in pre-commit checks
- The full suite takes ~17 seconds (acceptable for CI)
**Test Philosophy:**

- FAIL status blocks releases - use for critical issues only
- WARN status is informational - doesn't block
- SKIP status for missing optional tools - expected
- Maintain 100% pass rate in production
**Adding Tests:**

- Follow existing test patterns in `scripts/test-suite.py`
- Use appropriate `TestStatus` enum values
- Include file paths in test results for debugging
- Keep tests fast (< 1 second per test is ideal)
**Documentation Updates:**

- Update this CLAUDE.md when adding new test categories
- Update TEST-CATEGORIES.md for new category details
- Update TEST-AUTOMATION.md for new CI/CD patterns
- Add examples to TEST-RESULTS-GUIDE.md for new status types
---

**Compliance:** CODITECT CLAUDE.md Standard v1.0.0 | **Last Updated:** December 22, 2025 | **Next Review:** when new test categories are added or the framework is upgraded