
CODITECT Testing Directory - AI Agent Context

Status: Production | Version: 2.0.0 | Last Updated: December 22, 2025


Essential Reading Order

READ FIRST:

  1. README.md - Framework overview and quick start
  2. TEST-CATEGORIES.md - 34 test categories reference
  3. TEST-AUTOMATION.md - CI/CD integration and automation


Purpose

The internal/testing/ directory contains comprehensive documentation for the CODITECT testing framework, covering test automation, categories, components, and best practices for contributors and QA specialists.

Framework Metrics (Production):

  • 34 test categories across core, infrastructure, and new tests
  • 4,714+ individual tests with 100% pass rate
  • ~17 second execution time for full test suite
  • JSON + Markdown reporting for CI/CD integration

Directory Structure

```
internal/testing/
├── CLAUDE.md                              # This AI context file
├── README.md                              # Framework overview
│
├── Core Documentation
│   ├── TEST-CATEGORIES.md                 # 34 categories reference
│   ├── TEST-AUTOMATION.md                 # CI/CD and automation
│   ├── TEST-COMPONENTS.md                 # Agents, skills, commands
│   └── TEST-RESULTS-GUIDE.md              # Result interpretation
│
├── Testing Guides (Planned)
│   ├── UNIT-TEST-GUIDE.md                 # pytest patterns (to be created)
│   ├── INTEGRATION-TEST-GUIDE.md          # Component interactions (to be created)
│   └── E2E-TEST-GUIDE.md                  # Playwright/Cypress (to be created)
│
└── Analysis
    └── TESTING-CONSOLIDATION-ANALYSIS.md  # Gap analysis
```

Quick Reference

Running Tests

```bash
# Full test suite (from repo root)
python3 scripts/test-suite.py

# Specific category
python3 scripts/test-suite.py --category agents

# Multiple categories
python3 scripts/test-suite.py --category agents,commands,skills

# Verbose output
python3 scripts/test-suite.py --verbose

# Quick summary
python3 scripts/test-suite.py --quick
```

Test Results Location

```bash
# JSON report
cat test-results/test-results.json

# Markdown summary
cat test-results/TEST-RESULTS.md

# Check pass rate
jq '.summary.pass_rate' test-results/test-results.json
```
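The JSON report can also be consumed programmatically. Here is a minimal sketch in Python, assuming the `summary` shape implied by the `jq` example above — `pass_rate` comes from that example, while the `failed` field and the `gate` helper are illustrative assumptions:

```python
import json

# Hypothetical excerpt of test-results/test-results.json; the real schema
# is produced by scripts/test-suite.py.
sample = """
{
  "summary": {"total": 4714, "passed": 4714, "failed": 0, "pass_rate": 100.0},
  "failed_tests": []
}
"""

report = json.loads(sample)
summary = report["summary"]

def gate(summary: dict) -> bool:
    """CI-style check: pass only on a clean run with a 100% pass rate."""
    return summary["failed"] == 0 and summary["pass_rate"] >= 100.0

print(gate(summary))  # True for a clean run
```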

CI/CD Integration

```yaml
# GitHub Actions example
- name: Run test suite
  run: python3 scripts/test-suite.py

- name: Upload results
  uses: actions/upload-artifact@v4
  with:
    name: test-results
    path: test-results/
```

Test Categories Quick Reference

Core Component Tests (14 categories)

| Category | Tests | Description |
|---|---|---|
| AGENTS | 107 | Agent definitions and metadata |
| COMMANDS | 105 | Slash command validation |
| SKILLS | 62 | Skill definitions and patterns |
| SCRIPTS | 150 | Python/shell script validation |
| HOOKS | 23 | Event-driven automation hooks |
| REGISTRY | 453 | Component registry integrity |
| XREF | 97 | Cross-reference validation |
| CONFIG | 300 | Configuration file validation |
| DOCS | 13 | Documentation structure |
| STRUCTURE | 10 | Directory organization |
| QUALITY | 274 | Content quality checks |
| SECURITY | 135 | Security best practices |
| CONSISTENCY | 245 | Cross-component consistency |
| ACTIVATION | 407 | Component activation workflow |

Infrastructure Tests (6 categories)

| Category | Tests | Description |
|---|---|---|
| INTEGRATION | 810 | Component interactions |
| PERFORMANCE | 70 | Response time and efficiency |
| DEPENDENCIES | 655 | Import and module validation |
| SCHEMAS | 84 | JSON/YAML schema validation |
| DOC_COVERAGE | 288 | Documentation completeness |
| WORKFLOWS | 18 | CI/CD workflow validation |

New Test Categories (14 categories)

| Category | Tests | Description |
|---|---|---|
| UNIT_TESTS | 18 | pytest unit test execution |
| IMPORTS | 130 | Python import validation |
| SYMLINKS | 29 | Symlink chain validation |
| GIT_INTEGRITY | 5 | Repository and submodule status |
| LINT | 5 | Code quality (ruff) |
| CLI | 82 | CLI argument validation |
| LINKS | 11 | Markdown link validation |
| PARSE | 101 | JSON/YAML parsing |
| DOCKER | 3 | Docker configuration |
| TYPES | 2 | Type checking (mypy) |
| E2E | 2 | End-to-end test setup |
| COVERAGE | 4 | Test coverage metrics |
| SMOKE | 13 | Critical path validation |
| API_CONTRACT | 3 | API schema validation |

Total: 34 categories, 4,714+ tests


Testing Components Available

Specialized Testing Agents (5 total)

  • codi-qa-specialist - Quality assurance and test strategies
  • codi-test-engineer - Test engineering and TDD
  • testing-specialist - Quality gate enforcement
  • rust-qa-specialist - Rust-specific QA
  • penetration-testing-agent - Security testing

Testing Skills (5 total)

  • e2e-testing - Playwright/Cypress patterns
  • visual-regression - Percy/Chromatic integration
  • contract-testing - Pact consumer-driven contracts
  • load-testing - k6, Artillery, Locust
  • security-audit - OWASP Top 10 auditing

Testing Commands (4 total)

  • /security-scan - Comprehensive security scanning
  • /dependency-audit - Vulnerability auditing
  • /perf-profile - Performance profiling
  • /lint-docs - Documentation quality checks

Testing Hooks (2 total)

  • pre-commit-quality - Quality checks before commits
  • pre-push-submodule-check - Submodule validation

Test Status Interpretation

| Status | Meaning | CI Behavior |
|---|---|---|
| PASS | Test passed successfully | Continue |
| FAIL | Critical failure | Block release |
| WARN | Informational warning | Continue |
| SKIP | Not applicable | Continue |

Exit Codes:

  • 0 - All tests passed (or warnings/skips only)
  • 1 - One or more tests failed (blocks CI)

Common AI Agent Tasks

Verify Test Suite Status

```bash
# Quick check
python3 scripts/test-suite.py --quick

# Expected output
# Pass Rate: 100.0%
```

Investigate Test Failures

```bash
# Run with verbose output
python3 scripts/test-suite.py --verbose

# Check specific category
python3 scripts/test-suite.py --category agents --verbose

# Review JSON report
jq '.failed_tests' test-results/test-results.json
```

Add New Test Category

```python
# In scripts/test-suite.py:

def test_new_category(self):
    """Run new category tests."""
    print("\n🔍 NEW CATEGORY")
    print("=" * 60)

    # Add test results
    self.add_result(TestResult(
        name="New test: validation",
        category="new_category",
        subcategory="validation",
        status=TestStatus.PASS,
        message="Validation passed",
    ))
```
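For reference, the snippet above assumes `TestStatus` and `TestResult` types; a minimal hypothetical sketch of their shapes follows (the real definitions live in scripts/test-suite.py and may differ):

```python
from dataclasses import dataclass
from enum import Enum

class TestStatus(Enum):
    PASS = "PASS"
    FAIL = "FAIL"
    WARN = "WARN"
    SKIP = "SKIP"

@dataclass
class TestResult:
    name: str
    category: str
    subcategory: str
    status: TestStatus
    message: str = ""

# Usage mirrors the add_result call above.
r = TestResult(name="New test: validation", category="new_category",
               subcategory="validation", status=TestStatus.PASS,
               message="Validation passed")
print(r.status is TestStatus.PASS)  # True
```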

Review Test Coverage

```bash
# Run tests with coverage
pytest --cov=. --cov-report=html tests/

# View coverage report
open htmlcov/index.html
```

Troubleshooting Quick Reference

Test Suite Not Running

```bash
# Check Python version (need 3.10+)
python3 --version

# Ensure you are in the repo root
pwd # Should be .../coditect-core

# Check test-suite.py exists
ls -la scripts/test-suite.py
```

pytest Not Found

```bash
# Activate venv
source .venv/bin/activate

# Install pytest
pip install pytest pytest-cov pytest-asyncio
```

Permission Denied on Scripts

```bash
# Make executable
chmod +x scripts/*.sh
```

Import Errors in Tests

```bash
# Most scripts use stdlib only; for external deps:
pip install pyyaml
```

Best Practices for AI Agents

Before Making Code Changes

  1. Run tests: python3 scripts/test-suite.py
  2. Verify 100% pass rate - Don't introduce failures
  3. Review warnings - Address any new warnings

After Making Code Changes

  1. Run affected category: python3 scripts/test-suite.py --category <category>
  2. Run full suite: python3 scripts/test-suite.py
  3. Check for new failures/warnings
  4. Update test documentation if adding new patterns

When Adding New Features

  1. Write tests first (TDD) if applicable
  2. Add test cases to relevant category
  3. Document test patterns in appropriate guide
  4. Ensure CI/CD integration works

Quality Gates

| Gate | Threshold | Blocking |
|---|---|---|
| Pass Rate | 100% | Yes |
| Test Coverage | 80%+ | Yes |
| Security Issues | 0 Critical | Yes |
| Lint Errors | < 100 | No |
| Type Errors | < 50 | No |
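The gate logic implied by the table can be sketched as follows; the thresholds mirror the table, but the metric key names and the `evaluate` helper are illustrative assumptions:

```python
# (name, check, blocking) triples mirroring the quality-gate table.
GATES = [
    ("pass_rate",         lambda m: m["pass_rate"] >= 100.0,      True),
    ("coverage",          lambda m: m["coverage"] >= 80.0,        True),
    ("security_critical", lambda m: m["security_critical"] == 0,  True),
    ("lint_errors",       lambda m: m["lint_errors"] < 100,       False),
    ("type_errors",       lambda m: m["type_errors"] < 50,        False),
]

def evaluate(metrics: dict) -> bool:
    """Return False only when a blocking gate fails; non-blocking gates warn."""
    ok = True
    for name, check, blocking in GATES:
        if not check(metrics) and blocking:
            ok = False
    return ok
```

Non-blocking gates (lint, types) never flip the result, matching the "No" rows above.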



Documentation Gaps (To Be Filled)

The following guides are planned but not yet created:

  1. UNIT-TEST-GUIDE.md (~5000 tokens)

    • pytest patterns and best practices
    • Fixture usage and parameterization
    • Mocking strategies
    • Coverage configuration
  2. INTEGRATION-TEST-GUIDE.md (~5000 tokens)

    • Component interaction testing
    • API integration patterns
    • Database integration testing
    • Contract testing
  3. E2E-TEST-GUIDE.md (~5000 tokens)

    • Playwright/Cypress setup
    • Page object patterns
    • Visual regression testing
    • CI/CD integration

See TESTING-CONSOLIDATION-ANALYSIS.md for gap analysis and implementation plan.


Notes for AI Agents

Test Suite Execution:

  • Always run from repository root: /path/to/coditect-core
  • Use --verbose for detailed output when debugging
  • Use --quick for fast summary in pre-commit checks
  • Full suite takes ~17 seconds (acceptable for CI)

Test Philosophy:

  • FAIL status blocks releases - use for critical issues only
  • WARN status is informational - doesn't block
  • SKIP status for missing optional tools - expected
  • Maintain 100% pass rate in production

Adding Tests:

  • Follow existing test patterns in scripts/test-suite.py
  • Use appropriate TestStatus enum values
  • Include file paths in test results for debugging
  • Keep tests fast (< 1 second per test ideal)

Documentation Updates:

  • Update this CLAUDE.md when adding new test categories
  • Update TEST-CATEGORIES.md for new category details
  • Update TEST-AUTOMATION.md for new CI/CD patterns
  • Add examples to TEST-RESULTS-GUIDE.md for new status types

Compliance: CODITECT CLAUDE.md Standard v1.0.0 | Last Updated: December 22, 2025 | Next Review: when new test categories are added or the framework is upgraded