Agent Skills Framework Extension
TDD Patterns Skill
When to Use This Skill
Use this skill when implementing test-driven development (TDD) patterns in your codebase.
How to Use This Skill
- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
Test-Driven Development with red-green-refactor workflow, comprehensive test strategies, and automated coverage tracking.
Core Capabilities
- Red-Green-Refactor - TDD cycle automation
- Test Fixtures - Reusable test data and setup
- Test Doubles - Mocks, stubs, and spies
- Coverage Tracking - Line, branch, and mutation coverage
- Property Testing - Hypothesis-based testing
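Property testing (the last capability above) is typically done with a dedicated library such as Hypothesis. As a dependency-free sketch of the idea, a property can be checked against many randomly generated inputs; the helper names below are illustrative, not part of any library:

```python
import random

def is_sorted(xs: list) -> bool:
    """The ordering property that any sort must guarantee."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def check_property(prop, gen, runs: int = 100) -> bool:
    """Run a property against many randomly generated inputs."""
    return all(prop(gen()) for _ in range(runs))

def random_ints() -> list:
    """Generator: a random-length list of random integers."""
    return [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]

# Property: sorting any int list yields a sorted list of the same length
ok = check_property(
    lambda xs: is_sorted(sorted(xs)) and len(sorted(xs)) == len(xs),
    random_ints,
)
print(ok)  # True
```

A real property-testing library adds input shrinking and reproducible failures on top of this basic generate-and-check loop.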
TDD Workflow Automation
```python
#!/usr/bin/env python3
"""
TDD workflow automation - red-green-refactor cycle.
"""
import subprocess
from pathlib import Path
from enum import Enum


class TestState(Enum):
    RED = "red"            # Tests failing
    GREEN = "green"        # Tests passing
    REFACTOR = "refactor"  # Refactoring phase


class TDDWorkflow:
    def __init__(self, project_path: Path):
        self.project_path = project_path
        self.state = TestState.RED

    def run_tests(self) -> bool:
        """Execute the test suite and report whether it passed."""
        result = subprocess.run(
            ['pytest', '-v', '--cov=src', '--cov-report=term-missing'],
            cwd=self.project_path,
            capture_output=True,
            text=True
        )
        print(result.stdout)
        return result.returncode == 0

    def red_phase(self, test_file: str) -> bool:
        """RED: Write a failing test."""
        print("🔴 RED: Writing failing test...")
        print(f"   Edit {test_file} and add a failing test")
        print("   Run: pytest to verify the test fails")
        if self.run_tests():
            print("❌ Error: Tests should fail in RED phase")
            return False
        self.state = TestState.GREEN
        return True

    def green_phase(self, implementation_file: str) -> bool:
        """GREEN: Write the minimal implementation."""
        print("🟢 GREEN: Writing minimal implementation...")
        print(f"   Edit {implementation_file} to make the tests pass")
        if not self.run_tests():
            print("❌ Error: Tests still failing")
            return False
        self.state = TestState.REFACTOR
        return True

    def refactor_phase(self) -> bool:
        """REFACTOR: Improve the code while keeping tests passing."""
        print("♻️  REFACTOR: Improving code quality...")
        print("   Refactor while keeping tests green")
        if not self.run_tests():
            print("❌ Error: Refactoring broke tests")
            return False
        print("✅ TDD cycle complete")
        self.state = TestState.RED
        return True


# Usage example
if __name__ == '__main__':
    workflow = TDDWorkflow(Path.cwd())
    workflow.red_phase('tests/test_calculator.py')  # RED
    workflow.green_phase('src/calculator.py')       # GREEN
    workflow.refactor_phase()                       # REFACTOR
```
Test Fixture Factory
```python
#!/usr/bin/env python3
"""
Test fixture factory for generating test data.
"""
import pytest
from dataclasses import dataclass
from typing import List
from datetime import datetime


@dataclass
class User:
    id: int
    name: str
    email: str
    created_at: datetime


class UserFactory:
    """Factory for creating test users."""
    _id_counter = 1

    @classmethod
    def create(cls, **kwargs) -> User:
        """Create a user with default values, overridable via kwargs."""
        defaults = {
            'id': cls._id_counter,
            'name': f'User {cls._id_counter}',
            'email': f'user{cls._id_counter}@example.com',
            'created_at': datetime.now()
        }
        cls._id_counter += 1
        return User(**{**defaults, **kwargs})

    @classmethod
    def create_batch(cls, count: int) -> List[User]:
        """Create multiple users."""
        return [cls.create() for _ in range(count)]


# Pytest fixtures
@pytest.fixture
def user():
    """Single test user."""
    return UserFactory.create()


@pytest.fixture
def users():
    """Multiple test users."""
    return UserFactory.create_batch(5)


# Usage in tests
def test_user_creation(user):
    assert user.id > 0
    assert user.name.startswith('User')
    assert '@example.com' in user.email


def test_batch_creation(users):
    assert len(users) == 5
    assert all(u.id > 0 for u in users)
```
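Because `UserFactory` keeps a class-level `_id_counter`, state leaks between tests and results can depend on execution order. A common safeguard is a pytest `autouse` fixture that resets shared state before every test; this is a sketch, with a minimal factory standing in for the full one above:

```python
import pytest

class UserFactory:
    """Minimal stand-in for the factory above (class-level counter)."""
    _id_counter = 1

    @classmethod
    def create(cls) -> int:
        uid = cls._id_counter
        cls._id_counter += 1
        return uid

@pytest.fixture(autouse=True)
def reset_user_factory():
    """Runs around every test: reset shared state so test order never matters."""
    UserFactory._id_counter = 1
    yield

def test_ids_start_at_one():
    assert UserFactory.create() == 1

def test_still_starts_at_one():
    # Passes regardless of which test ran first, thanks to the autouse fixture
    assert UserFactory.create() == 1
```

Without the reset, whichever test ran second would see `id == 2` and fail.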
Mock and Stub Patterns
```python
#!/usr/bin/env python3
"""
Mock and stub patterns for test isolation.
"""
from unittest.mock import Mock, patch


# Example service to test
class EmailService:
    def send_email(self, to: str, subject: str, body: str) -> bool:
        # Real implementation would send an email
        pass


class UserService:
    def __init__(self, email_service: EmailService):
        self.email_service = email_service

    def register_user(self, email: str, name: str) -> bool:
        # Create user in database
        user_created = True
        # Send welcome email
        if user_created:
            self.email_service.send_email(
                to=email,
                subject="Welcome!",
                body=f"Hello {name}"
            )
        return user_created


# Test with a mock
def test_user_registration_sends_welcome_email():
    # Arrange
    mock_email_service = Mock(spec=EmailService)
    user_service = UserService(mock_email_service)
    # Act
    result = user_service.register_user('test@example.com', 'Test User')
    # Assert
    assert result is True
    mock_email_service.send_email.assert_called_once_with(
        to='test@example.com',
        subject='Welcome!',
        body='Hello Test User'
    )


# Test with patch (the target must be the path where EmailService is looked up)
@patch('path.to.EmailService')
def test_user_registration_with_patch(mock_email_class):
    mock_email_service = mock_email_class.return_value
    user_service = UserService(mock_email_service)
    user_service.register_user('test@example.com', 'Test User')
    mock_email_service.send_email.assert_called_once()
```
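The tests above use mocks to verify interactions; a stub instead supplies canned responses so the test can drive a specific code path, and a spy records calls for later inspection. `unittest.mock.Mock` covers all three roles via `return_value`, `side_effect`, and call-recording attributes. A small sketch (the `EmailService` stand-in mirrors the class defined above; the failure scenario is hypothetical):

```python
from unittest.mock import Mock

class EmailService:
    """Minimal stand-in for the service above."""
    def send_email(self, to: str, subject: str, body: str) -> bool:
        raise NotImplementedError  # real sends happen only in production

# Stub: canned return value drives the failure-handling code path
stub = Mock(spec=EmailService)
stub.send_email.return_value = False  # simulate a delivery failure
assert stub.send_email(to='a@example.com', subject='s', body='b') is False

# Stub with a sequence: first call succeeds, second fails
stub.send_email.side_effect = [True, False]
assert stub.send_email(to='a@example.com', subject='s', body='b') is True
assert stub.send_email(to='a@example.com', subject='s', body='b') is False

# Spy-style inspection: every call was recorded
assert stub.send_email.call_count == 3
print("stub/spy checks passed")
```

`spec=EmailService` makes the mock reject attribute names the real class does not have, catching typos like `stub.send_mail` at test time.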
Coverage Report Automation
```yaml
# .github/workflows/test-coverage.yml
name: Test Coverage

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          pip install pytest pytest-cov
      - name: Run tests with coverage
        run: |
          pytest --cov=src --cov-report=xml --cov-report=html --cov-report=term
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
      - name: Check coverage threshold
        run: |
          coverage report --fail-under=80
```
Usage Examples
TDD Red-Green-Refactor Cycle
Apply tdd-patterns skill to implement TDD workflow with automated test execution and coverage tracking
Test Fixture Generation
Apply tdd-patterns skill to create test fixture factories for generating test data
Mock-Based Testing
Apply tdd-patterns skill to implement mock and stub patterns for isolated unit testing
Success Output
When successful, this skill MUST output:
✅ SKILL COMPLETE: tdd-patterns
Completed:
- [x] Red-Green-Refactor cycle completed
- [x] Test fixtures generated
- [x] Mock objects implemented
- [x] Coverage >80% achieved
- [x] CI pipeline configured
Outputs:
- tests/test_*.py (Test suite with fixtures)
- src/*.py (Implementation with tests passing)
- .github/workflows/test-coverage.yml (CI configuration)
- reports/coverage.xml (Coverage report)
Test Results: 42 passed, 0 failed | Coverage: 87%
Completion Checklist
Before marking this skill as complete, verify:
- RED phase: Tests written and failing initially
- GREEN phase: Minimal implementation makes tests pass
- REFACTOR phase: Code improved while tests stay green
- Test fixtures reduce duplication
- Mocks isolate units under test
- Coverage report generated
- Coverage threshold met (>80%)
- CI runs tests automatically on push
- All tests passing in CI
- Coverage uploaded to tracking service
Failure Indicators
This skill has FAILED if:
- ❌ Tests pass in RED phase (should fail first)
- ❌ Implementation more complex than needed in GREEN phase
- ❌ Tests fail after refactoring
- ❌ Coverage below threshold (<80%)
- ❌ Mocks not used (tests depend on external services)
- ❌ Test fixtures cause test interdependence
- ❌ CI not running tests or always passing
- ❌ No coverage tracking configured
- ❌ Tests slow (>5 seconds for unit tests)
When NOT to Use
Do NOT use this skill when:
- Prototyping exploratory code (write tests after validation)
- Working with legacy code without tests (use characterization-testing instead)
- Integration testing external APIs (use integration-testing instead)
- UI/UX design iteration (use visual-regression-testing instead)
- One-off scripts with no reuse (overhead not justified)
- Performance optimization (use benchmark-driven-development instead)
- Learning a new framework (experiment first, TDD after understanding)
Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Writing implementation first | Defeats TDD purpose | Always write failing test first |
| Over-engineering in GREEN | Complexity creep | Write simplest code to pass |
| Skipping REFACTOR | Technical debt accumulates | Always refactor while green |
| Testing implementation details | Brittle tests | Test behavior, not internals |
| No test isolation | Flaky tests | Use mocks for external dependencies |
| Large test fixtures | Hard to maintain | Keep fixtures focused and minimal |
| Coverage obsession | Testing trivial code | Focus on critical paths first |
| No CI integration | Tests not run consistently | Automate test execution |
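The "testing implementation details" row can be made concrete. In the sketch below (the `Counter` class and both tests are illustrative), the first test breaks as soon as the internal storage changes, while the second survives any refactor that preserves behavior:

```python
class Counter:
    def __init__(self):
        self._ticks = []  # internal representation: a list of events

    def increment(self):
        self._ticks.append(1)

    def value(self) -> int:
        return len(self._ticks)

# Brittle: couples the test to a private attribute; fails if storage
# is refactored to a plain integer even though behavior is unchanged
def test_internals():
    c = Counter()
    c.increment()
    assert c._ticks == [1]  # avoid this

# Robust: asserts only observable behavior through the public API
def test_behavior():
    c = Counter()
    c.increment()
    assert c.value() == 1

test_internals()
test_behavior()
```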
Principles
This skill embodies:
- #2 Security First - Tests catch bugs before production
- #3 Keep It Simple - Minimal implementation in GREEN phase
- #5 Eliminate Ambiguity - Clear test names document behavior
- #8 No Assumptions - Tests validate assumptions with assertions
- #10 Test Everything - TDD ensures comprehensive test coverage
Full Standard: CODITECT-STANDARD-AUTOMATION.md
Integration Points
- cicd-pipeline-design - Automated test execution
- code-quality-patterns - Quality metrics integration
- documentation-patterns - Test documentation