
User Best Practices Guide

Document Type: User Guide
Target Audience: CODITECT users, AI agents
Last Updated: December 22, 2025
Status: Production Ready


Prerequisites

Before applying these best practices, ensure you have:

  • Completed onboarding - Finished /onboard or Welcome Aboard guide
  • Basic CODITECT experience - Created at least one project
  • Understanding of core concepts - Agents, commands, skills, sessions
  • Active virtual environment - .venv activated for Python scripts

Verify your readiness:

source .venv/bin/activate
python3 scripts/update-component-activation.py list --activated-only | head -5

Quick Start

Apply best practices immediately with these 3 steps:

Step 1: Set Up Daily Routine

# Morning: Initialize and review
./scripts/init.sh
git log --oneline --since="yesterday" | head -5

Step 2: Enable Context Preservation

# After each work session
/cx "session summary"

Step 3: Use Lean Activation

# Check current activations
python3 scripts/update-component-activation.py list --activated-only

# Deactivate unused components
python3 scripts/update-component-activation.py deactivate TYPE NAME --reason "No longer needed"

Table of Contents

  1. Daily Workflow Best Practices
  2. AI Agent Usage Patterns
  3. Component Management
  4. Session Preservation
  5. Performance Optimization

Daily Workflow Best Practices

Morning Routine (5 minutes)

Initialize environment:

# Run init script
./scripts/init.sh

# Verify environment
# Expected: Python, Git, CODITECT framework ready

Review previous session:

# Check last checkpoint
ls -lt MEMORY-CONTEXT/checkpoints/ | head -5

# Review git log
git log --oneline --since="yesterday" | head -10

During Development

DO:

  • Capture context every 2-3 hours: /cx "progress summary"
  • Create checkpoints at logical milestones
  • Use AI command router for guidance: coditect-router "need..."
  • Search for existing components before creating new
  • Document component usage in CHANGELOG.md

DON'T:

  • Skip session exports (data loss risk)
  • Create duplicate components without searching
  • Modify activation JSON directly (use CLI)
  • Proceed without activating required components
  • Ignore high dedup rates (>95% may indicate repeated work)

End of Day Routine (5 minutes)

Preserve work:

# Capture session context
/cx "daily summary"

# Create checkpoint
/create-checkpoint "Daily progress: [brief description]"

# Verify
cat MEMORY-CONTEXT/dedup_state/dedup_stats.json
# Check dedup rate (60-80% ideal)

AI Agent Usage Patterns

Agent Selection Best Practices

Search before creating:

# Always search first
Grep(pattern="task-domain", glob="agents/*.md", output_mode="files_with_matches")

# Read existing agents
Read(file_path="agents/candidate-agent.md")

# Verify activation
Read(file_path=".coditect/component-activation-status.json")

Invocation Patterns

Correct invocation:

Task(subagent_type="agent-name",
     description="Clear, concise task description",
     prompt="""
     Detailed instructions with:
     - Context
     - Requirements
     - Expected output
     - Constraints
     """)

Common mistakes to avoid:

# ❌ WRONG
Task(agent="agent-name", ...)               # Incorrect parameter name
Task(subagent_type="agent-name", "prompt")  # Missing description

# ✅ CORRECT
Task(subagent_type="agent-name",
     description="Task description",
     prompt="Detailed prompt")

Multi-Agent Coordination

Orchestrator pattern:

# Step 1: Planning
Task(subagent_type="orchestrator",
     description="Create implementation plan",
     prompt="Plan multi-agent workflow for [feature]")

# Step 2: Execution (agents run in sequence)
Task(subagent_type="codi-backend-engineer", ...)
Task(subagent_type="codi-frontend-engineer", ...)
Task(subagent_type="codi-devops-engineer", ...)

# Step 3: Validation
Task(subagent_type="codi-documentation-writer",
     description="Document implementation",
     prompt="Create comprehensive documentation")

Component Management

Activation Strategy

Lean activation philosophy:

  • Activate only what you need
  • Keep 10-15 components active max
  • Deactivate when no longer needed

Activation workflow:

# Step 1: Search for component
Grep(pattern="component-purpose", glob="**/*.md")

# Step 2: Verify it solves need
Read(file_path="path/to/component.md")

# Step 3: Check activation
python3 scripts/update-component-activation.py status type component-name

# Step 4: Activate if needed
python3 scripts/update-component-activation.py activate type component-name \
    --reason "Clear reason for activation"

# Step 5: Commit activation
git add .coditect/component-activation-status.json
git commit -m "chore: Activate component-name for [purpose]"

Component Discovery

Before creating new components:

  1. Search agents: Grep(pattern=".*", glob="agents/*.md")
  2. Search skills: Grep(pattern=".*", glob="skills/*.md")
  3. Search commands: Grep(pattern=".*", glob="commands/*.md")
  4. Search scripts: Glob(pattern="scripts/*.py")

Reuse ratio target: 50-90% of needs met by existing components


Session Preservation

Export Strategy

Daily context capture:

/cx "session summary"

Weekly batch (all large sessions >10 MB):

/process-jsonl-sessions --batch --min-size 10

Checkpoint Creation

When to create checkpoints:

  • Completed feature implementation
  • Fixed critical bug
  • Refactored major component
  • End of workday
  • Before risky changes

Checkpoint naming:

# ✅ GOOD
/create-checkpoint "Implemented JWT authentication with refresh tokens"
/create-checkpoint "Fixed memory leak in session handler"
/create-checkpoint "Refactored database connection pooling"

# ❌ BAD
/create-checkpoint "WIP" # Not descriptive
/create-checkpoint "stuff" # Not meaningful

Deduplication Monitoring

Target dedup rates:

  • New projects: 20-40% (normal - high uniqueness)
  • Mature projects: 60-80% (good reuse)
  • Refactoring: 40-60% (mixed)

Check statistics:

cat MEMORY-CONTEXT/dedup_state/dedup_stats.json

If dedup rate <20%: consider whether you are recreating existing solutions

If dedup rate >95%: this may indicate repeated work without forward progress
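The thresholds above can be turned into a small check. This is a sketch that assumes `dedup_stats.json` exposes the rate as a percentage under a field named `dedup_rate` (the field name and scale are assumptions about the file's schema, not confirmed by this guide):

```python
import json
from pathlib import Path

def classify_dedup_rate(rate: float) -> str:
    """Map a dedup rate (0-100) to the guidance bands above."""
    if rate < 20:
        return "low: check whether you are recreating existing solutions"
    if rate > 95:
        return "high: may indicate repeated work without progress"
    if 60 <= rate <= 80:
        return "ideal"
    return "acceptable"

def check_dedup_stats(stats_path: Path) -> str:
    """Read dedup stats and classify the rate (field name assumed)."""
    stats = json.loads(stats_path.read_text())
    return classify_dedup_rate(stats["dedup_rate"])
```

Running `classify_dedup_rate` against the value from `MEMORY-CONTEXT/dedup_state/dedup_stats.json` gives a quick verdict without eyeballing the raw JSON.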


Performance Optimization

Token Usage Optimization

CLAUDE.md size:

  • Target: <200 lines (~2,000 tokens)
  • Current: Use extraction checklist if >500 lines
  • Impact: at ~10 tokens/line, each line re-read across 100 turns costs ~1,000 tokens over a session

Component activation:

  • Keep 10-15 active components max
  • Each activated component adds to context
  • Deactivate unused components

Documentation strategy:

  • Use progressive disclosure (read on-demand)
  • Don't load all docs at session start
  • Reference external files, don't inline

Session Size Management

Monitor session sizes:

du -h ~/.claude/projects/*/

If session >10 MB:

  • Use JSONL batch processing
  • Don't use text export (may timeout)

If session >50 MB:

  • Consider creating new project
  • Archive completed work
  • Split into logical sub-projects
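The `du` check above can be automated. This sketch assumes session transcripts are the `.jsonl` files under each project directory (the layout implied by the commands in this guide); the thresholds mirror the 10 MB and 50 MB guidance:

```python
from pathlib import Path

BATCH_THRESHOLD_MB = 10   # use JSONL batch processing above this
SPLIT_THRESHOLD_MB = 50   # consider splitting the project above this

def flag_sessions(project_dir: Path) -> list:
    """Return (filename, size in MB, advice) for oversized session files."""
    flagged = []
    for f in sorted(project_dir.glob("*.jsonl")):
        mb = f.stat().st_size / (1024 * 1024)
        if mb > SPLIT_THRESHOLD_MB:
            flagged.append((f.name, round(mb, 1), "split or archive"))
        elif mb > BATCH_THRESHOLD_MB:
            flagged.append((f.name, round(mb, 1), "use JSONL batch processing"))
    return flagged
```

Point it at a directory under `~/.claude/projects/` to list only the sessions that need attention.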

Work Reuse Optimization

Maximize reuse:

  • Search components before creating
  • Adapt existing components when possible
  • Document reuse in CHANGELOG
  • Track reuse metrics

Token savings from reuse:

  • Reusing a component: ~500 tokens
  • Recreating from scratch: ~5,000 tokens
  • ROI: ~10x token efficiency

Advanced Patterns

Atomic File Write Pattern

When writing critical files (state, configuration, indexes), use the atomic write pattern to prevent data corruption from interrupted writes or system failures.

Why atomic writes matter:

  • Power loss safety: Partial writes don't corrupt existing data
  • Concurrent access: Other processes always see complete files
  • Rollback capability: Original file preserved until new write succeeds

Pattern implementation:

import os
import tempfile
from pathlib import Path

def atomic_write(filepath: Path, content: str) -> None:
    """
    Atomically write content to file using temp + rename pattern.

    Process:
    1. Create temp file in same directory (same filesystem)
    2. Write content to temp file
    3. Verify temp file exists and has content
    4. Atomic rename temp → target (single filesystem operation)
    5. Clean up temp file on any failure
    """
    # Create temp file in same directory (ensures same filesystem)
    temp_fd, temp_path = tempfile.mkstemp(
        dir=filepath.parent,
        prefix=f".{filepath.name}.tmp-"
    )
    temp_file = Path(temp_path)

    try:
        # Write to temp file
        with os.fdopen(temp_fd, 'w', encoding='utf-8') as f:
            f.write(content)

        # Verify temp file exists
        if not temp_file.exists():
            raise IOError(f"Temp file disappeared: {temp_file}")

        # Atomic rename (single operation on POSIX systems)
        temp_file.rename(filepath)

    except Exception as e:
        # Clean up temp file on failure
        if temp_file.exists():
            temp_file.unlink()
        raise IOError(f"Failed to write {filepath}: {e}") from e

When to use:

  • Writing component-activation-status.json
  • Writing checkpoint_index.json
  • Writing global_hashes.json
  • Any file read by multiple processes/sessions

When NOT needed:

  • Log files (append-only, corruption acceptable)
  • Temporary working files
  • Files with external backup
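For the JSON state files listed above, the same temp-plus-rename idea can be wrapped for dictionary payloads. This is a usage sketch, not the framework's own helper: it uses `os.replace` (which, unlike `Path.rename` on Windows, reliably overwrites an existing target), and the file name and payload are illustrative:

```python
import json
import os
import tempfile
from pathlib import Path

def atomic_write_json(filepath: Path, data: dict) -> None:
    """Serialize data and atomically replace filepath (temp + os.replace)."""
    fd, tmp = tempfile.mkstemp(dir=filepath.parent,
                               prefix=f".{filepath.name}.tmp-")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2)
        os.replace(tmp, filepath)  # atomic on POSIX; overwrites target
    except Exception:
        # Remove the orphaned temp file, then re-raise
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```

A caller would pass the full state dictionary each time; readers of the target file never observe a half-written JSON document.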

Multi-Session Development

Session N: Discovery

# Discover and document components
Grep(pattern="domain", glob="**/*.md")
# Create session notes

Session N+1: Activation

# Activate discovered components
python3 scripts/update-component-activation.py activate ...
# Commit activation

Session N+2: Production Use

# Components auto-load (already activated)
Task(subagent_type="component-name", ...)

Error Recovery

If component fails:

  1. Check activation status
  2. Verify invocation syntax
  3. Read component documentation
  4. Test with simple prompt
  5. Escalate if persistent

If export fails:

  1. Check session size (<10 MB for text export)
  2. Try JSONL processing if large
  3. Manual export as fallback
  4. Report if reproducible

Quality Gates

Before committing:

  • All tests pass
  • Session exported
  • Checkpoint created
  • Documentation updated
  • No uncommitted changes

Before deploying:

  • All quality gates passed
  • Security audit complete
  • Documentation reviewed
  • Rollback plan documented
  • Monitoring configured

Measuring Success

Key Metrics

Productivity:

  • Projects created per week
  • Features completed per session
  • Time to production deployment

Quality:

  • Test coverage percentage
  • Documentation completeness
  • Code review comments

Efficiency:

  • Dedup rate (60-80% target)
  • Component reuse rate (50-90% target)
  • Token usage per task

Continuous Improvement

Monthly review:

  • Analyze dedup statistics
  • Review component activation patterns
  • Identify repeated tasks for automation
  • Optimize workflow based on metrics

Remember:

  • Capture context daily (/cx)
  • Create checkpoints at milestones
  • Search before creating
  • Keep components lean (10-15 active)
  • Monitor dedup rates (60-80% ideal)

Troubleshooting Common Issues

Command Not Found

Symptom: Slash command not recognized

Solution:

# 1. Verify initialization
./scripts/init.sh

# 2. Check activation
python3 scripts/update-component-activation.py status command COMMAND_NAME

# 3. Activate if needed
python3 scripts/update-component-activation.py activate command COMMAND_NAME --reason "Purpose"

Agent Not Responding

Symptom: Task() invocation fails

Solution:

# 1. Check activation
python3 scripts/update-component-activation.py status agent AGENT_NAME

# 2. Activate agent
python3 scripts/update-component-activation.py activate agent AGENT_NAME --reason "Purpose"

# 3. Use correct syntax
Task(subagent_type="agent-name", description="Task", prompt="Instructions")

Context Save Fails

Symptom: /cx fails or incomplete

Solution:

# 1. Check session size
du -h ~/.claude/projects/*/SESSION_ID.jsonl

# 2. If >10 MB, use batch processing
/process-jsonl-sessions --session SESSION_ID

# 3. Retry
/cx --force

Slow Session Startup

Symptom: >30 seconds to initialize

Solution:

# 1. Reduce activated components
python3 scripts/update-component-activation.py list --activated-only

# 2. Deactivate unused
python3 scripts/update-component-activation.py deactivate TYPE NAME --reason "Not needed"

# 3. Clean dedup state
python3 scripts/core/compact_dedup_state.py

Project Structure Issues

Symptom: Incomplete folder structure

Solution:

# 1. Check score
python3 scripts/validate-folder-organization.py . --type PROJECT_TYPE

# 2. Identify gaps
python3 scripts/folder-gap-analysis.py . --type PROJECT_TYPE --output gaps.json

# 3. Generate missing
python3 scripts/generate-missing-folders.py . --gaps-file gaps.json

# 4. Validate
python3 scripts/calculate-folder-score.py . --type PROJECT_TYPE

Next Steps

After adopting these best practices:

  1. Track metrics - Monitor dedup rates and component reuse weekly
  2. Optimize workflows - Identify repeated tasks for automation
  3. Share patterns - Document effective patterns for your team
  4. Review monthly - Analyze statistics and adjust practices
  5. Contribute back - Share improvements with the CODITECT community


Document Status: Production Ready
Last Updated: December 22, 2025
Next Review: March 2026