# Task Complexity Analysis Skill

Expert skill for analyzing incoming tasks to determine complexity, domains, and orchestration requirements. Essential for intelligent agent routing and resource allocation.

## How to Use This Skill

- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
## When to Use
Use this skill when:
- Receiving new tasks that need routing decisions
- Determining if single or multi-agent approach is needed
- Estimating token budget for a task
- Deciding which specialized agents to invoke
- Assessing if tasks can run in parallel
- Planning orchestration workflows
Don't use this skill when:
- Task is already assigned to a specific agent
- User explicitly specified the agent to use
- Simple command execution (no routing needed)
- Task is purely conversational
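These gating rules can be expressed as a small pre-routing check. This is a sketch: the `IncomingTask` envelope and its field names (`explicit_agent`, `is_command`, `is_conversational`) are illustrative assumptions, not part of the analyzer's API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class IncomingTask:
    """Minimal request envelope; field names are illustrative."""
    description: str
    explicit_agent: Optional[str] = None  # user pinned a specific agent
    is_command: bool = False              # e.g. a /git-sync style command
    is_conversational: bool = False       # chit-chat, nothing to route


def needs_complexity_analysis(task: IncomingTask) -> bool:
    """Return True only when routing actually depends on analysis."""
    return not (task.explicit_agent or task.is_command or task.is_conversational)
```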
## Core Algorithm

### Complexity Levels
```python
from enum import Enum
from dataclasses import dataclass, field
from typing import List, Set, Dict, Tuple, Optional
import re


class ComplexityLevel(Enum):
    """Task complexity levels with resource implications."""
    SIMPLE = "simple"      # Single agent, <5 tool calls
    MODERATE = "moderate"  # 2-3 agents, 5-15 tool calls
    COMPLEX = "complex"    # 4-6 agents, 15-30 tool calls
    RESEARCH = "research"  # 5-10 agents, 30+ tool calls


class TaskDomain(Enum):
    """Task domains for agent routing."""
    RESEARCH = "research"
    ARCHITECTURE = "architecture"
    IMPLEMENTATION = "implementation"
    TESTING = "testing"
    DOCUMENTATION = "documentation"
    SECURITY = "security"
    PERFORMANCE = "performance"
    THEIA_EXTENSION = "theia_extension"
    MULTI_AGENT = "multi_agent"
    EVENT_DRIVEN = "event_driven"
    DATABASE = "database"
    API_DESIGN = "api_design"
    STRATEGY = "strategy"
```
### Task Analysis Result
```python
@dataclass
class TaskAnalysis:
    """Results of task complexity analysis."""
    complexity: ComplexityLevel
    domains: List[TaskDomain]
    estimated_tokens: int
    estimated_tool_calls: int
    parallel_capable: bool
    requires_state_management: bool
    trigger_keywords: List[str]
    confidence: float
    recommended_agents: List[str] = field(default_factory=list)

    def to_dict(self) -> Dict:
        """Convert to dictionary for JSON serialization."""
        return {
            "complexity": self.complexity.value,
            "domains": [d.value for d in self.domains],
            "estimated_tokens": self.estimated_tokens,
            "estimated_tool_calls": self.estimated_tool_calls,
            "parallel_capable": self.parallel_capable,
            "requires_state_management": self.requires_state_management,
            "trigger_keywords": self.trigger_keywords,
            "confidence": self.confidence,
            "recommended_agents": self.recommended_agents,
        }
```
### Domain Detection
```python
class TaskAnalyzer:
    """Analyzes tasks to determine orchestration requirements."""

    # Keyword mappings to domains
    DOMAIN_KEYWORDS = {
        TaskDomain.RESEARCH: {
            'research', 'analyze', 'compare', 'evaluate', 'investigate',
            'study', 'explore', 'survey', 'review', 'assess', 'find',
            'discover', 'examine', 'benchmark'
        },
        TaskDomain.ARCHITECTURE: {
            'design', 'architect', 'system', 'structure', 'pattern',
            'framework', 'c4', 'adr', 'diagram', 'model', 'blueprint',
            'microservice', 'monolith', 'distributed'
        },
        TaskDomain.IMPLEMENTATION: {
            'implement', 'build', 'create', 'develop', 'code', 'write',
            'generate', 'construct', 'program', 'script', 'refactor',
            'fix', 'update', 'modify', 'add', 'feature'
        },
        TaskDomain.TESTING: {
            'test', 'tests', 'testing', 'verify', 'validate', 'qa',
            'quality', 'coverage', 'unit', 'integration', 'e2e',
            'assertion', 'mock', 'spec', 'tdd', 'bdd', 'regression'
        },
        TaskDomain.DOCUMENTATION: {
            'document', 'documentation', 'explain', 'describe', 'comment',
            'readme', 'guide', 'tutorial', 'reference', 'spec', 'manual',
            'docs', 'api-docs', 'openapi', 'swagger'
        },
        TaskDomain.SECURITY: {
            'security', 'audit', 'vulnerability', 'authentication',
            'authorization', 'encrypt', 'secure', 'penetration', 'threat',
            'owasp', 'xss', 'sql-injection', 'csrf', 'rbac', 'jwt'
        },
        TaskDomain.PERFORMANCE: {
            'optimize', 'performance', 'speed', 'efficient', 'scale',
            'benchmark', 'profile', 'throughput', 'latency', 'memory',
            'cache', 'async', 'parallel', 'concurrent'
        },
        TaskDomain.STRATEGY: {
            'strategy', 'market', 'competitor', 'business', 'brief',
            'consulting', 'analysis', 'opportunity', 'swot', 'tam',
            'positioning', 'gtm', 'roadmap'
        },
        TaskDomain.DATABASE: {
            'database', 'sql', 'nosql', 'query', 'schema', 'migration',
            'orm', 'foundationdb', 'd1', 'postgresql', 'table', 'index',
            'transaction', 'acid', 'mongodb', 'redis'
        },
        TaskDomain.API_DESIGN: {
            'api', 'endpoint', 'rest', 'graphql', 'rpc', 'openapi',
            'swagger', 'route', 'controller', 'service', 'interface',
            'grpc', 'protobuf', 'websocket'
        },
        TaskDomain.MULTI_AGENT: {
            'agent', 'orchestrate', 'coordinate', 'delegate', 'subagent',
            'autonomous', 'multi-agent', 'workflow', 'pipeline', 'crew'
        },
        TaskDomain.EVENT_DRIVEN: {
            'event', 'pub-sub', 'message', 'queue', 'stream', 'kafka',
            'reactive', 'subscription', 'listener', 'emit', 'webhook'
        },
        TaskDomain.THEIA_EXTENSION: {
            'theia', 'extension', 'vscode', 'ide', 'editor', 'plugin',
            'monaco', 'inversify', 'widget', 'language-server'
        }
    }

    def _detect_domains(self, words: Set[str]) -> List[TaskDomain]:
        """Detect applicable domains from keywords."""
        domain_scores: Dict[TaskDomain, int] = {}
        for domain, keywords in self.DOMAIN_KEYWORDS.items():
            overlap = words & keywords
            if overlap:
                domain_scores[domain] = len(overlap)

        # Sort by score (descending) and return all scored domains
        sorted_domains = sorted(
            domain_scores.items(),
            key=lambda x: x[1],
            reverse=True
        )
        detected_domains = [d[0] for d in sorted_domains]

        # Default to implementation if no domains detected
        if not detected_domains:
            detected_domains.append(TaskDomain.IMPLEMENTATION)
        return detected_domains
```
### Complexity Detection
```python
# Complexity indicators (a TaskAnalyzer class attribute)
COMPLEXITY_INDICATORS = {
    ComplexityLevel.RESEARCH: {
        'comprehensive', 'thorough', 'deep dive', 'full analysis',
        'investigate', 'all aspects', 'complete', 'exhaustive',
        'research everything', 'market study'
    },
    ComplexityLevel.COMPLEX: {
        'end-to-end', 'full stack', 'entire', 'complete system',
        'production-ready', 'enterprise', 'scalable', 'robust',
        'multi-component', 'full implementation'
    },
    ComplexityLevel.MODERATE: {
        'multiple', 'several', 'integrate', 'connect', 'combine',
        'few', 'couple', 'with tests', 'add feature'
    },
    ComplexityLevel.SIMPLE: {
        'simple', 'quick', 'basic', 'minimal', 'straightforward',
        'just', 'only', 'single', 'small', 'tiny', 'one'
    }
}

def _has_indicator(
    self,
    task_lower: str,
    words: Set[str],
    level: ComplexityLevel
) -> bool:
    """Check whether any indicator for a level is present.

    Multi-word or hyphenated indicators (e.g. 'deep dive',
    'production-ready') can never appear in the single-word token
    set, so they are matched as substrings of the full task text.
    """
    for indicator in self.COMPLEXITY_INDICATORS[level]:
        if ' ' in indicator or '-' in indicator:
            if indicator in task_lower:
                return True
        elif indicator in words:
            return True
    return False

def _detect_complexity(
    self,
    task_lower: str,
    words: Set[str],
    domain_count: int
) -> ComplexityLevel:
    """Determine task complexity level."""
    # Check for research indicators
    if self._has_indicator(task_lower, words, ComplexityLevel.RESEARCH):
        return ComplexityLevel.RESEARCH
    # Check for complex indicators (or very broad domain coverage)
    if (self._has_indicator(task_lower, words, ComplexityLevel.COMPLEX)
            or domain_count >= 4):
        return ComplexityLevel.COMPLEX
    # Check for moderate indicators
    if self._has_indicator(task_lower, words, ComplexityLevel.MODERATE):
        return ComplexityLevel.MODERATE
    # Check for simple indicators
    if self._has_indicator(task_lower, words, ComplexityLevel.SIMPLE):
        return ComplexityLevel.SIMPLE
    # No indicator matched: default based on domain count
    if domain_count >= 3:
        return ComplexityLevel.COMPLEX
    if domain_count >= 2:
        return ComplexityLevel.MODERATE
    return ComplexityLevel.SIMPLE
```
### Resource Estimation
```python
def _estimate_resources(
    self,
    complexity: ComplexityLevel,
    domain_count: int,
    description_length: int
) -> Tuple[int, int]:
    """Estimate token and tool call requirements."""
    # Base estimates by complexity
    base_estimates = {
        ComplexityLevel.SIMPLE: (5_000, 3),
        ComplexityLevel.MODERATE: (15_000, 10),
        ComplexityLevel.COMPLEX: (50_000, 25),
        ComplexityLevel.RESEARCH: (100_000, 40)
    }
    tokens, tools = base_estimates[complexity]

    # Adjust for domain count
    domain_multiplier = 1 + (domain_count - 1) * 0.2
    tokens = int(tokens * domain_multiplier)
    tools = int(tools * domain_multiplier)

    # Adjust for description complexity
    if description_length > 500:
        tokens = int(tokens * 1.3)
    elif description_length > 200:
        tokens = int(tokens * 1.1)

    return tokens, tools
```
### Agent Routing
```python
# Agent recommendations by domain (a TaskAnalyzer class attribute)
DOMAIN_AGENTS = {
    TaskDomain.RESEARCH: [
        "market-researcher", "competitive-analyst", "trend-analyst"
    ],
    TaskDomain.ARCHITECTURE: [
        "backend-architect", "software-design-architect", "senior-architect"
    ],
    TaskDomain.IMPLEMENTATION: [
        "rust-expert-developer", "backend-development",
        "frontend-mobile-development"
    ],
    TaskDomain.TESTING: [
        "testing-specialist", "codi-test-engineer", "rust-qa-specialist"
    ],
    TaskDomain.DOCUMENTATION: [
        "codi-documentation-writer", "documentation-librarian"
    ],
    TaskDomain.SECURITY: [
        "security-specialist", "security-auditor", "penetration-testing-agent"
    ],
    TaskDomain.PERFORMANCE: [
        "application-performance", "performance-profiler"
    ],
    TaskDomain.STRATEGY: [
        "strategy-brief-generator", "business-intelligence-analyst"
    ],
    TaskDomain.DATABASE: [
        "database-architect", "foundationdb-expert"
    ],
    TaskDomain.API_DESIGN: [
        "backend-api-security", "actix-web-specialist"
    ],
    TaskDomain.MULTI_AGENT: [
        "orchestrator", "workflow-orchestrator", "use-case-analyzer"
    ]
}

def _recommend_agents(
    self,
    domains: List[TaskDomain],
    complexity: ComplexityLevel
) -> List[str]:
    """Recommend agents for the task."""
    recommended = []
    seen = set()
    for domain in domains:
        agents = self.DOMAIN_AGENTS.get(domain, [])
        for agent in agents:
            if agent not in seen:
                recommended.append(agent)
                seen.add(agent)

    # Limit recommendations based on complexity
    limits = {
        ComplexityLevel.SIMPLE: 2,
        ComplexityLevel.MODERATE: 4,
        ComplexityLevel.COMPLEX: 6,
        ComplexityLevel.RESEARCH: 10
    }
    return recommended[:limits.get(complexity, 4)]
```
### Complete Analysis
```python
def analyze(self, task_description: str) -> TaskAnalysis:
    """
    Analyze task to determine complexity and requirements.

    Args:
        task_description: User's task request

    Returns:
        TaskAnalysis with orchestration requirements
    """
    task_lower = task_description.lower()
    # Keep hyphenated tokens (e.g. 'multi-agent', 'api-docs') intact so
    # they can match the keyword sets above.
    words = set(re.findall(r'[\w-]+', task_lower))

    # Detect domains
    domains = self._detect_domains(words)

    # Detect complexity level
    complexity = self._detect_complexity(task_lower, words, len(domains))

    # Estimate resources
    estimated_tokens, estimated_tool_calls = self._estimate_resources(
        complexity, len(domains), len(task_description)
    )

    # Determine capabilities
    parallel_capable = self._is_parallel_capable(domains)
    requires_state = self._requires_state_management(complexity, domains)

    # Extract trigger keywords
    triggers = self._extract_triggers(words)

    # Calculate confidence
    confidence = self._calculate_confidence(domains, triggers)

    # Recommend agents
    recommended_agents = self._recommend_agents(domains, complexity)

    return TaskAnalysis(
        complexity=complexity,
        domains=domains,
        estimated_tokens=estimated_tokens,
        estimated_tool_calls=estimated_tool_calls,
        parallel_capable=parallel_capable,
        requires_state_management=requires_state,
        trigger_keywords=triggers,
        confidence=confidence,
        recommended_agents=recommended_agents
    )

def _is_parallel_capable(self, domains: List[TaskDomain]) -> bool:
    """Determine if task can be parallelized."""
    parallel_domains = {
        TaskDomain.RESEARCH, TaskDomain.SECURITY,
        TaskDomain.PERFORMANCE, TaskDomain.TESTING,
        TaskDomain.DOCUMENTATION, TaskDomain.STRATEGY
    }
    return any(d in parallel_domains for d in domains)

def _requires_state_management(
    self,
    complexity: ComplexityLevel,
    domains: List[TaskDomain]
) -> bool:
    """Determine if stateful coordination is needed."""
    stateful_domains = {
        TaskDomain.MULTI_AGENT, TaskDomain.EVENT_DRIVEN,
        TaskDomain.DATABASE, TaskDomain.STRATEGY
    }
    return (
        complexity in (ComplexityLevel.COMPLEX, ComplexityLevel.RESEARCH)
        or any(d in stateful_domains for d in domains)
    )

def _extract_triggers(self, words: Set[str]) -> List[str]:
    """Extract key trigger words from task."""
    all_triggers = set()
    for keywords in self.DOMAIN_KEYWORDS.values():
        all_triggers.update(keywords)
    return sorted(words & all_triggers)

def _calculate_confidence(
    self,
    domains: List[TaskDomain],
    triggers: List[str]
) -> float:
    """Calculate confidence in analysis."""
    confidence = 0.5
    confidence += min(len(domains) * 0.1, 0.3)    # more domains, more signal
    confidence += min(len(triggers) * 0.05, 0.2)  # more triggers, more signal
    return min(confidence, 1.0)
```
## Usage Examples

### Basic Analysis
```python
from scripts.analyze_task_complexity import TaskAnalyzer

analyzer = TaskAnalyzer()

# Simple task
result = analyzer.analyze("Fix a bug in the login function")
print(f"Complexity: {result.complexity.value}")         # simple
print(f"Domains: {[d.value for d in result.domains]}")  # ['implementation']
print(f"Estimated tokens: {result.estimated_tokens}")   # ~5,000

# Complex task
result = analyzer.analyze(
    "Build a production-ready REST API with authentication, "
    "database integration, full tests, and documentation"
)
print(f"Complexity: {result.complexity.value}")  # complex
print(f"Domains: {[d.value for d in result.domains]}")
# includes 'api_design', 'testing', 'security', 'database'
print(f"Recommended agents: {result.recommended_agents}")
```
### CLI Usage
```bash
# Analyze a task
python3 scripts/analyze-task-complexity.py "Create a strategy brief for AI tools market"
```

Example output:

```json
{
  "complexity": "research",
  "domains": ["strategy", "research"],
  "estimated_tokens": 120000,
  "estimated_tool_calls": 48,
  "parallel_capable": true,
  "requires_state_management": true,
  "recommended_agents": ["strategy-brief-generator", "market-researcher", ...],
  "confidence": 0.85
}
```

```bash
# Pretty print
python3 scripts/analyze-task-complexity.py --pretty "Build a secure API"

# From stdin (for piping)
echo '{"task": "Implement user authentication"}' | \
  python3 scripts/analyze-task-complexity.py --stdin
```
### Orchestrator Integration
```python
async def route_task(task_description: str):
    """Route task to appropriate agent(s) based on complexity."""
    analyzer = TaskAnalyzer()
    analysis = analyzer.analyze(task_description)

    if analysis.complexity == ComplexityLevel.SIMPLE:
        # Single agent execution
        agent = analysis.recommended_agents[0]
        return await execute_single_agent(agent, task_description)

    elif analysis.complexity == ComplexityLevel.MODERATE:
        # Sequential multi-agent
        results = []
        for agent in analysis.recommended_agents:
            result = await execute_single_agent(agent, task_description)
            results.append(result)
        return synthesize_results(results)

    else:  # COMPLEX or RESEARCH
        # Full orchestration with waves
        orchestrator = WaveOrchestrator()
        tasks = decompose_to_agent_tasks(
            task_description,
            analysis.recommended_agents,
            analysis.domains
        )
        return await orchestrator.execute_all(tasks)
```
## Best Practices

### DO
- Analyze before routing - Always analyze complexity before agent selection
- Use confidence scores - Lower confidence means more ambiguity
- Respect resource estimates - Plan token budgets accordingly
- Check parallel capability - Parallelize when safe
- Consider state needs - Complex tasks need checkpointing
- Log analysis results - For debugging and optimization
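The last point can be as lightweight as a JSON-lines logger. The logger name and record fields below are assumptions, shown only to illustrate the practice:

```python
import json
import logging

logger = logging.getLogger("task_analysis")  # logger name is an assumption


def log_analysis(task_description: str, analysis_dict: dict) -> str:
    """Serialize one analysis record so accuracy can be reviewed later."""
    record = {
        "task": task_description[:120],  # truncate long prompts
        "complexity": analysis_dict.get("complexity"),
        "domains": analysis_dict.get("domains"),
        "estimated_tokens": analysis_dict.get("estimated_tokens"),
        "confidence": analysis_dict.get("confidence"),
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line
```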
### DON'T
- Don't skip analysis for "obvious" tasks - Edge cases exist
- Don't ignore low confidence - May need clarification
- Don't exceed token estimates - Budget appropriately
- Don't parallelize stateful tasks - Causes race conditions
- Don't route to single agent for research - Misses depth
## Configuration Reference
| Parameter | Default | Description |
|---|---|---|
| Domain count threshold (moderate) | 2 | Domains to trigger moderate |
| Domain count threshold (complex) | 4 | Domains to trigger complex |
| Token multiplier per domain | 0.2 | Additional tokens per domain |
| Description length threshold | 500 | Chars to trigger 1.3x multiplier |
| Agent limit (simple) | 2 | Max recommended agents |
| Agent limit (moderate) | 4 | Max recommended agents |
| Agent limit (complex) | 6 | Max recommended agents |
| Agent limit (research) | 10 | Max recommended agents |
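One way to keep these parameters from being scattered as magic numbers is a single config object. This is a sketch mirroring the table's defaults; `AnalyzerConfig` is an assumption, not part of the shipped script:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AnalyzerConfig:
    """Tunable thresholds from the configuration reference table."""
    moderate_domain_threshold: int = 2
    complex_domain_threshold: int = 4
    token_multiplier_per_domain: float = 0.2
    long_description_chars: int = 500
    agent_limits: dict = field(default_factory=lambda: {
        "simple": 2, "moderate": 4, "complex": 6, "research": 10,
    })

    def domain_multiplier(self, domain_count: int) -> float:
        """Token/tool multiplier: 1 + (n - 1) * per-domain increment."""
        return 1 + (domain_count - 1) * self.token_multiplier_per_domain
```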
## Complexity Decision Matrix
| Indicators | Domain Count | Result |
|---|---|---|
| "simple", "quick", "basic" | 1 | SIMPLE |
| "multiple", "integrate" | 2-3 | MODERATE |
| "end-to-end", "production-ready" | 3-4 | COMPLEX |
| "comprehensive", "thorough" | 4+ | RESEARCH |
| None detected | 1 | SIMPLE |
| None detected | 2 | MODERATE |
| None detected | 3+ | COMPLEX |
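The matrix can be exercised mechanically. The function below is a standalone restatement of the table (indicator categories win in precedence order, then domain count decides), not the analyzer's actual implementation:

```python
def classify(indicators: set, domain_count: int) -> str:
    """Apply the decision matrix: indicators win, then domain count."""
    if "research" in indicators:
        return "RESEARCH"
    if "complex" in indicators or domain_count >= 4:
        return "COMPLEX"
    if "moderate" in indicators:
        return "MODERATE"
    if "simple" in indicators:
        return "SIMPLE"
    # No indicator detected: fall back to domain count alone
    if domain_count >= 3:
        return "COMPLEX"
    return "MODERATE" if domain_count == 2 else "SIMPLE"
```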
## Resource Estimation Reference
| Complexity | Base Tokens | Base Tool Calls | Typical Duration |
|---|---|---|---|
| SIMPLE | 5,000 | 3 | <1 min |
| MODERATE | 15,000 | 10 | 1-3 min |
| COMPLEX | 50,000 | 25 | 3-10 min |
| RESEARCH | 100,000 | 40 | 10-30 min |
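A worked example of how the base figures combine with the multipliers in `_estimate_resources`: a COMPLEX task spanning three domains with a 600-character description.

```python
base_tokens, base_tools = 50_000, 25  # COMPLEX base estimates
domain_count = 3
description_length = 600

# Domain multiplier: 1 + (n - 1) * 0.2
multiplier = 1 + (domain_count - 1) * 0.2  # 1.4
tokens = int(base_tokens * multiplier)     # 70000
tools = int(base_tools * multiplier)       # 35

# Descriptions over 500 characters get a further 1.3x token bump
if description_length > 500:
    tokens = int(tokens * 1.3)             # 91000
```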
## Integration with CODITECT

Script: `scripts/analyze-task-complexity.py`
Recommended integration points:
| Component | Usage | Notes |
|---|---|---|
| orchestrator | Pre-routing analysis | Always analyze first |
| use-case-analyzer | Intent classification | Combine with complexity |
| workflow-orchestrator | Resource planning | Budget tokens |
| agentic-orchestrator | Wave planning | Parallel detection |
## Success Metrics
| Metric | Target | Measurement |
|---|---|---|
| Routing accuracy | >90% | Correct agent selected |
| Token estimate accuracy | ±20% | Actual vs estimated |
| False simple detection | <5% | Complex marked simple |
| False complex detection | <10% | Simple marked complex |
| Analysis latency | <100ms | Time to analyze |
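The ±20% token-estimate target can be checked per run with a couple of helpers. The `(estimated, actual)` record shape is an assumption about how usage gets logged:

```python
def within_estimate(estimated: int, actual: int, tolerance: float = 0.20) -> bool:
    """True when actual token usage is within ±tolerance of the estimate."""
    if estimated <= 0:
        return False  # an estimate of 0 is a failure indicator, not a pass
    return abs(actual - estimated) / estimated <= tolerance


def accuracy_rate(records: list) -> float:
    """Fraction of (estimated, actual) pairs meeting the target."""
    if not records:
        return 0.0
    hits = sum(within_estimate(est, act) for est, act in records)
    return hits / len(records)
```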
## Success Output

When successful, this skill MUST output:

```text
✅ SKILL COMPLETE: task-complexity-analysis

Completed:
- [x] Task complexity classified (SIMPLE/MODERATE/COMPLEX/RESEARCH)
- [x] Domains detected (primary and secondary)
- [x] Resource estimates calculated (tokens, tool calls)
- [x] Agent recommendations generated
- [x] Parallelization capability assessed

Outputs:
- TaskAnalysis object with all fields populated
- Complexity level with confidence score
- Domain list (ordered by relevance)
- Token estimate (with domain/description multipliers)
- Recommended agents (limited by complexity)
- Parallel capability flag
- State management requirement flag
```
## Completion Checklist

Before marking this skill as complete, verify:

- [ ] Complexity level assigned (SIMPLE/MODERATE/COMPLEX/RESEARCH)
- [ ] At least 1 domain detected (default: IMPLEMENTATION)
- [ ] Token estimate calculated with multipliers
- [ ] Tool call estimate within expected range
- [ ] Agent recommendations match domains
- [ ] Parallel capability flag set correctly
- [ ] State management requirement assessed
- [ ] Confidence score calculated (>0.5 minimum)
## Failure Indicators
This skill has FAILED if:
- ❌ No complexity level assigned
- ❌ Empty domain list
- ❌ Token estimate = 0 or unrealistic
- ❌ No agent recommendations
- ❌ Confidence score < 0.5
- ❌ Parallel flag wrong (stateful task marked parallel)
- ❌ Analysis latency > 500ms
- ❌ Routing decision not aligned with complexity
## When NOT to Use

Do NOT use this skill when:
- Task is already assigned to a specific agent (explicit routing)
- User specified the agent with an `/agent` command
- Simple command execution (no routing needed, e.g., `/git-sync`)
- Purely conversational task (no work to route)
- Analysis already completed earlier in session
- Task is meta-analysis (analyzing the analyzer itself)
Use alternative approaches when:
- Explicit agent → Skip analysis, route directly
- Command invocation → Execute command handler
- Conversation → Respond directly without routing
- Previously analyzed → Reuse cached analysis
- Simple query → Use general-purpose agent
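The "reuse cached analysis" path pairs naturally with the `task_hash` field used for state tracking. This memoization sketch assumes a dict-returning analyze function and is not part of the shipped script:

```python
import hashlib

_cache: dict = {}


def task_hash(description: str) -> str:
    """Stable short hash for deduplicating analyses within a session."""
    return hashlib.sha256(description.strip().lower().encode()).hexdigest()[:8]


def analyze_cached(description: str, analyze_fn) -> dict:
    """Run analyze_fn only when this task hasn't been analyzed before."""
    key = task_hash(description)
    if key not in _cache:
        _cache[key] = analyze_fn(description)
    return _cache[key]
```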
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Skipping analysis for "obvious" tasks | Edge cases misrouted | Always analyze, cache if needed |
| Ignoring low confidence (<0.5) | Ambiguous routing | Request clarification or use orchestrator |
| Exceeding token estimates | Budget overruns | Add 20-30% buffer, track actuals |
| Parallelizing stateful tasks | Race conditions, data corruption | Check state management requirement first |
| Single agent for RESEARCH | Misses depth and breadth | Route to 5-10 agents for research complexity |
| Over-routing SIMPLE tasks | Overhead, latency | Limit to 1-2 agents for simple tasks |
| Hardcoded domain-agent mappings | Inflexible, breaks with new agents | Use agent discovery service |
| No analysis logging | Can't improve accuracy | Log all analyses with outcomes |
## Principles

This skill embodies these CODITECT principles:

### #5 Eliminate Ambiguity
- Explicit complexity levels
- Domain detection with confidence scores
- Clear agent routing recommendations
### #6 Clear, Understandable, Explainable
- Analysis results human-readable
- Confidence scores explain certainty
- Decision matrix transparent
### #8 No Assumptions
- Always analyze, never assume complexity
- Validate token estimates against actuals
- Document low-confidence decisions
### #9 Automation First
- Automatic complexity detection
- Automatic agent routing
- Self-calibrating estimates
### Efficiency
- Analysis latency < 100ms target
- Token estimates prevent overruns
- Parallel detection maximizes throughput
## Multi-Context Window Support

### State Tracking (JSON)
```json
{
  "analysis_id": "analysis_20251221_142530",
  "task_hash": "a1b2c3d4",
  "result": {
    "complexity": "complex",
    "domains": ["api_design", "security", "testing"],
    "estimated_tokens": 60000,
    "recommended_agents": ["backend-api-security", "security-specialist"]
  },
  "routing_decision": {
    "approach": "wave_orchestration",
    "agents_assigned": 4,
    "parallel_waves": 2
  }
}
```
### Progress Notes (Markdown)
```markdown
# Task Complexity Analysis - Session

## Recent Analyses

| Task | Complexity | Domains | Agents |
|------|------------|---------|--------|
| Build auth API | complex | 3 | 4 |
| Fix typo | simple | 1 | 1 |
| Market research | research | 2 | 6 |

## Routing Accuracy

- Correct routes: 47/50 (94%)
- Overestimated: 2
- Underestimated: 1
```