This skill defines the framework through which other skills declare and implement triggers.

Auto-Trigger Framework

When to Use This Skill

Use this skill when implementing auto-trigger framework patterns in your codebase.

How to Use This Skill

  1. Review the patterns and examples below
  2. Apply the relevant patterns to your implementation
  3. Follow the best practices outlined in this skill

A meta-framework that enables implicit component activation: skills and agents declare trigger conditions and run automatically, without explicit orchestration. This reduces coordination overhead and enables parallel execution.

Architecture Overview

Current Model (Explicit Orchestration)

User Request → Orchestrator → Explicit Agent Selection → Sequential Execution

"Use analysis-loop-detection"
"Use adaptive-retry"
"Use code-summary-generator"

New Model (Auto-Trigger + Capability Routing)

User Request → Capability Matcher → Auto-activated Components (Parallel)
                     ↓                            ↓
             Match capabilities          Event triggers fire
             to request needs            automatically

Trigger Types

1. Event Triggers

Components activate automatically when specific events occur.

# In skill SKILL.md or agent .md file
auto_triggers:
  events:
    - event: tool_call
      tool: write_file
      action: invoke_after
      purpose: "Generate summary after file writes"

    - event: tool_sequence
      pattern: ["read_file", "read_file", "read_file", "read_file", "read_file"]
      threshold: 5
      action: inject_guidance
      purpose: "Detect analysis loops"

    - event: error
      type: context_exceeded
      action: invoke_with_retry
      purpose: "Adaptive retry on context errors"
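Once a skill's frontmatter is parsed, the runtime needs to validate each trigger entry before registering it. The sketch below is an assumption about how that validation could look (the `validate_event_triggers` helper and the required-field set are illustrative, not part of the framework spec); a dict literal stands in for the parsed YAML above.

```python
# Hypothetical validation pass over parsed `auto_triggers` frontmatter.
# REQUIRED and validate_event_triggers are illustrative names, not spec.
REQUIRED = {"event", "action", "purpose"}

def validate_event_triggers(frontmatter: dict) -> list:
    """Return the event-trigger entries, raising on missing required fields."""
    events = frontmatter.get("auto_triggers", {}).get("events", [])
    for entry in events:
        missing = REQUIRED - entry.keys()
        if missing:
            raise ValueError(f"trigger missing fields: {sorted(missing)}")
    return events

# Stand-in for yaml.safe_load() output of the example above
frontmatter = {
    "auto_triggers": {
        "events": [
            {"event": "tool_call", "tool": "write_file",
             "action": "invoke_after", "purpose": "Generate summary after file writes"},
            {"event": "error", "type": "context_exceeded",
             "action": "invoke_with_retry", "purpose": "Adaptive retry on context errors"},
        ]
    }
}
triggers = validate_event_triggers(frontmatter)
```

Validating at registration time keeps malformed frontmatter from silently producing triggers that never fire.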

2. Capability Tags

Components declare capabilities; orchestrator routes based on task needs.

# In component metadata
capabilities:
  - token_optimization
  - error_recovery
  - context_management
  - code_analysis
  - implementation_tracking

# Orchestrator matches task keywords to capabilities
task_keywords:
  "implement multiple files": [token_optimization, implementation_tracking]
  "handle errors gracefully": [error_recovery]
  "analyze codebase": [code_analysis]

3. Parallel Execution Groups

Components in the same group run concurrently without coordination.

# Define execution groups
parallel_groups:
  pre_execution:
    - analysis-loop-detection    # Monitors tool calls
    - adaptive-retry             # Wraps LLM calls

  post_file_write:
    - code-summary-generator     # Creates summary
    - implementation-tracker     # Updates progress

  always_active:
    - memory-optimization-agent  # Background context management
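To illustrate the concurrency model, here is a minimal sketch of running one parallel group with `asyncio.gather`. The two coroutines are placeholders for real components, and the function names are assumptions for illustration only.

```python
import asyncio

# Placeholder components for the post_file_write group (illustrative names).
async def code_summary_generator(path: str) -> str:
    await asyncio.sleep(0)  # stand-in for real work
    return f"summary:{path}"

async def implementation_tracker(path: str) -> str:
    await asyncio.sleep(0)
    return f"tracked:{path}"

POST_FILE_WRITE = [code_summary_generator, implementation_tracker]

async def run_group(components, path: str):
    # gather() runs all group members concurrently; no ordering between them
    return await asyncio.gather(*(c(path) for c in components))

results = asyncio.run(run_group(POST_FILE_WRITE, "main.py"))
```

Because members of a group never coordinate with each other, adding a component to a group is a one-line change.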

Implementation

Trigger Registry

from typing import List, Dict, Optional, Union
from dataclasses import dataclass, field
from enum import Enum


class TriggerType(Enum):
    EVENT = "event"
    CAPABILITY = "capability"
    SCHEDULE = "schedule"
    PATTERN = "pattern"


class EventType(Enum):
    TOOL_CALL = "tool_call"
    TOOL_SEQUENCE = "tool_sequence"
    ERROR = "error"
    FILE_WRITE = "file_write"
    CONTEXT_THRESHOLD = "context_threshold"
    USER_MESSAGE = "user_message"


@dataclass
class Trigger:
    """Definition of an auto-trigger"""
    trigger_type: TriggerType
    event_type: Optional[EventType] = None
    pattern: Optional[Union[str, List[str]]] = None  # str for tool/error names, list for sequences
    threshold: Optional[int] = None
    capabilities: List[str] = field(default_factory=list)
    action: str = "invoke"
    component_name: str = ""
    component_type: str = ""  # skill, agent, workflow
    priority: int = 50  # 0-100, higher = more priority
    parallel_group: Optional[str] = None


@dataclass
class TriggerMatch:
    """Result of trigger evaluation"""
    trigger: Trigger
    matched: bool
    confidence: float  # 0.0-1.0
    context: Dict = field(default_factory=dict)


class TriggerRegistry:
    """
    Central registry for all auto-triggers in CODITECT.

    Components register their triggers here.
    The runtime evaluates triggers and activates components automatically.
    """

    def __init__(self):
        self.triggers: List[Trigger] = []
        self.capability_index: Dict[str, List[str]] = {}  # capability -> components
        self.parallel_groups: Dict[str, List[str]] = {}

    def register(self, trigger: Trigger):
        """Register a new trigger"""
        self.triggers.append(trigger)

        # Index capabilities for fast lookup
        for cap in trigger.capabilities:
            if cap not in self.capability_index:
                self.capability_index[cap] = []
            self.capability_index[cap].append(trigger.component_name)

        # Index parallel groups
        if trigger.parallel_group:
            if trigger.parallel_group not in self.parallel_groups:
                self.parallel_groups[trigger.parallel_group] = []
            self.parallel_groups[trigger.parallel_group].append(trigger.component_name)

    def evaluate_event(self, event_type: EventType, context: Dict) -> List[TriggerMatch]:
        """Evaluate all triggers against an event"""
        matches = []

        for trigger in self.triggers:
            if trigger.event_type != event_type:
                continue

            match = self._evaluate_trigger(trigger, context)
            if match.matched:
                matches.append(match)

        # Sort by priority
        matches.sort(key=lambda m: m.trigger.priority, reverse=True)
        return matches

    def find_by_capabilities(self, required: List[str]) -> List[str]:
        """Find components matching required capabilities"""
        components = set()

        for cap in required:
            if cap in self.capability_index:
                components.update(self.capability_index[cap])

        return list(components)

    def get_parallel_group(self, group_name: str) -> List[str]:
        """Get all components in a parallel execution group"""
        return self.parallel_groups.get(group_name, [])

    def _evaluate_trigger(self, trigger: Trigger, context: Dict) -> TriggerMatch:
        """Evaluate a single trigger against context"""
        matched = False
        confidence = 0.0

        if trigger.event_type == EventType.TOOL_SEQUENCE:
            # Check for tool sequence pattern
            recent_tools = context.get("recent_tool_calls", [])
            if self._matches_sequence(recent_tools, trigger.pattern, trigger.threshold):
                matched = True
                confidence = 1.0

        elif trigger.event_type == EventType.TOOL_CALL:
            # Check for specific tool call
            tool_name = context.get("tool_name", "")
            if tool_name == trigger.pattern:
                matched = True
                confidence = 1.0

        elif trigger.event_type == EventType.ERROR:
            # Check for error type
            error_type = context.get("error_type", "")
            if error_type == trigger.pattern:
                matched = True
                confidence = 1.0

        elif trigger.event_type == EventType.CONTEXT_THRESHOLD:
            # Check context size threshold
            token_count = context.get("token_count", 0)
            if token_count >= trigger.threshold:
                matched = True
                confidence = min(1.0, token_count / trigger.threshold)

        return TriggerMatch(
            trigger=trigger,
            matched=matched,
            confidence=confidence,
            context=context
        )

    def _matches_sequence(self, recent: List[str], pattern: Union[str, List[str]],
                          threshold: int) -> bool:
        """Check if recent tool calls match pattern threshold"""
        if len(recent) < threshold:
            return False

        # Check last N calls
        analysis_tools = set(pattern) if isinstance(pattern, list) else {pattern}
        recent_slice = recent[-threshold:]

        return all(tool in analysis_tools for tool in recent_slice)


# Global registry instance
TRIGGER_REGISTRY = TriggerRegistry()

Component Registration Example

# In analysis-loop-detection skill initialization
from skills.auto_trigger_framework import TRIGGER_REGISTRY, Trigger, TriggerType, EventType

# Register auto-trigger
TRIGGER_REGISTRY.register(Trigger(
    trigger_type=TriggerType.EVENT,
    event_type=EventType.TOOL_SEQUENCE,
    pattern=["read_file", "grep", "glob", "read_file", "search"],
    threshold=5,
    action="inject_guidance",
    component_name="analysis-loop-detection",
    component_type="skill",
    priority=90,  # High priority - should trigger quickly
    parallel_group="always_active",
    capabilities=["loop_detection", "agent_monitoring", "guidance_injection"]
))
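To see why this registration fires, the sequence check from `TriggerRegistry._matches_sequence` can be exercised standalone. The free function below reproduces that logic outside the class so the matching rule is easy to verify in isolation.

```python
def matches_sequence(recent, pattern, threshold):
    # Same rule as TriggerRegistry._matches_sequence: the trigger fires when
    # the last `threshold` tool calls all come from the pattern's tool set.
    if len(recent) < threshold:
        return False
    analysis_tools = set(pattern) if isinstance(pattern, list) else {pattern}
    return all(tool in analysis_tools for tool in recent[-threshold:])

pattern = ["read_file", "grep", "glob", "read_file", "search"]

# Five consecutive read-only calls -> loop detected
hit = matches_sequence(["read_file", "grep", "read_file", "glob", "grep"], pattern, 5)

# A write_file in the window breaks the pattern -> no trigger
miss = matches_sequence(["read_file", "grep", "write_file", "glob", "grep"], pattern, 5)
```

Note that the pattern behaves as a set of allowed tools, not an ordered sequence: any five consecutive calls drawn from it count as a loop.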

Runtime Integration

from typing import Any, Dict, List


class AutoTriggerRuntime:
    """
    Runtime that evaluates triggers and activates components.

    Integrates with Claude Code tool execution pipeline.
    """

    def __init__(self, registry: TriggerRegistry):
        self.registry = registry
        self.recent_tool_calls: List[str] = []
        self.active_components: Dict[str, Any] = {}

    def on_tool_call(self, tool_name: str, tool_args: Dict, result: Any):
        """Called after each tool execution"""
        self.recent_tool_calls.append(tool_name)

        # Keep last 20 calls
        if len(self.recent_tool_calls) > 20:
            self.recent_tool_calls = self.recent_tool_calls[-20:]

        # Evaluate triggers
        context = {
            "tool_name": tool_name,
            "tool_args": tool_args,
            "result": result,
            "recent_tool_calls": self.recent_tool_calls
        }

        matches = self.registry.evaluate_event(EventType.TOOL_CALL, context)

        # Also check sequence triggers
        seq_matches = self.registry.evaluate_event(EventType.TOOL_SEQUENCE, context)
        matches.extend(seq_matches)

        # Execute matched triggers
        for match in matches:
            self._execute_trigger(match)

    def on_error(self, error_type: str, error_message: str):
        """Called on errors"""
        context = {
            "error_type": error_type,
            "error_message": error_message
        }

        matches = self.registry.evaluate_event(EventType.ERROR, context)

        for match in matches:
            self._execute_trigger(match)

    def _execute_trigger(self, match: TriggerMatch):
        """Execute a triggered component"""
        trigger = match.trigger

        if trigger.action == "invoke":
            # Standard invocation
            self._invoke_component(trigger.component_name, trigger.component_type)

        elif trigger.action == "inject_guidance":
            # Inject guidance message into conversation
            self._inject_guidance(trigger.component_name, match.context)

        elif trigger.action == "invoke_with_retry":
            # Invoke with modified parameters
            self._invoke_with_retry(trigger.component_name, match.context)

    def _invoke_component(self, name: str, component_type: str):
        """Invoke a component by name"""
        # Implementation depends on CODITECT runtime
        pass

    def _inject_guidance(self, component_name: str, context: Dict):
        """Inject guidance message for loop breaking"""
        if component_name == "analysis-loop-detection":
            guidance = (
                "ANALYSIS LOOP DETECTED: You have made 5+ consecutive read-only "
                "tool calls without taking action. Stop analyzing and start implementing "
                "based on your current understanding."
            )
            # Inject into conversation context
            pass

    def _invoke_with_retry(self, component_name: str, context: Dict):
        """Invoke adaptive retry logic"""
        if component_name == "adaptive-retry":
            # Reduce parameters and retry
            pass

Updating Existing Skills

analysis-loop-detection - Add Auto-Trigger

# Add to skills/analysis-loop-detection/SKILL.md frontmatter
auto_triggers:
  - trigger_type: event
    event_type: tool_sequence
    pattern: ["read_file", "grep", "glob", "search_reference_code"]
    threshold: 5
    action: inject_guidance
    parallel_group: always_active
    capabilities: [loop_detection, agent_monitoring]

adaptive-retry - Add Auto-Trigger

# Add to skills/adaptive-retry/SKILL.md frontmatter
auto_triggers:
  - trigger_type: event
    event_type: error
    pattern: context_exceeded
    action: invoke_with_retry
    parallel_group: error_handling
    capabilities: [error_recovery, retry_logic]

  - trigger_type: event
    event_type: error
    pattern: rate_limited
    action: invoke_with_backoff
    parallel_group: error_handling

code-summary-generator - Add Auto-Trigger

# Add to skills/code-summary-generator/SKILL.md frontmatter
auto_triggers:
  - trigger_type: event
    event_type: tool_call
    pattern: write_file
    action: invoke_after
    parallel_group: post_file_write
    capabilities: [summarization, context_optimization]

Capability-Based Routing

Instead of explicit agent selection, the orchestrator matches the task to capabilities:

def route_task(task_description: str) -> List[str]:
    """Route task to components based on capabilities"""

    # Map task phrases to capabilities
    capability_map = {
        "implement": ["implementation_tracking", "code_generation"],
        "multiple files": ["token_optimization", "checkpoint_resume"],
        "error": ["error_recovery", "adaptive_retry"],
        "analyze": ["code_analysis", "pattern_detection"],
        "optimize": ["token_optimization", "memory_management"],
        "research paper": ["document_segmentation", "concept_extraction"],
    }

    # Phrase containment (rather than tokenizing) handles multi-word keys
    # like "multiple files" and "research paper"
    task_lower = task_description.lower()
    required_capabilities = set()
    for phrase, caps in capability_map.items():
        if phrase in task_lower:
            required_capabilities.update(caps)

    # Find matching components
    components = TRIGGER_REGISTRY.find_by_capabilities(list(required_capabilities))

    return components

Benefits

| Aspect | Explicit Orchestration | Auto-Trigger |
|---|---|---|
| Setup overhead | High - must list all components | Low - components self-register |
| Parallelism | Manual coordination | Automatic via groups |
| New component integration | Update orchestrator | Just add triggers |
| Runtime flexibility | Static | Dynamic based on context |
| Token usage | Higher (explicit coordination) | Lower (automatic) |

Trigger Configuration Quick Reference

Copy-paste configurations for common trigger scenarios:

| Scenario | Trigger Type | Event Type | Pattern | Action |
|---|---|---|---|---|
| Analysis loop | EVENT | TOOL_SEQUENCE | ["read_file"×5] | inject_guidance |
| Context overflow | EVENT | ERROR | context_exceeded | invoke_with_retry |
| Rate limiting | EVENT | ERROR | rate_limited | invoke_with_backoff |
| File created | EVENT | TOOL_CALL | write_file | invoke_after |
| High token usage | EVENT | CONTEXT_THRESHOLD | >80000 | invoke (compaction) |
| Code search | CAPABILITY | - | ["code_analysis"] | route_to_matching |

YAML Snippet Library:

# Loop detection (copy to skill SKILL.md)
auto_triggers:
  - trigger_type: event
    event_type: tool_sequence
    pattern: ["read_file", "grep", "glob"]
    threshold: 5
    action: inject_guidance
    priority: 90

# Context overflow recovery
auto_triggers:
  - trigger_type: event
    event_type: error
    pattern: context_exceeded
    action: invoke_with_retry
    priority: 95

# Post-write summary generation
auto_triggers:
  - trigger_type: event
    event_type: tool_call
    pattern: write_file
    action: invoke_after
    priority: 50
    parallel_group: post_file_write

# Capability-based routing
capabilities:
  - code_analysis
  - token_optimization
  - error_recovery

Priority Guidelines:

| Priority Range | Use For |
|---|---|
| 90-100 | Critical safety (loops, errors) |
| 70-89 | Quality gates (validation, review) |
| 50-69 | Enhancement (summaries, tracking) |
| 0-49 | Background optimization |
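The bands above can be encoded as a small helper so that priority assignments stay consistent across components. The function name is illustrative; only the band boundaries come from the table.

```python
def priority_tier(priority: int) -> str:
    """Map a 0-100 trigger priority to its guideline tier (bands from the table)."""
    if not 0 <= priority <= 100:
        raise ValueError("priority must be in 0-100")
    if priority >= 90:
        return "critical safety"
    if priority >= 70:
        return "quality gate"
    if priority >= 50:
        return "enhancement"
    return "background optimization"
```

For instance, the loop-detection trigger registered earlier with `priority=90` lands in the "critical safety" tier, matching its role.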

Migration Path

  1. Phase 1: Add auto_triggers to existing DeepCode skills (non-breaking)
  2. Phase 2: Create TriggerRegistry and runtime hooks
  3. Phase 3: Update orchestrator to use capability routing
  4. Phase 4: Enable parallel execution groups

Success Metrics

| Metric | Target |
|---|---|
| Orchestration tokens saved | 30-50% |
| New component integration time | <5 min |
| Parallel execution coverage | 80% of applicable cases |
| False trigger rate | <5% |

Success Output

When successful, this skill MUST output:

✅ SKILL COMPLETE: auto-trigger-framework

Completed:
- [x] TriggerRegistry initialized with {trigger_count} registered triggers
- [x] {component_count} components registered with auto-triggers
- [x] Capability index built: {capability_count} unique capabilities
- [x] {parallel_group_count} parallel execution groups configured
- [x] Event triggers validated for {event_type_count} event types

Outputs:
- Trigger registry: {registered_components} components active
- Capability routing: {matched_count} matches for task "{task_description}"
- Parallel groups: {group_list}
- Auto-activation success rate: {activation_rate}%
- Token savings: {orchestration_tokens_saved} tokens ({savings_pct}% reduction)

Completion Checklist

Before marking this skill as complete, verify:

  • TriggerRegistry singleton created and accessible globally
  • All trigger types implemented (EVENT, CAPABILITY, SCHEDULE, PATTERN)
  • Event evaluation logic tested for each EventType
  • Capability index correctly maps capabilities → components
  • Parallel group execution validated (components run concurrently)
  • Component registration includes all required fields (name, type, priority)
  • Trigger matching produces confidence scores (0.0-1.0)
  • AutoTriggerRuntime integrated with tool execution pipeline
  • Guidance injection working for loop detection
  • No duplicate component registrations (unique component names)

Failure Indicators

This skill has FAILED if:

  • ❌ TriggerRegistry not initialized (None or empty)
  • ❌ Component registration fails silently (no error, not added)
  • ❌ Event evaluation always returns empty matches (never triggers)
  • ❌ Capability routing returns 0 matches for valid keywords
  • ❌ Parallel execution runs sequentially (no concurrency)
  • ❌ Trigger confidence always 0.0 or 1.0 (no nuance)
  • ❌ Tool call hook not firing (on_tool_call never invoked)
  • ❌ Infinite trigger loop (trigger activates itself recursively)
  • ❌ False positive rate > 10% (triggering on wrong events)

When NOT to Use

Do NOT use this skill when:

  • Single-agent workflow with no coordination needed
  • Explicit orchestration preferred for transparency
  • Debugging complex workflows (auto-triggers hide control flow)
  • Components require manual approval before execution
  • No repeated patterns to optimize (one-off tasks)
  • Token budget unlimited (no need for efficiency)
  • System under development (triggers add complexity)

Use alternative skills:

  • For explicit coordination → communication-protocols
  • For manual orchestration → Direct agent invocation
  • For workflow debugging → workflow-debugging-patterns
  • For approval-required tasks → Human-in-the-loop workflows

Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|---|---|---|
| Registering duplicate triggers | Conflicts and double-firing | Use unique component names |
| No priority ordering | Non-deterministic execution | Set explicit priorities (0-100) |
| Overly broad patterns | False positives everywhere | Narrow pattern matching criteria |
| Ignoring confidence scores | All triggers treated equally | Sort by confidence, filter < 0.7 |
| No parallel groups | Sequential execution overhead | Group independent components |
| Missing event types | Triggers never fire | Ensure event types match tool calls |
| Circular trigger dependencies | Infinite loops | Design acyclic trigger graphs |
| Hardcoded component names | Breaks when components renamed | Use capability-based routing |
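The "ignoring confidence scores" row suggests filtering and sorting matches before acting on them. A minimal sketch, using plain `(component, confidence)` tuples in place of `TriggerMatch` objects; the 0.7 cutoff comes from the table and the sample values are illustrative:

```python
# (component, confidence) pairs stand in for TriggerMatch results.
matches = [
    ("adaptive-retry", 0.95),
    ("code-summary-generator", 0.4),   # below cutoff: dropped
    ("analysis-loop-detection", 0.8),
]

MIN_CONFIDENCE = 0.7  # cutoff recommended in the anti-pattern table

actionable = sorted(
    (m for m in matches if m[1] >= MIN_CONFIDENCE),
    key=lambda m: m[1],
    reverse=True,
)
```

Filtering first and then sorting means low-confidence matches never compete with confident ones, which keeps execution order deterministic.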

Principles

This skill embodies:

  • #1 Recycle → Extend → Re-Use → Create - Extend components with auto-triggers, no rewrite
  • #3 Keep It Simple - Single registry pattern, minimal coordination overhead
  • #4 Separation of Concerns - Trigger logic separate from component implementation
  • #5 Eliminate Ambiguity - Explicit confidence scores and event type matching
  • #6 Clear, Understandable, Explainable - Trigger matches show why component activated
  • #7 Composability - Any component can declare triggers, compose freely
  • #9 Progressive Disclosure - Start with event triggers, add capability routing later

Full Standard: CODITECT-STANDARD-AUTOMATION.md