# Analysis Loop Detection

> Auto Trigger Configuration: see skills/auto trigger framework/SKILL.md

Expert skill for detecting and recovering from agent analysis loops. It prevents agents from getting stuck reading and searching files without making progress toward implementation.

## How to Use This Skill

- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
## When to Use

Use this skill when:
- Agent has made 5+ consecutive read/search tool calls without writing
- Agent is researching but not producing output
- Implementation task is taking too long on analysis phase
- Need to enforce "progress over perfection" mindset
- Token budget is being consumed without visible progress
- Agent is in a "spinning" state (same operations repeated)
Don't use this skill when:
- Agent is in explicit research/exploration mode (user requested investigation)
- First few tool calls of a new task (need initial context)
- Complex codebase requires extensive reading before understanding
- Agent is building mental model before implementation
## Core Algorithm

### Loop Detection

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Optional, Set


@dataclass
class ToolCallRecord:
    """Record of a tool call for loop detection."""
    tool_name: str
    timestamp: datetime
    file_path: Optional[str] = None
    pattern: Optional[str] = None


class AnalysisLoopDetector:
    """Detects when an agent is stuck in a read-only analysis loop."""

    # Tools that don't produce output (analysis-only)
    ANALYSIS_TOOLS: Set[str] = {
        "read",
        "read_file",
        "grep",
        "glob",
        "search",
        "search_reference_code",
        "get_all_available_references",
        "ls",
        "find",
    }

    # Tools that produce output (making progress)
    PROGRESS_TOOLS: Set[str] = {
        "write",
        "write_file",
        "edit",
        "bash",  # can produce output
        "create_file",
        "delete_file",
    }

    def __init__(self, threshold: int = 5, window_size: int = 10):
        """
        Initialize the detector.

        Args:
            threshold: Minimum consecutive analysis calls to trigger detection
            window_size: Number of recent calls to analyze
        """
        self.threshold = threshold
        self.window_size = window_size
        self.recent_calls: List[ToolCallRecord] = []

    def record_call(self, tool_name: str, **kwargs) -> None:
        """Record a tool call."""
        self.recent_calls.append(ToolCallRecord(
            tool_name=tool_name.lower(),
            timestamp=datetime.now(),
            file_path=kwargs.get("file_path"),
            pattern=kwargs.get("pattern"),
        ))
        # Keep only the recent window
        if len(self.recent_calls) > self.window_size:
            self.recent_calls = self.recent_calls[-self.window_size:]

    def is_in_loop(self) -> bool:
        """
        Detect whether the agent is stuck in an analysis loop.

        Returns True if the last `threshold` calls are all analysis-only tools.
        """
        if len(self.recent_calls) < self.threshold:
            return False
        recent = self.recent_calls[-self.threshold:]
        return all(
            call.tool_name in self.ANALYSIS_TOOLS
            for call in recent
        )

    def is_repeating_pattern(self) -> bool:
        """
        Detect whether the agent is reading the same files repeatedly.

        Returns True if the same file was read 3+ times within the window.
        """
        if len(self.recent_calls) < 3:
            return False
        file_counts: Dict[str, int] = {}
        for call in self.recent_calls:
            if call.file_path:
                file_counts[call.file_path] = file_counts.get(call.file_path, 0) + 1
        return any(count >= 3 for count in file_counts.values())

    def get_loop_stats(self) -> dict:
        """Get statistics about the current analysis pattern."""
        analysis_count = sum(
            1 for call in self.recent_calls
            if call.tool_name in self.ANALYSIS_TOOLS
        )
        progress_count = sum(
            1 for call in self.recent_calls
            if call.tool_name in self.PROGRESS_TOOLS
        )
        return {
            "total_calls": len(self.recent_calls),
            "analysis_calls": analysis_count,
            "progress_calls": progress_count,
            "analysis_ratio": analysis_count / len(self.recent_calls) if self.recent_calls else 0,
            "is_in_loop": self.is_in_loop(),
            "is_repeating": self.is_repeating_pattern(),
        }
```
### Guidance Generation

```python
def get_loop_breakout_guidance(detector: AnalysisLoopDetector) -> str:
    """
    Generate a guidance message to help the agent break out of the loop.

    This message is injected into the agent's context when a loop is detected.
    """
    stats = detector.get_loop_stats()
    guidance = f"""
## ANALYSIS LOOP DETECTED

You've made {stats['analysis_calls']} analysis-only calls in your last
{stats['total_calls']} tool calls without writing any code or making visible progress.

**Current State:**
- Analysis-only calls: {stats['analysis_calls']}
- Progress calls: {stats['progress_calls']}
- Analysis ratio: {stats['analysis_ratio']:.0%}

**RECOMMENDED ACTIONS:**
1. **STOP reading additional files** - You have enough context
2. **Start implementing** - Use your current understanding to write code
3. **Accept uncertainty** - A reasonable implementation beats perfect understanding
4. **Iterate if needed** - You can refine after the initial implementation

**ANTI-PATTERNS TO AVOID:**
- Reading "one more file" before starting
- Searching for additional examples
- Re-reading files you've already seen

**THE GOAL IS PROGRESS, NOT PERFECT UNDERSTANDING.**

Take action now: Write the file you're trying to implement, using what you know.
"""
    return guidance


def get_repetition_guidance(detector: AnalysisLoopDetector) -> str:
    """Generate guidance when the agent is reading the same files repeatedly."""
    return """
## REPETITIVE READING DETECTED

You're reading the same files multiple times. This usually indicates:
- Uncertainty about how to proceed
- Looking for "the right answer" instead of making a decision
- Context window limitations causing forgetting

**RECOMMENDED ACTIONS:**
1. **Summarize what you know** - Write your understanding in a comment or note
2. **Make a decision** - Choose one approach and implement it
3. **Document assumptions** - Note what you're uncertain about
4. **Move forward** - Implement first, refine later

Stop re-reading and start implementing.
"""
```
### Integration Pattern

```python
class AgentWithLoopDetection:
    """Agent wrapper that integrates loop detection."""

    def __init__(self, agent, loop_threshold: int = 5):
        self.agent = agent
        self.detector = AnalysisLoopDetector(threshold=loop_threshold)
        self._loop_guidance_sent = False

    async def execute_tool(self, tool_name: str, **kwargs):
        """Execute a tool with loop detection."""
        # Record the call
        self.detector.record_call(tool_name, **kwargs)

        # Check for a loop; inject guidance once per detected loop,
        # not on every subsequent call
        if self.detector.is_in_loop():
            if not self._loop_guidance_sent:
                guidance = get_loop_breakout_guidance(self.detector)
                self.agent.add_system_message(guidance)
                self._loop_guidance_sent = True
        else:
            self._loop_guidance_sent = False  # loop broken; re-arm

        if self.detector.is_repeating_pattern():
            guidance = get_repetition_guidance(self.detector)
            self.agent.add_system_message(guidance)

        # Execute the actual tool
        return await self.agent.original_execute_tool(tool_name, **kwargs)
```
## Usage Examples

### Basic Detection

```python
# Create detector
detector = AnalysisLoopDetector(threshold=5, window_size=10)

# Simulate agent tool calls
detector.record_call("read", file_path="src/main.py")
detector.record_call("grep", pattern="class.*Handler")
detector.record_call("read", file_path="src/utils.py")
detector.record_call("glob", pattern="*.py")
detector.record_call("read", file_path="src/config.py")

# Check status
if detector.is_in_loop():
    print(get_loop_breakout_guidance(detector))
    # Output: ANALYSIS LOOP DETECTED message
```
### With Orchestrator Integration

```python
# In the orchestrator agent
class Orchestrator:
    def __init__(self):
        self.loop_detector = AnalysisLoopDetector()
        self.loop_intervention_count = 0

    async def coordinate_task(self, task, max_iterations: int = 50):
        for iteration in range(max_iterations):
            tool_call = await self.get_next_action()

            # Record for loop detection
            self.loop_detector.record_call(tool_call.name, **tool_call.params)

            # Intervene if looping
            if self.loop_detector.is_in_loop():
                self.loop_intervention_count += 1
                await self.inject_guidance(
                    get_loop_breakout_guidance(self.loop_detector)
                )

                # Force progress if stuck too long
                if self.loop_intervention_count >= 2:
                    await self.force_implementation_mode()

            result = await self.execute_tool(tool_call)
```
### Metrics Collection

```python
# Track loop detection for observability
class LoopDetectionMetrics:
    def __init__(self):
        self.detections = 0
        self.recoveries = 0
        self.tokens_saved = 0

    def on_loop_detected(self, detector: AnalysisLoopDetector):
        self.detections += 1
        # Estimate tokens saved by breaking the loop early
        # Average: 1000 tokens per unnecessary read operation
        self.tokens_saved += detector.threshold * 1000

    def on_recovery(self):
        self.recoveries += 1

    def get_stats(self) -> dict:
        return {
            "loop_detections": self.detections,
            "successful_recoveries": self.recoveries,
            "recovery_rate": self.recoveries / self.detections if self.detections else 1.0,
            "estimated_tokens_saved": self.tokens_saved,
        }
```
## Best Practices

### DO
- Integrate proactively - Add to all implementation agents
- Set appropriate threshold - 5 is good default, adjust for task complexity
- Track metrics - Monitor detection rate and recovery success
- Log interventions - Record when guidance is injected
- Allow override - Let users disable for legitimate research tasks
- Combine with progress tracking - Use with implementation tracker skill
### DON'T
- Don't be too aggressive - Allow reasonable research before flagging
- Don't interrupt exploration - Research mode needs different settings
- Don't ignore context - Complex tasks may need more reading
- Don't punish thoroughness - Balance speed with quality
- Don't duplicate guidance - One intervention per loop detection
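The DO/DON'T rules above can be collapsed into a small gating helper. This is a sketch under stated assumptions: the `research_mode` flag and `max_interventions` parameter are hypothetical, not part of the detector API.

```python
def should_intervene(in_loop: bool, research_mode: bool,
                     prior_interventions: int, max_interventions: int = 2) -> bool:
    """Gate guidance injection per the practices above.

    - Never flag research/exploration mode (user-requested investigation).
    - Stop injecting duplicate guidance after max_interventions; at that
      point the caller should escalate (e.g., force implementation mode).
    """
    if research_mode:
        return False
    if prior_interventions >= max_interventions:
        return False
    return in_loop
```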
## Configuration Options

```python
# Conservative (for complex tasks)
detector = AnalysisLoopDetector(threshold=8, window_size=15)

# Default (balanced)
detector = AnalysisLoopDetector(threshold=5, window_size=10)

# Aggressive (for simple tasks)
detector = AnalysisLoopDetector(threshold=3, window_size=5)

# Custom tool classification
# Note: ANALYSIS_TOOLS and PROGRESS_TOOLS are class attributes, so
# .add() affects every detector instance, not just this one.
detector.ANALYSIS_TOOLS.add("custom_search_tool")
detector.PROGRESS_TOOLS.add("custom_write_tool")
```
## Integration with CODITECT Agents

Recommended integration points:
| Agent | Threshold | Notes |
|---|---|---|
| backend-development | 5 | Default for code implementation |
| frontend-development | 5 | Default for code implementation |
| code-implementation | 5 | Default for code implementation |
| orchestrator | 7 | Higher tolerance for coordination |
| Explore | 10 | Research mode needs more latitude |
| debugger | 6 | Debugging may need more investigation |
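The table above can be encoded as a lookup so each agent gets a correctly tuned detector. The mapping and helper below are an illustrative sketch, not an existing CODITECT API:

```python
# Per-agent loop-detection thresholds, mirroring the table above
AGENT_THRESHOLDS = {
    "backend-development": 5,
    "frontend-development": 5,
    "code-implementation": 5,
    "orchestrator": 7,
    "explore": 10,
    "debugger": 6,
}


def threshold_for_agent(agent_name: str, default: int = 5) -> int:
    """Return the loop-detection threshold for an agent, falling back to 5."""
    return AGENT_THRESHOLDS.get(agent_name.lower(), default)


# Usage:
#   detector = AnalysisLoopDetector(threshold=threshold_for_agent("orchestrator"))
```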
## Success Metrics
| Metric | Target | Measurement |
|---|---|---|
| Spinning incidents | -80% | Count of interventions vs. baseline |
| False positives | <5% | Legitimate research flagged incorrectly |
| Token savings | 5000+/incident | Tokens not wasted on unnecessary reads |
| Recovery rate | >90% | Successful breakout after guidance |
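Assuming the counters from `LoopDetectionMetrics` are available, the targets above can be checked mechanically. The function below simply restates the table's thresholds; it is illustrative, not a required API:

```python
def check_targets(detections: int, recoveries: int,
                  false_positives: int, tokens_saved: int) -> dict:
    """Compare observed counters against the success-metric targets above."""
    if detections == 0:
        # No loops detected: trivially within all targets
        return {"recovery_rate_ok": True,
                "false_positive_rate_ok": True,
                "token_savings_ok": True}
    return {
        "recovery_rate_ok": recoveries / detections > 0.90,
        "false_positive_rate_ok": false_positives / detections < 0.05,
        "token_savings_ok": tokens_saved / detections >= 5000,
    }
```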
## Source Reference

This pattern was extracted from the DeepCode (HKUDS/DeepCode) multi-agent system.

Original location: `workflows/agents/code_implementation_agent.py` (lines 843-911)

Original codebase stats:
- 51 Python files analyzed
- 33,497 lines of code
- 12 patterns extracted

See /submodules/labs/DeepCode/DEEP-ANALYSIS.md for the complete analysis.
## Multi-Context Window Support

### State Tracking

Loop detection state (JSON):

```json
{
  "checkpoint_id": "ckpt_loop_detection_20251214",
  "recent_tool_calls": [
    {"tool": "read", "file": "src/main.py", "time": "2025-12-14T10:00:00Z"},
    {"tool": "grep", "pattern": "class", "time": "2025-12-14T10:00:15Z"}
  ],
  "loop_interventions": 2,
  "successful_recoveries": 2,
  "tokens_saved": 10000
}
```
Progress notes (Markdown):

```markdown
# Loop Detection Progress - 2025-12-14

## Interventions
- 10:05 - Loop detected after 5 read calls, guidance injected
- 10:07 - Agent broke loop, started implementing handler.py

## Metrics
- Total interventions: 2
- Recovery rate: 100%
- Estimated tokens saved: 10,000
```
### Session Recovery

When starting a fresh context window:

- Load checkpoint: Read `.coditect/checkpoints/loop-detection-latest.json`
- Restore recent calls: Rebuild detector state from the checkpoint
- Continue monitoring: Resume normal loop detection

Token savings: ~40% reduction by not re-analyzing detection patterns
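The recovery steps above can be sketched as a checkpoint loader. The JSON schema matches the State Tracking example; replaying the parsed calls through `record_call()` rebuilds the detector's window:

```python
import json


def load_checkpoint_calls(checkpoint_json: str) -> list:
    """Parse a loop-detection checkpoint into (tool_name, kwargs) pairs
    ready to replay through AnalysisLoopDetector.record_call()."""
    state = json.loads(checkpoint_json)
    calls = []
    for entry in state.get("recent_tool_calls", []):
        kwargs = {}
        if "file" in entry:
            kwargs["file_path"] = entry["file"]
        if "pattern" in entry:
            kwargs["pattern"] = entry["pattern"]
        calls.append((entry["tool"], kwargs))
    return calls


# Usage:
#   for tool, kwargs in load_checkpoint_calls(checkpoint_text):
#       detector.record_call(tool, **kwargs)
```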
## Success Output

When successful, this skill MUST output:

```text
✅ SKILL COMPLETE: analysis-loop-detection

Completed:
- [x] Loop detector initialized (threshold: 5, window: 10)
- [x] Tool calls monitored for analysis patterns
- [x] Loop detected after N consecutive analysis calls
- [x] Breakout guidance injected into agent context
- [x] Agent successfully broke loop and resumed progress
- [x] Metrics collected for observability

Outputs:
- Loop detection triggered: 2 times
- Successful recoveries: 2
- Recovery rate: 100%
- Estimated tokens saved: 10,000
- Analysis-to-progress ratio improved from 100% to 40%
```
## Completion Checklist

Before marking this skill as complete, verify:

- AnalysisLoopDetector instantiated with an appropriate threshold
- Tool calls recorded using the `record_call()` method
- Loop detection triggered (`is_in_loop()` returned True)
- Breakout guidance generated and injected
- Agent resumed implementation after guidance
- Metrics tracked (detections, recoveries, tokens saved)
- Integration with orchestrator or agent wrapper complete
- Configuration appropriate for task complexity
## Failure Indicators

This skill has FAILED if:
- ❌ Detector not instantiated or misconfigured
- ❌ Tool calls not being recorded
- ❌ Loop detected but no guidance injected
- ❌ Agent continues looping after intervention (>2 interventions)
- ❌ False positive rate >5% (legitimate research flagged)
- ❌ Metrics not collected or incomplete
## When NOT to Use

Do NOT use this skill when:
- Agent is in explicit research/exploration mode (user requested investigation)
- First few tool calls of a new task (need initial context)
- Complex unfamiliar codebase requires extensive reading
- Agent building mental model before implementation (legitimate)
- Task is purely analytical (code audit, documentation review)
- User explicitly requested thorough analysis
- Working with academic/research tasks requiring deep reading
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Too aggressive threshold (≤2) | Interrupts legitimate research | Use threshold ≥5 for balanced detection |
| No metrics tracking | Can't measure effectiveness | Always collect detections/recoveries/tokens |
| Ignoring context (task type) | False positives | Adjust threshold by task complexity |
| Multiple interventions without escalation | Stuck in loop | Force implementation mode after 2 interventions |
| Not customizing tool lists | Wrong tools classified | Add custom tools to ANALYSIS_TOOLS/PROGRESS_TOOLS |
| Blocking exploration mode | Frustrates users | Allow override/disable for research tasks |
| No observability | Can't debug issues | Log all interventions with timestamps |
## Principles

This skill embodies:
- #4 Progress Over Perfection - Break analysis loops to drive implementation
- #8 No Assumptions - Track metrics to verify effectiveness
- #9 Based on Facts - Use tool call patterns, not subjective assessment
- #11 Token Efficiency - Save tokens by preventing unnecessary reads
Full Standard: CODITECT-STANDARD-AUTOMATION.md