# Parallel Tool Execution Skill
## Metadata

```yaml
name: parallel-tool-execution
version: 1.0.0
category: orchestration
status: active
priority: P1
derived_from: Claude Operating Preferences v6.0
```
## When to Use This Skill
Use this skill when:
- Executing multiple independent tool calls simultaneously
- Performing speculative searches across sources
- Optimizing latency for multi-tool workflows
- Coordinating parallel file or API operations
## Core Insight

From Anthropic: "Claude 4.x excels at parallel tool execution."

Claude can execute multiple tool calls in a single response when they are independent, which significantly reduces total latency.
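For example, when a single assistant turn contains several `tool_use` content blocks, the client can dispatch them concurrently instead of one at a time. A minimal sketch, assuming a hypothetical async `run_tool` dispatcher:

```python
import asyncio
from typing import Any, Dict, List


async def run_tool(name: str, tool_input: Dict[str, Any]) -> Any:
    """Hypothetical dispatcher: route one tool call to its implementation."""
    ...


async def handle_turn(content_blocks: List[Any]) -> List[Any]:
    # Collect every tool_use block Claude emitted in this single response.
    calls = [b for b in content_blocks if getattr(b, "type", None) == "tool_use"]
    # The calls are independent, so await them together rather than one at a time.
    return await asyncio.gather(*(run_tool(b.name, b.input) for b in calls))
```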
## Core Patterns

### Parallel Tool Configuration

```python
from dataclasses import dataclass


@dataclass
class ParallelToolConfig:
    """Configure parallel tool execution."""
    enabled: bool = True
    max_parallel: int = 5
    timeout_per_tool: float = 30.0  # Seconds before a single call is abandoned
    fail_fast: bool = False  # Stop on first error

    def should_parallelize(self, tool_count: int) -> bool:
        """Determine if parallelization is beneficial."""
        return self.enabled and tool_count > 1
```
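A minimal usage sketch: consult the config before choosing an execution strategy.

```python
config = ParallelToolConfig(max_parallel=5, timeout_per_tool=10.0)

if config.should_parallelize(tool_count=3):
    print("Dispatch all three calls in one batch")  # Independent: run together
else:
    print("Fall back to sequential execution")
```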
### Speculative Search Coordinator

```python
import asyncio
import time
from dataclasses import dataclass
from typing import Any, List, Optional


@dataclass
class SearchResult:
    """Result from a single search."""
    source: str
    query: str
    results: List[Any]
    latency_ms: float
    success: bool
    error: Optional[str] = None


class SpeculativeSearchCoordinator:
    """Fire multiple searches simultaneously."""

    def __init__(self, config: ParallelToolConfig):
        self.config = config

    async def parallel_search(
        self,
        query: str,
        sources: List[str],
    ) -> List[SearchResult]:
        """Execute the search across multiple sources in parallel."""
        tasks = [
            self._search_source(query, source)
            for source in sources[:self.config.max_parallel]
        ]
        results = await asyncio.gather(
            *tasks,
            return_exceptions=not self.config.fail_fast,
        )
        # Normalize any raw exceptions from gather into failed SearchResults.
        return [
            r if isinstance(r, SearchResult)
            else SearchResult(
                source="unknown",
                query=query,
                results=[],
                latency_ms=0.0,
                success=False,
                error=str(r),
            )
            for r in results
        ]

    async def _search_source(self, query: str, source: str) -> SearchResult:
        """Search a single source, enforcing the configured per-tool timeout."""
        start = time.time()
        try:
            # Implementation dispatches to the actual search per source.
            results = await asyncio.wait_for(
                self._execute_search(query, source),
                timeout=self.config.timeout_per_tool,
            )
            return SearchResult(
                source=source,
                query=query,
                results=results,
                latency_ms=(time.time() - start) * 1000,
                success=True,
            )
        except Exception as e:
            return SearchResult(
                source=source,
                query=query,
                results=[],
                latency_ms=(time.time() - start) * 1000,
                success=False,
                error=str(e),
            )

    async def _execute_search(self, query: str, source: str) -> List[Any]:
        """Execute the actual search - implement per source."""
        raise NotImplementedError
```
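A minimal usage sketch, assuming a hypothetical subclass that stubs out the per-source dispatch:

```python
class MockSearchCoordinator(SpeculativeSearchCoordinator):
    async def _execute_search(self, query: str, source: str) -> List[Any]:
        await asyncio.sleep(0.1)  # Simulate network latency
        return [f"{source} hit for {query!r}"]


async def main() -> None:
    coordinator = MockSearchCoordinator(ParallelToolConfig(max_parallel=3))
    results = await coordinator.parallel_search("retry policy", ["docs", "code", "issues"])
    for r in results:
        print(r.source, r.success, f"{r.latency_ms:.0f}ms")


asyncio.run(main())
```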
### Parallel File Operations

```python
import asyncio
from typing import Dict, List, Optional, Tuple


async def parallel_read_files(
    file_paths: List[str],
    config: ParallelToolConfig,
) -> Dict[str, str]:
    """Read multiple files in parallel, skipping any that fail."""

    def read_file(path: str) -> Tuple[str, Optional[str], Optional[str]]:
        try:
            with open(path, "r") as f:
                return (path, f.read(), None)
        except Exception as e:
            return (path, None, str(e))

    # open()/read() are blocking, so run each read in a worker thread;
    # a plain `async def` wrapper would still execute the reads serially.
    tasks = [
        asyncio.to_thread(read_file, p)
        for p in file_paths[:config.max_parallel]
    ]
    results = await asyncio.gather(*tasks)
    return {
        path: content
        for path, content, error in results
        if content is not None
    }
```
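Usage mirrors the coordinator above (hypothetical paths); files that fail to read are simply omitted from the result:

```python
contents = asyncio.run(
    parallel_read_files(["config.yaml", "README.md"], ParallelToolConfig())
)
print(sorted(contents))  # Only the paths that were read successfully
```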
### Tool Call Batching

```python
from typing import Any, Dict, List


def batch_tool_calls(
    calls: List[Dict[str, Any]],
    batch_size: int = 5,
) -> List[List[Dict[str, Any]]]:
    """Batch tool calls for parallel execution.

    Independent calls share a batch; a call with a dependency gets its own
    batch so that everything before it has finished by the time it runs.
    """
    batches: List[List[Dict[str, Any]]] = []
    current_batch: List[Dict[str, Any]] = []
    for call in calls:
        if call.get("depends_on"):
            # Dependent call: flush the open batch, then isolate this call.
            if current_batch:
                batches.append(current_batch)
                current_batch = []
            batches.append([call])
        else:
            current_batch.append(call)
            if len(current_batch) >= batch_size:
                batches.append(current_batch)
                current_batch = []
    if current_batch:
        batches.append(current_batch)
    return batches
```
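For example, independent calls share a batch while a dependent call forces a split (hypothetical call shapes):

```python
calls = [
    {"tool": "read", "path": "a.py"},
    {"tool": "read", "path": "b.py"},
    {"tool": "grep", "pattern": "TODO"},
    {"tool": "lint", "depends_on": "read"},  # Must wait for the reads
    {"tool": "read", "path": "c.py"},
]
for i, batch in enumerate(batch_tool_calls(calls)):
    print(i, [c["tool"] for c in batch])
# 0 ['read', 'read', 'grep']
# 1 ['lint']
# 2 ['read']
```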
## Best Practices

### When to Parallelize
| Scenario | Parallelize? | Reason |
|---|---|---|
| Multiple file reads | Yes | No dependencies |
| Sequential API calls | No | Order matters |
| Multiple grep searches | Yes | Independent queries |
| Write then read | No | Write must complete first |
| Multiple web fetches | Yes | Independent requests |
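One way to encode the same check programmatically, assuming calls carry the optional `depends_on` field used by `batch_tool_calls` above:

```python
def are_independent(calls: List[Dict[str, Any]]) -> bool:
    """True when no call in the set depends on another call's output."""
    return not any(call.get("depends_on") for call in calls)
```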
### Prompt Pattern for Parallel Tools

When requesting parallel tool execution in prompts:

```text
Execute these operations in parallel:
1. Read file A
2. Read file B
3. Search codebase for pattern X

These are independent and can run simultaneously.
```
## Quality Gate

Target: >80% parallel tool utilization.

```python
def calculate_parallel_utilization(
    total_tools: int,
    parallel_batches: int,
    batch_size: int = 5,
) -> float:
    """Calculate parallel utilization as a percentage.

    Compares the minimum number of batches possible at the given batch
    size against the number of batches actually used.
    """
    if parallel_batches == 0:
        return 0.0
    ideal_batches = (total_tools + batch_size - 1) // batch_size  # Ceiling division
    return min(ideal_batches / parallel_batches, 1.0) * 100
```
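For example, 12 tools executed in 3 batches is optimal at a batch size of 5 (ceil(12/5) = 3), while running the same tools one at a time scores far lower:

```python
print(calculate_parallel_utilization(12, 3))   # 100.0 - fully batched
print(calculate_parallel_utilization(12, 12))  # 25.0 - effectively sequential
```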
## Success Output

When this skill completes successfully, output:

```text
✅ SKILL COMPLETE: parallel-tool-execution

Completed:
- [x] Identified independent tool calls for parallelization
- [x] Configured parallel execution with appropriate batch size
- [x] Executed tools in parallel with optimal latency
- [x] Achieved >80% parallel tool utilization

Results:
- Total tools executed: [count]
- Parallel batches: [count]
- Utilization: [percentage]%
- Latency reduction: [percentage]%
```
## Completion Checklist
Before marking this skill as complete, verify:
- Independent tool calls identified correctly (no sequential dependencies)
- ParallelToolConfig configured with appropriate max_parallel limit
- Tool calls batched appropriately (max 5 per batch)
- Parallel execution completed without dependency errors
- Parallel utilization calculated and >80% achieved
- Total latency reduced compared to sequential execution
## Failure Indicators
This skill has FAILED if:
- ❌ Sequential dependencies executed in parallel (data corruption risk)
- ❌ Parallel batch exceeded max_parallel limit causing timeouts
- ❌ Tools failed due to race conditions or resource conflicts
- ❌ Parallel utilization <50% (inefficient parallelization)
- ❌ Total latency increased compared to sequential execution
- ❌ Error handling failed to capture tool failures properly
## When NOT to Use
Do NOT use this skill when:
- Tasks have sequential dependencies (use sequential execution instead)
- Only 1-2 independent tools (parallelization overhead not worth it)
- Tools modify shared state (use locking mechanisms first)
- Tools require specific execution order (use task dependency graph)
- System resources are constrained (may cause thrashing)
- Debugging tool execution (parallel logs are harder to read)
Use instead:
- Sequential execution for dependent tasks
- Simple direct tool calls for 1-2 tools
- Locking/coordination patterns for shared state
- Task queue systems for complex dependencies
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Parallelize everything | Creates race conditions, resource exhaustion | Analyze dependencies first, batch intelligently |
| Ignore failures | Silent failures in parallel execution hard to debug | Use fail_fast or comprehensive error collection |
| Unbounded parallelization | System overload, thrashing, timeouts | Set max_parallel limits based on system capacity |
| Wrong batch size | Too large: timeouts; too small: overhead | Use 3-5 tools per batch as default |
| No timeout configuration | Hanging tools block entire batch | Set timeout_per_tool based on expected runtime |
| Parallel file writes | File corruption, race conditions | Use file locking or sequential writes |
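For the last row, a minimal sketch of serializing writes behind an `asyncio.Lock` (hypothetical helpers; a production system might use file-level locks instead):

```python
import asyncio

write_lock = asyncio.Lock()


def _write(path: str, data: str) -> None:
    with open(path, "w") as f:
        f.write(data)


async def safe_write(path: str, data: str) -> None:
    # Reads may run in parallel freely; writes to shared files are serialized.
    async with write_lock:
        await asyncio.to_thread(_write, path, data)
```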
## Principles
This skill embodies CODITECT principles:
- #1 Recycle → Extend → Re-Use → Create - Reuses Anthropic's parallel tool execution capability
- #2 First Principles - Understands WHY parallelization works (independent operations)
- #3 Keep It Simple - Simple batching logic, clear configuration
- #4 Separation of Concerns - Separates parallelization from tool logic
- #5 Eliminate Ambiguity - Clear success/failure markers, explicit completion criteria
- #8 No Assumptions - Validates independence before parallelizing
Full Standard: CODITECT-STANDARD-AUTOMATION.md
## Related Components

- `agents/parallel-tool-coordinator.md` - Coordination agent
- `commands/parallel-execute.md` - CLI command
Version: 1.0.0 | Updated: 2026-01-04 | Author: CODITECT Team