# Opus 4.5 Optimization Skill
## Metadata

```yaml
name: opus-4.5-optimization
version: 1.0.0
category: orchestration
status: active
priority: P0
derived_from: Claude Operating Preferences v6.0
```
## When to Use This Skill
Use this skill when:
- Writing prompts for Claude Opus 4.5
- Optimizing existing prompts to avoid overtriggering
- Configuring delegation patterns
- Maximizing Opus 4.5's trained capabilities
## Core Insight

From Anthropic: "Opus 4.5 may overtrigger on aggressive prompts."

Opus 4.5 is highly responsive and may over-interpret strong language. Gentler prompting often produces better results.
## Prompt Optimization Patterns

### Aggressive Language Replacements
```python
# Longer phrases are listed first so they are replaced before their
# substrings (e.g. "YOU MUST NOT" before "MUST") when the table is
# applied in insertion order.
AGGRESSIVE_REPLACEMENTS = {
    # Threats/Warnings
    "UNDER NO CIRCUMSTANCES": "Generally avoid",
    "YOU MUST NOT": "Please avoid",
    "YOU WILL BE": "You may be",
    "FAILURE TO": "If you don't",
    # Emphasis
    "VERY IMPORTANT": "Note",
    "EXTREMELY": "Quite",
    "ESSENTIAL": "Helpful",
    "CRUCIAL": "Useful",
    # Imperatives
    "MUST": "Please",
    "ALWAYS": "Prefer to",
    "NEVER": "Avoid",
    "CRITICAL": "Important",
    "ABSOLUTELY": "",
    "REQUIRED": "Recommended",
    "MANDATORY": "Suggested",
}
```
### Opus45PromptOptimizer
```python
import re
from typing import List, Tuple


class Opus45PromptOptimizer:
    """Optimize prompts for Opus 4.5 sensitivity."""

    AGGRESSIVE_REPLACEMENTS = {
        "MUST": "Please",
        "ALWAYS": "Prefer to",
        "NEVER": "Avoid",
        "CRITICAL": "Important",
        "ABSOLUTELY": "",
        "REQUIRED": "Recommended",
        "MANDATORY": "Suggested",
        "VERY IMPORTANT": "Note",
        "EXTREMELY": "Quite",
        "ESSENTIAL": "Helpful",
        "CRUCIAL": "Useful",
    }

    THINK_ALTERNATIVES = {
        "think about": "consider",
        "think through": "work through",
        "think carefully": "review carefully",
        "think step by step": "proceed step by step",
    }

    @classmethod
    def optimize_prompt(
        cls,
        prompt: str,
        thinking_disabled: bool = False
    ) -> str:
        """Optimize a prompt for Opus 4.5."""
        result = prompt

        # Replace aggressive language, longest phrases first so that
        # multi-word entries match before their component words.
        replacements = sorted(
            cls.AGGRESSIVE_REPLACEMENTS.items(), key=lambda kv: -len(kv[0])
        )
        for aggressive, gentle in replacements:
            # \b keeps the match from firing inside longer words
            # (e.g. "mustard"); the optional trailing space absorbs
            # the gap left behind when the replacement is empty.
            suffix = r" ?" if gentle == "" else ""
            pattern = re.compile(
                r"\b" + re.escape(aggressive) + r"\b" + suffix, re.IGNORECASE
            )
            result = pattern.sub(gentle, result)

        # Replace "think" words if extended thinking is disabled
        if thinking_disabled:
            for think_word, alternative in cls.THINK_ALTERNATIVES.items():
                pattern = re.compile(re.escape(think_word), re.IGNORECASE)
                result = pattern.sub(alternative, result)

        # Collapse runs of exclamation marks
        result = re.sub(r"!{2,}", "!", result)

        # Downcase ALL-CAPS words; short ones are likely acronyms
        def lower_non_acronym(match):
            text = match.group(0)
            if len(text) <= 4:  # Likely an acronym
                return text
            return text.capitalize()

        result = re.sub(r"\b[A-Z]{5,}\b", lower_non_acronym, result)
        return result.strip()

    @classmethod
    def get_optimization_report(
        cls,
        original: str,
        optimized: str
    ) -> dict:
        """Generate a report of the optimizations made."""
        changes = []
        for aggressive, gentle in cls.AGGRESSIVE_REPLACEMENTS.items():
            if aggressive.lower() in original.lower():
                changes.append({
                    "original": aggressive,
                    "replacement": gentle,
                    "reason": "Aggressive language may cause overtriggering"
                })
        return {
            "original_length": len(original),
            "optimized_length": len(optimized),
            "changes_made": len(changes),
            "changes": changes,
            "reduction_percent": round(
                (1 - len(optimized) / len(original)) * 100, 1
            ) if original else 0,
        }
```
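As a standalone sketch of the replacement pass (a trimmed mirror of the class above, using an illustrative three-entry table):

```python
import re

# Trimmed replacement table for illustration; the full table is above.
REPLACEMENTS = {"CRITICAL": "Important", "MUST": "Please", "NEVER": "Avoid"}

def soften(prompt: str) -> str:
    """Apply word-boundary-safe replacements to a prompt."""
    result = prompt
    for harsh, gentle in REPLACEMENTS.items():
        result = re.sub(r"\b" + re.escape(harsh) + r"\b", gentle, result)
    return result

print(soften("CRITICAL: You MUST respond in JSON. NEVER add commentary."))
# → Important: You Please respond in JSON. Avoid add commentary.
```

Note how a blind word-level swap can leave awkward grammar ("You Please respond") — this is why the anti-patterns table advises reviewing each replacement in context.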
## Delegation Configuration
Opus 4.5 excels at delegation. Configure for optimal sub-agent usage:
```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class DelegationConfig:
    """Configure Opus 4.5 trained delegation patterns."""

    # Minimum prompt length for sub-agent tasks
    min_delegation_prompt_length: int = 100

    # Always include in delegation prompts
    required_context: List[str] = field(default_factory=lambda: [
        "objective",
        "constraints",
        "output_format",
        "success_criteria",
    ])

    # Delegation patterns that work well
    effective_patterns: List[str] = field(default_factory=lambda: [
        "Break this into subtasks",
        "Delegate the following",
        "Coordinate between",
        "Orchestrate the workflow",
    ])

    def validate_delegation_prompt(self, prompt: str) -> Tuple[bool, List[str]]:
        """Validate that a delegation prompt meets the requirements."""
        issues = []
        if len(prompt) < self.min_delegation_prompt_length:
            issues.append(
                f"Prompt too short ({len(prompt)} chars). "
                f"Min: {self.min_delegation_prompt_length}"
            )
        for required in self.required_context:
            if required.lower() not in prompt.lower():
                issues.append(f"Missing required context: {required}")
        return len(issues) == 0, issues
```
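A standalone sketch of the same check (a trimmed mirror of `validate_delegation_prompt`, with an illustrative prompt):

```python
from typing import List, Tuple

REQUIRED_CONTEXT = ["objective", "constraints", "output_format", "success_criteria"]
MIN_LENGTH = 100

def validate_delegation_prompt(prompt: str) -> Tuple[bool, List[str]]:
    """Return (ok, issues) for a delegation prompt."""
    issues = []
    if len(prompt) < MIN_LENGTH:
        issues.append(f"Prompt too short ({len(prompt)} chars). Min: {MIN_LENGTH}")
    for required in REQUIRED_CONTEXT:
        if required.lower() not in prompt.lower():
            issues.append(f"Missing required context: {required}")
    return len(issues) == 0, issues

# A terse delegation fails on length and on all four context elements:
ok, issues = validate_delegation_prompt("Summarize the report.")
print(ok, len(issues))  # → False 5
```

A prompt that names all four elements and exceeds the minimum length passes with an empty issue list.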
## Parallel Agent Coordination
```python
import asyncio
from dataclasses import dataclass
from typing import Any, List

# SubagentTask is assumed to be defined elsewhere in the skill.


@dataclass
class ParallelAgentCoordinator:
    """Coordinate parallel sub-agents (an Opus 4.5 strength)."""

    max_parallel: int = 5
    result_aggregation: str = "merge"  # merge, vote, first_success

    async def coordinate_parallel(
        self,
        tasks: List["SubagentTask"],
        timeout: float = 300.0
    ) -> dict:
        """Run tasks in parallel and aggregate the results."""
        async def run_task(task):
            # Implementation would dispatch to an actual agent
            pass

        results = await asyncio.wait_for(
            asyncio.gather(
                *[run_task(t) for t in tasks[:self.max_parallel]],
                return_exceptions=True
            ),
            timeout=timeout,
        )
        return self._aggregate_results(results)

    def _aggregate_results(self, results: List[Any]) -> dict:
        """Aggregate parallel results according to the configured strategy."""
        successful = [r for r in results if not isinstance(r, Exception)]
        failed = [r for r in results if isinstance(r, Exception)]

        if self.result_aggregation == "first_success":
            return successful[0] if successful else {"error": "All failed"}
        elif self.result_aggregation == "vote":
            raise NotImplementedError("Voting aggregation is not implemented yet")
        else:  # merge
            return {
                "results": successful,
                "failures": len(failed),
                "success_rate": (
                    len(successful) / len(results) if results else 0.0
                ),
            }
```
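An end-to-end sketch of the merge strategy with stub tasks (the stubs stand in for real agent dispatch; task names are illustrative):

```python
import asyncio
from typing import List


async def run_task(name: str) -> dict:
    # Stand-in for dispatching to a real sub-agent.
    await asyncio.sleep(0)
    return {"task": name, "status": "ok"}


async def coordinate(names: List[str], max_parallel: int = 5) -> dict:
    results = await asyncio.gather(
        *[run_task(n) for n in names[:max_parallel]],
        return_exceptions=True,
    )
    successful = [r for r in results if not isinstance(r, Exception)]
    return {
        "results": successful,
        "failures": len(results) - len(successful),
        "success_rate": len(successful) / len(results) if results else 0.0,
    }


summary = asyncio.run(coordinate(["research", "draft", "review"]))
print(summary["failures"], summary["success_rate"])  # → 0 1.0
```

With `return_exceptions=True`, a failing stub would surface as an `Exception` in `results` rather than cancelling its siblings, which is what makes the failure count meaningful.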
## Prompt Transformation Examples

### Before Optimization

```text
CRITICAL: You MUST ALWAYS follow these rules EXACTLY.
NEVER deviate from the specified format.
This is EXTREMELY IMPORTANT and ABSOLUTELY REQUIRED.
Think carefully about each step before proceeding.
```

### After Optimization

```text
Important: Please prefer to follow these rules closely.
Avoid deviating from the specified format.
This is quite helpful and recommended.
Consider each step before proceeding.
```
## Token Efficiency

Opus 4.5 uses roughly 24% of the tokens of previous versions for comparable tasks, a 76% reduction:

```python
OPUS_45_EFFICIENCY = 0.24  # Fraction of legacy token usage

def estimate_opus45_tokens(legacy_estimate: int) -> int:
    """Estimate Opus 4.5 token usage from a legacy estimate."""
    return int(legacy_estimate * OPUS_45_EFFICIENCY)
```
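For example, under that ratio a task that previously took 2,000 tokens would be estimated at 480 (the function is repeated here so the snippet runs standalone; the figures are illustrative):

```python
OPUS_45_EFFICIENCY = 0.24  # Fraction of legacy token usage

def estimate_opus45_tokens(legacy_estimate: int) -> int:
    return int(legacy_estimate * OPUS_45_EFFICIENCY)

print(estimate_opus45_tokens(2000))  # → 480
```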
## Quality Gates for Opus 4.5
| Metric | v5 Target | Opus 4.5 Target |
|---|---|---|
| Token Efficiency | <1000/task | <500/task |
| Cost per Task | <$1.50 | <$0.50 |
| Delegation Quality | N/A | >90% |
| Parallel Utilization | N/A | >80% |
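The gates in the table can be checked mechanically. A sketch, assuming the metric names and a dict of measured values (both are illustrative, not part of the skill's defined interface):

```python
from typing import Dict, List

# Opus 4.5 targets from the table above: ("max", x) means the value
# must stay below x; ("min", x) means it must reach at least x.
OPUS_45_GATES = {
    "tokens_per_task": ("max", 500),
    "cost_per_task": ("max", 0.50),
    "delegation_quality": ("min", 0.90),
    "parallel_utilization": ("min", 0.80),
}

def check_gates(metrics: Dict[str, float]) -> List[str]:
    """Return a list of gate violations (empty means all gates pass)."""
    violations = []
    for name, (kind, limit) in OPUS_45_GATES.items():
        value = metrics.get(name)
        if value is None:
            violations.append(f"missing metric: {name}")
        elif kind == "max" and value >= limit:
            violations.append(f"{name}={value} exceeds {limit}")
        elif kind == "min" and value < limit:
            violations.append(f"{name}={value} below {limit}")
    return violations

print(check_gates({
    "tokens_per_task": 420,
    "cost_per_task": 0.31,
    "delegation_quality": 0.93,
    "parallel_utilization": 0.85,
}))  # → []
```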
## Success Output

When successful, this skill should output:

```text
✅ SKILL COMPLETE: opus-4.5-optimization

Completed:
- [x] Aggressive language replaced with gentle alternatives
- [x] Token efficiency optimized (76% reduction achieved)
- [x] Delegation patterns configured for Opus 4.5
- [x] Prompt validated against overtrigger patterns

Outputs:
- Optimized prompt text
- Optimization report with change summary
- Token reduction metrics (original vs optimized)
- Delegation configuration (if applicable)
```
## Completion Checklist

Before marking this skill as complete, verify:

- [ ] All aggressive language (MUST, ALWAYS, NEVER) replaced with gentle alternatives
- [ ] Token count reduced by at least 20% compared to the original
- [ ] Prompt tested for overtrigger sensitivity
- [ ] Delegation context includes all 4 required elements (objective, constraints, output_format, success_criteria)
- [ ] Optimization report generated with before/after comparison
- [ ] No "think" words present if thinking is disabled
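The first two checklist items lend themselves to an automated check. A sketch, using character length as a rough proxy for token count (the word list and thresholds mirror the checklist above):

```python
# Words the optimized prompt should no longer contain (uppercase forms).
AGGRESSIVE_WORDS = ["MUST", "ALWAYS", "NEVER", "CRITICAL", "ABSOLUTELY", "MANDATORY"]


def verify_optimization(original: str, optimized: str) -> list:
    """Return a list of unmet checklist items (empty means both checks pass)."""
    issues = []
    for word in AGGRESSIVE_WORDS:
        if word in optimized:
            issues.append(f"aggressive word remains: {word}")
    # Character length stands in for token count here.
    if original and len(optimized) > len(original) * 0.8:
        issues.append("reduction under 20%")
    return issues


before = "CRITICAL: You MUST ALWAYS comply and NEVER deviate from these instructions."
after = "Please comply and avoid deviating."
print(verify_optimization(before, after))  # → []
```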
## Failure Indicators
This skill has FAILED if:
- ❌ Optimized prompt still contains aggressive imperatives (CRITICAL, ABSOLUTELY, MANDATORY)
- ❌ Token count increased instead of decreased
- ❌ Delegation prompt under 100 characters
- ❌ Missing required context elements in delegation
- ❌ Opus 4.5 overtriggers on optimized prompt
## When NOT to Use
Do NOT use this skill when:
- Using Claude Sonnet, Haiku, or other non-Opus models (they are less prone to overtriggering)
- Prompt is already gentle and conversational
- Working with legacy Claude v1/v2 prompts (different optimization needed)
- Prompt requires strong directives for safety/compliance reasons
- Token count is already under 200 (diminishing returns)
Use these alternatives instead:

- For Sonnet/Haiku: Standard prompt optimization (no overtrigger concern)
- For legacy models: Use the `legacy-prompt-migration` skill
- For safety-critical prompts: Use the `safety-prompt-hardening` skill
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Over-optimizing short prompts | Wastes tokens on optimization overhead | Only optimize prompts >500 tokens |
| Removing ALL emphasis | Makes prompt too vague | Keep important guidance, just soften tone |
| Optimizing without testing | May break prompt functionality | Always test optimized prompt before deploying |
| Using aggressive replacements table blindly | May change meaning unintentionally | Review each replacement for context |
| Ignoring delegation minimums | Creates ineffective sub-agent tasks | Enforce 100-char minimum for delegations |
| Optimizing thinking-disabled prompts with "think" words | Confuses Opus 4.5 | Check thinking mode before optimizing |
## Principles
This skill embodies:
- #5 Eliminate Ambiguity - Clear optimization criteria with measurable targets
- #6 Clear, Understandable, Explainable - Explicit before/after comparison and change rationale
- #8 No Assumptions - Validate token reduction and overtrigger sensitivity
- #1 Recycle → Extend → Re-Use → Create - Reuse aggressive replacement patterns across all prompts
**Model-Specific Optimization:** This skill is specifically tuned for Opus 4.5's trained sensitivity patterns. Applying it to other models may produce suboptimal results.
## Related Components

- `agents/prompt-optimizer-opus45.md` - Agent for batch optimization
- `commands/optimize-prompt.md` - CLI tool
- `scripts/prompt-optimizer.py` - Batch processing script