# Application Performance Patterns
## When to Use This Skill

Use this skill when implementing application performance patterns in your codebase.

## How to Use This Skill

- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
## Level 1: Quick Reference (Under 500 tokens)

### Profiling Commands

```bash
# Python CPU profiling
py-spy record -o profile.svg -- python app.py

# Node.js profiling
node --prof app.js
node --prof-process isolate-*.log > processed.txt

# Rust/C profiling
perf record -g ./target/release/app
perf report
```
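Sampling profilers show where time goes across the whole process; a lightweight wall-clock timer around a suspect code path can confirm a specific hotspot. A minimal sketch (the `timed` helper is illustrative, not part of any library):

```typescript
// Measure wall-clock duration of an async operation in milliseconds,
// logging the result even if the operation throws.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = process.hrtime.bigint();
  try {
    return await fn();
  } finally {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`${label}: ${ms.toFixed(2)}ms`);
  }
}
```

Wrap only the paths the profiler already flagged; scattering timers everywhere adds noise without insight.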
### Performance Metrics

```typescript
interface PerformanceMetrics {
  p50Latency: number;   // 50th percentile
  p95Latency: number;   // 95th percentile
  p99Latency: number;   // 99th percentile
  throughput: number;   // requests/second
  errorRate: number;    // percentage
  cpuUsage: number;     // percentage
  memoryUsage: number;  // MB
}
```
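The latency fields above are percentiles over a window of samples. One way to compute them is the nearest-rank method, sketched below (sufficient for dashboards; high-volume production systems often use streaming estimators such as t-digest instead):

```typescript
// Nearest-rank percentile over a sample window.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Example: latencies in milliseconds from a request log.
// Note how two slow outliers dominate p95 while barely moving p50.
const latencies = [12, 15, 11, 90, 14, 13, 250, 16, 12, 14];
const p50 = percentile(latencies, 50); // 14
const p95 = percentile(latencies, 95); // 250
```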
## Level 2: Implementation Details (Under 2000 tokens)

### Memory Leak Detection (Node.js)

```typescript
import v8 from 'v8';

// v8.writeHeapSnapshot writes the snapshot to disk and returns the
// filename it used (the one passed in, or a generated name).
function takeHeapSnapshot(filename: string) {
  const snapshotFile = v8.writeHeapSnapshot(filename);
  console.log(`Heap snapshot written to: ${snapshotFile}`);
}

// Periodic memory monitoring
setInterval(() => {
  const used = process.memoryUsage();
  console.log({
    heapUsed: Math.round(used.heapUsed / 1024 / 1024) + 'MB',
    heapTotal: Math.round(used.heapTotal / 1024 / 1024) + 'MB',
    rss: Math.round(used.rss / 1024 / 1024) + 'MB',
  });
}, 60000);
```
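Snapshots and logs alone don't flag a leak; the signal is sustained growth across successive readings. A simple heuristic over the logged `heapUsed` values (the window size and growth threshold are illustrative; tune them for your workload):

```typescript
// Flag a suspected leak when heap usage grows monotonically across
// the last five samples and total growth exceeds a threshold in MB.
function suspectedLeak(heapSamplesMB: number[], minGrowthMB = 50): boolean {
  if (heapSamplesMB.length < 5) return false;
  const recent = heapSamplesMB.slice(-5);
  const monotonic = recent.every((v, i) => i === 0 || v >= recent[i - 1]);
  return monotonic && recent[recent.length - 1] - recent[0] >= minGrowthMB;
}
```

When the heuristic fires, take two heap snapshots a few minutes apart and diff them in Chrome DevTools to find the retained objects.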
### Database Query Optimization

```sql
-- Analyze query performance
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';

-- Add missing index
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);

-- Check index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;
```
### Caching Strategy

```typescript
import { Redis } from 'ioredis';

const redis = new Redis();

async function getCached<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttl: number = 300
): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);
  const fresh = await fetcher();
  await redis.setex(key, ttl, JSON.stringify(fresh));
  return fresh;
}
```
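One gap in the cache-aside pattern above: under load, many concurrent misses for the same key all invoke `fetcher` at once (a cache stampede). Deduplicating in-flight fetches within a process mitigates this. A sketch that can sit in front of the Redis layer (`dedupedFetch` and the `inFlight` map are illustrative names):

```typescript
const inFlight = new Map<string, Promise<unknown>>();

// Ensure only one fetch per key runs at a time; concurrent callers
// share the same pending promise instead of hitting the backend.
async function dedupedFetch<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const pending = inFlight.get(key);
  if (pending) return pending as Promise<T>;
  const promise = fetcher().finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}
```

This only deduplicates within one process; cross-instance stampedes need a distributed lock or probabilistic early expiration.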
## Level 3: Complete Reference (Full tokens)

### Comprehensive Performance Audit

```typescript
interface PerformanceAudit {
  timestamp: Date;
  duration: number;
  metrics: PerformanceMetrics;
  bottlenecks: Bottleneck[];
  recommendations: Recommendation[];
}

interface Bottleneck {
  type: 'cpu' | 'memory' | 'io' | 'network' | 'database';
  severity: 'low' | 'medium' | 'high' | 'critical';
  description: string;
  location?: string;
}

// measureCPU, collectMetrics, and generateRecommendations are
// application-specific helpers supplied by the host codebase.
async function runPerformanceAudit(): Promise<PerformanceAudit> {
  const startTime = Date.now();
  const bottlenecks: Bottleneck[] = [];

  // CPU check
  const cpuUsage = await measureCPU();
  if (cpuUsage > 80) {
    bottlenecks.push({
      type: 'cpu',
      severity: cpuUsage > 95 ? 'critical' : 'high',
      description: `CPU usage at ${cpuUsage}%`,
    });
  }

  // Memory check
  const memoryUsage = process.memoryUsage();
  if (memoryUsage.heapUsed / memoryUsage.heapTotal > 0.85) {
    bottlenecks.push({
      type: 'memory',
      severity: 'high',
      description: 'Heap usage above 85%',
    });
  }

  return {
    timestamp: new Date(),
    duration: Date.now() - startTime,
    metrics: await collectMetrics(),
    bottlenecks,
    recommendations: generateRecommendations(bottlenecks),
  };
}
```
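The audit leaves `measureCPU`, `collectMetrics`, and `generateRecommendations` to the host application. As one sketch of the last of these, a minimal `generateRecommendations` can map bottleneck types to starting-point actions (the interfaces are restated here so the sketch is self-contained, and the action wording is illustrative):

```typescript
type BottleneckType = 'cpu' | 'memory' | 'io' | 'network' | 'database';

interface Bottleneck {
  type: BottleneckType;
  severity: 'low' | 'medium' | 'high' | 'critical';
  description: string;
}

interface Recommendation {
  bottleneckType: BottleneckType;
  action: string;
}

// Map each detected bottleneck to a starting-point action.
function generateRecommendations(bottlenecks: Bottleneck[]): Recommendation[] {
  const actions: Record<BottleneckType, string> = {
    cpu: 'Profile hot paths (py-spy/perf) and optimize or parallelize them',
    memory: 'Take heap snapshots and diff them to find retained objects',
    io: 'Convert blocking file operations to async equivalents',
    network: 'Batch requests and add connection pooling',
    database: 'Run EXPLAIN ANALYZE on slow queries and add missing indexes',
  };
  return bottlenecks.map((b) => ({ bottleneckType: b.type, action: actions[b.type] }));
}
```

A production version would also rank actions by severity and attach the `location` field when available.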
## Best Practices

- Profile before optimizing
- Focus on p95/p99 latencies
- Implement caching strategically
- Monitor memory continuously
- Use async I/O operations
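The async I/O point deserves a concrete illustration: independent awaits should run concurrently, not sequentially. A sketch where the two fetchers are hypothetical parameters, not a real API:

```typescript
// Sequential awaits would cost roughly the sum of both calls;
// Promise.all costs roughly the slower of the two.
async function loadDashboard<U, O>(
  fetchUser: (id: string) => Promise<U>,
  fetchOrders: (id: string) => Promise<O[]>,
  id: string
): Promise<{ user: U; orders: O[] }> {
  const [user, orders] = await Promise.all([fetchUser(id), fetchOrders(id)]);
  return { user, orders };
}
```

Only apply this when the calls are truly independent; if `fetchOrders` needs data from `fetchUser`, the sequence is inherent.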
## Success Output

When successful, this skill MUST output:

```
✅ SKILL COMPLETE: application-performance-patterns
```

Completed:

- [x] Performance profiling executed
- [x] Bottlenecks identified and analyzed
- [x] Optimization recommendations generated
- [x] Performance metrics collected

Outputs:

- Performance audit report with metrics
- Bottleneck analysis with severity ratings
- Prioritized optimization recommendations
- Monitoring dashboard configured
## Completion Checklist

Before marking this skill as complete, verify:

- [ ] Profiling tools executed successfully (py-spy, perf, or equivalent)
- [ ] Performance metrics collected (p50/p95/p99 latencies, throughput, CPU, memory)
- [ ] Bottlenecks identified with severity classification
- [ ] Caching strategy evaluated and recommendations provided
- [ ] Database queries analyzed with EXPLAIN ANALYZE
- [ ] Memory leak detection run (heap snapshots if applicable)
- [ ] Performance audit report generated
- [ ] All outputs exist at expected locations
## Failure Indicators

This skill has FAILED if:
- ❌ Profiling tools not available or failed to execute
- ❌ No performance metrics collected
- ❌ Unable to identify any bottlenecks in high-load scenarios
- ❌ Database query analysis failed without EXPLAIN ANALYZE output
- ❌ Memory profiling tools unavailable in Node.js/Python environment
- ❌ Generated recommendations lack specific implementation details
- ❌ Performance audit incomplete or missing critical sections
## When NOT to Use

Do NOT use this skill when:
- Application is not experiencing performance issues (premature optimization)
- No baseline performance metrics exist to compare against
- Application is still in early prototyping phase
- Performance requirements are not defined
- Profiling tools cannot be installed in the target environment
- Single-threaded applications with minimal I/O (use simpler analysis instead)
- Optimization would add significant complexity without measurable benefit
- Team lacks expertise to implement recommendations (use performance-consultation skill instead)
## Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|---|---|---|
| Optimizing without profiling | Wastes time on non-bottlenecks | Always profile first, identify actual bottlenecks |
| Focusing only on p50 latency | Misses tail latency issues affecting users | Analyze p95/p99 latencies for real-world impact |
| Implementing all caching | Adds complexity without benefit | Cache strategically based on access patterns |
| Ignoring memory growth | Leads to memory leaks and crashes | Monitor memory continuously, investigate growth |
| Blocking I/O operations | Reduces throughput significantly | Use async I/O for all network/disk operations |
| Over-indexing databases | Slows writes, wastes storage | Index only frequently queried columns |
| Premature optimization | Code complexity without benefit | Measure first, optimize proven bottlenecks |
| Missing production metrics | Can't validate improvements | Implement monitoring before optimizing |
## Principles

This skill embodies:
- #1 First Principles - Profile to understand WHY before optimizing HOW
- #5 Eliminate Ambiguity - Clear metrics and severity classifications
- #6 Clear, Understandable, Explainable - Performance reports with actionable insights
- #8 No Assumptions - Measure actual performance, don't guess bottlenecks
- #9 Keep It Simple - Simplest optimization that achieves target metrics
Full Standard: CODITECT-STANDARD-AUTOMATION.md