# Optimization Patterns Skill

## How to Use This Skill
- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
Production-ready optimization patterns for improving application performance across caching, lazy loading, bundle optimization, database queries, memory management, and algorithmic efficiency.
## When to Use This Skill

Use `optimization-patterns` when:
- Implementing caching strategies (memory, Redis, CDN)
- Optimizing frontend bundle size and loading
- Improving database query performance
- Reducing memory footprint and preventing leaks
- Optimizing algorithms and data structures
- Implementing lazy loading and code splitting
Don't use `optimization-patterns` when:
- Profiling to identify bottlenecks (use the `performance-profiler` agent instead)
- Load testing performance (use the `load-testing` skill instead)
- You have no profiling data yet (that would be premature optimization)
- Doing a security-focused code review (use the `security-audit` skill instead)
## Optimization Categories
| Category | Impact | Effort | Priority |
|---|---|---|---|
| Caching | High | Low-Medium | 1st |
| Database | High | Medium | 2nd |
| Bundle Size | Medium | Low | 3rd |
| Lazy Loading | Medium | Low | 4th |
| Memory | Medium | Medium | 5th |
| Algorithms | Varies | High | Case-by-case |
## Instructions

### Phase 1: Caching Strategies
Objective: Implement effective caching at multiple levels.
- **In-memory caching (Python):**

  ```python
  import time
  from functools import lru_cache

  from cachetools import TTLCache, cached

  # Simple LRU cache
  @lru_cache(maxsize=1000)
  def get_user_data(user_id: int) -> dict:
      # Expensive database call
      return db.users.find_one({"_id": user_id})

  # TTL cache (entries expire after 5 minutes)
  cache = TTLCache(maxsize=100, ttl=300)

  @cached(cache)
  def get_config(key: str) -> str:
      return db.config.find_one({"key": key})["value"]

  # Manual cache with invalidation
  class UserCache:
      def __init__(self):
          self._cache = {}
          self._timestamps = {}
          self._ttl = 300

      def get(self, user_id: int) -> dict | None:
          if user_id in self._cache:
              if time.time() - self._timestamps[user_id] < self._ttl:
                  return self._cache[user_id]
              del self._cache[user_id]
              self._timestamps.pop(user_id, None)
          return None

      def set(self, user_id: int, data: dict):
          self._cache[user_id] = data
          self._timestamps[user_id] = time.time()

      def invalidate(self, user_id: int):
          self._cache.pop(user_id, None)
          self._timestamps.pop(user_id, None)
  ```
- **Redis caching:**

  ```python
  import hashlib
  import json
  from functools import wraps

  import redis

  redis_client = redis.Redis(host='localhost', port=6379, db=0)

  def redis_cache(ttl: int = 300):
      def decorator(func):
          @wraps(func)
          def wrapper(*args, **kwargs):
              # Build a stable cache key from the function name and arguments.
              # (The built-in hash() varies across processes, which would
              # defeat a shared Redis cache.)
              digest = hashlib.sha256((str(args) + str(kwargs)).encode()).hexdigest()
              key = f"{func.__name__}:{digest}"
              # Try the cache first
              cached = redis_client.get(key)
              if cached:
                  return json.loads(cached)
              # Call the function and cache the result
              result = func(*args, **kwargs)
              redis_client.setex(key, ttl, json.dumps(result))
              return result
          return wrapper
      return decorator

  @redis_cache(ttl=600)
  def get_product_recommendations(user_id: int) -> list:
      # Expensive ML computation
      return ml_model.predict(user_id)
  ```
- **HTTP caching headers:**

  ```python
  from fastapi import FastAPI, Response

  app = FastAPI()

  @app.get("/api/products/{product_id}")
  def get_product(product_id: int, response: Response):
      product = db.products.find_one({"_id": product_id})
      # Publicly cacheable for 1 hour; ETag enables revalidation
      response.headers["Cache-Control"] = "public, max-age=3600"
      response.headers["ETag"] = f'"{product["updated_at"]}"'
      return product

  @app.get("/api/user/profile")
  def get_profile(response: Response):
      # Private data: never cache
      response.headers["Cache-Control"] = "private, no-store"
      return get_current_user()
  ```
### Phase 2: Database Query Optimization
Objective: Optimize database queries for better performance.
- **Index optimization:**

  ```sql
  -- Identify unused indexes (PostgreSQL)
  SELECT
      schemaname,
      tablename,
      indexname,
      idx_scan,
      idx_tup_read,
      idx_tup_fetch
  FROM pg_stat_user_indexes
  WHERE idx_scan = 0
  ORDER BY pg_relation_size(indexrelid) DESC;

  -- Composite index for common queries
  CREATE INDEX CONCURRENTLY idx_orders_user_status
      ON orders (user_id, status, created_at DESC);

  -- Partial index covering active records only
  CREATE INDEX idx_users_active
      ON users (email)
      WHERE deleted_at IS NULL;
  ```
- **Query optimization patterns:**

  ```python
  from sqlalchemy import func
  from sqlalchemy.orm import joinedload

  # BAD: N+1 query problem
  users = User.query.all()
  for user in users:
      print(user.orders)  # Separate query for each user!

  # GOOD: Eager loading
  users = User.query.options(joinedload(User.orders)).all()
  for user in users:
      print(user.orders)  # Already loaded

  # BAD: Select all columns
  users = User.query.all()

  # GOOD: Select only the columns you need
  users = User.query.with_entities(User.id, User.name, User.email).all()

  # BAD: Multiple round trips for aggregates
  total_orders = Order.query.count()
  total_revenue = db.session.query(func.sum(Order.amount)).scalar()

  # GOOD: Single query with multiple aggregates
  result = db.session.query(
      func.count(Order.id),
      func.sum(Order.amount),
      func.avg(Order.amount),
  ).first()
  ```
- **Pagination optimization:**

  ```python
  from datetime import datetime

  # BAD: Offset pagination (slow for large offsets)
  products = Product.query.offset(10000).limit(20).all()

  # GOOD: Cursor-based pagination
  def get_products_after(cursor_id: int, limit: int = 20):
      return Product.query.filter(
          Product.id > cursor_id
      ).order_by(Product.id).limit(limit).all()

  # GOOD: Keyset pagination with a secondary sort key as tie-breaker
  def get_products_page(last_created_at: datetime, last_id: int, limit: int = 20):
      return Product.query.filter(
          db.or_(
              Product.created_at < last_created_at,
              db.and_(
                  Product.created_at == last_created_at,
                  Product.id < last_id,
              ),
          )
      ).order_by(
          Product.created_at.desc(),
          Product.id.desc(),
      ).limit(limit).all()
  ```
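The keyset approach can be exercised end to end with stdlib `sqlite3`; the table and data below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO products (id, name) VALUES (?, ?)",
    [(i, f"product-{i}") for i in range(1, 101)],
)

def page_after(cursor_id: int, limit: int = 20) -> list:
    # Keyset pagination: seek directly past the cursor via the primary-key
    # index instead of scanning and discarding OFFSET rows
    return conn.execute(
        "SELECT id, name FROM products WHERE id > ? ORDER BY id LIMIT ?",
        (cursor_id, limit),
    ).fetchall()

first = page_after(0)               # ids 1..20
second = page_after(first[-1][0])   # ids 21..40: cursor is the last id seen
print(first[0], second[0])          # (1, 'product-1') (21, 'product-21')
```

The client passes back the last `id` it saw as the cursor, so query cost stays flat no matter how deep the user pages.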
### Phase 3: Frontend Bundle Optimization
Objective: Reduce bundle size and improve loading times.
- **Code splitting (React):**

  ```tsx
  import { lazy, Suspense } from 'react';
  import { Routes, Route } from 'react-router-dom';

  // Lazy load heavy components
  const Dashboard = lazy(() => import('./pages/Dashboard'));
  const Analytics = lazy(() => import('./pages/Analytics'));
  const Settings = lazy(() => import('./pages/Settings'));

  function App() {
    return (
      <Suspense fallback={<LoadingSpinner />}>
        <Routes>
          <Route path="/dashboard" element={<Dashboard />} />
          <Route path="/analytics" element={<Analytics />} />
          <Route path="/settings" element={<Settings />} />
        </Routes>
      </Suspense>
    );
  }

  // Re-export a named export so the chunk gets a meaningful name
  const AdminPanel = lazy(() =>
    import('./pages/AdminPanel').then(module => ({
      default: module.AdminPanel,
    }))
  );
  ```
- **Tree shaking and imports:**

  ```ts
  // BAD: Import the entire library
  import _ from 'lodash';
  const ids = _.map(data, 'id');

  // GOOD: Import only what you need
  import map from 'lodash/map';
  const ids2 = map(data, 'id');

  // GOOD: Use native methods when possible
  const ids3 = data.map(item => item.id);

  // BAD: Import all icons
  import * as Icons from '@heroicons/react/24/outline';

  // GOOD: Import specific icons
  import { HomeIcon, UserIcon } from '@heroicons/react/24/outline';
  ```
- **Webpack/Vite optimization:**

  ```js
  // vite.config.js
  import { defineConfig } from 'vite';

  export default defineConfig({
    build: {
      rollupOptions: {
        output: {
          manualChunks: {
            // Vendor chunk for stable dependencies
            vendor: ['react', 'react-dom', 'react-router-dom'],
            // UI library chunk
            ui: ['@radix-ui/react-dialog', '@radix-ui/react-dropdown-menu'],
            // Charts chunk (loaded only when needed)
            charts: ['recharts', 'd3'],
          },
        },
      },
      // Source maps enable bundle analysis tools
      sourcemap: true,
    },
  });
  ```
### Phase 4: Lazy Loading Patterns
Objective: Defer loading of non-critical resources.
- **Image lazy loading:**

  ```tsx
  import { useEffect, useRef, useState } from 'react';

  // Native lazy loading:
  // <img src="large-image.jpg" loading="lazy" alt="Description" />

  // Intersection Observer for more control
  function LazyImage({ src, alt }: { src: string; alt: string }) {
    const [isLoaded, setIsLoaded] = useState(false);
    const imgRef = useRef<HTMLImageElement>(null);

    useEffect(() => {
      const observer = new IntersectionObserver(
        ([entry]) => {
          if (entry.isIntersecting) {
            setIsLoaded(true);
            observer.disconnect();
          }
        },
        { rootMargin: '100px' }
      );
      if (imgRef.current) {
        observer.observe(imgRef.current);
      }
      return () => observer.disconnect();
    }, []);

    return (
      <img
        ref={imgRef}
        src={isLoaded ? src : 'placeholder.jpg'}
        alt={alt}
      />
    );
  }
  ```
- **Component lazy loading:**

  ```tsx
  import { useEffect, useState } from 'react';

  // Lazy load below-the-fold content
  function ProductPage() {
    const [showReviews, setShowReviews] = useState(false);

    useEffect(() => {
      const observer = new IntersectionObserver(
        ([entry]) => {
          if (entry.isIntersecting) {
            setShowReviews(true);
          }
        },
        { threshold: 0.1 }
      );
      const reviewsSection = document.getElementById('reviews-trigger');
      if (reviewsSection) {
        observer.observe(reviewsSection);
      }
      return () => observer.disconnect();
    }, []);

    return (
      <div>
        <ProductDetails />
        <div id="reviews-trigger" />
        {showReviews && <ProductReviews />}
      </div>
    );
  }
  ```
### Phase 5: Memory Management
Objective: Optimize memory usage and prevent leaks.
- **Python memory optimization:**

  ```python
  from array import array

  # Use generators for large datasets
  # BAD: Load everything into memory
  def get_all_records_eager():
      return [process(r) for r in db.records.find()]

  # GOOD: Generator (lazy evaluation, constant memory)
  def get_all_records():
      for record in db.records.find():
          yield process(record)

  # Use __slots__ for memory-efficient classes
  class Point:
      __slots__ = ['x', 'y']  # Saves roughly 40% memory vs an instance dict

      def __init__(self, x, y):
          self.x = x
          self.y = y

  # Use appropriate data structures
  # BAD: List of floats (each float is a boxed Python object)
  numbers = [1.0, 2.0, 3.0, 4.0]

  # GOOD: Array of floats (compact C-level storage)
  numbers = array('d', [1.0, 2.0, 3.0, 4.0])
  ```
- **JavaScript memory optimization:**

  ```tsx
  // Clean up event listeners
  useEffect(() => {
    const handler = () => { /* ... */ };
    window.addEventListener('resize', handler);
    return () => {
      window.removeEventListener('resize', handler);
    };
  }, []);

  // Avoid creating objects in render
  // BAD: New object every render
  //   <Component style={{ color: 'red' }} />
  // GOOD: Memoized or constant
  const style = useMemo(() => ({ color: 'red' }), []);
  //   <Component style={style} />

  // WeakMap for metadata without memory leaks
  const metadata = new WeakMap<Element, { clicks: number }>();

  function trackClicks(element: Element) {
    const data = metadata.get(element) || { clicks: 0 };
    data.clicks++;
    metadata.set(element, data);
    // When the element is garbage collected, its metadata entry goes too
  }
  ```
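The generator and `array` savings claimed above are easy to confirm with `sys.getsizeof`; exact byte counts vary by CPython version, so the comments give rough magnitudes only:

```python
import sys
from array import array

# A list materializes every element; a generator holds only its frame
squares_list = [n * n for n in range(100_000)]
squares_gen = (n * n for n in range(100_000))
print(sys.getsizeof(squares_list) > 100_000)  # True: hundreds of KB of pointers
print(sys.getsizeof(squares_gen) < 1_000)     # True: constant, tiny

# array stores raw doubles; a list stores pointers to boxed float objects
floats_list = [float(n) for n in range(10_000)]
floats_array = array("d", floats_list)
list_total = sys.getsizeof(floats_list) + sum(map(sys.getsizeof, floats_list))
array_total = sys.getsizeof(floats_array)
print(array_total < list_total)  # True: roughly 80 KB vs ~300 KB
```

Note that `sys.getsizeof` on a container does not include its elements, which is why the list total sums the boxed floats explicitly.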
## Examples

### Example 1: API Response Caching
```python
from fastapi import FastAPI
from fastapi_cache import FastAPICache
from fastapi_cache.backends.redis import RedisBackend
from fastapi_cache.decorator import cache
from redis import asyncio as aioredis

app = FastAPI()

@app.on_event("startup")
async def startup():
    redis = aioredis.from_url("redis://localhost")
    FastAPICache.init(RedisBackend(redis), prefix="api-cache")

@app.get("/api/products")
@cache(expire=300)  # Cache for 5 minutes
async def get_products(category: str | None = None):
    return await db.products.find({"category": category}).to_list()

@app.get("/api/products/{id}")
# fastapi-cache passes the route's kwargs nested under the "kwargs" key
@cache(expire=600, key_builder=lambda *a, **kw: f"product:{kw['kwargs']['id']}")
async def get_product(id: int):
    return await db.products.find_one({"_id": id})
```
### Example 2: React Performance Optimization
```tsx
import { memo, useCallback, useMemo, useState } from 'react';

// Memoized component: re-renders only when its props change
const ProductCard = memo(function ProductCard({
  product,
  onAddToCart,
}: {
  product: Product;
  onAddToCart: (id: number) => void;
}) {
  return (
    <div>
      <h3>{product.name}</h3>
      <button onClick={() => onAddToCart(product.id)}>Add</button>
    </div>
  );
});

// Parent component with optimized callbacks
function ProductList({ products }: { products: Product[] }) {
  const [cart, setCart] = useState<number[]>([]);

  // Stable callback identity across renders
  const handleAddToCart = useCallback((id: number) => {
    setCart(prev => [...prev, id]);
  }, []);

  // Memoized derived data
  const sortedProducts = useMemo(
    () => [...products].sort((a, b) => a.price - b.price),
    [products]
  );

  return (
    <div>
      {sortedProducts.map(product => (
        <ProductCard
          key={product.id}
          product={product}
          onAddToCart={handleAddToCart}
        />
      ))}
    </div>
  );
}
```
## Troubleshooting
| Issue | Solution |
|---|---|
| Cache stampede | Use cache warming or lock mechanisms |
| Stale cache data | Implement proper invalidation strategy |
| Memory leaks | Use profiler to identify, clean up subscriptions |
| Bundle too large | Analyze with webpack-bundle-analyzer |
| Slow database | Add indexes, optimize queries, use EXPLAIN |
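The cache-stampede row deserves a sketch: with double-checked locking, only one caller recomputes a missing or expired hot key while concurrent callers wait and reuse the result. This is a minimal in-process version; multi-process deployments would need a distributed lock (e.g. a Redis `SET NX` lock), which is out of scope here:

```python
import threading

class SingleFlightCache:
    def __init__(self):
        self._cache = {}
        self._lock = threading.Lock()
        self.loads = 0  # instrumentation: how many times the loader ran

    def get(self, key, loader):
        value = self._cache.get(key)
        if value is not None:
            return value
        with self._lock:
            # Double-check: another thread may have loaded while we waited
            value = self._cache.get(key)
            if value is None:
                self.loads += 1
                value = loader()  # the expensive recomputation
                self._cache[key] = value
        return value

cache = SingleFlightCache()
threads = [
    threading.Thread(target=cache.get, args=("hot-key", lambda: "fresh"))
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cache.loads)  # 1 -- only one thread hit the backend
```

Without the lock, all eight threads would see the miss simultaneously and hammer the backend at once, which is exactly the stampede.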
## Best Practices
- Profile before optimizing (measure, don't guess)
- Optimize the critical path first
- Use appropriate caching strategies per data type
- Implement cache invalidation from the start
- Monitor performance metrics in production
- Document optimization decisions
- Test performance improvements with benchmarks
- Consider trade-offs (memory vs CPU, complexity vs speed)
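For "test performance improvements with benchmarks," stdlib `timeit` is usually enough for a quick before/after comparison; the data sizes here are arbitrary, and the measured speedup varies by machine:

```python
import timeit

haystack = list(range(10_000))
needles = list(range(0, 10_000, 20))  # 500 lookups

def slow_lookup():
    # O(n) linear scan per membership test
    return sum(1 for n in needles if n in haystack)

def fast_lookup(index=set(haystack)):  # set built once, at definition time
    # O(1) hash lookup per membership test
    return sum(1 for n in needles if n in index)

# Always verify both versions agree before comparing speed
assert slow_lookup() == fast_lookup()

t_slow = timeit.timeit(slow_lookup, number=3)
t_fast = timeit.timeit(fast_lookup, number=3)
print(f"list: {t_slow:.4f}s  set: {t_fast:.4f}s")
```

Capturing these two numbers before and after a change is the "measure twice" step the checklist below asks for.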
## References
- Google Web Vitals
- React Performance Optimization
- PostgreSQL Query Optimization
- Python Performance Tips
## Success Output
When successful, this skill MUST output:
✅ SKILL COMPLETE: optimization-patterns
Completed:
- [x] Performance bottlenecks identified via profiling
- [x] Caching strategy implemented (in-memory, Redis, or HTTP)
- [x] Database queries optimized (indexes, pagination, N+1 resolution)
- [x] Frontend bundle size reduced by X%
- [x] Memory optimizations applied (generators, weak references, cleanup)
- [x] Before/after metrics captured
Outputs:
- Performance profiling report
- Caching configuration files
- Database migration for new indexes
- Updated frontend build configuration
- Performance metrics comparison (before/after)
## Completion Checklist
Before marking this skill as complete, verify:
- Profiling data captured BEFORE optimization (baseline metrics)
- Specific bottleneck identified (CPU, memory, I/O, network)
- Optimization strategy selected based on profiling data
- Implementation completed and tested
- Performance metrics captured AFTER optimization
- Measurable improvement achieved (>20% for targeted metric)
- No regressions in other performance areas
- Memory usage validated (no new leaks introduced)
- Production monitoring configured for ongoing tracking
## Failure Indicators
This skill has FAILED if:
- ❌ No profiling data collected before optimization (premature optimization)
- ❌ Optimization applied to non-bottleneck areas
- ❌ Performance regression in other areas (e.g., faster but uses 3x memory)
- ❌ Cache invalidation strategy missing or incorrect
- ❌ Database indexes created without CONCURRENTLY (table locks in production)
- ❌ Bundle size increased instead of decreased
- ❌ Memory leaks introduced by optimization
- ❌ No measurable performance improvement (<10% change)
- ❌ Code complexity significantly increased without justification
## When NOT to Use

Do NOT use this skill when:
- No profiling data exists (use the `performance-profiler` agent first)
- System is already meeting performance targets (avoid premature optimization)
- Code is in early prototype phase (optimize later)
- Bottleneck is external dependency you don't control (focus on workarounds)
- Security issues present (fix security first, then optimize)
- Code correctness issues exist (fix bugs first, then optimize)
- Team lacks capacity to maintain complex optimizations
- Simple architecture change would solve the problem (refactor first)
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Premature optimization | Wastes time on non-bottlenecks | Profile first, identify actual bottlenecks |
| Optimizing without metrics | Can't prove improvement | Capture before/after metrics |
| Over-caching | Stale data, cache invalidation nightmares | Cache only stable, frequently-accessed data |
| Ignoring cache warming | Cache misses on cold start | Pre-populate cache on deployment |
| Offset pagination at scale | O(n) query time as offset grows | Use cursor-based pagination |
| Index on every column | Slows writes, wastes space | Index only query-critical columns |
| Lazy loading everything | Waterfalls, poor UX | Critical resources load eagerly |
| Micro-optimizations | Negligible impact, complex code | Focus on algorithmic improvements |
## Principles
This skill embodies:
- #2 First Principles - Understand what causes slowness before fixing
- #3 Keep It Simple - Simplest optimization that achieves target
- #8 No Assumptions - Profile, don't guess at bottlenecks
- #10 Quality Over Speed - Optimize correctness first, then performance
- #11 Measure Twice, Cut Once - Capture metrics before and after
- #12 Research When in Doubt - Use proven optimization patterns
Full Principles: CODITECT-STANDARD-AUTOMATION.md

Status: Production-ready
Categories: Caching, Database, Bundle, Lazy Loading, Memory, Algorithms
Languages: Python, JavaScript/TypeScript, Rust, Go
Impact: High performance improvements with proven patterns