# Agent Skills Framework Extension

## Caching Strategies Skill

Redis, in-memory caching, cache invalidation, and distributed caching patterns for high-performance applications.

### When to Use This Skill

Use this skill when implementing caching patterns in your codebase.

### How to Use This Skill

- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
### Core Capabilities

- **Cache-Aside** - Application-managed caching
- **Write-Through** - Synchronized cache writes
- **Redis Patterns** - Pub/sub, sorted sets, streams
- **Invalidation** - TTL, event-driven, tag-based
- **Distributed Caching** - Multi-node coordination
### Cache-Aside Pattern

```typescript
// src/cache/cache-aside.ts
import Redis from 'ioredis';

interface CacheOptions {
  ttl?: number;
  prefix?: string;
}

export class CacheAside<T> {
  private redis: Redis;
  private prefix: string;
  private defaultTtl: number;

  constructor(redis: Redis, options: CacheOptions = {}) {
    this.redis = redis;
    this.prefix = options.prefix || '';
    this.defaultTtl = options.ttl || 3600;
  }

  private key(id: string): string {
    return this.prefix ? `${this.prefix}:${id}` : id;
  }

  async get(id: string): Promise<T | null> {
    const data = await this.redis.get(this.key(id));
    return data ? (JSON.parse(data) as T) : null;
  }

  async set(id: string, value: T, ttl?: number): Promise<void> {
    const key = this.key(id);
    const serialized = JSON.stringify(value);
    await this.redis.setex(key, ttl ?? this.defaultTtl, serialized);
  }

  async getOrSet(
    id: string,
    fetcher: () => Promise<T>,
    ttl?: number
  ): Promise<T> {
    // Try cache first
    const cached = await this.get(id);
    if (cached !== null) {
      return cached;
    }

    // Fetch from source
    const value = await fetcher();

    // Store in cache (fire and forget, so a cache failure never blocks the request)
    this.set(id, value, ttl).catch(err => {
      console.error('Cache set error:', err);
    });

    return value;
  }

  async invalidate(id: string): Promise<void> {
    await this.redis.del(this.key(id));
  }

  async invalidatePattern(pattern: string): Promise<number> {
    // Note: KEYS blocks Redis while it scans; prefer SCAN for large keyspaces
    const keys = await this.redis.keys(this.key(pattern));
    if (keys.length === 0) return 0;
    return this.redis.del(...keys);
  }
}

// Usage
const userCache = new CacheAside<User>(redis, {
  prefix: 'user',
  ttl: 3600, // 1 hour
});

async function getUser(id: string): Promise<User> {
  return userCache.getOrSet(id, () => db.users.findById(id));
}
```
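The capabilities list also mentions tag-based invalidation, which the class above does not cover. Here is a minimal self-contained sketch of the idea: each cache key carries tags, and invalidating a tag removes every key associated with it. The `TaggedCache` class and its in-memory `Map`/`Set` index are illustrative stand-ins; with Redis you would keep a SET per tag (`SADD` on write, `SMEMBERS` + `DEL` on invalidation).

```typescript
// Tag-based invalidation sketch (in-memory stand-in for a Redis tag index).
// Each key may be written with tags; invalidating a tag evicts all its keys.
class TaggedCache<T> {
  private entries = new Map<string, T>();
  private tagIndex = new Map<string, Set<string>>(); // tag -> keys

  set(key: string, value: T, tags: string[] = []): void {
    this.entries.set(key, value);
    for (const tag of tags) {
      if (!this.tagIndex.has(tag)) this.tagIndex.set(tag, new Set());
      this.tagIndex.get(tag)!.add(key);
    }
  }

  get(key: string): T | undefined {
    return this.entries.get(key);
  }

  // Remove every key carrying the tag, then drop the tag index itself
  invalidateTag(tag: string): number {
    const keys = this.tagIndex.get(tag);
    if (!keys) return 0;
    let removed = 0;
    for (const key of keys) {
      if (this.entries.delete(key)) removed++;
    }
    this.tagIndex.delete(tag);
    return removed;
  }
}

const tagged = new TaggedCache<string>();
tagged.set('user:1', 'alice', ['team:42']);
tagged.set('user:2', 'bob', ['team:42']);
tagged.set('user:3', 'carol', ['team:7']);
tagged.invalidateTag('team:42'); // evicts user:1 and user:2, leaves user:3
```

This is useful when one write (e.g. renaming a team) must invalidate many cached entities at once without pattern-matching on key names.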
### Write-Through Cache

```typescript
// src/cache/write-through.ts
import { CacheAside } from './cache-aside';

// Assumed repository contract backing this cache; findById is expected
// to reject when the entity is missing.
interface Repository<T extends { id: string }> {
  findById(id: string): Promise<T>;
  create(data: Omit<T, 'id'>): Promise<T>;
  update(id: string, data: Partial<T>): Promise<T>;
  delete(id: string): Promise<void>;
}

export class WriteThroughCache<T extends { id: string }> {
  constructor(
    private readonly cache: CacheAside<T>,
    private readonly repository: Repository<T>
  ) {}

  async get(id: string): Promise<T | null> {
    return this.cache.getOrSet(id, () => this.repository.findById(id));
  }

  async create(data: Omit<T, 'id'>): Promise<T> {
    // Write to database first
    const entity = await this.repository.create(data);
    // Then update cache so both stay in sync
    await this.cache.set(entity.id, entity);
    return entity;
  }

  async update(id: string, data: Partial<T>): Promise<T> {
    // Update database
    const entity = await this.repository.update(id, data);
    // Update cache
    await this.cache.set(id, entity);
    return entity;
  }

  async delete(id: string): Promise<void> {
    // Delete from database
    await this.repository.delete(id);
    // Invalidate cache
    await this.cache.invalidate(id);
  }
}
```
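The write-through guarantee — store first, then cache, so the cache never holds data the database lacks — can be shown in miniature without Redis. The `FakeStore` and `MiniWriteThrough` names below are illustrative stand-ins, not part of the skill's API:

```typescript
// Write-through in miniature: every write goes to the store first,
// then mirrors into the cache, so reads can trust the cache.
class FakeStore {
  private rows = new Map<string, string>();
  save(id: string, value: string): void { this.rows.set(id, value); }
  load(id: string): string | null { return this.rows.get(id) ?? null; }
}

class MiniWriteThrough {
  private cache = new Map<string, string>();
  constructor(private readonly store: FakeStore) {}

  write(id: string, value: string): void {
    this.store.save(id, value); // database first
    this.cache.set(id, value);  // then cache, keeping both in sync
  }

  read(id: string): string | null {
    // Serve from cache when possible, fall through to the store on a miss
    return this.cache.get(id) ?? this.store.load(id);
  }
}

const store = new FakeStore();
const wt = new MiniWriteThrough(store);
wt.write('user:1', 'alice');
// Both the store and the cache now hold the value
```

If the cache write were done first and the database write failed, readers could see data that was never persisted; ordering the writes store-first avoids that.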
### Redis Data Structures

```typescript
// src/cache/redis-structures.ts
import Redis from 'ioredis';

export class RedisDataStructures {
  constructor(private readonly redis: Redis) {}

  // Sorted Set for Leaderboards
  async addToLeaderboard(
    leaderboard: string,
    userId: string,
    score: number
  ): Promise<void> {
    await this.redis.zadd(leaderboard, score, userId);
  }

  async getTopScores(leaderboard: string, count: number): Promise<Array<{
    userId: string;
    score: number;
    rank: number;
  }>> {
    const results = await this.redis.zrevrange(
      leaderboard,
      0,
      count - 1,
      'WITHSCORES'
    );

    // ZREVRANGE ... WITHSCORES returns a flat [member, score, ...] array
    const scores: Array<{ userId: string; score: number; rank: number }> = [];
    for (let i = 0; i < results.length; i += 2) {
      scores.push({
        userId: results[i],
        score: parseFloat(results[i + 1]),
        rank: i / 2 + 1,
      });
    }
    return scores;
  }

  async getUserRank(leaderboard: string, userId: string): Promise<number | null> {
    const rank = await this.redis.zrevrank(leaderboard, userId);
    return rank !== null ? rank + 1 : null;
  }

  // Hash for User Sessions
  async setSession(sessionId: string, data: Record<string, string>): Promise<void> {
    await this.redis.hset(`session:${sessionId}`, data);
    await this.redis.expire(`session:${sessionId}`, 3600);
  }

  async getSession(sessionId: string): Promise<Record<string, string> | null> {
    const data = await this.redis.hgetall(`session:${sessionId}`);
    return Object.keys(data).length > 0 ? data : null;
  }

  // Sorted Set for Sliding-Window Rate Limiting
  async checkRateLimit(
    key: string,
    limit: number,
    windowSeconds: number
  ): Promise<{ allowed: boolean; remaining: number }> {
    const now = Date.now();
    const windowStart = now - windowSeconds * 1000;

    const pipeline = this.redis.pipeline();
    pipeline.zremrangebyscore(key, 0, windowStart);
    // Unique member so concurrent requests in the same millisecond all count
    pipeline.zadd(key, now, `${now}:${Math.random()}`);
    pipeline.zcard(key);
    pipeline.expire(key, windowSeconds);
    const results = await pipeline.exec();

    const count = (results?.[2]?.[1] as number) ?? 0;
    return {
      allowed: count <= limit,
      remaining: Math.max(0, limit - count),
    };
  }

  // Pub/Sub for Cache Invalidation
  async publishInvalidation(channel: string, key: string): Promise<void> {
    await this.redis.publish(channel, JSON.stringify({ action: 'invalidate', key }));
  }

  subscribeToInvalidations(
    channel: string,
    handler: (key: string) => void
  ): void {
    // Subscribers need a dedicated connection: a subscribed client
    // cannot issue regular commands
    const subscriber = this.redis.duplicate();
    subscriber.subscribe(channel);
    subscriber.on('message', (ch, message) => {
      if (ch === channel) {
        const { action, key } = JSON.parse(message);
        if (action === 'invalidate') {
          handler(key);
        }
      }
    });
  }
}
```
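The sliding-window logic inside `checkRateLimit` can be illustrated without Redis. This in-memory `SlidingWindowLimiter` (an illustrative class, not part of the skill's API) mirrors the same three steps: drop timestamps older than the window, record the new request, then count requests in the window against the limit.

```typescript
// In-memory mirror of the Redis sliding-window rate limiter:
// 1) drop timestamps outside the window, 2) record this request,
// 3) count remaining requests and compare against the limit.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(
    private readonly limit: number,
    private readonly windowMs: number
  ) {}

  check(key: string, now: number = Date.now()): { allowed: boolean; remaining: number } {
    const windowStart = now - this.windowMs;
    // Keep only timestamps still inside the window, then add this request
    const recent = (this.hits.get(key) ?? []).filter(t => t > windowStart);
    recent.push(now);
    this.hits.set(key, recent);
    return {
      allowed: recent.length <= this.limit,
      remaining: Math.max(0, this.limit - recent.length),
    };
  }
}

const limiter = new SlidingWindowLimiter(3, 60_000);
limiter.check('client-1', 0);      // allowed
limiter.check('client-1', 1_000);  // allowed
limiter.check('client-1', 2_000);  // allowed (3rd request, at the limit)
limiter.check('client-1', 3_000);  // blocked: 4 requests inside the window
limiter.check('client-1', 61_500); // allowed again: early hits fell out of the window
```

The Redis version does the same work atomically via a pipeline, which is what makes it safe across multiple application nodes.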
### Multi-Level Caching

```typescript
// src/cache/multi-level.ts
interface CacheLevel<T> {
  get(key: string): Promise<T | null>;
  set(key: string, value: T, ttl?: number): Promise<void>;
  invalidate(key: string): Promise<void>;
}

export class MultiLevelCache<T> {
  private levels: CacheLevel<T>[];

  constructor(...levels: CacheLevel<T>[]) {
    this.levels = levels;
  }

  async get(key: string): Promise<T | null> {
    for (let i = 0; i < this.levels.length; i++) {
      const value = await this.levels[i].get(key);
      if (value !== null) {
        // Backfill the faster upper levels that missed
        for (let j = 0; j < i; j++) {
          await this.levels[j].set(key, value);
        }
        return value;
      }
    }
    return null;
  }

  async set(key: string, value: T, ttl?: number): Promise<void> {
    await Promise.all(
      this.levels.map(level => level.set(key, value, ttl))
    );
  }

  async invalidate(key: string): Promise<void> {
    await Promise.all(
      this.levels.map(level => level.invalidate(key))
    );
  }
}

// In-memory L1 cache
// Note: unbounded Map; add max-size/LRU eviction for production use
class MemoryCache<T> implements CacheLevel<T> {
  private cache = new Map<string, { value: T; expires: number }>();

  async get(key: string): Promise<T | null> {
    const entry = this.cache.get(key);
    if (!entry) return null;
    if (Date.now() > entry.expires) {
      this.cache.delete(key);
      return null;
    }
    return entry.value;
  }

  async set(key: string, value: T, ttl = 60): Promise<void> {
    this.cache.set(key, {
      value,
      expires: Date.now() + ttl * 1000,
    });
  }

  async invalidate(key: string): Promise<void> {
    this.cache.delete(key);
  }
}

// Usage: L1 memory + L2 Redis
const cache = new MultiLevelCache<User>(
  new MemoryCache<User>(),
  new CacheAside<User>(redis, { prefix: 'user' })
);
```
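To see the backfill behavior concretely, here is a compact, self-contained rerun of the read path with two plain in-memory levels (a simplified stand-in for the classes above; TTL omitted for brevity). A miss in L1 that hits L2 copies the value back into L1, so the next read is served from memory.

```typescript
// Minimal two-level demo of read-through backfill: a hit in a lower level
// is copied into every level above it before being returned.
type Level = { name: string; store: Map<string, string> };

async function multiGet(levels: Level[], key: string): Promise<string | null> {
  for (let i = 0; i < levels.length; i++) {
    const value = levels[i].store.get(key);
    if (value !== undefined) {
      // Backfill every level above the one that hit
      for (let j = 0; j < i; j++) levels[j].store.set(key, value);
      return value;
    }
  }
  return null;
}

const l1: Level = { name: 'memory', store: new Map() };
const l2: Level = { name: 'redis', store: new Map() };
l2.store.set('user:1', '{"name":"alice"}'); // present only in L2

multiGet([l1, l2], 'user:1').then(value => {
  // The read returned the L2 value, and L1 was backfilled along the way
  console.log(value, l1.store.has('user:1'));
});
```

Backfilling keeps the hot path fast after cold starts or L1 evictions, at the cost of briefly duplicating data across levels.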
### Cache Warming

```typescript
// src/cache/cache-warmer.ts
export class CacheWarmer<T> {
  constructor(
    private readonly cache: CacheAside<T>,
    private readonly fetcher: (ids: string[]) => Promise<Map<string, T>>
  ) {}

  async warmUp(ids: string[]): Promise<void> {
    const BATCH_SIZE = 100;
    for (let i = 0; i < ids.length; i += BATCH_SIZE) {
      const batch = ids.slice(i, i + BATCH_SIZE);
      const data = await this.fetcher(batch);
      await Promise.all(
        Array.from(data.entries()).map(([id, value]) =>
          this.cache.set(id, value)
        )
      );
    }
  }

  async warmPopular(limit = 1000): Promise<void> {
    // Get most accessed items from analytics
    // (getPopularItemIds is an assumed analytics helper, not defined here)
    const popularIds = await getPopularItemIds(limit);
    await this.warmUp(popularIds);
  }
}
```
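The batching inside `warmUp` is worth testing in isolation, since an off-by-one in the slice bounds silently drops ids. This standalone helper (an illustrative extraction, with a stub fetcher in place of a real database) shows the same slicing logic:

```typescript
// Same batching as CacheWarmer.warmUp, isolated: split ids into fixed-size
// batches, fetch each batch once, and merge all results.
async function fetchInBatches<T>(
  ids: string[],
  batchSize: number,
  fetcher: (batch: string[]) => Promise<Map<string, T>>
): Promise<Map<string, T>> {
  const all = new Map<string, T>();
  for (let i = 0; i < ids.length; i += batchSize) {
    const batch = ids.slice(i, i + batchSize); // slice clamps past the end
    const data = await fetcher(batch);
    for (const [id, value] of data) all.set(id, value);
  }
  return all;
}

// Usage with a stub fetcher that "loads" each id
const ids = Array.from({ length: 7 }, (_, i) => `item-${i}`);
fetchInBatches(ids, 3, async batch =>
  new Map(batch.map(id => [id, id.toUpperCase()]))
).then(result => console.log(result.size)); // all 7 ids, fetched as batches of 3, 3, 1
```

Batching bounds the size of each database query during warm-up, so warming a large catalog does not issue one unbounded `IN (...)` query.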
### Usage Examples

**Implement Caching Layer**

> Apply caching-strategies skill to add Redis caching for user profiles with 1-hour TTL

**Add Multi-Level Cache**

> Apply caching-strategies skill to implement L1 memory + L2 Redis caching for product catalog

**Implement Rate Limiting**

> Apply caching-strategies skill to add Redis-based rate limiting (100 requests/minute)
### Integration Points

- **database-schema-optimization** - Query result caching
- **performance-profiling** - Cache hit rate monitoring
- **monitoring-observability** - Cache metrics and alerts
### Success Output

When successful, this skill MUST output:

```
✅ SKILL COMPLETE: caching-strategies

Completed:
- [x] Cache layer implemented (Cache-Aside/Write-Through)
- [x] Redis connection established and tested
- [x] TTL configured for cache entries
- [x] Cache invalidation strategy defined
- [x] Cache hit/miss metrics instrumented
- [x] Integration tests passing
- [x] Performance benchmarks show improvement

Cache Configuration:
- Strategy: {CACHE_STRATEGY}
- TTL: {TTL_SECONDS}s
- Key prefix: {PREFIX}
- Redis endpoint: {REDIS_HOST}

Performance Metrics:
- Cache hit rate: {HIT_RATE}%
- Average latency reduction: {LATENCY_REDUCTION}ms
- Memory usage: {MEMORY_MB}MB

Outputs:
- src/cache/*.ts implemented
- Tests: tests/cache/*.spec.ts passing
- Metrics: Cache performance dashboard created
```
### Completion Checklist

Before marking this skill as complete, verify:

- [ ] Redis client connection established successfully
- [ ] Cache-aside pattern implemented with `getOrSet()`
- [ ] TTL values configured appropriately for each data type
- [ ] Cache invalidation triggers identified and implemented
- [ ] Cache key naming convention follows the prefix pattern
- [ ] Serialization/deserialization handles all data types
- [ ] Error handling for cache failures (doesn't break the app)
- [ ] Cache miss fallback to database works correctly
- [ ] Unit tests cover cache hit and miss scenarios
- [ ] Integration tests verify the end-to-end caching flow
- [ ] Cache metrics instrumented (hit rate, latency)
- [ ] Performance benchmarks show measurable improvement
- [ ] Documentation updated with the cache architecture
### Failure Indicators

This skill has FAILED if:

- ❌ Redis connection fails and the app crashes
- ❌ Cache set operation blocks indefinitely
- ❌ TTL not set, causing a memory leak
- ❌ Cache invalidation never triggers (stale data served)
- ❌ Serialization errors for complex objects
- ❌ Cache key collisions between different entities
- ❌ Cache miss doesn't fall back to the database
- ❌ Performance worse than no caching (overhead)
- ❌ Cache hit rate < 50% (ineffective caching)
- ❌ Memory usage exceeds Redis limits
- ❌ Tests fail due to race conditions
### When NOT to Use

Do NOT use this skill when:

- **Data changes frequently** - Cache invalidation overhead > benefit (use direct DB queries)
- **Unique queries** - Each query is different, so the cache hit rate will be ~0%
- **Small datasets** - Database already fast enough (<10ms); caching adds complexity
- **Real-time requirements** - Can't tolerate any staleness; a cache introduces delay
- **Write-heavy workloads** - Cache invalidation storms degrade performance
- **Single-user app** - No benefit from a shared cache; use in-memory variables
- **Memory constrained** - Redis memory cost > database query cost
### Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Caching everything | Memory exhaustion, low hit rate | Cache only frequently accessed data |
| No TTL set | Memory leak, stale data forever | Always set appropriate TTL |
| Ignoring cache failures | App crashes when Redis down | Wrap cache calls in try/catch, fall back to DB |
| Cache key collisions | Wrong data returned | Use prefixes: user:{id}, order:{id} |
| Synchronous cache writes | Blocks request thread | Use fire-and-forget for cache sets |
| No invalidation strategy | Stale data served indefinitely | Invalidate on writes, use pub/sub |
| Over-aggressive caching | Cache churn, no hits | Profile access patterns first |
| No cache metrics | Can't measure effectiveness | Instrument hit rate, latency, memory |
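The "Ignoring cache failures" row deserves a concrete shape. This small wrapper (an illustrative helper; the `cacheGet` and `fetcher` signatures are assumptions, not part of the skill's API) makes any cache error degrade to the source fetch instead of failing the request:

```typescript
// Graceful degradation: a failing cache read must never fail the request.
// On any cache error, fall through to the source of truth.
async function getWithFallback<T>(
  cacheGet: (key: string) => Promise<T | null>,
  fetcher: () => Promise<T>,
  key: string
): Promise<T> {
  try {
    const cached = await cacheGet(key);
    if (cached !== null) return cached;
  } catch (err) {
    // Cache unavailable: log and continue to the database
    console.error('Cache read failed, falling back to source:', err);
  }
  return fetcher();
}

// Usage: even when the cache throws, the value still comes from the fetcher
(async () => {
  const value = await getWithFallback(
    async () => { throw new Error('redis down'); }, // simulated outage
    async () => 'from-db',
    'user:1'
  );
  console.log(value); // "from-db"
})();
```

The same idea applies to writes: fire-and-forget cache sets (as in `getOrSet` above) keep a Redis outage from turning into an application outage.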
### Principles

This skill embodies:

- **#3 Keep It Simple** - Cache-aside is the simplest and most flexible pattern
- **#4 Separation of Concerns** - Cache layer stays separate from business logic
- **#5 Eliminate Ambiguity** - Clear TTL, invalidation, and fallback strategies
- **#9 Performance Matters** - 50-90% latency reduction for cached queries
- **#11 Quality is Non-Negotiable** - Must not serve stale data in critical paths
- **#13 Observability is Essential** - Cache metrics track hit rate and effectiveness

Full Standard: CODITECT-STANDARD-AUTOMATION.md