# Session Analysis Patterns Skill

**Agent Skills Framework Extension**

Session indexing, development insight extraction, pattern recognition, and productivity analysis.

## When to Use This Skill

Use this skill when implementing session analysis patterns in your codebase.

## How to Use This Skill

- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
## Core Capabilities

- **Session Indexing** - Structured session storage
- **Pattern Extraction** - Identify development patterns
- **Progress Tracking** - Measure advancement
- **Productivity Analysis** - Time and efficiency metrics
- **Topic Clustering** - Group related sessions
- **Insight Generation** - Extract learnings
## Session Indexer

```python
# scripts/session-indexer.py
from dataclasses import dataclass
from typing import List, Dict, Optional
from datetime import datetime
import json
import sqlite3
@dataclass
class SessionMessage:
    role: str  # 'user' or 'assistant'
    content: str
    timestamp: datetime
    metadata: Dict

@dataclass
class SessionSummary:
    session_id: str
    start_time: datetime
    end_time: datetime
    message_count: int
    topics: List[str]
    decisions: List[str]
    code_files: List[str]
    productivity_score: float

class SessionIndexer:
    """Index and analyze development sessions"""
def __init__(self, db_path: str):
self.db_path = db_path
self._init_database()
def _init_database(self):
"""Initialize SQLite database"""
conn = sqlite3.connect(self.db_path)
cursor = conn.cursor()
cursor.execute('''
CREATE TABLE IF NOT EXISTS sessions (
session_id TEXT PRIMARY KEY,
start_time TIMESTAMP,
end_time TIMESTAMP,
message_count INTEGER,
topics TEXT,
decisions TEXT,
code_files TEXT,
productivity_score REAL
)
''')
cursor.execute('''
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT,
role TEXT,
content TEXT,
timestamp TIMESTAMP,
metadata TEXT,
FOREIGN KEY (session_id) REFERENCES sessions(session_id)
)
''')
conn.commit()
conn.close()
def index_session(
self,
session_id: str,
messages: List[SessionMessage]
) -> SessionSummary:
"""Index a complete session"""
if not messages:
raise ValueError("No messages to index")
# Extract session metadata
start_time = messages[0].timestamp
end_time = messages[-1].timestamp
topics = self._extract_topics(messages)
decisions = self._extract_decisions(messages)
code_files = self._extract_code_files(messages)
productivity = self._calculate_productivity(messages, start_time, end_time)
summary = SessionSummary(
session_id=session_id,
start_time=start_time,
end_time=end_time,
message_count=len(messages),
topics=topics,
decisions=decisions,
code_files=code_files,
productivity_score=productivity
)
# Store in database
self._store_session(summary, messages)
return summary
def _extract_topics(self, messages: List[SessionMessage]) -> List[str]:
"""Extract main topics from session"""
# Simple keyword extraction
topics = set()
keywords = ['implement', 'debug', 'refactor', 'design', 'research']
for msg in messages:
content_lower = msg.content.lower()
for keyword in keywords:
if keyword in content_lower:
topics.add(keyword)
return list(topics)
def _extract_decisions(self, messages: List[SessionMessage]) -> List[str]:
"""Extract decisions made"""
decisions = []
decision_indicators = ['decided to', 'will use', 'going with', 'chose']
for msg in messages:
for indicator in decision_indicators:
if indicator in msg.content.lower():
# Extract sentence
sentences = msg.content.split('.')
for sentence in sentences:
if indicator in sentence.lower():
decisions.append(sentence.strip())
return decisions[:5] # Top 5
def _extract_code_files(self, messages: List[SessionMessage]) -> List[str]:
"""Extract mentioned code files"""
import re
files = set()
# Pattern for file paths
        # Non-capturing group so re.findall returns full paths, not just the extensions
        file_pattern = r'[\w\-./]+\.(?:py|ts|js|tsx|jsx|rs|go|java|cpp|h)'
for msg in messages:
matches = re.findall(file_pattern, msg.content)
files.update(matches)
return list(files)[:10] # Top 10
def _calculate_productivity(
self,
messages: List[SessionMessage],
start: datetime,
end: datetime
) -> float:
"""Calculate productivity score"""
duration_hours = (end - start).total_seconds() / 3600
# Metrics
message_rate = len(messages) / max(duration_hours, 0.1)
code_messages = sum(1 for m in messages if '```' in m.content)
code_ratio = code_messages / len(messages)
# Score (0-1)
score = min((message_rate / 20) * 0.5 + code_ratio * 0.5, 1.0)
return score
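    # Worked example (illustrative): a 2-hour session with 30 messages,
    # 10 of which contain fenced code blocks:
    #   message_rate = 30 / 2 = 15    -> (15 / 20) * 0.5 = 0.375
    #   code_ratio   = 10 / 30 ≈ 0.33 -> 0.33 * 0.5 ≈ 0.17
    #   score = min(0.375 + 0.17, 1.0) ≈ 0.54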
def _store_session(self, summary: SessionSummary, messages: List[SessionMessage]):
"""Store session in database"""
conn = sqlite3.connect(self.db_path)
cursor = conn.cursor()
# Store summary
cursor.execute('''
INSERT OR REPLACE INTO sessions VALUES (?, ?, ?, ?, ?, ?, ?, ?)
''', (
summary.session_id,
summary.start_time.isoformat(),
summary.end_time.isoformat(),
summary.message_count,
json.dumps(summary.topics),
json.dumps(summary.decisions),
json.dumps(summary.code_files),
summary.productivity_score
))
# Store messages
for msg in messages:
cursor.execute('''
INSERT INTO messages (session_id, role, content, timestamp, metadata)
VALUES (?, ?, ?, ?, ?)
''', (
summary.session_id,
msg.role,
msg.content,
msg.timestamp.isoformat(),
json.dumps(msg.metadata)
))
conn.commit()
conn.close()
def search_sessions(
self,
topic: Optional[str] = None,
min_productivity: float = 0.0
) -> List[SessionSummary]:
"""Search indexed sessions"""
conn = sqlite3.connect(self.db_path)
cursor = conn.cursor()
query = 'SELECT * FROM sessions WHERE productivity_score >= ?'
params = [min_productivity]
if topic:
query += ' AND topics LIKE ?'
params.append(f'%{topic}%')
cursor.execute(query, params)
rows = cursor.fetchall()
sessions = []
for row in rows:
sessions.append(SessionSummary(
session_id=row[0],
start_time=datetime.fromisoformat(row[1]),
end_time=datetime.fromisoformat(row[2]),
message_count=row[3],
topics=json.loads(row[4]),
decisions=json.loads(row[5]),
code_files=json.loads(row[6]),
productivity_score=row[7]
))
conn.close()
return sessions
```

### Usage

```python
indexer = SessionIndexer('.coditect/sessions.db')
messages = [
SessionMessage(
role='user',
content='Help me implement JWT authentication',
timestamp=datetime.now(),
metadata={}
),
SessionMessage(
role='assistant',
        content='Here is JWT implementation:\n```python\ndef create_jwt(): pass\n```',
timestamp=datetime.now(),
metadata={}
),
]
summary = indexer.index_session('session-123', messages)
print(f"Topics: {summary.topics}")
print(f"Productivity: {summary.productivity_score:.2f}")
```
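Indexed sessions can later be queried through `search_sessions`. A minimal sketch using the API above (the topic and threshold values are illustrative):

```python
# Find indexed sessions that touched on 'implement' and scored >= 0.5
productive = indexer.search_sessions(topic='implement', min_productivity=0.5)

for s in productive:
    print(f"{s.session_id}: {s.topics} (score {s.productivity_score:.2f})")
```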
## Pattern Extractor

```typescript
// scripts/pattern-extractor.ts
interface DevelopmentPattern {
name: string;
frequency: number;
examples: string[];
insight: string;
}
class PatternExtractor {
/**
* Extract development patterns from sessions
*/
extractPatterns(sessions: any[]): DevelopmentPattern[] {
const patterns: Map<string, DevelopmentPattern> = new Map();
// Look for recurring workflows
this.detectWorkflowPatterns(sessions, patterns);
// Look for common mistakes
this.detectErrorPatterns(sessions, patterns);
// Look for productivity patterns
this.detectProductivityPatterns(sessions, patterns);
return Array.from(patterns.values())
.sort((a, b) => b.frequency - a.frequency);
}
private detectWorkflowPatterns(
sessions: any[],
patterns: Map<string, DevelopmentPattern>
): void {
// Common workflow: research -> implement -> test
let researchImplementTest = 0;
for (const session of sessions) {
const topics = session.topics || [];
if (
topics.includes('research') &&
topics.includes('implement') &&
topics.includes('test')
) {
researchImplementTest++;
}
}
if (researchImplementTest > 0) {
patterns.set('research-implement-test', {
name: 'Research → Implement → Test',
frequency: researchImplementTest,
examples: ['Session with full development cycle'],
insight: 'Following complete development workflow'
});
}
}
private detectErrorPatterns(
sessions: any[],
patterns: Map<string, DevelopmentPattern>
): void {
// Track common error types
const errorTypes = new Map<string, number>();
for (const session of sessions) {
const decisions = session.decisions || [];
for (const decision of decisions) {
if (decision.toLowerCase().includes('error')) {
const errorType = this.classifyError(decision);
errorTypes.set(errorType, (errorTypes.get(errorType) || 0) + 1);
}
}
}
for (const [errorType, count] of errorTypes) {
if (count >= 2) {
patterns.set(`error-${errorType}`, {
name: `Common Error: ${errorType}`,
frequency: count,
examples: [`${errorType} occurred ${count} times`],
insight: `Recurring ${errorType} - consider preventive measures`
});
}
}
}
private detectProductivityPatterns(
sessions: any[],
patterns: Map<string, DevelopmentPattern>
): void {
// Find most productive time patterns
const timeSlots = new Map<string, number[]>();
for (const session of sessions) {
const hour = new Date(session.start_time).getHours();
const timeSlot = hour < 12 ? 'morning' : hour < 18 ? 'afternoon' : 'evening';
if (!timeSlots.has(timeSlot)) {
timeSlots.set(timeSlot, []);
}
timeSlots.get(timeSlot)!.push(session.productivity_score || 0);
}
let bestSlot = '';
let bestAvg = 0;
for (const [slot, scores] of timeSlots) {
const avg = scores.reduce((a, b) => a + b, 0) / scores.length;
if (avg > bestAvg) {
bestAvg = avg;
bestSlot = slot;
}
}
if (bestSlot) {
patterns.set('productivity-time', {
name: `Peak Productivity: ${bestSlot}`,
frequency: timeSlots.get(bestSlot)!.length,
examples: [`${bestSlot} sessions`],
insight: `Most productive during ${bestSlot} (avg score: ${bestAvg.toFixed(2)})`
});
}
}
private classifyError(text: string): string {
if (text.toLowerCase().includes('syntax')) return 'syntax';
if (text.toLowerCase().includes('type')) return 'type';
if (text.toLowerCase().includes('import')) return 'import';
return 'other';
}
}
// Usage
const extractor = new PatternExtractor();
const sessions = [
{
session_id: 's1',
topics: ['research', 'implement', 'test'],
productivity_score: 0.8,
start_time: '2025-01-01T09:00:00'
}
];
const patterns = extractor.extractPatterns(sessions);
console.log(`Found ${patterns.length} patterns`);
```
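The extractor expects plain session objects, so indexed sessions have to be exported from the SQLite database first. A minimal sketch in Python, assuming the schema created by `SessionIndexer` above (the output path and helper name are illustrative):

```python
import json
import sqlite3

def export_sessions_json(db_path: str, out_path: str) -> None:
    """Dump indexed sessions as JSON for the TypeScript pattern extractor."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        'SELECT session_id, start_time, topics, productivity_score FROM sessions'
    ).fetchall()
    conn.close()

    sessions = [
        {
            'session_id': row[0],
            'start_time': row[1],
            'topics': json.loads(row[2]),        # stored as a JSON string by the indexer
            'productivity_score': row[3],
        }
        for row in rows
    ]
    with open(out_path, 'w') as f:
        json.dump(sessions, f, indent=2)

export_sessions_json('.coditect/sessions.db', '.coditect/sessions.json')
```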
## Progress Tracker
```python
# scripts/progress-tracker.py
from dataclasses import dataclass
from typing import List, Dict
from datetime import datetime, timedelta

# Reuses SessionSummary defined in scripts/session-indexer.py

@dataclass
class ProgressMetric:
    metric_name: str
    current_value: float
    previous_value: float
    change_percent: float
    trend: str  # 'improving', 'stable', 'declining'

class ProgressTracker:
    """Track development progress over time"""
def analyze_progress(
self,
sessions: List[SessionSummary],
window_days: int = 7
) -> List[ProgressMetric]:
"""Analyze progress metrics"""
metrics = []
# Split into current and previous periods
cutoff = datetime.now() - timedelta(days=window_days)
recent = [s for s in sessions if s.start_time >= cutoff]
previous = [
s for s in sessions
if s.start_time < cutoff and s.start_time >= cutoff - timedelta(days=window_days)
]
# Productivity trend
metrics.append(self._track_productivity(recent, previous))
# Session frequency
metrics.append(self._track_frequency(recent, previous, window_days))
# Code file diversity
metrics.append(self._track_file_diversity(recent, previous))
return metrics
def _track_productivity(
self,
recent: List[SessionSummary],
previous: List[SessionSummary]
) -> ProgressMetric:
"""Track productivity trend"""
recent_avg = sum(s.productivity_score for s in recent) / len(recent) if recent else 0
prev_avg = sum(s.productivity_score for s in previous) / len(previous) if previous else 0
change = ((recent_avg - prev_avg) / prev_avg * 100) if prev_avg > 0 else 0
trend = 'stable'
if change > 5:
trend = 'improving'
elif change < -5:
trend = 'declining'
return ProgressMetric(
metric_name='productivity',
current_value=recent_avg,
previous_value=prev_avg,
change_percent=change,
trend=trend
)
def _track_frequency(
self,
recent: List[SessionSummary],
previous: List[SessionSummary],
window_days: int
) -> ProgressMetric:
"""Track session frequency"""
recent_freq = len(recent) / window_days
prev_freq = len(previous) / window_days
change = ((recent_freq - prev_freq) / prev_freq * 100) if prev_freq > 0 else 0
return ProgressMetric(
metric_name='session_frequency',
current_value=recent_freq,
previous_value=prev_freq,
change_percent=change,
trend='improving' if change > 0 else 'stable' if change == 0 else 'declining'
)
def _track_file_diversity(
self,
recent: List[SessionSummary],
previous: List[SessionSummary]
) -> ProgressMetric:
"""Track code file diversity"""
recent_files = set()
for s in recent:
recent_files.update(s.code_files)
prev_files = set()
for s in previous:
prev_files.update(s.code_files)
recent_count = len(recent_files)
prev_count = len(prev_files)
change = ((recent_count - prev_count) / prev_count * 100) if prev_count > 0 else 0
return ProgressMetric(
metric_name='file_diversity',
current_value=recent_count,
previous_value=prev_count,
change_percent=change,
trend='improving' if change > 0 else 'stable'
)
```

### Usage

```python
tracker = ProgressTracker()

# Assume `sessions` is a list of SessionSummary objects from the indexer
metrics = tracker.analyze_progress(sessions, window_days=7)
```
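The returned `ProgressMetric` objects can be rendered as a simple trend report. A minimal sketch (the output format is illustrative):

```python
for m in metrics:
    print(
        f"{m.metric_name}: {m.current_value:.2f} "
        f"(prev {m.previous_value:.2f}, {m.change_percent:+.1f}%, {m.trend})"
    )
```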
## Usage Examples

### Session Indexing

Apply session-analysis-patterns skill to index session messages with topic and decision extraction

### Pattern Extraction

Apply session-analysis-patterns skill to extract development patterns from last 30 sessions

### Progress Tracking

Apply session-analysis-patterns skill to analyze productivity trends over last 7 days
## Success Output
When successful, this skill MUST output:
✅ SKILL COMPLETE: session-analysis-patterns
Completed:
- Session messages indexed in SQLite database
- Topics extracted from conversation content
- Decisions identified and documented
- Code files referenced catalogued
- Productivity score calculated
- Development patterns identified
Outputs:
- .coditect/sessions.db - Indexed session database
- Session summary: [session_id]
- Topics: [list of topics]
- Productivity score: [0.0-1.0]
- Pattern insights: [X patterns detected]
## Completion Checklist
Before marking this skill as complete, verify:
- [ ] SQLite database initialized with proper schema (see the verification sketch after this list)
- [ ] All session messages stored with timestamps and metadata
- [ ] Topics extracted using keyword analysis
- [ ] Decision statements identified and captured
- [ ] Code file references extracted with regex patterns
- [ ] Productivity score calculated (message rate + code ratio)
- [ ] Session summary generated with all metrics
- [ ] Pattern extraction completed (workflow, error, productivity patterns)
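A minimal verification sketch for the database-related items, assuming the schema created by `SessionIndexer` (the path and assertions are illustrative):

```python
import sqlite3

conn = sqlite3.connect('.coditect/sessions.db')

# Both tables from the schema must exist
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
assert {'sessions', 'messages'} <= tables, f"Missing tables: {tables}"

# At least one session and its messages must have been stored
assert conn.execute('SELECT COUNT(*) FROM sessions').fetchone()[0] > 0, "No sessions indexed"
assert conn.execute('SELECT COUNT(*) FROM messages').fetchone()[0] > 0, "No messages stored"

conn.close()
```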
## Failure Indicators
This skill has FAILED if:
- ❌ Database not created or schema missing tables
- ❌ Messages not stored (empty database)
- ❌ Topic extraction returned empty list for non-trivial session
- ❌ Productivity score is 0 or not calculated
- ❌ Code file regex failed to find any files when code blocks present
- ❌ Session summary missing required fields (start_time, end_time, message_count)
- ❌ Pattern extraction found no patterns for multi-session analysis
## When NOT to Use
**Do NOT use this skill when:**
- Analyzing single messages in isolation (use `message-analysis-patterns` instead)
- Performing real-time conversation tracking (use `conversation-monitoring` instead)
- Conducting code quality analysis (use `code-quality-patterns` instead)
- Tracking task completion only (use `task-tracking-patterns` instead)
- Analyzing system logs (use `log-analysis-patterns` instead)
**Use these alternatives instead:**
- For message analysis: `message-analysis-patterns` skill
- For real-time monitoring: `conversation-monitoring` skill
- For task tracking: `task-tracking-patterns` skill
- For log analysis: `log-analysis-patterns` skill
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| Analyzing without timestamps | Cannot calculate productivity or trends | Ensure all messages have timestamps |
| Ignoring metadata | Lose context for decisions | Store full metadata with each message |
| Over-extracting topics | Noise in topic list | Limit to top 5-10 topics per session |
| Hard-coded file extensions | Miss new file types | Use configurable regex patterns (see the sketch below) |
| Single session pattern analysis | Insufficient data for trends | Require minimum 5 sessions for pattern extraction |
| No time-window filtering | Old data skews current trends | Use time windows (7 days, 30 days) |
| Storing full message content | Database bloat | Consider content summarization for old sessions |
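For the hard-coded extensions anti-pattern, the file-path regex in `_extract_code_files` can be built from a configurable extension list instead. A minimal sketch (the function name and defaults are illustrative; the default list mirrors the one used above):

```python
import re
from typing import List, Set

DEFAULT_EXTENSIONS = ['py', 'ts', 'js', 'tsx', 'jsx', 'rs', 'go', 'java', 'cpp', 'h']

def extract_code_files(text: str, extensions: List[str] = DEFAULT_EXTENSIONS) -> Set[str]:
    """Find file paths ending in any of the configured extensions."""
    pattern = r'[\w\-./]+\.(?:' + '|'.join(re.escape(e) for e in extensions) + r')\b'
    return set(re.findall(pattern, text))

# Adding a new file type is a configuration change, not a code change
files = extract_code_files("updated src/App.vue and scripts/build.py",
                           DEFAULT_EXTENSIONS + ['vue'])
```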
## Principles
This skill embodies:
- **#5 Eliminate Ambiguity** - Structured session data with clear metrics
- **#6 Clear, Understandable, Explainable** - Session summaries with human-readable insights
- **#8 No Assumptions** - Validate session data before analysis
- **#2 First Principles** - Pattern recognition from data, not preconceived notions
- **#1 Recycle → Extend → Re-Use** - Index once, query many times
**Temporal Analysis:** This skill focuses on session-level analysis. For cross-session trends, combine with `progress-tracking-patterns`.
## Integration Points
- **memory-context-patterns** - Session storage
- **thoughts-analysis-patterns** - Insight extraction
- **prompt-analysis-patterns** - Intent tracking