# Research Patterns Skill

## When to Use This Skill

Use this skill when implementing research patterns in your codebase.

## How to Use This Skill

- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
This skill provides technical research, library comparison, best practices research, and actionable implementation guidance.
## Core Capabilities
- Technology Evaluation - Compare frameworks, libraries, tools
- Best Practices Research - Industry standards and patterns
- Code Example Extraction - Working implementations
- Architecture Decisions - Technology selection support
- Performance Analysis - Benchmarks and optimization
- Integration Patterns - How technologies work together
## Library Comparison Framework
```python
# scripts/library-comparison.py
from dataclasses import dataclass
from typing import List, Dict, Optional, Any


@dataclass
class LibraryMetrics:
    """Package ecosystem metrics"""
    name: str
    version: str
    downloads_monthly: int
    stars: int
    forks: int
    open_issues: int
    closed_issues: int
    contributors: int
    last_commit: str
    license: str
    bundle_size: Optional[int] = None  # bytes
    dependencies: int = 0


@dataclass
class FeatureMatrix:
    """Feature comparison matrix"""
    features: Dict[str, Dict[str, Any]]  # feature -> library -> value

    def to_markdown(self) -> str:
        """Generate markdown table"""
        if not self.features:
            return "No features to compare"
        libraries = set()
        for lib_features in self.features.values():
            libraries.update(lib_features.keys())
        libraries = sorted(libraries)
        # Header
        md = "| Feature | " + " | ".join(libraries) + " |\n"
        md += "|" + "----|" * (len(libraries) + 1) + "\n"
        # Rows
        for feature, lib_values in self.features.items():
            row = [feature]
            for lib in libraries:
                value = lib_values.get(lib, "")
                if isinstance(value, bool):
                    row.append("✓" if value else "✗")
                else:
                    row.append(str(value))
            md += "| " + " | ".join(row) + " |\n"
        return md


@dataclass
class ComparisonReport:
    """Complete library comparison"""
    topic: str
    libraries: List[str]
    metrics: Dict[str, LibraryMetrics]
    features: FeatureMatrix
    benchmarks: Dict[str, Dict[str, float]]  # test -> library -> result
    recommendation: str
    rationale: List[str]
    use_cases: Dict[str, str]  # library -> ideal use case


class LibraryResearcher:
    """Research and compare libraries"""

    def __init__(self):
        self.package_registries = {
            'npm': self._fetch_npm_data,
            'pypi': self._fetch_pypi_data,
            'crates.io': self._fetch_crates_data,
        }

    async def compare_libraries(
        self,
        topic: str,
        libraries: List[str],
        ecosystem: str
    ) -> ComparisonReport:
        """Compare libraries comprehensively"""
        # Fetch metrics
        metrics = {}
        for lib in libraries:
            fetch_func = self.package_registries.get(ecosystem)
            if fetch_func:
                metrics[lib] = await fetch_func(lib)
        # Build feature matrix
        features = self._build_feature_matrix(topic, libraries)
        # Run benchmarks (if applicable)
        benchmarks = await self._run_benchmarks(libraries, ecosystem)
        # Generate recommendation
        recommendation, rationale = self._recommend(metrics, features, benchmarks)
        # Identify use cases
        use_cases = self._identify_use_cases(libraries, metrics, features)
        return ComparisonReport(
            topic=topic,
            libraries=libraries,
            metrics=metrics,
            features=features,
            benchmarks=benchmarks,
            recommendation=recommendation,
            rationale=rationale,
            use_cases=use_cases
        )

    async def _fetch_npm_data(self, package: str) -> LibraryMetrics:
        """Fetch npm package data"""
        # In practice: call npm API
        return LibraryMetrics(
            name=package,
            version="1.0.0",
            downloads_monthly=1000000,
            stars=5000,
            forks=500,
            open_issues=50,
            closed_issues=1000,
            contributors=100,
            last_commit="2024-01-01",
            license="MIT",
            bundle_size=50000,
            dependencies=10
        )

    async def _fetch_pypi_data(self, package: str) -> LibraryMetrics:
        """Fetch PyPI package data"""
        return LibraryMetrics(
            name=package,
            version="1.0.0",
            downloads_monthly=500000,
            stars=3000,
            forks=300,
            open_issues=30,
            closed_issues=800,
            contributors=50,
            last_commit="2024-01-01",
            license="MIT",
            dependencies=5
        )

    async def _fetch_crates_data(self, package: str) -> LibraryMetrics:
        """Fetch crates.io data"""
        return LibraryMetrics(
            name=package,
            version="1.0.0",
            downloads_monthly=100000,
            stars=2000,
            forks=200,
            open_issues=20,
            closed_issues=500,
            contributors=30,
            last_commit="2024-01-01",
            license="MIT/Apache-2.0",
            dependencies=3
        )

    def _build_feature_matrix(self, topic: str, libraries: List[str]) -> FeatureMatrix:
        """Build feature comparison matrix"""
        # Example: state management libraries
        if "state" in topic.lower():
            return FeatureMatrix(features={
                "Bundle Size": {lib: "small" for lib in libraries},
                "TypeScript": {lib: True for lib in libraries},
                "DevTools": {lib: True for lib in libraries},
                "Async Actions": {lib: True for lib in libraries},
                "Middleware": {lib: True for lib in libraries},
            })
        return FeatureMatrix(features={})

    async def _run_benchmarks(
        self,
        libraries: List[str],
        ecosystem: str
    ) -> Dict[str, Dict[str, float]]:
        """Run performance benchmarks"""
        return {
            "initialization": {lib: 1.0 for lib in libraries},
            "update_1000": {lib: 10.0 for lib in libraries},
            "memory_mb": {lib: 5.0 for lib in libraries},
        }

    def _recommend(
        self,
        metrics: Dict[str, LibraryMetrics],
        features: FeatureMatrix,
        benchmarks: Dict[str, Dict[str, float]]
    ) -> tuple[str, List[str]]:
        """Generate recommendation"""
        # Score each library
        scores = {}
        for lib, m in metrics.items():
            score = 0.0
            score += min(m.downloads_monthly / 1000000, 1.0) * 0.3
            score += min(m.stars / 10000, 1.0) * 0.2
            score += (1.0 - min(m.open_issues / 100, 1.0)) * 0.2
            score += min(m.contributors / 100, 1.0) * 0.1
            # Bundle size (smaller is better)
            if m.bundle_size:
                score += (1.0 - min(m.bundle_size / 100000, 1.0)) * 0.2
            scores[lib] = score
        best = max(scores, key=scores.get)
        rationale = [
            f"Highest overall score: {scores[best]:.2f}",
            f"Monthly downloads: {metrics[best].downloads_monthly:,}",
            f"Community size: {metrics[best].contributors} contributors",
            f"Active maintenance: {metrics[best].last_commit}",
        ]
        return best, rationale

    def _identify_use_cases(
        self,
        libraries: List[str],
        metrics: Dict[str, LibraryMetrics],
        features: FeatureMatrix
    ) -> Dict[str, str]:
        """Identify ideal use case for each library"""
        use_cases = {}
        for lib, m in metrics.items():
            if m.bundle_size and m.bundle_size < 10000:
                use_cases[lib] = "Minimal bundle size applications"
            elif m.downloads_monthly > 1000000:
                use_cases[lib] = "Production applications (proven at scale)"
            elif m.stars > 5000:
                use_cases[lib] = "Community-driven projects"
            else:
                use_cases[lib] = "Specialized use cases"
        return use_cases


# Usage
import asyncio


async def main():
    researcher = LibraryResearcher()
    report = await researcher.compare_libraries(
        topic="React State Management",
        libraries=["zustand", "redux-toolkit", "jotai", "recoil"],
        ecosystem="npm"
    )
    print(f"# {report.topic} Comparison\n")
    print(f"## Recommendation: {report.recommendation}\n")
    print("### Rationale:")
    for reason in report.rationale:
        print(f"- {reason}")
    print("\n## Feature Matrix\n")
    print(report.features.to_markdown())
    print("\n## Use Cases\n")
    for lib, use_case in report.use_cases.items():
        print(f"- **{lib}**: {use_case}")


if __name__ == '__main__':
    asyncio.run(main())
```
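The `_fetch_*` methods above return canned placeholder values. As a minimal sketch of a real fetcher, assuming `aiohttp` is installed, the public npm endpoints `https://registry.npmjs.org/<package>` (package metadata) and `https://api.npmjs.org/downloads/point/last-month/<package>` (download counts) can be combined; stars, forks, issues, and contributors come from a separate GitHub API call and are left stubbed here:

```python
# Sketch only: real registry calls for _fetch_npm_data (assumes aiohttp).
import aiohttp


async def fetch_npm_data(package: str) -> LibraryMetrics:
    async with aiohttp.ClientSession() as session:
        # Package metadata: dist-tags, license, publish times
        async with session.get(f"https://registry.npmjs.org/{package}") as resp:
            meta = await resp.json()
        # Download counts for the last month
        async with session.get(
            f"https://api.npmjs.org/downloads/point/last-month/{package}"
        ) as resp:
            downloads = (await resp.json()).get("downloads", 0)
    latest = meta.get("dist-tags", {}).get("latest", "unknown")
    return LibraryMetrics(
        name=package,
        version=latest,
        downloads_monthly=downloads,
        stars=0,           # requires GitHub API
        forks=0,           # requires GitHub API
        open_issues=0,     # requires GitHub API
        closed_issues=0,   # requires GitHub API
        contributors=0,    # requires GitHub API
        last_commit=meta.get("time", {}).get(latest, ""),
        license=str(meta.get("license", "unknown")),
    )
```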
## Best Practices Extraction
````python
# scripts/extract-best-practices.py
from dataclasses import dataclass
from typing import List, Optional
import re


@dataclass
class BestPractice:
    """Single best practice"""
    title: str
    description: str
    category: str  # 'security', 'performance', 'maintainability'
    priority: str  # 'critical', 'recommended', 'optional'
    code_example: Optional[str] = None
    source: str = ""


@dataclass
class PracticeGuide:
    """Complete best practices guide"""
    topic: str
    practices: List[BestPractice]
    antipatterns: List[str]
    common_mistakes: List[str]


class BestPracticesExtractor:
    """Extract best practices from documentation"""

    PRACTICE_INDICATORS = [
        'best practice', 'recommended', 'should',
        'must', 'always', 'never', 'avoid'
    ]
    ANTIPATTERN_INDICATORS = [
        'anti-pattern', 'antipattern', 'bad practice',
        "don't", 'avoid', 'never'
    ]

    def extract_from_docs(self, content: str, topic: str) -> PracticeGuide:
        """Extract practices from documentation"""
        practices = []
        antipatterns = []
        mistakes = []
        # Split into sections
        sections = self._split_sections(content)
        for section in sections:
            # Look for best practices
            if any(indicator in section.lower() for indicator in self.PRACTICE_INDICATORS):
                practice = self._extract_practice(section)
                if practice:
                    practices.append(practice)
            # Look for antipatterns
            if any(indicator in section.lower() for indicator in self.ANTIPATTERN_INDICATORS):
                antipattern = self._extract_antipattern(section)
                if antipattern:
                    antipatterns.append(antipattern)
        return PracticeGuide(
            topic=topic,
            practices=practices,
            antipatterns=antipatterns,
            common_mistakes=mistakes
        )

    def _split_sections(self, content: str) -> List[str]:
        """Split content into logical sections"""
        # Split by headers
        sections = re.split(r'\n#{1,3}\s+', content)
        return [s.strip() for s in sections if len(s.strip()) > 100]

    def _extract_practice(self, section: str) -> Optional[BestPractice]:
        """Extract best practice from section"""
        lines = section.split('\n')
        # First line is often the title
        title = lines[0].strip('# ').strip()
        # Look for code blocks
        code_pattern = r'```[\w]*\n(.*?)\n```'
        code_matches = re.findall(code_pattern, section, re.DOTALL)
        code_example = code_matches[0] if code_matches else None
        # Determine category
        category = 'general'
        if 'security' in section.lower():
            category = 'security'
        elif 'performance' in section.lower():
            category = 'performance'
        elif 'maintain' in section.lower():
            category = 'maintainability'
        # Determine priority
        priority = 'recommended'
        if any(word in section.lower() for word in ['must', 'critical', 'required']):
            priority = 'critical'
        elif any(word in section.lower() for word in ['optional', 'consider']):
            priority = 'optional'
        return BestPractice(
            title=title,
            description=section[:500],
            category=category,
            priority=priority,
            code_example=code_example
        )

    def _extract_antipattern(self, section: str) -> Optional[str]:
        """Extract antipattern description"""
        lines = section.split('\n')
        return lines[0].strip('# ').strip()

    def to_markdown(self, guide: PracticeGuide) -> str:
        """Generate markdown guide"""
        md = f"# {guide.topic} Best Practices\n\n"
        # Group by priority
        critical = [p for p in guide.practices if p.priority == 'critical']
        recommended = [p for p in guide.practices if p.priority == 'recommended']
        optional = [p for p in guide.practices if p.priority == 'optional']
        if critical:
            md += "## Critical Practices\n\n"
            for practice in critical:
                md += f"### {practice.title}\n\n"
                md += f"{practice.description}\n\n"
                if practice.code_example:
                    md += f"```\n{practice.code_example}\n```\n\n"
        if recommended:
            md += "## Recommended Practices\n\n"
            for practice in recommended:
                md += f"### {practice.title}\n\n"
                md += f"{practice.description}\n\n"
        if guide.antipatterns:
            md += "## Antipatterns to Avoid\n\n"
            for antipattern in guide.antipatterns:
                md += f"- {antipattern}\n"
        return md


# Usage
extractor = BestPracticesExtractor()
docs = """
# Rust Async Best Practices

## Always use spawn_blocking for CPU-bound tasks
When performing CPU-bound work in an async context, use spawn_blocking
to avoid blocking the runtime.

```rust
let result = tokio::task::spawn_blocking(|| {
    // CPU-intensive work
    heavy_computation()
}).await?;
```

## Never block the runtime
Don't use std::thread::sleep in async functions. Use tokio::time::sleep instead.

## Use Arc for shared state
When sharing state across tasks, use Arc<Mutex<T>>.
"""

guide = extractor.extract_from_docs(docs, "Rust Async")
print(extractor.to_markdown(guide))
````
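Run against the sample docs, `to_markdown` produces roughly the following (abridged; descriptions are raw 500-character section excerpts). Note a quirk of the naive keyword matching: the spawn_blocking section is also filed under antipatterns because its body contains the word "avoid".

```text
# Rust Async Best Practices

## Recommended Practices

### Always use spawn_blocking for CPU-bound tasks

Always use spawn_blocking for CPU-bound tasks
When performing CPU-bound work in an async context, ...

### Never block the runtime

Never block the runtime
Don't use std::thread::sleep in async functions. ...

## Antipatterns to Avoid

- Always use spawn_blocking for CPU-bound tasks
- Never block the runtime
```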
## Code Example Extraction
```typescript
// scripts/extract-code-examples.ts
interface CodeExample {
  language: string;
  code: string;
  description: string;
  fileName?: string;
  tags: string[];
}

class CodeExampleExtractor {
  /**
   * Extract code examples from documentation
   */
  extractExamples(content: string): CodeExample[] {
    const examples: CodeExample[] = [];
    // Find fenced code blocks
    const codeBlockRegex = /```(\w+)?\n([\s\S]*?)```/g;
    let match;
    let lastIndex = 0;
    while ((match = codeBlockRegex.exec(content)) !== null) {
      const language = match[1] || 'plaintext';
      const code = match[2].trim();
      // Get preceding text as description
      const precedingText = content.substring(lastIndex, match.index);
      const description = this.extractDescription(precedingText);
      // Extract filename from comments
      const fileName = this.extractFileName(code);
      // Extract tags
      const tags = this.extractTags(precedingText, code);
      examples.push({
        language,
        code,
        description,
        fileName,
        tags,
      });
      lastIndex = match.index + match[0].length;
    }
    return examples;
  }

  private extractDescription(text: string): string {
    // Get last paragraph before code block
    const paragraphs = text.split('\n\n');
    const lastPara = paragraphs[paragraphs.length - 1];
    return lastPara?.trim().substring(0, 200) || '';
  }

  private extractFileName(code: string): string | undefined {
    // Look for filename in first line comment
    const firstLine = code.split('\n')[0];
    const fileMatch = firstLine.match(/\/\/\s*(.+\.\w+)/);
    return fileMatch?.[1];
  }

  private extractTags(precedingText: string, code: string): string[] {
    const tags: string[] = [];
    // Language/framework tags
    if (code.includes('async') || code.includes('await')) {
      tags.push('async');
    }
    if (code.includes('React') || code.includes('useState')) {
      tags.push('react');
    }
    if (code.includes('tokio')) {
      tags.push('tokio');
    }
    // Concept tags from description
    const concepts = ['authentication', 'database', 'api', 'testing'];
    for (const concept of concepts) {
      if (precedingText.toLowerCase().includes(concept)) {
        tags.push(concept);
      }
    }
    return tags;
  }

  /**
   * Generate example catalog
   */
  generateCatalog(examples: CodeExample[]): string {
    let md = '# Code Examples\n\n';
    // Group by language
    const byLanguage = new Map<string, CodeExample[]>();
    for (const ex of examples) {
      if (!byLanguage.has(ex.language)) {
        byLanguage.set(ex.language, []);
      }
      byLanguage.get(ex.language)!.push(ex);
    }
    for (const [lang, exs] of byLanguage) {
      md += `## ${lang}\n\n`;
      for (const ex of exs) {
        md += `### ${ex.description.substring(0, 50)}...\n\n`;
        if (ex.fileName) {
          md += `**File:** \`${ex.fileName}\`\n\n`;
        }
        if (ex.tags.length > 0) {
          md += `**Tags:** ${ex.tags.join(', ')}\n\n`;
        }
        md += '```' + lang + '\n';
        md += ex.code + '\n';
        md += '```\n\n';
      }
    }
    return md;
  }
}

// Usage
const extractor = new CodeExampleExtractor();
const docs = `
Here's how to create a JWT token in Rust:
\`\`\`rust
use jsonwebtoken::{encode, Header, EncodingKey};
let token = encode(&Header::default(), &claims, &EncodingKey::from_secret("secret".as_ref()))?;
\`\`\`
`;
const examples = extractor.extractExamples(docs);
console.log(extractor.generateCatalog(examples));
```
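For the JWT snippet above, `generateCatalog` prints something close to the following; there are no **File** or **Tags** lines because the snippet has no filename comment and matches no tag keyword:

````markdown
# Code Examples

## rust

### Here's how to create a JWT token in Rust:...

```rust
use jsonwebtoken::{encode, Header, EncodingKey};
let token = encode(&Header::default(), &claims, &EncodingKey::from_secret("secret".as_ref()))?;
```
````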
## Usage Examples

### Library Comparison

```
Apply research-patterns skill to compare React state management libraries (zustand, redux, jotai)
```

### Best Practices Research

```
Apply research-patterns skill to research Rust async/await best practices with code examples
```

### Code Example Extraction

```
Apply research-patterns skill to extract working code examples from Next.js documentation
```
## Success Output

When successful, this skill MUST output:

```
✅ SKILL COMPLETE: research-patterns

Completed:
- [x] Library comparison analysis generated
- [x] Best practices extracted and documented
- [x] Code examples catalogued with metadata
- [x] Research findings saved to output files

Outputs:
- library-comparison-report.md (feature matrix, recommendation, use cases)
- best-practices-guide.md (practices grouped by priority with code examples)
- code-examples-catalog.md (examples organized by language and tags)
```
## Completion Checklist
Before marking this skill as complete, verify:
- Research topic and scope clearly defined
- Target libraries/frameworks identified (minimum 2 for comparison)
- Library metrics collected (downloads, stars, maintenance activity)
- Feature matrix constructed with accurate data
- Best practices extracted from authoritative sources
- Code examples include language, description, and tags
- Recommendation provided with clear rationale
- All output files created at expected locations
- Quality score calculated if applicable
## Failure Indicators

This skill has FAILED if:

- ❌ No library metrics available (APIs unreachable, packages not found)
- ❌ Comparison includes fewer than 2 valid libraries
- ❌ Best practices extraction found no actionable guidance
- ❌ Code examples missing language or context metadata
- ❌ Recommendation lacks rationale or contradicts feature matrix
- ❌ Output files not created or contain placeholder data
- ❌ Research sources not documented or unreliable
## When NOT to Use

Do NOT use this skill when:

- Simple feature lookup (use direct documentation instead)
- Library already selected (use `code-example-extraction` skill)
- Need automated benchmark results (use `load-testing` skill)
- Research scope is too broad (narrow topic first, then apply skill)
- Only need dependency vulnerability scan (use `dependency-security` skill)
- Time-sensitive decision (skill requires 15-30 minutes for thorough research)
Use alternative skills:

- `code-review-patterns` - When reviewing existing library usage
- `architecture-decision-records` - When documenting library selection decision
- `dependency-security` - When security is primary concern
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Comparing incompatible libraries | Different domains, invalid comparison | Ensure libraries solve same problem category |
| Ignoring ecosystem metrics | Miss community health signals | Always check downloads, stars, last commit date |
| Using outdated benchmarks | Incorrect performance assumptions | Run fresh benchmarks or verify source date |
| Cherry-picking features | Biased comparison | Use comprehensive feature matrix |
| No code example validation | Examples may not compile | Test code examples before including |
| Skipping source documentation | Missing context or caveats | Cite official docs for all claims |
| Research without defined use case | Generic, not actionable | Define specific project requirements first |
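Several of these checks can be scripted. As one sketch for the "No code example validation" row, a hypothetical `validate_python_examples` helper (not part of this skill's scripts) rejects extracted Python snippets that fail to parse, using only the standard library; parsing catches syntax errors but does not prove a snippet actually runs:

```python
# Hypothetical pre-catalog filter: drop Python examples that do not parse.
import ast


def validate_python_examples(examples: list[dict]) -> list[dict]:
    valid = []
    for ex in examples:
        if ex.get("language") != "python":
            valid.append(ex)  # only Python snippets are syntax-checked here
            continue
        try:
            ast.parse(ex["code"])  # raises SyntaxError on invalid code
            valid.append(ex)
        except SyntaxError as err:
            print(f"Rejected example: {err.msg} (line {err.lineno})")
    return valid
```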
## Principles

This skill embodies:

- #1 Recycle → Extend → Re-Use → Create - Research existing solutions before building custom
- #5 Eliminate Ambiguity - Clear feature matrix eliminates "which library?" ambiguity
- #6 Clear, Understandable, Explainable - Comparison table makes tradeoffs explicit
- #8 No Assumptions - Verify all claims with package registry data or documentation
- Research When in Doubt - Core skill purpose: thorough investigation before decisions
Full Standard: CODITECT-STANDARD-AUTOMATION.md
## Integration Points

- `claude-research-patterns` - Web search and retrieval
- `competitive-analysis` - Market research depth
- `educational-content-patterns` - Tutorial generation