
Agent Skills Framework Extension

Comprehensive Review Patterns Skill

When to Use This Skill

Use this skill when implementing comprehensive, multi-dimensional review patterns in your codebase.

How to Use This Skill

  1. Review the patterns and examples below
  2. Apply the relevant patterns to your implementation
  3. Follow the best practices outlined in this skill

Multi-dimensional code review covering security, performance, maintainability, architecture, testing, and documentation in a single integrated assessment.

Core Capabilities

  1. Security Analysis - SAST, dependency scanning, vulnerability detection
  2. Performance Review - Complexity analysis, N+1 queries, memory leaks
  3. Maintainability - Code smells, duplication, complexity metrics
  4. Architecture - Layer boundaries, SOLID principles, design patterns
  5. Testing - Coverage, test quality, edge cases
  6. Documentation - API docs, inline comments, examples

Multi-Dimensional Review Engine

# scripts/comprehensive_reviewer.py
from typing import Dict, List
from dataclasses import dataclass
from enum import Enum

class ReviewDimension(Enum):
    """Dimensions of comprehensive review."""
    SECURITY = "security"
    PERFORMANCE = "performance"
    MAINTAINABILITY = "maintainability"
    ARCHITECTURE = "architecture"
    TESTING = "testing"
    DOCUMENTATION = "documentation"

@dataclass
class DimensionScore:
    """Score for a single review dimension."""
    dimension: ReviewDimension
    score: float  # 0-100
    findings: List[Dict]
    recommendations: List[str]
    weight: float = 1.0

@dataclass
class ComprehensiveReview:
    """Complete multi-dimensional review."""
    overall_score: float
    dimension_scores: Dict[ReviewDimension, DimensionScore]
    critical_issues: List[Dict]
    blocking_issues: List[Dict]
    improvement_areas: List[str]
    approval_status: str

class ComprehensiveReviewer:
    """Multi-dimensional code review engine."""

    DIMENSION_WEIGHTS = {
        ReviewDimension.SECURITY: 0.25,
        ReviewDimension.PERFORMANCE: 0.20,
        ReviewDimension.MAINTAINABILITY: 0.20,
        ReviewDimension.ARCHITECTURE: 0.15,
        ReviewDimension.TESTING: 0.12,
        ReviewDimension.DOCUMENTATION: 0.08
    }

    def __init__(self):
        # The last four reviewers follow the same
        # review(files, context) -> DimensionScore interface as the two
        # defined below; their bodies are not shown in this excerpt.
        self.reviewers = {
            ReviewDimension.SECURITY: SecurityReviewer(),
            ReviewDimension.PERFORMANCE: PerformanceReviewer(),
            ReviewDimension.MAINTAINABILITY: MaintainabilityReviewer(),
            ReviewDimension.ARCHITECTURE: ArchitectureReviewer(),
            ReviewDimension.TESTING: TestingReviewer(),
            ReviewDimension.DOCUMENTATION: DocumentationReviewer()
        }

    def review(self, files: List[str], context: Dict) -> ComprehensiveReview:
        """Perform comprehensive multi-dimensional review."""
        dimension_scores = {}

        # Run each dimension's review
        for dimension, reviewer in self.reviewers.items():
            score = reviewer.review(files, context)
            score.weight = self.DIMENSION_WEIGHTS[dimension]
            dimension_scores[dimension] = score

        # Calculate overall score
        overall = sum(
            score.score * score.weight
            for score in dimension_scores.values()
        )

        # Aggregate findings
        critical = self._find_critical_issues(dimension_scores)
        blocking = self._find_blocking_issues(dimension_scores)
        improvements = self._generate_improvements(dimension_scores)

        # Determine approval status
        approval = self._determine_approval(overall, blocking, critical)

        return ComprehensiveReview(
            overall_score=overall,
            dimension_scores=dimension_scores,
            critical_issues=critical,
            blocking_issues=blocking,
            improvement_areas=improvements,
            approval_status=approval
        )

    def _find_critical_issues(self, scores: Dict) -> List[Dict]:
        """Extract critical issues across all dimensions."""
        critical = []

        for score in scores.values():
            critical_findings = [
                f for f in score.findings
                if f.get('severity') == 'critical'
            ]
            critical.extend(critical_findings)

        return critical

    def _find_blocking_issues(self, scores: Dict) -> List[Dict]:
        """Extract blocking issues."""
        blocking = []

        for score in scores.values():
            blocking_findings = [
                f for f in score.findings
                if f.get('severity') == 'blocking'
            ]
            blocking.extend(blocking_findings)

        return blocking

    def _generate_improvements(self, scores: Dict) -> List[str]:
        """Generate prioritized improvement recommendations."""
        improvements = []

        # Sort dimensions by score (lowest first)
        sorted_dims = sorted(
            scores.items(),
            key=lambda x: x[1].score
        )

        for _dimension, score in sorted_dims[:3]:  # Top 3 areas needing improvement
            if score.score < 70:
                improvements.extend(score.recommendations)

        return improvements[:10]  # Top 10 recommendations

    def _determine_approval(
        self,
        overall_score: float,
        blocking: List,
        critical: List
    ) -> str:
        """Determine approval status."""
        if blocking:
            return 'CHANGES_REQUIRED'
        elif len(critical) > 5:
            return 'CHANGES_REQUESTED'
        elif overall_score >= 80:
            return 'APPROVED'
        elif overall_score >= 60:
            return 'APPROVED_WITH_COMMENTS'
        else:
            return 'CHANGES_REQUESTED'


class SecurityReviewer:
    """Security-focused review."""

    def review(self, files: List[str], context: Dict) -> DimensionScore:
        findings = []
        score = 100.0

        for file_path in files:
            # SQL injection check
            if self._has_sql_injection_risk(file_path):
                findings.append({
                    'file': file_path,
                    'severity': 'blocking',
                    'type': 'sql_injection',
                    'message': 'Potential SQL injection vulnerability'
                })
                score -= 20

            # Hardcoded secrets
            if self._has_hardcoded_secrets(file_path):
                findings.append({
                    'file': file_path,
                    'severity': 'critical',
                    'type': 'hardcoded_secret',
                    'message': 'Hardcoded credential detected'
                })
                score -= 15

            # Insecure dependencies
            if self._has_vulnerable_dependencies(file_path):
                findings.append({
                    'file': file_path,
                    'severity': 'critical',
                    'type': 'vulnerable_dependency',
                    'message': 'Vulnerable dependency detected'
                })
                score -= 10

        return DimensionScore(
            dimension=ReviewDimension.SECURITY,
            score=max(0, score),
            findings=findings,
            recommendations=[
                'Use parameterized queries for all database access',
                'Move credentials to environment variables',
                'Update vulnerable dependencies'
            ]
        )

    # Detector stubs: in a real implementation, wire these to a SAST
    # tool, a secret scanner, and a dependency auditor respectively.
    def _has_sql_injection_risk(self, file_path: str) -> bool:
        return False

    def _has_hardcoded_secrets(self, file_path: str) -> bool:
        return False

    def _has_vulnerable_dependencies(self, file_path: str) -> bool:
        return False


class PerformanceReviewer:
    """Performance-focused review."""

    def review(self, files: List[str], context: Dict) -> DimensionScore:
        findings = []
        score = 100.0

        for file_path in files:
            # N+1 query problem
            if self._has_n_plus_one(file_path):
                findings.append({
                    'file': file_path,
                    'severity': 'major',
                    'type': 'n_plus_one',
                    'message': 'Potential N+1 query problem'
                })
                score -= 10

            # Inefficient algorithms
            complexity = self._calculate_complexity(file_path)
            if complexity > 15:
                findings.append({
                    'file': file_path,
                    'severity': 'minor',
                    'type': 'high_complexity',
                    'message': f'High cyclomatic complexity: {complexity}'
                })
                score -= 5

        return DimensionScore(
            dimension=ReviewDimension.PERFORMANCE,
            score=max(0, score),
            findings=findings,
            recommendations=[
                'Use JOINs or prefetch to avoid N+1 queries',
                'Refactor complex functions into smaller units',
                'Add database indexes for frequently queried fields'
            ]
        )

    # Detector stubs: in a real implementation, wire these to an ORM
    # query analyzer and a complexity tool such as radon.
    def _has_n_plus_one(self, file_path: str) -> bool:
        return False

    def _calculate_complexity(self, file_path: str) -> int:
        return 0


# Usage
reviewer = ComprehensiveReviewer()
result = reviewer.review(
    files=['src/api/auth.py', 'src/models/user.py'],
    context={'pr_number': 123, 'author': 'dev'}
)

print(f"Overall Score: {result.overall_score:.1f}/100")
print(f"Status: {result.approval_status}")
print("\nDimension Scores:")
for dim, score in result.dimension_scores.items():
    print(f"  {dim.value}: {score.score:.1f}/100")

if result.blocking_issues:
    print(f"\nBlocking Issues ({len(result.blocking_issues)}):")
    for issue in result.blocking_issues:
        print(f"  - {issue['file']}: {issue['message']}")
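Given `DIMENSION_WEIGHTS`, the overall score printed above is a plain weighted average. A worked example with made-up per-dimension scores:

```python
# Worked example of the weighted overall score, using the values from
# DIMENSION_WEIGHTS above. The per-dimension scores are illustrative.
weights = {
    "security": 0.25, "performance": 0.20, "maintainability": 0.20,
    "architecture": 0.15, "testing": 0.12, "documentation": 0.08,
}
scores = {
    "security": 90.0, "performance": 75.0, "maintainability": 80.0,
    "architecture": 85.0, "testing": 60.0, "documentation": 70.0,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1.0
overall = sum(scores[d] * weights[d] for d in weights)
print(f"Overall: {overall:.2f}/100")  # 22.5 + 15.0 + 16.0 + 12.75 + 7.2 + 5.6 = 79.05
```

Assuming no blocking issues and five or fewer critical findings, a 79.05 lands in `APPROVED_WITH_COMMENTS` (60 or above) rather than `APPROVED` (80 or above).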

Integrated Quality Dashboard

// tools/quality-dashboard.ts
interface QualityDashboard {
  overallHealth: number;
  trends: TrendData[];
  dimensions: DimensionMetrics[];
  alerts: Alert[];
  recommendations: Recommendation[];
}

class QualityDashboardGenerator {
  generate(reviewHistory: ComprehensiveReview[]): QualityDashboard {
    return {
      overallHealth: this.calculateHealth(reviewHistory),
      trends: this.analyzeTrends(reviewHistory),
      dimensions: this.aggregateDimensions(reviewHistory),
      alerts: this.generateAlerts(reviewHistory),
      recommendations: this.prioritizeRecommendations(reviewHistory)
    };
  }

  private calculateHealth(history: ComprehensiveReview[]): number {
    if (history.length === 0) return 0;

    const recent = history.slice(-10); // Last 10 reviews
    const avgScore =
      recent.reduce((sum, r) => sum + r.overall_score, 0) / recent.length;

    return avgScore;
  }

  private analyzeTrends(history: ComprehensiveReview[]): TrendData[] {
    // Calculate the trend for each dimension over time
    const trends: TrendData[] = [];

    // ... implementation

    return trends;
  }
}
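The elided `analyzeTrends` body can be sketched as a per-dimension delta between the older and newer halves of the review history. The sketch below is in Python for consistency with the review engine above; the shape of `history` (a list of `{dimension: score}` maps, oldest first) is an assumption, not the dashboard's actual data model:

```python
from statistics import mean
from typing import Dict, List

def analyze_trends(history: List[Dict[str, float]]) -> Dict[str, float]:
    """Per-dimension trend: mean score of the newer half of the history
    minus the older half. A positive delta means the dimension is improving."""
    if len(history) < 2:
        return {}
    mid = len(history) // 2
    older, newer = history[:mid], history[mid:]
    return {
        dim: mean(r[dim] for r in newer) - mean(r[dim] for r in older)
        for dim in history[0]
    }

trends = analyze_trends([
    {"security": 70.0, "testing": 60.0},
    {"security": 80.0, "testing": 55.0},
    {"security": 90.0, "testing": 50.0},
    {"security": 95.0, "testing": 45.0},
])
print(trends)  # security improving (+17.5), testing declining (-10.0)
```

One natural extension is to feed these deltas into `generateAlerts`, e.g. raising an alert whenever a dimension drops by more than 10 points across the window.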

Usage Examples

Perform Comprehensive Review

Apply comprehensive-review-patterns skill to run multi-dimensional review covering all quality aspects

Generate Quality Dashboard

Apply comprehensive-review-patterns skill to create quality dashboard with trends and recommendations

Integrated Assessment

Apply comprehensive-review-patterns skill to assess security, performance, and maintainability in single review

Success Output

When successfully performing comprehensive review:

✅ SKILL COMPLETE: comprehensive-review-patterns

Completed:
- [x] Multi-dimensional review executed across all 6 dimensions
- [x] Security analysis completed (SAST, dependency scan, vulnerability detection)
- [x] Performance review performed (N+1 queries, complexity analysis)
- [x] Maintainability assessed (code smells, duplication, metrics)
- [x] Architecture validated (SOLID principles, layer boundaries)
- [x] Testing evaluated (coverage, test quality, edge cases)
- [x] Documentation checked (API docs, comments, examples)
- [x] Overall score calculated with weighted dimension scores
- [x] Approval status determined (APPROVED/CHANGES_REQUESTED/etc.)

Outputs:
- ComprehensiveReview object with overall_score (0-100)
- DimensionScore for each of 6 dimensions with findings
- critical_issues list (severity: critical)
- blocking_issues list (severity: blocking)
- improvement_areas with prioritized recommendations
- approval_status (APPROVED/APPROVED_WITH_COMMENTS/CHANGES_REQUESTED/CHANGES_REQUIRED)

Completion Checklist

Before marking this skill as complete, verify:

  • All 6 review dimensions executed (Security, Performance, Maintainability, Architecture, Testing, Documentation)
  • Each dimension scored (0-100) with findings list
  • Dimension weights applied correctly (Security 25%, Performance 20%, etc.)
  • Overall score calculated as weighted sum of dimension scores
  • Critical issues extracted (severity: critical) across all dimensions
  • Blocking issues identified (severity: blocking) and listed
  • Improvement recommendations generated from lowest-scoring dimensions
  • Approval status determined from overall score and blocking/critical issues
  • No dimension skipped or scored as null/undefined

Failure Indicators

This skill has FAILED if:

  • ❌ One or more dimensions not executed (null/missing dimension score)
  • ❌ Overall score outside valid range (0-100)
  • ❌ Dimension weights don't sum to 1.0 (invalid weighting)
  • ❌ Blocking issues present but approval status not CHANGES_REQUIRED
  • ❌ Critical SQL injection found but not flagged as blocking
  • ❌ Score calculation doesn't match weighted sum (math error)
  • ❌ Empty findings list when issues clearly exist (detection failure)
  • ❌ Approval status inconsistent with overall score (e.g., APPROVED at 40%)
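Several of these failure conditions can be checked mechanically before trusting a review result. A sketch of such a guard (the `validate_review` function and its plain-dict arguments are hypothetical, not part of the skill's API):

```python
from typing import Dict, List

def validate_review(
    overall_score: float,
    dimension_scores: Dict[str, float],
    blocking_issues: List[dict],
    approval_status: str,
) -> None:
    """Raise AssertionError if a review violates the invariants above."""
    assert 0.0 <= overall_score <= 100.0, "overall score out of range"
    assert len(dimension_scores) == 6, "a review dimension was skipped"
    assert all(s is not None for s in dimension_scores.values()), \
        "dimension scored as null"
    if blocking_issues:
        assert approval_status == "CHANGES_REQUIRED", \
            "blocking issues must force CHANGES_REQUIRED"
    if overall_score < 60 and not blocking_issues:
        assert approval_status == "CHANGES_REQUESTED", \
            "a sub-60 score cannot be approved"

# Passes silently: six dimensions, no blocking issues, score >= 80.
validate_review(
    overall_score=85.0,
    dimension_scores={d: 85.0 for d in (
        "security", "performance", "maintainability",
        "architecture", "testing", "documentation")},
    blocking_issues=[],
    approval_status="APPROVED",
)
```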

When NOT to Use

Do NOT use this skill when:

  • Single-dimension review needed (e.g., security-only audit)
  • Quick review for minor changes (comprehensive review is heavyweight)
  • No time for multi-hour review process
  • Files are non-code (e.g., documentation-only changes)
  • Automated review not appropriate (need human judgment)

Use alternatives instead:

  • For security-only → Use SecurityReviewer class directly
  • For performance-only → Use PerformanceReviewer class directly
  • For quick review → Use single-dimension review with fast checklist
  • For docs-only → Skip code review dimensions, use doc review
  • For human judgment → Request manual review instead

Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|---|---|---|
| Skipping dimensions for speed | Incomplete assessment, missed issues | Run all 6 dimensions or don't use comprehensive review |
| Equal weighting for all dimensions | Security not prioritized appropriately | Use the defined weights (Security 25%, etc.) |
| Ignoring blocking issues | Code with critical flaws gets approved | Check blocking_issues, set CHANGES_REQUIRED |
| No improvement recommendations | Team doesn't know what to fix | Generate from the lowest-scoring dimensions |
| Single file at a time | Cross-file issues missed | Review all changed files together |
| No context provided | Can't assess architecture/design | Provide PR number, author, related changes |

Principles

This skill embodies CODITECT automation principles:

  • #1 Recycle → Extend → Re-Use → Create - Reuses individual dimension reviewers (SecurityReviewer, etc.) in comprehensive workflow
  • #4 Keep It Simple - Clear dimension enumeration and weighted scoring model
  • #5 Eliminate Ambiguity - Explicit approval statuses (APPROVED/CHANGES_REQUESTED/CHANGES_REQUIRED)
  • #6 Clear, Understandable, Explainable - Dimension scores show exactly where quality issues exist
  • #8 No Assumptions - Validates all dimensions, doesn't skip even if previous scores high
  • Quality First - Multi-dimensional assessment ensures nothing missed

Full Standard: CODITECT-STANDARD-AUTOMATION.md


Integration Points

  • code-review-patterns - Base review methodology
  • orchestrator-code-review-patterns - Multi-agent coordination
  • qa-review-methodology - Quality assessment frameworks