Code Review Patterns Skill

When to Use This Skill

Use this skill when implementing code review patterns in your codebase.

How to Use This Skill

  1. Review the patterns and examples below
  2. Apply the relevant patterns to your implementation
  3. Follow the best practices outlined in this skill

This skill covers review methodology, constructive feedback patterns, PR best practices, and automated quality gates for effective code review.

Core Capabilities

  1. Structured Review - Systematic review process, checklist-driven, multi-pass approach
  2. Constructive Feedback - Actionable comments, severity levels, improvement suggestions
  3. PR Workflow - Automated checks, review assignment, approval criteria
  4. Quality Gates - Coverage thresholds, complexity limits, security scans
  5. Review Metrics - Time to review, comment quality, approval rate
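Capability 5 (review metrics) is not elaborated elsewhere in this skill. A minimal sketch of how those metrics might be aggregated; the `ReviewEvent` shape and its field names are illustrative assumptions, not any platform's API:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class ReviewEvent:
    """One reviewed PR (hypothetical shape, not a GitHub API type)."""
    opened_at: datetime
    first_review_at: datetime
    comment_count: int
    approved: bool

def review_metrics(events: List[ReviewEvent]) -> Dict[str, float]:
    """Aggregate time-to-first-review, comment volume, and approval rate."""
    if not events:
        return {"avg_hours_to_review": 0.0, "avg_comments": 0.0, "approval_rate": 0.0}
    hours = [(e.first_review_at - e.opened_at).total_seconds() / 3600 for e in events]
    return {
        "avg_hours_to_review": sum(hours) / len(events),
        "avg_comments": sum(e.comment_count for e in events) / len(events),
        "approval_rate": sum(e.approved for e in events) / len(events),
    }
```

In practice these events would be pulled from the code host's PR timeline rather than constructed by hand.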

Structured Review System

# scripts/code_reviewer.py
from typing import List, Dict, Optional
from dataclasses import dataclass, field
from enum import Enum
from pathlib import Path
import ast

class ReviewSeverity(Enum):
    """Severity levels for review comments."""
    BLOCKING = "blocking"  # Must fix before merge
    CRITICAL = "critical"  # Should fix before merge
    MAJOR = "major"        # Important but not blocking
    MINOR = "minor"        # Nice to have
    NITPICK = "nitpick"    # Style/preference

class ReviewCategory(Enum):
    """Categories of review feedback."""
    CORRECTNESS = "correctness"
    PERFORMANCE = "performance"
    SECURITY = "security"
    MAINTAINABILITY = "maintainability"
    STYLE = "style"
    DOCUMENTATION = "documentation"
    TESTING = "testing"
    ARCHITECTURE = "architecture"

@dataclass
class ReviewComment:
    """Single review comment."""
    file: str
    line: int
    severity: ReviewSeverity
    category: ReviewCategory
    message: str
    suggestion: Optional[str] = None
    code_snippet: Optional[str] = None
    references: List[str] = field(default_factory=list)

@dataclass
class ReviewReport:
    """Complete code review report."""
    summary: str
    overall_score: float  # 0-100
    comments: List[ReviewComment]
    quality_metrics: Dict[str, float]
    approval_status: str  # 'approved', 'approved_with_comments', 'changes_requested', 'needs_review'
    checklist: Dict[str, bool]

class CodeReviewer:
    """Systematic code review with multi-dimensional analysis."""

    REVIEW_CHECKLIST = {
        'correctness': [
            'Logic is correct and handles edge cases',
            'No potential null/undefined errors',
            'Proper error handling implemented',
            'No race conditions or concurrency issues'
        ],
        'security': [
            'No SQL injection vulnerabilities',
            'Input validation and sanitization',
            'No hardcoded credentials or secrets',
            'Proper authentication and authorization'
        ],
        'performance': [
            'No unnecessary loops or nested iterations',
            'Database queries are optimized',
            'Proper caching where applicable',
            'No memory leaks or resource exhaustion'
        ],
        'maintainability': [
            'Code is readable and self-documenting',
            'Functions are focused and single-purpose',
            'No code duplication',
            'Complexity is within acceptable limits'
        ],
        'testing': [
            'Unit tests cover new/changed code',
            'Edge cases are tested',
            'Integration tests where needed',
            'Test coverage meets threshold (80%+)'
        ],
        'documentation': [
            'Public APIs are documented',
            'Complex logic has inline comments',
            'README updated if needed',
            'CHANGELOG updated'
        ]
    }

    def __init__(self):
        self.comments: List[ReviewComment] = []

    def review(self, pr_diff: str, pr_metadata: Dict) -> ReviewReport:
        """Perform complete code review."""
        files_changed = self._parse_diff(pr_diff)

        # Multi-pass review
        for file_path, changes in files_changed.items():
            # Pass 1: Correctness
            self._review_correctness(file_path, changes)

            # Pass 2: Security
            self._review_security(file_path, changes)

            # Pass 3: Performance
            self._review_performance(file_path, changes)

            # Pass 4: Maintainability
            self._review_maintainability(file_path, changes)

            # Pass 5: Testing
            self._review_testing(file_path, changes, pr_metadata)

            # Pass 6: Documentation
            self._review_documentation(file_path, changes)

        # Calculate metrics and score
        metrics = self._calculate_metrics()
        score = self._calculate_overall_score()
        checklist = self._evaluate_checklist(files_changed)

        return ReviewReport(
            summary=self._generate_summary(),
            overall_score=score,
            comments=self.comments,
            quality_metrics=metrics,
            approval_status=self._determine_approval_status(),
            checklist=checklist
        )

    def _review_correctness(self, file_path: str, changes: Dict):
        """Review for logical correctness."""
        # Check for common logic errors
        if self._has_potential_null_dereference(changes):
            self.comments.append(ReviewComment(
                file=file_path,
                line=changes['line'],
                severity=ReviewSeverity.CRITICAL,
                category=ReviewCategory.CORRECTNESS,
                message='Potential null/undefined dereference detected',
                suggestion='Add null check before accessing property'
            ))

        # Check error handling
        if self._missing_error_handling(changes):
            self.comments.append(ReviewComment(
                file=file_path,
                line=changes['line'],
                severity=ReviewSeverity.MAJOR,
                category=ReviewCategory.CORRECTNESS,
                message='Missing error handling for async operation',
                suggestion='Wrap in try-catch or add .catch() handler'
            ))

    def _review_security(self, file_path: str, changes: Dict):
        """Review for security issues."""
        # SQL injection check
        if self._has_sql_injection_risk(changes):
            self.comments.append(ReviewComment(
                file=file_path,
                line=changes['line'],
                severity=ReviewSeverity.BLOCKING,
                category=ReviewCategory.SECURITY,
                message='Potential SQL injection vulnerability',
                suggestion='Use parameterized queries or ORM',
                references=['OWASP Top 10: Injection']
            ))

        # Hardcoded secrets
        if self._has_hardcoded_secret(changes):
            self.comments.append(ReviewComment(
                file=file_path,
                line=changes['line'],
                severity=ReviewSeverity.BLOCKING,
                category=ReviewCategory.SECURITY,
                message='Hardcoded credential detected',
                suggestion='Move to environment variable or secret manager'
            ))

    def _review_performance(self, file_path: str, changes: Dict):
        """Review for performance issues."""
        # N+1 query problem
        if self._has_n_plus_one_query(changes):
            self.comments.append(ReviewComment(
                file=file_path,
                line=changes['line'],
                severity=ReviewSeverity.MAJOR,
                category=ReviewCategory.PERFORMANCE,
                message='Potential N+1 query problem',
                suggestion='Use JOIN or prefetch to load related data'
            ))

        # Inefficient algorithm: the helper returns the polynomial degree
        # (e.g. 2 for O(n^2)) so it can be compared numerically
        complexity = self._calculate_time_complexity(changes)
        if complexity > 2:
            self.comments.append(ReviewComment(
                file=file_path,
                line=changes['line'],
                severity=ReviewSeverity.MINOR,
                category=ReviewCategory.PERFORMANCE,
                message=f'High time complexity detected: O(n^{complexity})',
                suggestion='Consider using hash map or more efficient data structure'
            ))

    def _review_maintainability(self, file_path: str, changes: Dict):
        """Review for maintainability."""
        # Function complexity
        if self._is_function_too_complex(changes):
            self.comments.append(ReviewComment(
                file=file_path,
                line=changes['line'],
                severity=ReviewSeverity.MAJOR,
                category=ReviewCategory.MAINTAINABILITY,
                message='Function complexity exceeds threshold (CC > 15)',
                suggestion='Extract methods to reduce complexity'
            ))

        # Code duplication
        if duplicates := self._find_duplicates(changes):
            self.comments.append(ReviewComment(
                file=file_path,
                line=changes['line'],
                severity=ReviewSeverity.MINOR,
                category=ReviewCategory.MAINTAINABILITY,
                message=f'Duplicate code found in {duplicates}',
                suggestion='Extract common code into shared function'
            ))

    def _calculate_overall_score(self) -> float:
        """Calculate overall review score (0-100)."""
        if not self.comments:
            return 100.0

        # Weighted severity deductions
        severity_weights = {
            ReviewSeverity.BLOCKING: 20,
            ReviewSeverity.CRITICAL: 10,
            ReviewSeverity.MAJOR: 5,
            ReviewSeverity.MINOR: 2,
            ReviewSeverity.NITPICK: 0.5
        }

        total_deduction = sum(
            severity_weights[comment.severity]
            for comment in self.comments
        )

        return max(0.0, 100 - total_deduction)

    def _determine_approval_status(self) -> str:
        """Determine if changes should be approved."""
        blocking_issues = [
            c for c in self.comments
            if c.severity == ReviewSeverity.BLOCKING
        ]

        if blocking_issues:
            return 'changes_requested'

        critical_issues = [
            c for c in self.comments
            if c.severity == ReviewSeverity.CRITICAL
        ]

        if len(critical_issues) > 3:
            return 'changes_requested'

        if len(self.comments) == 0:
            return 'approved'

        return 'approved_with_comments'


# Usage
reviewer = CodeReviewer()
report = reviewer.review(pr_diff, pr_metadata)

print(f"Review Score: {report.overall_score}/100")
print(f"Status: {report.approval_status}")
print(f"\nComments ({len(report.comments)}):")
for comment in report.comments:
    print(f"  [{comment.severity.value}] {comment.file}:{comment.line}")
    print(f"  {comment.message}")
    if comment.suggestion:
        print(f"  Suggestion: {comment.suggestion}")

PR Automation Workflow

# .github/workflows/code-review.yml
name: Automated Code Review

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  automated-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Run linters
        run: |
          npm run lint
          python -m flake8 .
          python -m mypy .

      - name: Check code coverage
        run: |
          npm run test:coverage
          # Fail if coverage below 80%
          coverage report --fail-under=80

      - name: Security scan
        uses: snyk/actions/python@master
        with:
          command: test

      - name: Complexity analysis
        run: |
          radon cc . -a -nb --total-average
          # Fail if average complexity > 10
          radon cc . -a -nb | grep "Average complexity: A"

      - name: Check for secrets
        uses: trufflesecurity/trufflehog@main
        with:
          path: ./

      - name: Post review comments
        uses: actions/github-script@v7
        with:
          script: |
            const review = require('./scripts/generate-review.js');
            const comments = await review.analyze(context);

            await github.rest.pulls.createReview({
              ...context.repo,
              pull_number: context.payload.pull_request.number,
              event: comments.blocking > 0 ? 'REQUEST_CHANGES' : 'COMMENT',
              body: comments.summary,
              comments: comments.items
            });

  quality-gates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Enforce quality gates
        run: |
          python scripts/quality_gates.py \
            --max-complexity 15 \
            --min-coverage 80 \
            --max-file-lines 500 \
            --max-function-lines 50
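The workflow above invokes `scripts/quality_gates.py`, which this skill does not show. A minimal sketch of what such a gate script could look like; the flag names mirror the workflow call, while the measured values are stubbed placeholders (a real script would read them from coverage.py and radon output):

```python
import argparse
import sys

def check_gates(coverage: float, avg_complexity: float, max_file_lines: int,
                thresholds: argparse.Namespace) -> list:
    """Return human-readable gate failures (empty list means all gates pass)."""
    failures = []
    if coverage < thresholds.min_coverage:
        failures.append(f"coverage {coverage:.1f}% < {thresholds.min_coverage}%")
    if avg_complexity > thresholds.max_complexity:
        failures.append(f"complexity {avg_complexity:.1f} > {thresholds.max_complexity}")
    if max_file_lines > thresholds.max_file_lines:
        failures.append(f"largest file {max_file_lines} lines > {thresholds.max_file_lines}")
    return failures

def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--max-complexity", type=float, default=15)
    parser.add_argument("--min-coverage", type=float, default=80)
    parser.add_argument("--max-file-lines", type=int, default=500)
    parser.add_argument("--max-function-lines", type=int, default=50)
    args = parser.parse_args()
    # Stubbed measurements; replace with parsed tool output in a real script.
    failures = check_gates(coverage=85.0, avg_complexity=8.2,
                           max_file_lines=420, thresholds=args)
    for failure in failures:
        print(f"GATE FAILED: {failure}", file=sys.stderr)
    return 1 if failures else 0
    # Entry point: sys.exit(main())
```

Returning a non-zero exit code is what makes the CI job (and therefore the merge) fail when a gate is violated.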

Constructive Feedback Templates

// tools/feedback-generator.ts
interface FeedbackTemplate {
  pattern: string;
  template: string;
  severity: string;
  suggestion: string;
}

const FEEDBACK_TEMPLATES: FeedbackTemplate[] = [
  {
    pattern: 'missing_error_handling',
    template: 'This {{operation}} could fail, but there\'s no error handling. Consider what happens if {{failure_scenario}}.',
    severity: 'major',
    suggestion: 'Add try-catch block or return Result<T, E> type'
  },
  {
    pattern: 'complex_condition',
    template: 'This condition is complex and hard to understand. Breaking it into well-named variables would improve readability.',
    severity: 'minor',
    suggestion: 'Extract into variables: const isValid = ...; const hasPermission = ...; if (isValid && hasPermission) {...}'
  },
  {
    pattern: 'magic_number',
    template: 'The number {{value}} appears to be a magic number. What does it represent?',
    severity: 'nitpick',
    suggestion: 'Define as named constant: const MAX_RETRIES = {{value}};'
  },
  {
    pattern: 'unclear_naming',
    template: 'The name "{{name}}" doesn\'t clearly communicate intent. Consider a more descriptive name.',
    severity: 'minor',
    suggestion: 'Rename to something like "{{suggested_name}}" to better describe {{purpose}}'
  }
];

class FeedbackGenerator {
  generateFeedback(issue: any): string {
    const template = FEEDBACK_TEMPLATES.find(t => t.pattern === issue.type);

    if (!template) {
      return issue.message;
    }

    // Fill in template variables (replaceAll handles repeated placeholders)
    let feedback = template.template;
    for (const [key, value] of Object.entries(issue.context)) {
      feedback = feedback.replaceAll(`{{${key}}}`, String(value));
    }

    // Add suggestion
    feedback += `\n\n**Suggestion:** ${template.suggestion}`;

    return feedback;
  }
}

Usage Examples

Perform Complete PR Review

Apply code-review-patterns skill to review PR #123 with multi-dimensional analysis and quality gates

Generate Review Checklist

Apply code-review-patterns skill to create review checklist for backend API changes

Automated Quality Gates

Apply code-review-patterns skill to enforce coverage 80%, complexity <15, and security scan passing

Post Constructive Feedback

Apply code-review-patterns skill to generate actionable review comments with severity levels and suggestions

Success Output

When this skill is successfully applied, output:

✅ SKILL COMPLETE: code-review-patterns

Completed:
- [x] Multi-pass review executed (correctness, security, performance, maintainability, testing, documentation)
- [x] Review score calculated: 85/100
- [x] 12 comments generated with severity levels and suggestions
- [x] Approval status determined: approved_with_comments

Outputs:
- ReviewReport with overall score and comments
- Actionable feedback with severity levels (BLOCKING/CRITICAL/MAJOR/MINOR/NITPICK)
- Quality gates validation results (coverage, complexity, security scan)
- GitHub PR comments posted

Completion Checklist

Before marking this skill as complete, verify:

  • CodeReviewer class implemented with all 6 review passes
  • Review score calculated (0-100 scale)
  • Comments include severity, category, message, and suggestion
  • Approval status determined (approved/approved_with_comments/changes_requested)
  • Quality gates configured (coverage >80%, complexity <15)
  • Automated checks integrated (linting, security scan, complexity)
  • Constructive feedback templates used
  • Review posted to PR or documented

Failure Indicators

This skill has FAILED if:

  • ❌ Review misses critical security vulnerabilities
  • ❌ No actionable suggestions provided for issues
  • ❌ Quality gates not enforced (tests pass despite low coverage)
  • ❌ Review comments lack context or are unconstructive
  • ❌ Approval given despite blocking issues
  • ❌ Automated checks not integrated with review workflow
  • ❌ Review process takes >2 hours for typical PR

When NOT to Use

Do NOT use this skill when:

  • Trivial changes (typo fixes, formatting) - simple approval sufficient
  • Automated dependency updates - rely on automated tests instead
  • Documentation-only changes with no logic - lighter review process appropriate
  • Prototype/experimental code not intended for production - defer comprehensive review
  • Time-critical hotfixes - expedited review process needed
  • Single-developer personal projects - peer review not applicable

Use alternatives instead:

  • Trivial changes → Quick approval checklist
  • Dependency updates → Automated security scan + tests
  • Documentation → Spell check + readability review
  • Hotfixes → Expedited security-focused review
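The routing above can be expressed as a small dispatch table. A sketch for illustration; `select_review_process` and the change-type labels are hypothetical names, not part of the skill's API:

```python
# Map change types (this document's own categories) to lighter-weight processes.
REVIEW_PROCESS = {
    "trivial": "quick approval checklist",
    "dependency_update": "automated security scan + tests",
    "documentation": "spell check + readability review",
    "hotfix": "expedited security-focused review",
}

def select_review_process(change_type: str) -> str:
    """Route a change to the appropriate review process, defaulting to full review."""
    return REVIEW_PROCESS.get(change_type, "full code-review-patterns skill")
```

Anything not matching a lighter category falls through to the full multi-pass review.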

Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
| --- | --- | --- |
| Rubber-stamp approval | Misses critical issues | Follow multi-pass review checklist |
| Nitpicking only | Misses architectural issues | Review correctness and security first |
| No actionable feedback | Developer doesn't know how to fix | Always provide suggestion with comment |
| Blocking on style | Slows velocity unnecessarily | Use NITPICK severity for style issues |
| Manual quality gates | Inconsistent enforcement | Automate coverage, complexity, security checks |
| Review after merge | Issues in production | Enforce review before merge |
| Single reviewer | Bus factor and blind spots | Require 2+ reviewers for critical code |

Principles

This skill embodies:

  • #2 First Principles Thinking - Understand correctness before style
  • #4 Separation of Concerns - Multi-pass review separates dimensions (security, performance, etc.)
  • #5 Eliminate Ambiguity - Severity levels and suggestions clarify expectations
  • #6 Clear, Understandable, Explainable - Constructive feedback with examples
  • #7 Automation - Automate quality gates to reduce manual burden
  • #8 No Assumptions - Verify tests exist and pass for changed code

Full Standard: CODITECT-STANDARD-AUTOMATION.md

Integration Points

  • codebase-analysis-patterns - Architecture and metrics analysis
  • pattern-finding - Duplication and anti-pattern detection
  • comprehensive-review-patterns - Multi-dimensional review
  • orchestrator-code-review-patterns - ADR compliance and cross-cutting concerns