CODITECT STANDARD: Skills

Version: 2.0.0
Status: Production Standard
Last Updated: January 23, 2026
Authority: Based on Anthropic Agent Skills Official Documentation (January 2026)
Scope: All skill definitions in .coditect/skills/


1. Executive Summary

This document defines the authoritative standard for CODITECT skill directories and SKILL.md files. All skills MUST comply with this standard to ensure compatibility with Anthropic's Claude Agent SDK progressive disclosure system.

Compliance: Mandatory for all skills in coditect-core repository.

Critical Requirement: Skills MUST use YAML frontmatter (not Markdown headers) per Anthropic specification.

1.1 What Makes a Skill (Anthropic Definition)

A skill is NOT just documentation or a reference guide. According to Anthropic, a skill is:

An opinionated best practice workflow for a recurring activity with specific examples, self-validation steps, and actionable guidance.

Essential Characteristics:

| Characteristic | Description | Example |
|---|---|---|
| Opinionated | Prescribes THE way to do something, not options | "Use pdfplumber for text extraction" not "You can use pypdf, pdfplumber, or PyMuPDF" |
| Best Practice | Encodes expertise and proven patterns | Tested workflows that avoid common mistakes |
| Workflow | Step-by-step process with clear checkpoints | Phase 1 → Validate → Phase 2 → Verify → Output |
| Recurring Activity | Solves a problem encountered repeatedly | PDF extraction, code review, deployment |
| Specific Examples | Concrete input/output pairs | Real code, actual file formats, expected results |
| Self-Validating | Built-in verification steps | Checklists, validation scripts, quality gates |

Skills are NOT:

  • ❌ Reference documentation (use REFERENCE.md instead)
  • ❌ API listings (use separate files)
  • ❌ Options menus with multiple approaches
  • ❌ Vague guidance ("follow best practices")
  • ❌ One-time procedures (skills are for recurring work)

2. Directory Structure

2.1 Skill Directory Naming

Pattern: .coditect/skills/{skill-name}/

Rules:

  • Lowercase only
  • Hyphen-separated words (kebab-case)
  • No spaces, underscores, or special characters
  • Maximum 64 characters
  • Must match name field in SKILL.md YAML frontmatter
  • Cannot contain reserved words: "anthropic", "claude"
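The rules above are mechanical enough to sketch as a quick check. This is an illustrative helper, not the official CODITECT validator (`validate-skill.py` performs the authoritative checks):

```python
import re

RESERVED_WORDS = ("anthropic", "claude")

def is_valid_skill_name(name: str) -> bool:
    """Check a skill directory name against the naming rules above."""
    if not name or len(name) > 64:
        return False
    # Lowercase letters, digits, and single hyphens between words (kebab-case).
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        return False
    # Reserved words may not appear anywhere in the name.
    return not any(word in name for word in RESERVED_WORDS)
```

Against the invalid examples below, BiographicalResearch, biographical_research, and anthropic-pdf-tool all fail this check.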

Naming Convention (Anthropic Recommended):

Use gerund form (verb + -ing) for Skill names, as this clearly describes the activity or capability the Skill provides.

Preferred (Gerund Form):

  • processing-pdfs
  • analyzing-spreadsheets
  • managing-databases
  • testing-code
  • writing-documentation

Acceptable Alternatives:

  • Noun phrases: pdf-processing, spreadsheet-analysis
  • Action-oriented: process-pdfs, analyze-spreadsheets

Avoid:

  • Vague names: helper, utils, tools
  • Overly generic: documents, data, files
  • Reserved words: anthropic-helper, claude-tools
  • Inconsistent patterns within your skill collection

Invalid Examples:

  • .coditect/skills/BiographicalResearch/ (uppercase)
  • .coditect/skills/biographical_research/ (underscore)
  • .coditect/skills/biographical research/ (space)
  • .coditect/skills/anthropic-pdf-tool/ (reserved word)

2.2 Required Files

Minimal Structure:

skills/{skill-name}/
├── SKILL.md # REQUIRED - Main skill definition
└── README.md # OPTIONAL - Additional documentation

Progressive Disclosure Structure (Recommended):

skills/{skill-name}/
├── SKILL.md        # REQUIRED - Level 2: Instructions
├── FORMS.md        # OPTIONAL - Level 3: Specialized guidance
├── REFERENCE.md    # OPTIONAL - Level 3: Detailed reference
├── EXAMPLES.md     # OPTIONAL - Level 3: Extended examples
└── scripts/        # OPTIONAL - Level 3: Executable utilities
    ├── helper.py
    └── processor.sh

3. SKILL.md Format (REQUIRED)

3.1 YAML Frontmatter (MANDATORY)

Every SKILL.md MUST start with YAML frontmatter:

---
name: skill-name-lowercase-with-hyphens
description: Brief description of what this Skill does and when to use it
---

Required Fields:

| Field | Type | Required | Max Length | Validation |
|---|---|---|---|---|
| name | string | YES | 64 chars | Lowercase, hyphens only, no "anthropic" or "claude" |
| description | string | YES | 1024 chars | Non-empty, no XML tags, third person |

Claude Code Extension Fields (Optional):

| Field | Purpose | Example |
|---|---|---|
| context: fork | Run skill in isolation (subagent) | context: fork |
| disable-model-invocation: true | Only user can invoke (not Claude) | For /commit, /deploy |
| user-invocable: false | Only Claude can invoke (background knowledge) | For reference skills |
| allowed-tools | Restrict available tools | allowed-tools: Bash(gh:*) |

CODITECT Metadata Fields (Recommended):

All skills SHOULD include these metadata fields for version observability and lifecycle tracking (H.24):

| Field | Type | Required | Format | Description |
|---|---|---|---|---|
| version | string | Recommended | SemVer (1.0.0) | Component version |
| component_type | string | Recommended | Fixed: skill | Component type identifier |
| created | string | Recommended | 'YYYY-MM-DD' | Creation date |
| updated | string | Recommended | 'YYYY-MM-DD' | Last content modification date |
| last_reviewed | string | Recommended | 'YYYY-MM-DD' | Last human/AI review date |
| track | string | Recommended | Single letter | CODITECT track assignment (e.g., H) |
| status | string | Recommended | active / deprecated | Lifecycle status |
| audience | string | Recommended | customer / contributor | Target audience |
| tokens | string | Recommended | ~1500 | Approximate token cost |
| tags | array | Recommended | [skill, automation] | Category tags |
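
Putting the required Anthropic fields and the recommended CODITECT metadata together, a complete frontmatter might look like this (all values are illustrative):

```yaml
---
name: processing-pdfs
description: Extracts text and tables from PDF files. Use when working with PDFs or document extraction.
version: 1.0.0
component_type: skill
created: '2026-01-23'
updated: '2026-01-23'
last_reviewed: '2026-01-23'
track: H
status: active
audience: contributor
tokens: ~1500
tags: [skill, automation]
---
```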

last_reviewed vs updated:

  • updated records when the content was last modified (auto-set on edit)
  • last_reviewed records when the content was last verified for correctness (set deliberately)
  • A component where updated > last_reviewed has been changed but not yet verified — use /version --outdated to find these
  • A component where last_reviewed is older than 90 days is stale — use /version --stale to find these
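
The `/version` queries are project tooling, but the underlying date logic can be sketched as follows (a hypothetical helper, assuming ISO 'YYYY-MM-DD' strings):

```python
from datetime import date

STALE_AFTER_DAYS = 90  # review window from this standard

def review_status(updated: str, last_reviewed: str, today: str) -> str:
    """Classify a component from its 'YYYY-MM-DD' metadata dates."""
    upd = date.fromisoformat(updated)
    rev = date.fromisoformat(last_reviewed)
    if upd > rev:
        return "outdated"  # changed but not yet re-verified
    if (date.fromisoformat(today) - rev).days > STALE_AFTER_DAYS:
        return "stale"     # last review older than 90 days
    return "current"
```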

CRITICAL VALIDATION RULES:

  • name field MUST match directory name
  • name MUST be lowercase letters, numbers, and hyphens only
  • name CANNOT contain "anthropic" or "claude"
  • description MUST be non-empty
  • description CANNOT contain XML tags (<, >, &)
  • description MUST be under 1024 characters
  • description MUST be written in THIRD PERSON (critical for discovery)
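
A sketch of the mechanically checkable description rules (the third-person requirement needs human or model review, so it is not covered; `validate_description` is an illustrative helper, not the official validator):

```python
def validate_description(description: str) -> list[str]:
    """Return violations of the description rules that can be checked mechanically."""
    errors = []
    if not description.strip():
        errors.append("description must be non-empty")
    if len(description) > 1024:
        errors.append("description must be under 1024 characters")
    if any(ch in description for ch in "<>&"):
        errors.append("description cannot contain XML tags or entities")
    return errors
```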

Description Writing Guidelines:

⚠️ Always write in third person. The description is injected into the system prompt, and inconsistent point-of-view can cause discovery problems.

  • Good: "Processes Excel files and generates reports"
  • Avoid: "I can help you process Excel files"
  • Avoid: "You can use this to process Excel files"

Be specific and include key terms. Include both what the Skill does and specific triggers/contexts for when to use it. Claude uses this to choose the right Skill from potentially 100+ available Skills.

Example (CORRECT):

---
name: processing-pdfs
description: Extracts text and tables from PDF files, fills forms, merges documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
---

Example (CORRECT - Claude Code Extensions):

---
name: deploying-to-production
description: Manages production deployments with safety checks and rollback capabilities. Use when deploying services or managing release pipelines.
disable-model-invocation: true
allowed-tools: Bash(kubectl:*), Bash(gcloud:*)
---

Example (INCORRECT - DO NOT USE):

# Biographical Research Skill

**Skill Name:** biographical-research

This format violates Anthropic standards and will NOT work with progressive disclosure.

3.2 Markdown Body Structure

After YAML frontmatter, structure content as:

---
name: skill-name
description: Brief description
---

# Skill Name

## Purpose

[1-2 paragraphs explaining why this skill exists and what problems it solves]

## Instructions

[Step-by-step guidance for Claude to follow when using this skill]

### Step 1: [Action]
[Detailed instructions for this step]

### Step 2: [Action]
[Detailed instructions]

## Examples

### Example 1: [Scenario]

[Concrete example with input/output]


### Example 2: [Scenario]

[Another example]


## Integration

**Related Components:**
- **Agents:** [agent-name] - [relationship]
- **Commands:** /command-name - [relationship]
- **Skills:** [skill-name] - [relationship]

## References

[Links to additional resources loaded on-demand]
- See [FORMS.md](FORMS.md) for specialized templates
- See [REFERENCE.md](REFERENCE.md) for detailed API documentation
- Execute `scripts/helper.py` for automated processing

4. Progressive Disclosure Implementation

Skills implement Anthropic's 3-level progressive disclosure:

Level 1: Metadata (Always Loaded)

  • What: YAML frontmatter only (name and description)
  • When: Session startup
  • Token Cost: ~100 tokens per skill
  • Purpose: Agent discovery - Claude knows when to trigger the skill without loading full content

---
name: pdf-extraction
description: Extract text, tables, and images from PDF files with structure preservation
---

Optimization: Keep description concise but informative. Claude decides whether to load Level 2 based solely on this.
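
This is why Level 1 is cheap: only the frontmatter needs to be parsed at startup, never the body. A naive splitter for illustration only (real tooling should use a proper YAML parser):

```python
def read_level1_metadata(skill_md: str) -> dict:
    """Extract only the frontmatter fields from a SKILL.md string,
    leaving the Level 2 body unread."""
    lines = skill_md.splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError("SKILL.md must start with YAML frontmatter")
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            return meta  # stop at the closing delimiter; body is never read
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    raise ValueError("unterminated YAML frontmatter")
```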

Level 2: Instructions (Triggered Loading)

  • What: Full SKILL.md body (everything after YAML frontmatter)
  • When: Claude determines the skill is relevant to the task
  • Token Cost: Under 5000 tokens (Anthropic recommendation)
  • Line Limit: Under 500 lines for optimal performance (Anthropic Jan 2026)
  • Purpose: Procedural knowledge - how to use the skill

Best Practices:

  • Front-load critical instructions
  • Use clear step-by-step format
  • Include 2-4 concrete examples
  • Keep under 500 lines (approximately 3000-4000 words)
  • Keep under 5000 tokens
  • Link to Level 3 resources for details
  • References must be one level deep from SKILL.md (avoid nested references)

Level 3: Resources (On-Demand Loading)

  • What: Additional files (FORMS.md, REFERENCE.md, scripts/)
  • When: Claude reads specific files or executes scripts
  • Token Cost: Effectively unlimited (files not in context unless read)
  • Purpose: Detailed reference, templates, executable code

File Types:

  • FORMS.md - Templates, schemas, fill-in forms
  • REFERENCE.md - Complete API docs, comprehensive guides
  • EXAMPLES.md - Extended examples with full context
  • scripts/ - Python/bash utilities that execute via Bash tool

Critical Insight: Scripts run via Bash tool - only OUTPUT enters context, not code itself. This enables bundling unlimited code without token cost.

4.4 Reference Depth Rules (Anthropic Jan 2026)

Keep references one level deep from SKILL.md. Claude may partially read files when they're referenced from other referenced files, resulting in incomplete information.

Bad Example (Too Deep):

# SKILL.md
See [advanced.md](advanced.md)...

# advanced.md
See [details.md](details.md)...

# details.md
Here's the actual information...

Good Example (One Level Deep):

# SKILL.md

**Basic usage**: [instructions in SKILL.md]
**Advanced features**: See [advanced.md](advanced.md)
**API reference**: See [reference.md](reference.md)
**Examples**: See [examples.md](examples.md)
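
The one-level rule can be spot-checked by scanning markdown links: any file referenced from SKILL.md that itself links onward to more markdown puts content two hops away. A naive regex-based sketch (a hypothetical helper, not project tooling):

```python
import re
from pathlib import Path

MD_LINK = re.compile(r"\[[^\]]*\]\(([^)#]+\.md)\)")

def deep_references(skill_dir: str) -> list[str]:
    """List Level 3 files that link onward to further markdown files."""
    root = Path(skill_dir)
    offenders = []
    for target in MD_LINK.findall((root / "SKILL.md").read_text()):
        referenced = root / target
        if referenced.exists() and MD_LINK.findall(referenced.read_text()):
            offenders.append(target)
    return offenders
```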

4.5 Table of Contents for Long Files

For reference files longer than 100 lines, include a table of contents at the top. This ensures Claude can see the full scope even when previewing with partial reads.

# API Reference

## Contents
- Authentication and setup
- Core methods (create, read, update, delete)
- Advanced features (batch operations, webhooks)
- Error handling patterns
- Code examples

## Authentication and setup
...

5. Quality Criteria

5.1 Compliance Checklist

YAML Frontmatter (Weight: 40%)

  • YAML frontmatter present at file start
  • name field exists and matches directory name
  • name is lowercase with hyphens only
  • name under 64 characters
  • description field exists and non-empty
  • description under 1024 characters
  • No XML tags in description
  • No forbidden words ("anthropic", "claude") in name

Progressive Disclosure Design (Weight: 25%)

  • Level 1 (metadata) optimized for discovery
  • Level 2 (SKILL.md body) under 500 lines (Anthropic Jan 2026)
  • Level 2 (SKILL.md body) under 5000 tokens
  • Level 3 resources in separate files (if applicable)
  • Clear references to Level 3 resources
  • References are one level deep from SKILL.md
  • Long files (>100 lines) have table of contents

Instruction Quality (Weight: 25%)

  • Purpose section explains when to use skill
  • Instructions organized in clear steps
  • At least 2 concrete examples provided
  • Integration section lists related components

File Structure (Weight: 10%)

  • Directory name matches skill name
  • SKILL.md file present in directory
  • Additional resources properly organized
  • Scripts have execution permissions (if present)

5.2 Grading Scale

  • Grade A (90-100%): Exemplary progressive disclosure, production-ready
  • Grade B (80-89%): Meets all requirements, minor optimizations possible
  • Grade C (70-79%): Functional but needs improvement
  • Grade D (60-69%): Major issues, significant rework needed
  • Grade F (<60%): Does not meet Anthropic standards, complete rework required

5.3 Automated Validation

python3 .coditect/scripts/validate-skill.py skills/{skill-name}/

Validation Checks:

  • YAML frontmatter parsing
  • Required fields present
  • Field value validation
  • Token count estimation
  • File structure verification
  • Naming convention compliance

6. Templates

6.1 Minimal Skill Template (Anthropic Jan 2026)

---
name: processing-examples
description: Processes example data files and generates structured output. Use when working with example files, data transformation, or when the user mentions processing or converting examples.
---

# Processing Examples

## Quick Start

```bash
# Basic usage
python scripts/process.py input.json output.csv
```

## Instructions

### Step 1: Prepare Input

Ensure input file exists and is valid JSON/CSV format.

### Step 2: Execute Processing

```bash
python scripts/process.py input.json --format csv
```

### Step 3: Verify Output

Check output file contains expected data:

- All records present
- Formatting correct
- No data loss

## Examples

### Example 1: JSON to CSV

**Input:**

```json
{"name": "test", "value": 123}
```

**Command:**

```bash
python scripts/process.py data.json output.csv
```

**Output:**

```
name,value
test,123
```

## Advanced Usage

For complex scenarios, see:

## Integration

**Related Skills:**

- validating-data - Pre-validation before processing
- transforming-formats - Additional format conversions

**Key Improvements (Anthropic Jan 2026):**
- Gerund naming (`processing-examples` not `example-processor`)
- Third-person description
- Specific triggers ("when the user mentions processing")
- Concise instructions (assumes Claude knows basics)
- Links to Level 3 resources for details

### 6.2 Full Skill Template (Opinionated Best Practice Workflow)

This template demonstrates all required skill characteristics: **opinionated workflow, best practices, specific examples, self-validation, and actionable suggestions**.

```markdown
---
name: reviewing-code
description: Performs systematic code review with quality gates and actionable feedback. Use when reviewing pull requests, examining code changes, or conducting code audits.
---

# Reviewing Code

> **This skill encodes an opinionated best practice workflow for code review.**

## When to Use

✅ **Use this skill when:**
- Reviewing pull requests before merge
- Conducting code audits
- Examining code changes for quality issues
- Providing feedback on teammate's code

❌ **Do NOT use when:**
- Writing new code (use `writing-code` skill)
- Debugging runtime errors (use `debugging-errors` skill)
- Reviewing documentation only (use `reviewing-docs` skill)

## Best Practices (Opinionated)

**This skill prescribes THE way to review code:**

1. **Security first** - Always check for security issues before style
2. **One pass per concern** - Don't mix security, logic, and style feedback
3. **Be specific** - Point to exact lines with actionable suggestions
4. **Praise good patterns** - Reinforce what works, not just what's wrong

## Workflow

### Phase 1: Security Review

**Objective:** Identify security vulnerabilities before anything else.

**Steps:**
1. **Check for secrets:** Scan for API keys, passwords, tokens

   ```bash
   grep -rn "password\|secret\|api_key\|token" --include="*.py"
   ```

2. **Check for injection:** Look for unsanitized user input
   - SQL queries with string concatenation
   - Shell commands with user input
   - HTML rendering without escaping

3. **Check for auth issues:** Verify authentication/authorization
   - Are endpoints protected?
   - Is authorization checked at each layer?

**Quality Gate:**

```
Security Review:
- [ ] No hardcoded secrets
- [ ] No injection vulnerabilities
- [ ] Auth properly implemented
```

### Phase 2: Logic Review

**Objective:** Verify correctness of business logic.

**Steps:**

1. **Trace data flow:** Follow data from input to output
2. **Check edge cases:** Empty inputs, nulls, boundaries
3. **Verify error handling:** Are failures handled gracefully?

**Quality Gate:**

```
Logic Review:
- [ ] Data flow is correct
- [ ] Edge cases handled
- [ ] Errors fail gracefully
```

### Phase 3: Style & Maintainability

**Objective:** Ensure code is readable and maintainable.

**Steps:**

1. **Naming:** Are names descriptive and consistent?
2. **Structure:** Is code well-organized with single responsibility?
3. **Documentation:** Are complex parts documented?

**Quality Gate:**

```
Style Review:
- [ ] Naming is clear
- [ ] Structure is logical
- [ ] Complex code is documented
```

## Specific Examples

### Example 1: Reviewing a Database Query Function

**Input (Code to Review):**

```python
def get_user(user_id):
    query = f"SELECT * FROM users WHERE id = {user_id}"
    return db.execute(query)
```

**Review Process:**

1. **Phase 1 (Security):** ❌ SQL injection vulnerability found
2. **Phase 2 (Logic):** Cannot assess until security fixed
3. **Phase 3 (Style):** Deferred

**Output (Review Feedback):**

## Security Issue (BLOCKING)

**Line 2:** SQL injection vulnerability

**Problem:** User input directly interpolated into SQL query.

**Fix:**

```python
def get_user(user_id):
    query = "SELECT * FROM users WHERE id = %s"
    return db.execute(query, (user_id,))
```

**Severity:** Critical - Must fix before merge.


### Example 2: Reviewing Well-Written Code

**Input:**
```python
def calculate_discount(price: float, user: User) -> float:
    """Calculate discount based on user tier."""
    if price <= 0:
        raise ValueError("Price must be positive")

    discount_rates = {
        UserTier.BRONZE: 0.05,
        UserTier.SILVER: 0.10,
        UserTier.GOLD: 0.15,
    }

    rate = discount_rates.get(user.tier, 0.0)
    return price * (1 - rate)
```

**Output:**

## Review: APPROVED ✅

**Strengths:**
- Type hints improve readability
- Input validation with clear error
- Dictionary lookup is clean and extensible
- Default value handles unknown tiers

**Suggestion (optional):**
Consider logging when unknown tier is encountered for monitoring.

## Self-Validation Checklist

Before completing any code review, verify:

```
Code Review Completion:
- [ ] Security pass completed
- [ ] Logic pass completed
- [ ] Style pass completed
- [ ] All blocking issues documented
- [ ] Actionable suggestions provided
- [ ] Positive feedback included where appropriate
```

## Feedback Loops

After providing review feedback:

1. Wait for author response
2. Re-review changed files only
3. Verify blocking issues resolved
4. Approve or request additional changes

## Suggestions for Improvement

When encountering patterns not covered:

1. Document the new pattern encountered
2. Add to this skill's examples if recurring
3. Consider creating a specialized skill for domain-specific reviews

## Integration

**Commands:** /review-pr [PR-number]
**Agents:** code-reviewer, security-specialist
**Related Skills:** writing-code, debugging-errors, testing-code

## References


**Template Characteristics:**
- ✅ **Opinionated:** Prescribes specific order (security → logic → style)
- ✅ **Best Practice:** Encodes expertise (one pass per concern, be specific)
- ✅ **Workflow:** Clear phases with quality gates
- ✅ **Specific Examples:** Real code with actual review output
- ✅ **Self-Validating:** Checklists and quality gates at each phase
- ✅ **Suggestions:** Guidance for handling novel situations

---

## 7. Level 3 Resource Templates

### 7.1 FORMS.md Template

```markdown
# Forms and Templates for [Skill Name]

## Template 1: [Template Name]

**Purpose:** [When to use this template]

**Format:**

[Template structure with placeholders]


**Example:**

[Filled-in example]


## Template 2: [Template Name]

[Additional templates...]

7.2 REFERENCE.md Template

# Reference Documentation for [Skill Name]

## API Specification

### Function 1: [Name]

**Signature:**

function_name(param1, param2, ...) -> return_type


**Parameters:**
- `param1` (type): Description
- `param2` (type): Description

**Returns:**
[Return value description]

**Example:**

[Usage example]


## Complete Index

[Comprehensive reference material...]

7.3 Scripts Directory Structure

scripts/
├── README.md        # Script documentation
├── processor.py     # Main processing script
├── validator.sh     # Validation script
└── utils/           # Utility modules
    ├── __init__.py
    └── helpers.py

Script Requirements:

  • Executable permissions (chmod +x)
  • Shebang line (#!/usr/bin/env python3 or #!/bin/bash)
  • Error handling and exit codes
  • Documentation in comments
  • Safe input validation
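
A minimal skeleton that satisfies these requirements might look like the following (file and message names are illustrative):

```python
#!/usr/bin/env python3
"""processor.py - minimal Level 3 script skeleton."""
import sys

def main(argv: list[str]) -> int:
    # Validate input before doing any work.
    if len(argv) != 2:
        print("usage: processor.py INPUT_FILE", file=sys.stderr)
        return 2
    try:
        with open(argv[1], encoding="utf-8") as handle:
            data = handle.read()
    except OSError as exc:
        # Handle errors explicitly instead of punting them to Claude.
        print(f"error: cannot read {argv[1]}: {exc}", file=sys.stderr)
        return 1
    # Only this printed output enters Claude's context.
    print(f"read {len(data)} characters")
    return 0

# In the real script: if __name__ == "__main__": sys.exit(main(sys.argv))
```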

8. Migration Guide

8.1 Converting Markdown Headers to YAML

BEFORE (Non-Compliant):

# Biographical Research Skill

**Skill Name:** biographical-research
**Category:** Research & Intelligence
**Status:** Production

AFTER (Compliant):

---
name: biographical-research
description: Systematic biographical research methodology for investigating individuals using structured web search with validation requirements
---

# Biographical Research

[Rest of content...]

8.2 Migration Script

python3 .coditect/scripts/migrate-skill-to-yaml.py skills/{skill-name}/SKILL.md

What it does:

  • Extracts skill metadata from Markdown headers
  • Generates YAML frontmatter
  • Moves content after frontmatter
  • Creates backup (.bak file)
  • Validates output

8.3 Token Optimization

If SKILL.md body exceeds 5000 tokens:

Step 1: Measure current tokens

python3 scripts/count-tokens.py skills/{skill-name}/SKILL.md
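
count-tokens.py is project tooling; if it is unavailable, a common rough heuristic is about four characters per token for English prose. A sketch under that assumption (the real count depends on the model tokenizer):

```python
CHARS_PER_TOKEN = 4  # rough heuristic for English prose

def estimate_body_tokens(skill_md: str) -> int:
    """Approximate the Level 2 token cost of a SKILL.md string.

    The frontmatter is skipped because it is Level 1 and budgeted separately.
    """
    body = skill_md
    if body.startswith("---"):
        parts = body.split("---", 2)
        if len(parts) == 3:
            body = parts[2]
    return len(body) // CHARS_PER_TOKEN
```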

Step 2: Extract detailed content to Level 3

  • Move extensive examples → EXAMPLES.md
  • Move API docs → REFERENCE.md
  • Move templates → FORMS.md

Step 3: Update SKILL.md references

For detailed examples, see [EXAMPLES.md](EXAMPLES.md)
For complete API reference, see [REFERENCE.md](REFERENCE.md)

Step 4: Re-validate

python3 scripts/validate-skill.py skills/{skill-name}/

9. Common Violations and Fixes

Violation 1: Missing YAML Frontmatter

Problem:

# My Skill

This is my skill.

Fix:

---
name: my-skill
description: Brief description of my skill purpose and capabilities
---

# My Skill

This is my skill.

Violation 2: Invalid Name Format

Problem:

---
name: My_Skill
description: Description
---

Fix:

---
name: my-skill
description: Description
---

Violation 3: Description Too Long

Problem:

description: [1500 character description that exceeds limit...]

Fix:

description: Brief description (under 1024 chars). For details see SKILL.md body.

Violation 4: Markdown Headers Instead of YAML

Problem:

**Name:** my-skill
**Description:** My description

Fix: Use YAML frontmatter (see Violation 1)


10. Evaluation-Driven Development (Anthropic Jan 2026)

10.1 Build Evaluations First

Create evaluations BEFORE writing extensive documentation. This ensures your Skill solves real problems rather than documenting imagined ones.

Evaluation-Driven Workflow:

  1. Identify gaps: Run Claude on representative tasks without a Skill. Document specific failures or missing context
  2. Create evaluations: Build three scenarios that test these gaps
  3. Establish baseline: Measure Claude's performance without the Skill
  4. Write minimal instructions: Create just enough content to address the gaps and pass evaluations
  5. Iterate: Execute evaluations, compare against baseline, and refine

10.2 Evaluation Structure

{
  "skills": ["processing-pdfs"],
  "query": "Extract all text from this PDF file and save it to output.txt",
  "files": ["test-files/document.pdf"],
  "expected_behavior": [
    "Successfully reads the PDF file using an appropriate library",
    "Extracts text content from all pages without missing any",
    "Saves extracted text to output.txt in a clear, readable format"
  ]
}
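
Records with this shape can be sanity-checked before a suite is run. A sketch (the JSON shape shown here is illustrative, so adjust the required keys to your harness):

```python
REQUIRED_KEYS = {"skills", "query", "expected_behavior"}

def check_evaluation(evaluation: dict) -> list[str]:
    """Report structural problems with one evaluation record.

    'files' is treated as optional since not every task needs fixtures.
    """
    problems = [f"missing key: {key}"
                for key in sorted(REQUIRED_KEYS - evaluation.keys())]
    if not evaluation.get("expected_behavior"):
        problems.append("expected_behavior must list at least one observable outcome")
    return problems
```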

10.3 Test with All Models

Test your Skill with all models you plan to use:

| Model | Testing Focus |
|---|---|
| Claude Haiku | Does the Skill provide enough guidance? |
| Claude Sonnet | Is the Skill clear and efficient? |
| Claude Opus | Does the Skill avoid over-explaining? |

What works for Opus might need more detail for Haiku.

10.4 Iterative Development with Claude

Work with one Claude instance ("Claude A") to create a Skill used by others ("Claude B"):

  1. Complete a task with Claude A using normal prompting
  2. Identify reusable patterns and context you repeatedly provided
  3. Ask Claude A to create a Skill capturing the pattern
  4. Review for conciseness (remove unnecessary explanations)
  5. Test on similar tasks with Claude B (fresh instance)
  6. Return to Claude A with observations from Claude B's behavior
  7. Iterate based on real agent behavior

11. References

11.1 Anthropic Official Documentation

11.2 CODITECT Standards

  • CODITECT-STANDARD-AGENTS.md - Agents format standard
  • CODITECT-STANDARD-COMMANDS.md - Commands format standard
  • CODITECT-STANDARD-HOOKS.md - Hooks format standard

12. Anti-Patterns to Avoid (Anthropic Jan 2026)

| Anti-Pattern | Problem | Solution |
|---|---|---|
| Windows-style paths | Breaks on Unix systems | Always use forward slashes: scripts/helper.py |
| Too many options | Confuses Claude with choices | Provide a default with an escape hatch for edge cases |
| First/second person descriptions | Breaks skill discovery | Always write description in third person |
| Nested file references | Claude partially reads, misses info | Keep references one level deep from SKILL.md |
| No table of contents | Claude can't see scope of long files | Add TOC for files >100 lines |
| Voodoo constants | Unexplained magic numbers | Document and justify all configuration values |
| Punting errors to Claude | Script fails, Claude improvises | Handle errors explicitly in scripts |
| Time-sensitive information | Content becomes outdated | Use "old patterns" section for deprecated info |
| Inconsistent terminology | Confuses Claude and users | Pick one term and use throughout |
| Over 500 lines in SKILL.md | Poor performance, bloated context | Split content into Level 3 resources |

13. Version History

| Version | Date | Changes | Author |
|---|---|---|---|
| 2.0.0 | 2026-01-23 | Major update with Anthropic Jan 2026 best practices: gerund naming, third-person descriptions, 500-line limit, Claude Code extensions, evaluation-driven development, anti-patterns | CODITECT Team |
| 1.0.0 | 2025-12-03 | Initial standard based on Anthropic Agent Skills documentation | CODITECT Team |

14. Enforcement

Compliance Deadline: All skills must achieve Grade B (80%) or higher within 30 days of standard publication.

Critical Requirement: YAML frontmatter is NON-NEGOTIABLE. Skills without YAML frontmatter will receive automatic Grade F.

Validation Method:

  • Pre-commit hooks validate YAML frontmatter
  • CI/CD pipeline blocks non-compliant skills
  • Automated skill validation on all PRs

Review Process:

  • Weekly automated compliance scans
  • Monthly manual quality audits
  • Quarterly standard review and updates

Document Control:

  • Owner: CODITECT Core Team
  • Approvers: Technical Lead, Documentation Lead
  • Review Cycle: Quarterly
  • Next Review: 2026-03-03