CODITECT STANDARD: Skills
Version: 2.0.0
Status: Production Standard
Last Updated: January 23, 2026
Authority: Based on Anthropic Agent Skills Official Documentation (January 2026)
Scope: All skill definitions in .coditect/skills/
1. Executive Summary
This document defines the authoritative standard for CODITECT skill directories and SKILL.md files. All skills MUST comply with this standard to ensure compatibility with Anthropic's Claude Agent SDK progressive disclosure system.
Compliance: Mandatory for all skills in coditect-core repository.
Critical Requirement: Skills MUST use YAML frontmatter (not Markdown headers) per Anthropic specification.
1.1 What Makes a Skill (Anthropic Definition)
A skill is NOT just documentation or a reference guide. According to Anthropic, a skill is:
An opinionated best practice workflow for a recurring activity with specific examples, self-validation steps, and actionable guidance.
Essential Characteristics:
| Characteristic | Description | Example |
|---|---|---|
| Opinionated | Prescribes THE way to do something, not options | "Use pdfplumber for text extraction" not "You can use pypdf, pdfplumber, or PyMuPDF" |
| Best Practice | Encodes expertise and proven patterns | Tested workflows that avoid common mistakes |
| Workflow | Step-by-step process with clear checkpoints | Phase 1 → Validate → Phase 2 → Verify → Output |
| Recurring Activity | Solves a problem encountered repeatedly | PDF extraction, code review, deployment |
| Specific Examples | Concrete input/output pairs | Real code, actual file formats, expected results |
| Self-Validating | Built-in verification steps | Checklists, validation scripts, quality gates |
Skills are NOT:
- ❌ Reference documentation (use REFERENCE.md instead)
- ❌ API listings (use separate files)
- ❌ Options menus with multiple approaches
- ❌ Vague guidance ("follow best practices")
- ❌ One-time procedures (skills are for recurring work)
2. Directory Structure
2.1 Skill Directory Naming
Pattern: .coditect/skills/{skill-name}/
Rules:
- Lowercase only
- Hyphen-separated words (kebab-case)
- No spaces, underscores, or special characters
- Maximum 64 characters
- Must match the `name` field in SKILL.md YAML frontmatter
- Cannot contain reserved words: "anthropic", "claude"
Naming Convention (Anthropic Recommended):
Use gerund form (verb + -ing) for Skill names, as this clearly describes the activity or capability the Skill provides.
Preferred (Gerund Form):
- `processing-pdfs`
- `analyzing-spreadsheets`
- `managing-databases`
- `testing-code`
- `writing-documentation`
Acceptable Alternatives:
- Noun phrases: `pdf-processing`, `spreadsheet-analysis`
- Action-oriented: `process-pdfs`, `analyze-spreadsheets`
Avoid:
- Vague names: `helper`, `utils`, `tools`
- Overly generic: `documents`, `data`, `files`
- Reserved words: `anthropic-helper`, `claude-tools`
- Inconsistent patterns within your skill collection
Invalid Examples:
- `.coditect/skills/BiographicalResearch/` (uppercase)
- `.coditect/skills/biographical_research/` (underscore)
- `.coditect/skills/biographical research/` (space)
- `.coditect/skills/anthropic-pdf-tool/` (reserved word)
2.2 Required Files
Minimal Structure:
skills/{skill-name}/
├── SKILL.md # REQUIRED - Main skill definition
└── README.md # OPTIONAL - Additional documentation
Progressive Disclosure Structure (Recommended):
skills/{skill-name}/
├── SKILL.md # REQUIRED - Level 2: Instructions
├── FORMS.md # OPTIONAL - Level 3: Specialized guidance
├── REFERENCE.md # OPTIONAL - Level 3: Detailed reference
├── EXAMPLES.md # OPTIONAL - Level 3: Extended examples
└── scripts/ # OPTIONAL - Level 3: Executable utilities
├── helper.py
└── processor.sh
3. SKILL.md Format (REQUIRED)
3.1 YAML Frontmatter (MANDATORY)
Every SKILL.md MUST start with YAML frontmatter:
---
name: skill-name-lowercase-with-hyphens
description: Brief description of what this Skill does and when to use it
---
Required Fields:
| Field | Type | Required | Max Length | Validation |
|---|---|---|---|---|
| `name` | string | YES | 64 chars | Lowercase, hyphens only, no "anthropic" or "claude" |
| `description` | string | YES | 1024 chars | Non-empty, no XML tags, third person |
Claude Code Extension Fields (Optional):
| Field | Purpose | Example |
|---|---|---|
| `context: fork` | Run skill in isolation (subagent) | `context: fork` |
| `disable-model-invocation: true` | Only user can invoke (not Claude) | For `/commit`, `/deploy` |
| `user-invocable: false` | Only Claude can invoke (background knowledge) | For reference skills |
| `allowed-tools` | Restrict available tools | `allowed-tools: Bash(gh:*)` |
CODITECT Metadata Fields (Recommended):
All skills SHOULD include these metadata fields for version observability and lifecycle tracking (H.24):
| Field | Type | Required | Format | Description |
|---|---|---|---|---|
| `version` | string | Recommended | SemVer (`1.0.0`) | Component version |
| `component_type` | string | Recommended | Fixed: `skill` | Component type identifier |
| `created` | string | Recommended | `YYYY-MM-DD` | Creation date |
| `updated` | string | Recommended | `YYYY-MM-DD` | Last content modification date |
| `last_reviewed` | string | Recommended | `YYYY-MM-DD` | Last human/AI review date |
| `track` | string | Recommended | Single letter | CODITECT track assignment (e.g., H) |
| `status` | string | Recommended | `active` / `deprecated` | Lifecycle status |
| `audience` | string | Recommended | `customer` / `contributor` | Target audience |
| `tokens` | string | Recommended | `~1500` | Approximate token cost |
| `tags` | array | Recommended | `[skill, automation]` | Category tags |
`last_reviewed` vs `updated`:
- `updated` records when the content was last modified (auto-set on edit)
- `last_reviewed` records when the content was last verified for correctness (set deliberately)
- A component where `updated > last_reviewed` has been changed but not yet verified — use `/version --outdated` to find these
- A component where `last_reviewed` is older than 90 days is stale — use `/version --stale` to find these
CRITICAL VALIDATION RULES:
- `name` field MUST match directory name
- `name` MUST be lowercase letters, numbers, and hyphens only
- `name` CANNOT contain "anthropic" or "claude"
- `description` MUST be non-empty
- `description` CANNOT contain XML tags (`<`, `>`, `&`)
- `description` MUST be under 1024 characters
- `description` MUST be written in THIRD PERSON (critical for discovery)
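These rules can be expressed as a small validation helper. The sketch below is illustrative only — it is not the actual `validate-skill.py`, and the function name is hypothetical:

```python
import re

# Name rule: lowercase letters/digits in hyphen-separated groups.
NAME_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")
RESERVED = ("anthropic", "claude")

def validate_frontmatter(name: str, description: str, dir_name: str) -> list[str]:
    """Return a list of violations; an empty list means compliant."""
    errors = []
    if name != dir_name:
        errors.append("name must match directory name")
    if not NAME_PATTERN.match(name) or len(name) > 64:
        errors.append("name must be lowercase/hyphens, max 64 chars")
    if any(word in name for word in RESERVED):
        errors.append("name cannot contain reserved words")
    if not description:
        errors.append("description must be non-empty")
    if len(description) > 1024:
        errors.append("description must be under 1024 characters")
    if any(ch in description for ch in "<>&"):
        errors.append("description cannot contain XML tags")
    return errors
```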
Description Writing Guidelines:
⚠️ Always write in third person. The description is injected into the system prompt, and inconsistent point-of-view can cause discovery problems.
- ✅ Good: "Processes Excel files and generates reports"
- ❌ Avoid: "I can help you process Excel files"
- ❌ Avoid: "You can use this to process Excel files"
Be specific and include key terms. Include both what the Skill does and specific triggers/contexts for when to use it. Claude uses this to choose the right Skill from potentially 100+ available Skills.
Example (CORRECT):
---
name: processing-pdfs
description: Extracts text and tables from PDF files, fills forms, merges documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
---
Example (CORRECT - Claude Code Extensions):
---
name: deploying-to-production
description: Manages production deployments with safety checks and rollback capabilities. Use when deploying services or managing release pipelines.
disable-model-invocation: true
allowed-tools: Bash(kubectl:*), Bash(gcloud:*)
---
Example (INCORRECT - DO NOT USE):
# Biographical Research Skill
**Skill Name:** biographical-research
❌ This format violates Anthropic standards and will NOT work with progressive disclosure.
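Tooling that consumes SKILL.md files must first split the frontmatter from the body. A minimal sketch (not the actual SDK loader; real tooling would use a full YAML parser) that handles the simple `key: value` fields this standard requires:

```python
def split_skill_md(text: str) -> tuple[dict, str]:
    """Split a SKILL.md document into (frontmatter dict, markdown body)."""
    if not text.startswith("---\n"):
        raise ValueError("SKILL.md must start with YAML frontmatter")
    # Everything between the opening and closing '---' lines is metadata.
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        if key.strip():
            meta[key.strip()] = value.strip()
    return meta, body
```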
3.2 Markdown Body Structure
After YAML frontmatter, structure content as:
---
name: skill-name
description: Brief description
---
# Skill Name
## Purpose
[1-2 paragraphs explaining why this skill exists and what problems it solves]
## Instructions
[Step-by-step guidance for Claude to follow when using this skill]
### Step 1: [Action]
[Detailed instructions for this step]
### Step 2: [Action]
[Detailed instructions]
## Examples
### Example 1: [Scenario]
[Concrete example with input/output]
### Example 2: [Scenario]
[Another example]
## Integration
**Related Components:**
- **Agents:** [agent-name] - [relationship]
- **Commands:** /command-name - [relationship]
- **Skills:** [skill-name] - [relationship]
## References
[Links to additional resources loaded on-demand]
- See [FORMS.md](FORMS.md) for specialized templates
- See [REFERENCE.md](REFERENCE.md) for detailed API documentation
- Execute `scripts/helper.py` for automated processing
4. Progressive Disclosure Implementation
Skills implement Anthropic's 3-level progressive disclosure:
Level 1: Metadata (Always Loaded)
What: YAML frontmatter only (name and description)
When: Session startup
Token Cost: ~100 tokens per skill
Purpose: Agent discovery - Claude knows when to trigger skill without loading full content
---
name: pdf-extraction
description: Extract text, tables, and images from PDF files with structure preservation
---
Optimization: Keep description concise but informative. Claude decides whether to load Level 2 based solely on this.
Level 2: Instructions (Triggered Loading)
What: Full SKILL.md body (everything after YAML frontmatter)
When: Claude determines skill is relevant to task
Token Cost: <5000 tokens (Anthropic recommendation)
Line Limit: Under 500 lines for optimal performance (Anthropic Jan 2026)
Purpose: Procedural knowledge - how to use the skill
Best Practices:
- Front-load critical instructions
- Use clear step-by-step format
- Include 2-4 concrete examples
- Keep under 500 lines (approximately 3000-4000 words)
- Keep under 5000 tokens
- Link to Level 3 resources for details
- References must be one level deep from SKILL.md (avoid nested references)
Level 3: Resources (On-Demand Loading)
What: Additional files (FORMS.md, REFERENCE.md, scripts/)
When: Claude reads specific files or executes scripts
Token Cost: Effectively unlimited (files not in context unless read)
Purpose: Detailed reference, templates, executable code
File Types:
- FORMS.md - Templates, schemas, fill-in forms
- REFERENCE.md - Complete API docs, comprehensive guides
- EXAMPLES.md - Extended examples with full context
- scripts/ - Python/bash utilities that execute via Bash tool
Critical Insight: Scripts run via Bash tool - only OUTPUT enters context, not code itself. This enables bundling unlimited code without token cost.
4.4 Reference Depth Rules (Anthropic Jan 2026)
Keep references one level deep from SKILL.md. Claude may partially read files when they're referenced from other referenced files, resulting in incomplete information.
Bad Example (Too Deep):
# SKILL.md
See [advanced.md](advanced.md)...
# advanced.md
See [details.md](details.md)...
# details.md
Here's the actual information...
Good Example (One Level Deep):
# SKILL.md
**Basic usage**: [instructions in SKILL.md]
**Advanced features**: See [advanced.md](advanced.md)
**API reference**: See [reference.md](reference.md)
**Examples**: See [examples.md](examples.md)
4.5 Table of Contents for Long Files
For reference files longer than 100 lines, include a table of contents at the top. This ensures Claude can see the full scope even when previewing with partial reads.
# API Reference
## Contents
- Authentication and setup
- Core methods (create, read, update, delete)
- Advanced features (batch operations, webhooks)
- Error handling patterns
- Code examples
## Authentication and setup
...
5. Quality Criteria
5.1 Compliance Checklist
YAML Frontmatter (Weight: 40%)
- YAML frontmatter present at file start
- `name` field exists and matches directory name
- `name` is lowercase with hyphens only
- `name` under 64 characters
- `description` field exists and non-empty
- `description` under 1024 characters
- No XML tags in description
- No forbidden words ("anthropic", "claude") in name
Progressive Disclosure Design (Weight: 25%)
- Level 1 (metadata) optimized for discovery
- Level 2 (SKILL.md body) under 500 lines (Anthropic Jan 2026)
- Level 2 (SKILL.md body) under 5000 tokens
- Level 3 resources in separate files (if applicable)
- Clear references to Level 3 resources
- References are one level deep from SKILL.md
- Long files (>100 lines) have table of contents
Instruction Quality (Weight: 25%)
- Purpose section explains when to use skill
- Instructions organized in clear steps
- At least 2 concrete examples provided
- Integration section lists related components
File Structure (Weight: 10%)
- Directory name matches skill name
- SKILL.md file present in directory
- Additional resources properly organized
- Scripts have execution permissions (if present)
5.2 Grading Scale
Grade A (90-100%): Exemplary progressive disclosure, production-ready
Grade B (80-89%): Meets all requirements, minor optimizations possible
Grade C (70-79%): Functional but needs improvement
Grade D (60-69%): Major issues, significant rework needed
Grade F (<60%): Does not meet Anthropic standards, complete rework required
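Combining the section 5.1 weights with these grade bands can be sketched as follows. The weights come from this standard; the function itself is illustrative, not the actual grader:

```python
# Category weights from the 5.1 compliance checklist.
WEIGHTS = {"frontmatter": 0.40, "disclosure": 0.25, "instructions": 0.25, "structure": 0.10}

def grade(scores: dict[str, float]) -> str:
    """scores maps each category to a 0.0-1.0 pass rate; returns a letter grade."""
    total = 100 * sum(WEIGHTS[cat] * scores.get(cat, 0.0) for cat in WEIGHTS)
    for letter, floor in (("A", 90), ("B", 80), ("C", 70), ("D", 60)):
        if total >= floor:
            return letter
    return "F"
```

For example, a skill passing all frontmatter and disclosure checks but only half of the instruction and structure checks scores 82.5, landing in Grade B.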
5.3 Automated Validation
python3 .coditect/scripts/validate-skill.py skills/{skill-name}/
Validation Checks:
- YAML frontmatter parsing
- Required fields present
- Field value validation
- Token count estimation
- File structure verification
- Naming convention compliance
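Token count estimation can be approximated without a full tokenizer. The sketch below uses the common ~4 characters-per-token heuristic; the real validator may use a proper tokenizer, so treat this only as a way to flag likely violations:

```python
TOKEN_LIMIT = 5000   # Anthropic recommendation for the SKILL.md body
LINE_LIMIT = 500     # Anthropic Jan 2026 line limit

def check_skill_size(skill_md_text: str) -> dict:
    """Estimate whether a SKILL.md body is within the Level 2 limits."""
    body = skill_md_text.split("---", 2)[-1]  # drop YAML frontmatter
    lines = body.count("\n") + 1
    est_tokens = len(body) // 4  # rough chars-per-token heuristic
    return {
        "lines": lines,
        "estimated_tokens": est_tokens,
        "within_limits": lines <= LINE_LIMIT and est_tokens <= TOKEN_LIMIT,
    }
```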
6. Templates
6.1 Minimal Skill Template (Anthropic Jan 2026)
---
name: processing-examples
description: Processes example data files and generates structured output. Use when working with example files, data transformation, or when the user mentions processing or converting examples.
---
# Processing Examples
## Quick Start
```bash
# Basic usage
python scripts/process.py input.json output.csv
```

## Instructions

### Step 1: Prepare Input
Ensure input file exists and is valid JSON/CSV format.

### Step 2: Execute Processing
```bash
python scripts/process.py input.json --format csv
```

### Step 3: Verify Output
Check output file contains expected data:
- All records present
- Formatting correct
- No data loss

## Examples

### Example 1: JSON to CSV
**Input:**
```json
{"name": "test", "value": 123}
```
**Command:**
```bash
python scripts/process.py data.json output.csv
```
**Output:**
```csv
name,value
test,123
```

## Advanced Usage
For complex scenarios, see:
- [REFERENCE.md](REFERENCE.md) - Complete API documentation
- [EXAMPLES.md](EXAMPLES.md) - Extended examples

## Integration
**Related Skills:**
- `validating-data` - Pre-validation before processing
- `transforming-formats` - Additional format conversions
**Key Improvements (Anthropic Jan 2026):**
- Gerund naming (`processing-examples` not `example-processor`)
- Third-person description
- Specific triggers ("when the user mentions processing")
- Concise instructions (assumes Claude knows basics)
- Links to Level 3 resources for details
### 6.2 Full Skill Template (Opinionated Best Practice Workflow)
This template demonstrates all required skill characteristics: **opinionated workflow, best practices, specific examples, self-validation, and actionable suggestions**.
---
name: reviewing-code
description: Performs systematic code review with quality gates and actionable feedback. Use when reviewing pull requests, examining code changes, or conducting code audits.
---
# Reviewing Code
> **This skill encodes an opinionated best practice workflow for code review.**
## When to Use
✅ **Use this skill when:**
- Reviewing pull requests before merge
- Conducting code audits
- Examining code changes for quality issues
- Providing feedback on a teammate's code
❌ **Do NOT use when:**
- Writing new code (use `writing-code` skill)
- Debugging runtime errors (use `debugging-errors` skill)
- Reviewing documentation only (use `reviewing-docs` skill)
## Best Practices (Opinionated)
**This skill prescribes THE way to review code:**
1. **Security first** - Always check for security issues before style
2. **One pass per concern** - Don't mix security, logic, and style feedback
3. **Be specific** - Point to exact lines with actionable suggestions
4. **Praise good patterns** - Reinforce what works, not just what's wrong
## Workflow
### Phase 1: Security Review
**Objective:** Identify security vulnerabilities before anything else.
**Steps:**
1. **Check for secrets:** Scan for API keys, passwords, tokens
   ```bash
   grep -rn "password\|secret\|api_key\|token" --include="*.py"
   ```
2. **Check for injection:** Look for unsanitized user input
   - SQL queries with string concatenation
   - Shell commands with user input
   - HTML rendering without escaping
3. **Check for auth issues:** Verify authentication/authorization
   - Are endpoints protected?
   - Is authorization checked at each layer?

**Quality Gate:**
Security Review:
- [ ] No hardcoded secrets
- [ ] No injection vulnerabilities
- [ ] Auth properly implemented
### Phase 2: Logic Review
**Objective:** Verify correctness of business logic.
**Steps:**
1. **Trace data flow:** Follow data from input to output
2. **Check edge cases:** Empty inputs, nulls, boundaries
3. **Verify error handling:** Are failures handled gracefully?

**Quality Gate:**
Logic Review:
- [ ] Data flow is correct
- [ ] Edge cases handled
- [ ] Errors fail gracefully

### Phase 3: Style & Maintainability
**Objective:** Ensure code is readable and maintainable.
**Steps:**
1. **Naming:** Are names descriptive and consistent?
2. **Structure:** Is code well-organized with single responsibility?
3. **Documentation:** Are complex parts documented?

**Quality Gate:**
Style Review:
- [ ] Naming is clear
- [ ] Structure is logical
- [ ] Complex code is documented
## Specific Examples

### Example 1: Reviewing a Database Query Function
**Input (Code to Review):**
```python
def get_user(user_id):
    query = f"SELECT * FROM users WHERE id = {user_id}"
    return db.execute(query)
```
**Review Process:**
1. Phase 1 (Security): ❌ SQL injection vulnerability found
2. Phase 2 (Logic): Cannot assess until security fixed
3. Phase 3 (Style): Deferred

**Output (Review Feedback):**
## Security Issue (BLOCKING)
**Line 2:** SQL injection vulnerability
**Problem:** User input directly interpolated into SQL query.
**Fix:**
```python
def get_user(user_id):
    query = "SELECT * FROM users WHERE id = %s"
    return db.execute(query, (user_id,))
```
**Severity:** Critical - Must fix before merge.
### Example 2: Reviewing Well-Written Code
**Input:**
```python
def calculate_discount(price: float, user: User) -> float:
"""Calculate discount based on user tier."""
if price <= 0:
raise ValueError("Price must be positive")
discount_rates = {
UserTier.BRONZE: 0.05,
UserTier.SILVER: 0.10,
UserTier.GOLD: 0.15,
}
rate = discount_rates.get(user.tier, 0.0)
return price * (1 - rate)
```

**Output:**
## Review: APPROVED ✅
**Strengths:**
- Type hints improve readability
- Input validation with clear error
- Dictionary lookup is clean and extensible
- Default value handles unknown tiers
**Suggestion (optional):**
Consider logging when unknown tier is encountered for monitoring.
## Self-Validation Checklist
Before completing any code review, verify:
Code Review Completion:
- [ ] Security pass completed
- [ ] Logic pass completed
- [ ] Style pass completed
- [ ] All blocking issues documented
- [ ] Actionable suggestions provided
- [ ] Positive feedback included where appropriate
## Feedback Loops
After providing review feedback:
- Wait for author response
- Re-review changed files only
- Verify blocking issues resolved
- Approve or request additional changes
## Suggestions for Improvement
When encountering patterns not covered:
- Document the new pattern encountered
- Add to this skill's examples if recurring
- Consider creating specialized skill for domain-specific reviews
## Integration
**Commands:** /review-pr [PR-number]
**Agents:** code-reviewer, security-specialist
**Related Skills:** writing-code, debugging-errors, testing-code
## References
- PATTERNS.md - Common code patterns to look for
- SECURITY-CHECKLIST.md - Extended security review guide
**Template Characteristics:**
- ✅ **Opinionated:** Prescribes specific order (security → logic → style)
- ✅ **Best Practice:** Encodes expertise (one pass per concern, be specific)
- ✅ **Workflow:** Clear phases with quality gates
- ✅ **Specific Examples:** Real code with actual review output
- ✅ **Self-Validating:** Checklists and quality gates at each phase
- ✅ **Suggestions:** Guidance for handling novel situations
---
## 7. Level 3 Resource Templates
### 7.1 FORMS.md Template
# Forms and Templates for [Skill Name]
## Template 1: [Template Name]
**Purpose:** [When to use this template]
**Format:**
[Template structure with placeholders]
**Example:**
[Filled-in example]
## Template 2: [Template Name]
[Additional templates...]
### 7.2 REFERENCE.md Template
# Reference Documentation for [Skill Name]
## API Specification
### Function 1: [Name]
**Signature:**
function_name(param1, param2, ...) -> return_type
**Parameters:**
- `param1` (type): Description
- `param2` (type): Description
**Returns:**
[Return value description]
**Example:**
[Usage example]
## Complete Index
[Comprehensive reference material...]
### 7.3 Scripts Directory Structure
scripts/
├── README.md # Script documentation
├── processor.py # Main processing script
├── validator.sh # Validation script
└── utils/ # Utility modules
├── __init__.py
└── helpers.py
Script Requirements:
- Executable permissions (`chmod +x`)
- Shebang line (`#!/usr/bin/env python3` or `#!/bin/bash`)
- Error handling and exit codes
- Documentation in comments
- Safe input validation
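A minimal helper skeleton meeting these requirements might look like the following. This is an illustrative sketch — the file name, argument shape, and processing step are placeholders, not a prescribed interface:

```python
#!/usr/bin/env python3
"""Illustrative Level 3 helper skeleton: validates input, handles errors, exits cleanly."""
import sys
from pathlib import Path

def main(argv: list[str]) -> int:
    """Process one input file; return a conventional exit code."""
    # Safe input validation before doing any work.
    if len(argv) != 1:
        print("usage: helper.py <input-file>", file=sys.stderr)
        return 2  # usage error
    path = Path(argv[0])
    if not path.is_file():
        print(f"error: {path} not found", file=sys.stderr)
        return 1  # input error
    # ... skill-specific processing would go here ...
    print(f"processed {path}")
    return 0

# In a real script the entry point would be:
#   if __name__ == "__main__":
#       sys.exit(main(sys.argv[1:]))
```

Handling errors explicitly with distinct exit codes keeps the "punting errors to Claude" anti-pattern (section 12) out of your scripts.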
8. Migration Guide
8.1 Converting Markdown Headers to YAML
BEFORE (Non-Compliant):
# Biographical Research Skill
**Skill Name:** biographical-research
**Category:** Research & Intelligence
**Status:** Production
AFTER (Compliant):
---
name: biographical-research
description: Systematic biographical research methodology for investigating individuals using structured web search with validation requirements
---
# Biographical Research
[Rest of content...]
8.2 Migration Script
python3 .coditect/scripts/migrate-skill-to-yaml.py skills/{skill-name}/SKILL.md
What it does:
- Extracts skill metadata from Markdown headers
- Generates YAML frontmatter
- Moves content after frontmatter
- Creates backup (.bak file)
- Validates output
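The header-to-YAML conversion step can be sketched as follows. This is an illustrative approximation of what `migrate-skill-to-yaml.py` does, not its actual implementation:

```python
import re

def extract_legacy_metadata(text: str) -> dict:
    """Pull `**Skill Name:**` style metadata out of a legacy SKILL.md."""
    meta = {}
    for key, field in (("Skill Name", "name"), ("Description", "description")):
        match = re.search(rf"\*\*{key}:\*\*\s*(.+)", text)
        if match:
            meta[field] = match.group(1).strip()
    return meta

def to_frontmatter(meta: dict) -> str:
    """Render extracted metadata as a YAML frontmatter block."""
    lines = ["---"] + [f"{k}: {v}" for k, v in meta.items()] + ["---"]
    return "\n".join(lines)
```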
8.3 Token Optimization
If SKILL.md body exceeds 5000 tokens:
Step 1: Measure current tokens
python3 scripts/count-tokens.py skills/{skill-name}/SKILL.md
Step 2: Extract detailed content to Level 3
- Move extensive examples → EXAMPLES.md
- Move API docs → REFERENCE.md
- Move templates → FORMS.md
Step 3: Update SKILL.md references
For detailed examples, see [EXAMPLES.md](EXAMPLES.md)
For complete API reference, see [REFERENCE.md](REFERENCE.md)
Step 4: Re-validate
python3 scripts/validate-skill.py skills/{skill-name}/
9. Common Violations and Fixes
Violation 1: Missing YAML Frontmatter
Problem:
# My Skill
This is my skill.
Fix:
---
name: my-skill
description: Brief description of my skill purpose and capabilities
---
# My Skill
This is my skill.
Violation 2: Invalid Name Format
Problem:
---
name: My_Skill
description: Description
---
Fix:
---
name: my-skill
description: Description
---
Violation 3: Description Too Long
Problem:
description: [1500 character description that exceeds limit...]
Fix:
description: Brief description (under 1024 chars). For details see SKILL.md body.
Violation 4: Markdown Headers Instead of YAML
Problem:
**Name:** my-skill
**Description:** My description
Fix: Use YAML frontmatter (see Violation 1)
10. Evaluation-Driven Development (Anthropic Jan 2026)
10.1 Build Evaluations First
Create evaluations BEFORE writing extensive documentation. This ensures your Skill solves real problems rather than documenting imagined ones.
Evaluation-Driven Workflow:
- Identify gaps: Run Claude on representative tasks without a Skill. Document specific failures or missing context
- Create evaluations: Build three scenarios that test these gaps
- Establish baseline: Measure Claude's performance without the Skill
- Write minimal instructions: Create just enough content to address the gaps and pass evaluations
- Iterate: Execute evaluations, compare against baseline, and refine
10.2 Evaluation Structure
{
"skills": ["processing-pdfs"],
"query": "Extract all text from this PDF file and save it to output.txt",
"files": ["test-files/document.pdf"],
"expected_behavior": [
"Successfully reads the PDF file using an appropriate library",
"Extracts text content from all pages without missing any",
"Saves extracted text to output.txt in a clear, readable format"
]
}
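A loader that enforces this structure might look like the following sketch. The required keys follow the example above rather than an official schema, so adjust them to match your actual evaluation format:

```python
import json

# Keys taken from the evaluation example above (assumed, not an official schema).
REQUIRED_KEYS = {"skills", "query", "expected_behavior"}

def load_evaluation(raw_json: str) -> dict:
    """Parse an evaluation file and reject structurally incomplete ones."""
    evaluation = json.loads(raw_json)
    missing = REQUIRED_KEYS - evaluation.keys()
    if missing:
        raise ValueError(f"evaluation missing keys: {sorted(missing)}")
    if not evaluation["expected_behavior"]:
        raise ValueError("expected_behavior must list at least one check")
    return evaluation
```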
10.3 Test with All Models
Test your Skill with all models you plan to use:
| Model | Testing Focus |
|---|---|
| Claude Haiku | Does the Skill provide enough guidance? |
| Claude Sonnet | Is the Skill clear and efficient? |
| Claude Opus | Does the Skill avoid over-explaining? |
What works for Opus might need more detail for Haiku.
10.4 Iterative Development with Claude
Work with one Claude instance ("Claude A") to create a Skill used by others ("Claude B"):
- Complete a task with Claude A using normal prompting
- Identify reusable patterns and context you repeatedly provided
- Ask Claude A to create a Skill capturing the pattern
- Review for conciseness (remove unnecessary explanations)
- Test on similar tasks with Claude B (fresh instance)
- Return to Claude A with observations from Claude B's behavior
- Iterate based on real agent behavior
11. References
11.1 Anthropic Official Documentation
- Agent Skills Overview
- Skill Authoring Best Practices (Jan 2026)
- Introducing Agent Skills
- Skills Repository
- Agent Skills Standard
11.2 CODITECT Standards
- `CODITECT-STANDARD-AGENTS.md` - Agents format standard
- `CODITECT-STANDARD-COMMANDS.md` - Commands format standard
- `CODITECT-STANDARD-HOOKS.md` - Hooks format standard
12. Anti-Patterns to Avoid (Anthropic Jan 2026)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Windows-style paths | Breaks on Unix systems | Always use forward slashes: scripts/helper.py |
| Too many options | Confuses Claude with choices | Provide a default with escape hatch for edge cases |
| First/second person descriptions | Breaks skill discovery | Always write description in third person |
| Nested file references | Claude partially reads, misses info | Keep references one level deep from SKILL.md |
| No table of contents | Claude can't see scope of long files | Add TOC for files >100 lines |
| Voodoo constants | Unexplained magic numbers | Document and justify all configuration values |
| Punting errors to Claude | Script fails, Claude improvises | Handle errors explicitly in scripts |
| Time-sensitive information | Content becomes outdated | Use "old patterns" section for deprecated info |
| Inconsistent terminology | Confuses Claude and users | Pick one term and use throughout |
| Over 500 lines in SKILL.md | Poor performance, bloated context | Split content into Level 3 resources |
13. Version History
| Version | Date | Changes | Author |
|---|---|---|---|
| 2.0.0 | 2026-01-23 | Major update with Anthropic Jan 2026 best practices: gerund naming, third-person descriptions, 500-line limit, Claude Code extensions, evaluation-driven development, anti-patterns | CODITECT Team |
| 1.0.0 | 2025-12-03 | Initial standard based on Anthropic Agent Skills documentation | CODITECT Team |
14. Enforcement
Compliance Deadline: All skills must achieve Grade B (80%) or higher within 30 days of standard publication.
Critical Requirement: YAML frontmatter is NON-NEGOTIABLE. Skills without YAML frontmatter will receive automatic Grade F.
Validation Method:
- Pre-commit hooks validate YAML frontmatter
- CI/CD pipeline blocks non-compliant skills
- Automated skill validation on all PRs
Review Process:
- Weekly automated compliance scans
- Monthly manual quality audits
- Quarterly standard review and updates
Document Control:
- Owner: CODITECT Core Team
- Approvers: Technical Lead, Documentation Lead
- Review Cycle: Quarterly
- Next Review: 2026-03-03