CODITECT STANDARD: CLAUDE.md Files
Version: 1.0.0
Status: Production Standard
Last Updated: December 3, 2025
Authority: Based on Anthropic official documentation and Claude Code best practices
Scope: All CLAUDE.md files in CODITECT framework and submodules
Executive Summary
CLAUDE.md is a special Markdown file that Claude Code automatically loads into every conversation, serving as a persistent configuration layer for project-specific context. This standard defines how to create effective CLAUDE.md files that optimize AI agent performance while respecting critical token budget constraints.
Why This Matters: CLAUDE.md is prepended to every conversation turn. A bloated 300-line file re-loaded across 50 turns costs the equivalent of 15,000 lines of tokens - potentially 40-60% of your context window. Token efficiency is not optional; it is critical to performance.
Core Principle: Progressive disclosure. CLAUDE.md should be a roadmap to detailed documentation, not an encyclopedia. Use it to point to resources that Claude loads on-demand, not to embed everything upfront.
What This Standard Covers:
- File format, location, and hierarchical organization (user/project/framework levels)
- Token budget guidelines and length recommendations (under 150 lines ideal)
- Required and optional sections with concrete examples
- Progressive disclosure implementation patterns
- Multi-session continuity strategies (claude-progress.txt, feature_list.json, init.sh)
- Quality grading criteria and validation checklists
- Migration guide for refactoring existing files
- Complete templates for all three hierarchical levels
Compliance Requirement: All CLAUDE.md files in CODITECT repositories must achieve Grade B (80%) or higher compliance within 30 days of standard publication.
1. File Format and Location
1.1 File Naming
Required name: CLAUDE.md (all caps, .md extension)
Rules:
- Case sensitivity: Must be exact (not claude.md or Claude.md)
- Encoding: UTF-8 without BOM
- Line endings: LF (Unix-style)
- Extension: `.md` (Markdown)
Alternative name: CLAUDE.local.md for personal/gitignored preferences
Invalid names:
- `claude.md` (lowercase)
- `Claude.md` (mixed case)
- `CLAUDE.txt` (wrong extension)
- `CLAUDE` (no extension)
1.2 File Locations (Hierarchical)
Claude Code implements a three-tier hierarchical loading system with cascading precedence:
| Location | Priority | Purpose | Typical Size | Version Control |
|---|---|---|---|---|
| `~/.claude/CLAUDE.md` | 1 (Highest) | User-level preferences across all projects | 20-50 lines | User's choice |
| `PROJECT_ROOT/CLAUDE.md` | 2 | Project-wide context shared with team | 100-150 lines | YES (committed) |
| `PROJECT_ROOT/.coditect/CLAUDE.md` | 3 | Framework/submodule-specific context | 150-200 lines | YES (committed) |
| `PROJECT_ROOT/CLAUDE.local.md` | Override | Personal overrides (gitignored) | Variable | NO (gitignored) |
Loading Behavior:
Claude Code reads CLAUDE.md files recursively, starting from the current working directory and traversing upward to (but not including) the root directory. All discovered files are loaded and merged.
Precedence Rules:
- Most specific (deepest nested) takes precedence
- User-level `~/.claude/CLAUDE.md` has highest priority
- Local overrides (`CLAUDE.local.md`) override committed files
- Parent directory files load first, then child directory files override
Example hierarchy:
~/.claude/CLAUDE.md # Priority 1: User preferences
project-root/CLAUDE.md # Priority 2: Project context
project-root/.coditect/CLAUDE.md # Priority 3: Framework context
project-root/CLAUDE.local.md # Override: Personal settings
Monorepo Support:
For monorepos with multiple subprojects:
monorepo/
├── CLAUDE.md # Root project context
├── backend/
│ └── CLAUDE.md # Backend-specific context (inherits root)
└── frontend/
└── CLAUDE.md # Frontend-specific context (inherits root)
When working in backend/, Claude loads both root and backend CLAUDE.md files.
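The upward discovery order described above can be sketched in a few lines of Python. This is a minimal illustration, not Claude Code's actual implementation - the real loader also handles `~/.claude/CLAUDE.md` and `CLAUDE.local.md` and stops before the root directory:

```python
from pathlib import Path

def discover_claude_files(start_dir: str) -> list[Path]:
    """Walk upward from start_dir collecting CLAUDE.md files.

    Returns parents first, children last, so deeper (more specific)
    files can override shallower ones, mirroring the precedence rules.
    """
    found = []
    current = Path(start_dir).resolve()
    while True:
        candidate = current / "CLAUDE.md"
        if candidate.is_file():
            found.append(candidate)
        if current == current.parent:  # reached filesystem root
            break
        current = current.parent
    # found[] is child-to-parent; reverse so parents load first
    return list(reversed(found))
```

Run from `monorepo/backend/`, this would return the root `CLAUDE.md` followed by `backend/CLAUDE.md`, matching the monorepo behavior described above.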
1.3 Dynamic Updates
The `#` Key Feature:
Press # during a Claude Code session to have Claude automatically incorporate instructions into the relevant CLAUDE.md file.
How it works:
- Press `#` key during conversation
- Claude analyzes context and determines appropriate CLAUDE.md location
- Claude proposes changes to add/update instructions
- User reviews and approves
- CLAUDE.md updated in-place
Use cases:
- Adding frequently-used commands after discovering them
- Documenting project conventions as they emerge
- Capturing architectural decisions during development
- Updating directory structure after reorganization
Best practice: Use # key iteratively to build CLAUDE.md based on actual usage patterns, not theoretical needs.
1.4 The /init Command
Bootstrap CLAUDE.md for new projects:
/init
What it does:
- Analyzes project structure (package.json, requirements.txt, Cargo.toml, etc.)
- Detects technology stack and build tools
- Identifies common commands (build, test, lint)
- Generates starter CLAUDE.md with detected patterns
- Includes directory structure and conventions
Official guidance:
"Think of /init as a starting point, not a finished product. The generated CLAUDE.md captures obvious patterns but may miss nuances specific to your workflow. Review what Claude produces and refine it based on your team's actual practices."
Can refine existing CLAUDE.md:
/init
# Claude reviews current CLAUDE.md and suggests improvements
After /init:
- Review generated content for accuracy
- Remove unnecessary sections
- Add project-specific conventions
- Test with Claude in actual workflows
- Iterate based on effectiveness
2. Token Budget and Length Guidelines
2.1 Critical Constraint
CLAUDE.md is prepended to EVERY conversation turn - each line costs tokens repeatedly.
Example Cost Analysis:
| CLAUDE.md Size | Tokens per Load | 50-Turn Session | 100-Turn Session | % of 200K Context |
|---|---|---|---|---|
| 50 lines | ~500 tokens | 25,000 tokens | 50,000 tokens | 12.5% - 25% |
| 100 lines | ~1,000 tokens | 50,000 tokens | 100,000 tokens | 25% - 50% |
| 150 lines | ~1,500 tokens | 75,000 tokens | 150,000 tokens | 37.5% - 75% |
| 300 lines | ~3,000 tokens | 150,000 tokens | 300,000 tokens | 75% - 150% ⚠️ |
Reality check: A 300-line CLAUDE.md in a 100-turn session accumulates 300,000 tokens of overhead - 1.5× an entire 200K context window spent on CLAUDE.md alone. This is catastrophic for performance.
2.2 Maximum Length Recommendations
| Location | Maximum Lines | Maximum Characters | Token Budget | Rationale |
|---|---|---|---|---|
| User CLAUDE.md | 50 | ~2,500 | ~500 tokens | Personal preferences only - applies to ALL projects |
| Project CLAUDE.md | 100-150 | ~5,000-7,500 | ~1,000-1,500 | High-level project context, links to details |
| Submodule CLAUDE.md | 150-200 | ~7,500-10,000 | ~1,500-2,000 | Detailed framework context, component references |
| Local override | Variable | Variable | Variable | Personal experimentation (not shared) |
Ideal targets:
- User: 20-30 lines (extreme minimalism)
- Project: 80-100 lines (comfortable balance)
- Submodule: 120-150 lines (comprehensive but efficient)
Hard limits (DO NOT EXCEED):
- User: 75 lines absolute maximum
- Project: 200 lines absolute maximum
- Submodule: 250 lines absolute maximum
Beyond these limits: Consider your CLAUDE.md non-compliant and refactor immediately using progressive disclosure.
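The hard limits above are easy to enforce mechanically. A hypothetical compliance check (the limit values come from the table; the function name and interface are illustrative, not part of any CODITECT tooling) might look like:

```python
from pathlib import Path

# Hard limits from the table above, by CLAUDE.md level
HARD_LIMITS = {"user": 75, "project": 200, "submodule": 250}

def check_claude_md(path: str, level: str) -> tuple[int, bool]:
    """Return (line_count, compliant) for a CLAUDE.md file.

    level is one of "user", "project", or "submodule".
    """
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    count = len(lines)
    return count, count <= HARD_LIMITS[level]
```

A 100-line file passes as a project CLAUDE.md but fails as a user-level one, matching the limits in the table.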
2.3 Token Cost Impact
Calculation formula:
Total CLAUDE.md tokens = (Average tokens per line) × (Total lines) × (Conversation turns)
Assumptions:
- Average: ~10 tokens per line for concise bullet points
- Verbose: ~20 tokens per line for paragraphs
Example scenarios:
Scenario A: Optimized CLAUDE.md (100 lines)
- Tokens per load: 1,000
- 50-turn session: 50,000 tokens (25% of 200K context)
- 100-turn session: 100,000 tokens (50% of 200K context)
- Verdict: Acceptable for normal workflows
Scenario B: Bloated CLAUDE.md (300 lines)
- Tokens per load: 3,000
- 50-turn session: 150,000 tokens (75% of 200K context)
- 100-turn session: 300,000 tokens (150% of 200K context) ⚠️
- Verdict: Unacceptable - immediate refactoring required
Scenario C: Minimal CLAUDE.md (50 lines)
- Tokens per load: 500
- 50-turn session: 25,000 tokens (12.5% of 200K context)
- 100-turn session: 50,000 tokens (25% of 200K context)
- Verdict: Excellent - leaves maximum space for actual work
Key insight: Every 10 lines of CLAUDE.md costs 100 tokens per turn. In a 100-turn session, that's 10,000 tokens per 10 lines. Choose wisely.
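The formula and scenarios above reduce to a one-line calculation, assuming the ~10 tokens-per-line estimate for concise bullet points:

```python
def claude_md_session_cost(lines: int, turns: int,
                           tokens_per_line: int = 10) -> int:
    """Cumulative tokens spent re-loading CLAUDE.md across a session.

    tokens_per_line defaults to ~10 for concise bullet points;
    use ~20 for verbose paragraph-style content.
    """
    return lines * tokens_per_line * turns

# Scenario A: optimized 100-line file over 50 turns
assert claude_md_session_cost(100, 50) == 50_000
# Scenario B: bloated 300-line file over 100 turns
assert claude_md_session_cost(300, 100) == 300_000
```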
2.4 Progressive Disclosure Strategy
Three-Tier Architecture:
Tier 1: Metadata (Always Loaded)
- CLAUDE.md with high-level overview
- Links to detailed documentation
- Size: 100-150 lines
- Token cost: ~1,000-1,500 tokens per turn
- Purpose: Navigation and context orientation
Tier 2: Full Content (Conditionally Loaded)
- Detailed guides referenced in CLAUDE.md
- Loaded only when Claude reads the file
- Size: Unlimited
- Token cost: Only when read (not on every turn)
- Purpose: Deep documentation and comprehensive guides
Tier 3: Deep Resources (On-Demand)
- Complete specifications, API docs, examples
- Loaded explicitly via Claude's Read tool
- Size: Unlimited
- Token cost: Only when explicitly accessed
- Purpose: Reference material and detailed implementation guides
Visual diagram:
┌─────────────────────────────────────┐
│ CLAUDE.md (100-150 lines) │ ← Always loaded (every turn)
│ ───────────────────────────────── │
│ - Project overview │
│ - Links to docs/GUIDE.md │
│ - Links to docs/ARCHITECTURE.md │
│ - Links to docs/API.md │
└──────────┬──────────────────────────┘
│ References (not loaded until read)
↓
┌─────────────────────────────────────┐
│ docs/GUIDE.md (500+ lines) │ ← Loaded on-demand
│ docs/ARCHITECTURE.md (1000+ lines) │
│ docs/API.md (2000+ lines) │
└──────────┬──────────────────────────┘
│ Deep references
↓
┌─────────────────────────────────────┐
│ docs/api/endpoints/*.md │ ← Loaded when explicitly accessed
│ examples/*.md │
│ specs/*.md │
└─────────────────────────────────────┘
Implementation example:
❌ BEFORE (embedded - bloats CLAUDE.md):

```markdown
## API Endpoints

### GET /users
Returns list of users.

**Parameters:**
- `page` (integer, optional) - Page number
- `limit` (integer, optional) - Results per page

**Response:**
{
  "users": [...],
  "total": 100,
  "page": 1
}

### POST /users
Creates new user...

[50+ more lines of API documentation]
```

✅ AFTER (linked - keeps CLAUDE.md lean):

```markdown
## API Documentation
See [docs/API.md](docs/API.md) for complete endpoint documentation.

**Quick references:**
- User endpoints: docs/api/users.md
- Auth endpoints: docs/api/auth.md
- Admin endpoints: docs/api/admin.md
```

Token savings: 50+ lines (500+ tokens per turn) → 5 lines (50 tokens per turn) = 90% reduction
3. Required Sections
3.1 Project Overview (REQUIRED)
Purpose: Provide immediate context about the project
Max Length: 5-10 lines
Location: Top of file, immediately after title
Required Information:
- Project name and primary purpose (1 line)
- Current development phase (Beta, Production, etc.) (1 line)
- Core technology stack (3-5 key technologies) (1-2 lines)
- Link to detailed PROJECT-PLAN.md or README.md (1 line)
Format:
# Project Name
**Status:** [Development Phase]
**Phase:** [Current Phase/Sprint]
**Tech Stack:** [Language], [Framework], [Database], [Infrastructure]
**Documentation:** See [PROJECT-PLAN.md](docs/PROJECT-PLAN.md)
Example (Grade A):
# CODITECT Rollout Master
**Status:** Beta Testing (Active - Week 3 of 4)
**Phase:** User Feedback Collection
**Tech Stack:** Python 3.10+, Git Submodules, Markdown, Shell Scripts
**Documentation:** See [PROJECT-PLAN.md](docs/project-management/PROJECT-PLAN.md)
What NOT to include:
- Long narrative project descriptions (link to README instead)
- Complete technology version list (link to package files)
- Historical context (use git history)
- Team member lists (link to CONTRIBUTORS.md)
3.2 Essential Reading (REQUIRED)
Purpose: Direct AI agents to critical documentation in priority order
Max Length: 5-10 lines
Format: Ordered list with links (item 1 is most critical; priority diminishes down the list)
Required Elements:
- Clear "READ FIRST (in order):" header
- Numbered list (enforces priority)
- Relative links to documentation
- Brief description of each document (5-10 words max)
Format:
## Essential Reading
**READ FIRST (in order):**
1. [MAIN-DOC.md](path/to/MAIN-DOC.md) - [Brief description]
2. [SECONDARY-DOC.md](path/to/SECONDARY-DOC.md) - [Brief description]
3. [TERTIARY-DOC.md](path/to/TERTIARY-DOC.md) - [Brief description]
Example (Grade A):
## Essential Reading
**READ FIRST (in order):**
1. [PROJECT-PLAN.md](docs/PROJECT-PLAN.md) - Complete rollout strategy and roadmap
2. [TASKLIST.md](docs/TASKLIST.md) - 530+ tasks with checkbox tracking
3. [ARCHITECTURE.md](docs/ARCHITECTURE.md) - System design and component overview
4. [Subdirectory CLAUDE.md](docs/*/CLAUDE.md) - Task-specific AI agent context
Guidelines:
- Limit to 3-5 documents (not 10+)
- Order by importance (most critical first)
- Use relative paths (not absolute)
- Keep descriptions under 10 words
- Link to aggregated docs (not individual files)
Anti-pattern (DO NOT DO THIS):
## Documentation
Here's all our documentation:
- README.md - read this for an overview
- docs/api/users.md - user API endpoints
- docs/api/auth.md - authentication endpoints
- docs/api/admin.md - admin endpoints
- [15+ more files...]
Why this is bad:
- No clear priority
- Too many links (analysis paralysis)
- No aggregation (should link to docs/API.md)
- Verbose descriptions
3.3 Directory Structure (RECOMMENDED)
Purpose: Help AI agents navigate the codebase efficiently
Max Length: 15-20 lines
Format: Code block with tree structure and brief descriptions
When to include:
- Project has non-standard directory organization
- Directory names are not self-explanatory
- Multiple important top-level directories (5+)
When to skip:
- Standard framework structure (Rails, Django, Next.js with defaults)
- Directory names are obvious (src/, tests/, docs/)
- Only 3-4 top-level directories
Format:
## Directory Structure
\`\`\`
project-root/
├── src/ # [Description]
├── tests/ # [Description]
├── docs/ # [Description]
├── scripts/ # [Description]
└── config/ # [Description]
\`\`\`
Example (Grade A):
## Directory Structure
\`\`\`
coditect-rollout-master/
├── .coditect/ # CODITECT framework (symlink to submodules/core)
├── docs/ # Master documentation
│ ├── project-management/ # PROJECT-PLAN.md, TASKLIST.md
│ ├── adrs/ # Architecture Decision Records
│ └── security/ # Security advisories and compliance
├── diagrams/ # C4 architecture diagrams (24 diagrams, 7 phases)
├── scripts/ # Automation (19 Python + 6 shell)
├── submodules/ # 49 repositories across 8 categories
└── MEMORY-CONTEXT/ # Session exports and checkpoints
\`\`\`
Guidelines:
- Show 2-3 levels deep (not entire tree)
- Annotate non-obvious directories
- Skip obvious directories (node_modules/, .git/, etc.)
- Use consistent indentation (2 or 4 spaces)
- Keep descriptions under 8 words
Token optimization:
Instead of showing every subdirectory, use:
## Directory Structure
\`\`\`
project/
├── src/ # Application source code
├── tests/ # Test suites
└── docs/ # Documentation
├── api/ # API reference
├── guides/ # User guides
└── adrs/ # Architecture decisions
\`\`\`
For complete structure, see [STRUCTURE.md](docs/STRUCTURE.md)
4. Optional Sections
4.1 Quick Start Examples
When to include:
- Project requires specific setup or environment configuration
- Common workflows are non-obvious
- Team uses custom commands or scripts
When to skip:
- Standard framework workflows (npm start, cargo run, etc.)
- Setup is in README.md
- Custom commands documented elsewhere
Max Length: 10-15 lines
Format:
## Quick Start
**Setup:**
\`\`\`bash
[setup command]
\`\`\`
**Development:**
\`\`\`bash
[run command]
\`\`\`
**Testing:**
\`\`\`bash
[test command]
\`\`\`
Example (Grade A):
## Quick Start
**Common Tasks:**
- Run tests: `npm test`
- Start dev server: `npm run dev` (localhost:3000)
- Build production: `npm run build`
- Lint code: `npm run lint -- --fix`
**Full setup guide:** See [SETUP.md](docs/SETUP.md)
Guidelines:
- List 3-5 most common commands
- Include expected output (port numbers, etc.)
- Link to detailed setup docs
- Use bullet points (not code blocks for every command)
Anti-pattern:
## Quick Start
To get started with this project, first you'll need to install dependencies. You can do this by running npm install in the terminal. Once that's complete, you can start the development server with npm run dev. The server will start on port 3000 by default, but you can change this in the .env file. For production builds, use npm run build, which will create an optimized bundle in the dist/ directory. To run tests, use npm test, which will execute the Jest test suite with coverage reporting enabled.
Why this is bad:
- Paragraph format (use bullets)
- Too verbose (10+ words per command)
- 150+ words (should be 30-50)
4.2 AI Agent Best Practices
When to include:
- Project has specific patterns AI agents should follow
- Team has established conventions for AI-assisted development
- Common mistakes need to be prevented
When to skip:
- Generic best practices (covered in framework docs)
- No project-specific patterns
Max Length: 10-15 lines
Format:
## AI Agent Best Practices
- [Practice 1]
- [Practice 2]
- [Practice 3]
Example (Grade A):
## AI Agent Best Practices
- Consult PROJECT-PLAN.md before major architectural changes
- Update TASKLIST.md checkboxes as tasks complete
- Use conventional commits (feat, fix, docs, refactor)
- Request human approval for:
- Architecture changes affecting multiple submodules
- New dependencies or technology additions
- Security-related changes
Guidelines:
- Bullet points only (no paragraphs)
- Actionable guidance (not theoretical)
- Specific to this project
- 3-7 items maximum
4.3 Git Workflow
When to include:
- Custom git conventions (branch naming, commit format)
- Monorepo or submodule workflows
- Non-standard merge strategies
When to skip:
- Standard git workflows
- Conventions documented in CONTRIBUTING.md
Max Length: 5-10 lines
Format:
## Git Workflow
**Branch naming:** `[type]/[short-description]`
**Commit format:** `[type]([scope]): [subject]`
**Complete guide:** See [CONTRIBUTING.md](CONTRIBUTING.md)
Example (Grade A):
## Git Workflow
**CRITICAL:** Always commit in submodule FIRST, then update pointer in master.
**Quick commands:**
- Sync all submodules: `./scripts/sync-all-submodules.sh`
- Create checkpoint: `python3 .coditect/scripts/create-checkpoint.py "Description"`
**Complete workflow:** See [docs/GIT-WORKFLOW-SYSTEM.md](docs/GIT-WORKFLOW-SYSTEM.md)
Guidelines:
- Highlight critical patterns (CRITICAL, IMPORTANT)
- Link to detailed workflow docs
- Provide quick command shortcuts
- Avoid repeating full workflow steps
4.4 Common Tasks
When to include:
- Frequently executed operations not obvious from code
- Custom scripts or automation
- Multi-step workflows
When to skip:
- Standard framework commands
- Tasks documented in README.md
Max Length: 10-15 lines
Format:
## Common Tasks
**[Task category]:**
- [Task]: `[command]`
- [Task]: `[command]`
**[Task category]:**
- [Task]: `[command]`
Example (Grade A):
## Common Tasks
**Submodule Management:**
- Sync submodules: `./scripts/sync-all-submodules.sh`
- Update pointers: `git add submodules/ && git commit -m "Sync submodules"`
**Project Checkpoints:**
- Create checkpoint: `python3 .coditect/scripts/create-checkpoint.py "Sprint 3 complete"`
**Timeline Regeneration:**
- Update timeline: `cd scripts && python generate-enhanced-timeline.py`
Guidelines:
- Group by category
- One-line command descriptions
- Use backticks for commands
- 5-10 tasks maximum
5. Content Guidelines
5.1 Writing Style
DO:
✅ Use direct, imperative language
- "Read PROJECT-PLAN.md first" (not "You should read...")
- "Run
npm testbefore committing" (not "It's recommended to run...") - "Follow conventional commits" (not "Please consider following...")
✅ Prefer bullet points over paragraphs
✅ GOOD:
- Setup: `npm install`
- Dev server: `npm run dev`
- Tests: `npm test`
❌ BAD:
To set up the project, run npm install. Then you can start the development server with npm run dev. For testing, use npm test.
✅ Use bold for emphasis on critical information
**IMPORTANT:** Always run tests before committing code.
**CRITICAL:** Commit in submodule first, then update master pointer.
✅ Keep sentences short (10-15 words average)
✅ GOOD: "Run tests before committing. Use `npm test` for full suite."
❌ BAD: "Before you commit your code, you should run the test suite to make sure everything works, which you can do with npm test."
✅ Use active voice
✅ GOOD: "Run the tests"
❌ BAD: "Tests should be run"
DON'T:
❌ Write long explanatory paragraphs
❌ BAD:
This project uses a specific git workflow that involves working with submodules. When you make changes in a submodule, you need to commit those changes first in the submodule itself, and then you need to go back to the master repository and update the pointer to the new commit in the submodule. This is important because if you don't do this, your changes won't be reflected in the master repository and other developers won't see your updates.
❌ Include detailed code examples (link instead)
❌ BAD:
## Authentication Example
Here's how to implement authentication:
\`\`\`typescript
// 50+ lines of code
\`\`\`
✅ GOOD:
## Authentication
See [docs/guides/AUTHENTICATION.md](docs/guides/AUTHENTICATION.md) for implementation examples.
❌ Use passive voice
❌ BAD: "Tests should be run before code is committed"
✅ GOOD: "Run tests before committing code"
❌ Add unnecessary politeness
❌ BAD: "Please consider reading the documentation"
✅ GOOD: "Read documentation: [link]"
5.2 What to Include
✅ INCLUDE:
- High-level project context - Purpose, phase, tech stack (5-10 lines)
- Links to detailed documentation - Essential reading list (5-10 lines)
- Critical constraints or requirements - IMPORTANT/CRITICAL callouts (3-5 items)
- Navigation to key resources - Directory structure, workflow docs (10-20 lines)
- Essential workflow patterns - Git workflow, common tasks (10-20 lines)
Total included content: 50-100 lines for project CLAUDE.md, 100-150 for framework CLAUDE.md
5.3 What to Exclude
❌ EXCLUDE:
1. Code Style Guidelines - Use linters instead
Official Anthropic guidance:
"Never send an LLM to do a linter's job - LLMs are comparably expensive and incredibly slow compared to traditional linters."
❌ DON'T include in CLAUDE.md:
## Code Style
- Use 2 spaces for indentation
- Always use semicolons
- Max line length: 100 characters
- Use single quotes for strings
- [50+ more style rules]
✅ DO instead:
## Code Quality
- Linting: `npm run lint` (ESLint with auto-fix)
- Formatting: `npm run format` (Prettier)
- Type checking: `npm run typecheck` (TypeScript strict mode)
Configuration: See .eslintrc.js, .prettierrc
Why: Linters enforce this in <1 second with 100% accuracy. LLMs take 10+ seconds, cost tokens, and may miss violations.
2. Sensitive Data
❌ NEVER include:
- API keys, tokens, credentials
- Database connection strings
- SSL certificates or private keys
- Security vulnerabilities or exploits
- Internal system details (IP addresses, server names)
Why: CLAUDE.md may be committed to version control and shared. Even if gitignored, AI conversation history may be logged.
3. Task-Specific Instructions
❌ DON'T (applies to one feature only):
## User Authentication Implementation
When implementing user authentication:
1. Use bcrypt for password hashing
2. Store sessions in Redis
3. Implement JWT refresh tokens
4. [20+ more detailed steps]
✅ DO (reference separate guide):
## Authentication
See [docs/guides/AUTHENTICATION.md](docs/guides/AUTHENTICATION.md) for implementation guide.
Why: Task-specific instructions bloat CLAUDE.md but only help for one feature. Create separate docs or custom slash commands instead.
4. Documentation File @-mentions
Official Anthropic guidance:
"Don't @-mention documentation files in your CLAUDE.md as this bloats the context window by embedding the entire file on every run."
❌ WRONG:
For API details, see @docs/API.md
✅ CORRECT:
For API details, see [docs/API.md](docs/API.md)
Why: @filename loads the entire file into context on every turn. Markdown links let Claude load on-demand only when needed.
5. Excessive Content Without Testing
Official Anthropic guidance:
"A common mistake is adding extensive content without iterating on its effectiveness."
❌ Anti-pattern:
- Generate 300-line CLAUDE.md with /init
- Add every possible command and convention
- Never test if it actually helps Claude perform better
- Never measure token cost impact
✅ Best practice:
- Start with 50-line minimal CLAUDE.md
- Add one section at a time
- Test each addition: does Claude follow instructions better?
- Measure: did performance improve or degrade?
- Remove sections that don't help
6. Long Narrative Paragraphs
Official Anthropic guidance:
"You're writing for Claude, not onboarding a junior dev - use short, declarative bullet points and don't write long, narrative paragraphs."
❌ WRONG (paragraph format):
This project is a comprehensive platform for managing user accounts and authentication. It uses TypeScript for type safety and React for the frontend framework. The backend is built with Node.js and Express, and we use PostgreSQL for the database. We follow a microservices architecture with separate services for authentication, user management, and billing.
✅ CORRECT (bullet format):
## Tech Stack
- Frontend: TypeScript, React
- Backend: Node.js, Express
- Database: PostgreSQL
- Architecture: Microservices (auth, users, billing)
7. Theoretical Guidance
❌ Avoid:
- "Best practices" that don't match your actual workflow
- Generic programming advice
- Industry standards not relevant to your project
- Unrelated project documentation
✅ Include:
- Specific conventions your team actually follows
- Commands you actually use
- Workflows specific to this codebase
- Critical warnings about actual gotchas in your system
5.4 Special Guidelines
Never use @-mentions in CLAUDE.md:
❌ DON'T: "See @docs/API.md for details" (loads entire file on every turn)
✅ DO: "See [API documentation](docs/API.md)" (reference only, load on-demand)
❌ DON'T: "For examples, see @examples/auth.ts"
✅ DO: "For examples, see [examples/auth.ts](examples/auth.ts)"
Token cost difference:
- @-mention: 2000+ tokens per turn (file content embedded)
- Markdown link: 10 tokens per turn (reference only)
- 100-turn session: 200,000+ vs 1,000 tokens = 99.5% savings
Link, don't embed:
❌ DON'T (50-line code example embedded):
## Authentication Example
\`\`\`typescript
import { hash, compare } from 'bcrypt';
// ... 50+ lines of implementation code
\`\`\`
✅ DO (link to example):
## Authentication
Example implementation: [examples/authentication.ts](examples/authentication.ts:15-65)
Token savings: 500+ tokens per turn → 10 tokens per turn = 98% reduction
Use emphasis sparingly:
✅ GOOD (2-3 critical items):
**IMPORTANT:** Always run tests before committing.
**CRITICAL:** Commit in submodule first, then update master.
❌ BAD (overuse reduces effectiveness):
**IMPORTANT:** Use linting.
**CRITICAL:** Follow naming conventions.
**IMPORTANT:** Write tests.
**CRITICAL:** Update documentation.
**IMPORTANT:** Review code.
[15+ more IMPORTANT/CRITICAL items]
Why overuse is bad: When everything is important, nothing is important. Use emphasis for top 3-5 critical patterns only.
6. Multi-Session Continuity Patterns
6.1 Recommended Companion Files
Based on Anthropic's long-running agents research, use these companion files alongside CLAUDE.md:
claude-progress.txt (Chronological work log)
Format: Human-readable chronological append-only log
Purpose: Track what was accomplished each session
Example:
2025-12-03: Implemented user authentication with JWT tokens. Tests passing.
2025-12-03: Added password reset functionality. Email sending works locally.
2025-12-04: Fixed bug in session timeout. Deployed to staging.
2025-12-04: Code review feedback: refactor auth middleware. Started refactoring.
Best practices:
- One line per major accomplishment
- Date prefix (YYYY-MM-DD)
- Append only (never delete or modify previous entries)
- Factual, concise (10-20 words per line)
- Written by Claude at end of each session
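The append-only, date-prefixed discipline above can be captured in a tiny helper. This is an illustrative sketch (the function name and default path are assumptions, not part of any official tooling):

```python
from datetime import date

def log_progress(message: str, path: str = "claude-progress.txt") -> None:
    """Append one dated, factual line to the progress log.

    Opens in append mode so earlier entries are never modified,
    preserving the chronological record across sessions.
    """
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"{date.today():%Y-%m-%d}: {message}\n")
```

Example: `log_progress("Fixed bug in session timeout. Deployed to staging.")` appends a line in exactly the format shown above.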
feature_list.json (Structured state)
Official Anthropic guidance:
"The model is less likely to inappropriately change or overwrite JSON files compared to Markdown files."
Format: JSON format for feature status tracking
Purpose: Machine-readable feature completion state
Example:
{
"features": [
{
"category": "authentication",
"description": "User login system",
"verification": [
"Run dev server",
"Navigate to /login",
"Test valid credentials",
"Verify dashboard redirect"
],
"passes": true
},
{
"category": "billing",
"description": "Stripe payment integration",
"verification": [
"Test checkout flow",
"Verify webhook handling"
],
"passes": false
}
]
}
Best practices:
- JSON format (not Markdown) - less likely to be corrupted
- Boolean `passes` field for clear status
- Verification steps as checklist
- Category grouping
- Never delete features (mark as deprecated if needed)
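Because the file is JSON, session-start tooling can read it reliably. A hypothetical helper, assuming the schema shown above, that lists what still needs work:

```python
import json

def incomplete_features(path: str) -> list[str]:
    """Return descriptions of features whose 'passes' flag is false.

    Useful at session start to pick the next single feature to
    work on, per the one-feature-per-session discipline.
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return [feat["description"]
            for feat in data["features"]
            if not feat["passes"]]
```

Against the example file above, this would return only the Stripe payment integration, since its `passes` field is false.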
init.sh (Environment setup)
Format: Bash script executed at session start
Purpose: Verify environment and setup for development
Example:
#!/bin/bash
# Verify Node.js is installed
if ! command -v node &> /dev/null; then
    echo "ERROR: Node.js not installed"
    exit 1
fi

# Check major version
NODE_VERSION=$(node -v | cut -d'v' -f2 | cut -d'.' -f1)
if [ "$NODE_VERSION" -lt 18 ]; then
    echo "ERROR: Node.js 18+ required (found: $(node -v))"
    exit 1
fi

# Install dependencies if needed
if [ ! -d "node_modules" ]; then
    echo "Installing dependencies..."
    npm install
fi

# Start dev server in the background
echo "Starting development server..."
npm run dev &

# Run baseline tests
echo "Running baseline tests..."
npm test
Best practices:
- Verify tool versions (Node, Python, etc.)
- Check dependencies installed
- Start development server
- Run baseline tests
- Exit with error codes on failure
6.2 Session Continuity Strategies
Opening Ritual (Every Session):
Official Anthropic guidance suggests Claude should do this at the start of each session:
# Claude's mental checklist:
1. Verify working directory (pwd, ls)
2. Read progress notes (cat claude-progress.txt | tail -10)
3. Examine feature list (cat feature_list.json)
4. Check recent git commits (git log --oneline -10)
5. Launch development server (npm run dev or equivalent)
6. Run baseline tests (npm test)
CLAUDE.md can remind Claude:
## Session Startup
Run initialization: `./scripts/init.sh`
**Session checklist:**
1. Read `claude-progress.txt` (last 10 lines)
2. Review `feature_list.json` for current state
3. Check `git log --oneline -10` for recent work
4. Verify dev server running
Incremental Work Discipline:
Official Anthropic guidance:
- Focus on single features per session
- Prevents context exhaustion mid-feature
- Clear handoff points between sessions
CLAUDE.md can enforce:
## Work Discipline
- One feature per session (prevents context exhaustion)
- Always conclude with git commits
- Update `claude-progress.txt` at session end
- Update `feature_list.json` status before ending
Clean State Requirements:
Official Anthropic guidance:
- Conclude with git commits
- Update progress files
- Enable rollback if needed
- Mimic professional developer practices
6.3 Git-Based Continuity
Git history as session memory:
Claude can reconstruct context from git history:
# Last 20 commits show recent progress
git log --oneline -20
# Detailed view of last commit
git log -1 --stat
# Changes in last 3 commits
git log -3 --pretty=format:"%h - %s" --stat
CLAUDE.md should encourage this:
## Multi-Session Continuity
**Context restoration:**
- Recent progress: `git log --oneline -20`
- Current work: `cat claude-progress.txt | tail -10`
- Feature status: `cat feature_list.json`
**Before ending session:**
- Commit all changes with conventional commits
- Update `claude-progress.txt` (append accomplishments)
- Update `feature_list.json` (mark features complete/incomplete)
Checkpoint System:
For CODITECT projects:
## Checkpoints
Create checkpoints after major milestones:
\`\`\`bash
python3 .coditect/scripts/create-checkpoint.py "Sprint 3 complete" --auto-commit
\`\`\`
Restore from checkpoint:
\`\`\`bash
python3 .coditect/scripts/restore-session.py [checkpoint-id]
\`\`\`
Benefits:
- Recoverable state across sessions
- Clear progress markers
- Rollback capability
- Team synchronization
7. Quality Grading Criteria
Grade A (90-100% Compliance) - Exemplary
Criteria:
Length & Token Efficiency (30 points)
- Under 100 lines (project) or 150 lines (submodule)
- Estimated <1500 tokens per load
- Zero redundancy with linked documentation
- Every line serves a clear purpose
Progressive Disclosure (25 points)
- All detailed content in separate files
- Clear references to Level 2/3 resources
- @-mentions never used (markdown links only)
- Examples linked, not embedded
Required Sections (20 points)
- Project overview present and concise (5-10 lines)
- Essential reading section with ordered links (3-5 docs)
- Directory structure (if applicable, 15-20 lines)
- All links functional and relative paths
Content Quality (15 points)
- Bullet points over paragraphs throughout
- Direct, imperative language
- Short sentences (10-15 words avg)
- Active voice consistently
- Bold used for emphasis (2-3 critical items only)
Writing Style (10 points)
- No code style guidelines (deferred to linters)
- No long code examples (linked instead)
- No sensitive data
- No @-mentions
Grade A Example Characteristics:
- Immediately useful to AI agents on first read
- Zero wasted lines or redundant information
- Every section has clear purpose
- Easy to navigate and find information
- Token-efficient (50-100 lines project, 100-150 submodule)
Grade B (80-89% Compliance) - Production Ready
Criteria:
Length & Token Efficiency (25 points)
- 100-150 lines (project) or 150-200 lines (submodule)
- Estimated 1500-2000 tokens per load
- Minimal redundancy with linked docs
- Most lines serve clear purpose
Progressive Disclosure (20 points)
- Most detailed content in separate files
- Most references to Level 2/3 resources clear
- @-mentions avoided (1-2 acceptable if necessary)
- Most examples linked
Required Sections (20 points)
- Project overview present (may be slightly verbose, 10-15 lines)
- Essential reading section present (5-7 docs acceptable)
- Directory structure if applicable
- Most links functional
Content Quality (20 points)
- Mostly bullet points (some paragraphs acceptable)
- Mostly direct language (minor verbosity OK)
- Generally short sentences (some longer acceptable)
- Mostly active voice
Writing Style (15 points)
- Code style mostly deferred to linters (minor exceptions OK)
- Most examples linked (1-2 short embedded OK)
- No sensitive data
- @-mentions avoided
Acceptable Minor Issues:
- Slightly over line budget (10-15% over)
- Some bullet points could be more concise (not paragraphs)
- One or two sections could link to detailed docs instead of embedding
- Minor redundancy with linked documentation
Grade B verdict: Production-ready with minor optimization opportunities.
Grade C (70-79% Compliance) - Needs Improvement
Criteria:
Length & Token Efficiency (15 points)
- 150-250 lines (exceeds recommendations)
- Estimated 2000-3000 tokens per load
- Some redundancy with linked docs
- Some unnecessary content
Progressive Disclosure (15 points)
- Some detailed content embedded in CLAUDE.md
- Limited references to separate docs
- Some @-mentions used
- Some examples embedded
Required Sections (15 points)
- Project overview present but verbose (15-20 lines)
- Essential reading section present (may be unordered or too long)
- Directory structure may be too detailed
- Some broken links
Content Quality (15 points)
- Mix of bullets and paragraphs
- Some verbose descriptions
- Some long sentences
- Mix of active and passive voice
Writing Style (10 points)
- Some code style guidelines included
- Some code examples embedded
- No sensitive data
- Some @-mentions
Common Issues:
- Over recommended line limit by 50-100%
- Embedded examples instead of links (10-20 lines)
- Paragraphs instead of bullets in some sections
- Some redundancy with linked documentation
- Directory structure too detailed (30+ lines)
Grade C verdict: Functional but needs refactoring for token efficiency.
Grade D (60-69% Compliance) - Significant Issues
Criteria:
Length & Token Efficiency (10 points)
- Over 250 lines (significantly exceeds recommendations)
- Estimated 3000-4000+ tokens per load
- Significant redundancy with linked docs
- Much unnecessary content
Progressive Disclosure (10 points)
- Most detailed content embedded
- Few references to separate docs
- Many @-mentions used
- Most examples embedded
Required Sections (10 points)
- Project overview missing or very verbose (30+ lines)
- Essential reading missing or poorly organized
- Directory structure missing or extremely detailed
- Many broken links
Content Quality (10 points)
- Mostly paragraphs
- Very verbose
- Many long sentences
- Mostly passive voice
Writing Style (5 points)
- Extensive code style guidelines
- Many code examples embedded
- May have sensitive data
- Extensive @-mentions
Critical Issues:
- 250-500 lines (2-5x recommended length)
- Code style guidelines embedded (50+ lines)
- Large code blocks embedded (20+ lines each)
- @-mentions loading large files on every turn
- No progressive disclosure strategy
- Significant overlap with README.md or other docs
Grade D verdict: Major issues require substantial rework.
Grade F (<60% Compliance) - Does Not Meet Standards
Criteria:
Disqualifying factors (any one triggers Grade F):
- Over 500 lines (5x+ recommended length)
- Missing all required sections
- Contains sensitive data (API keys, credentials)
- Extensive @-mentions causing massive context bloat
- 5000+ tokens per load
- Requires complete rewrite
Common Grade F characteristics:
- Generated by /init and never reviewed
- Dumps all documentation into CLAUDE.md
- No progressive disclosure
- Bloated with irrelevant content
- Structurally broken or unreadable
Grade F verdict: Complete rewrite required using templates in this standard.
8. Validation Checklist
8.1 Structure Validation
File Format (Required)
- File named `CLAUDE.md` (exact capitalization)
- UTF-8 encoding with LF line endings
- Under recommended line limit for location
- All sections use Markdown formatting
Section Presence (Required)
- Project overview section present (5-10 lines)
- Essential reading section present (3-5 docs)
- Directory structure (if non-standard or 5+ dirs)
Section Order (Recommended)
- Project overview at top (after title)
- Essential reading second
- Directory structure third
- Optional sections follow
8.2 Content Validation
Project Overview (Required)
- Project name and purpose clear
- Current development phase stated
- Tech stack listed (3-5 key technologies)
- Link to detailed docs (PROJECT-PLAN.md or README.md)
- Length: 5-10 lines
Essential Reading (Required)
- Ordered list (numbered 1, 2, 3...)
- 3-5 documents maximum (not 10+)
- All links use relative paths
- All links functional (files exist)
- Brief descriptions (5-10 words each)
Directory Structure (If Present)
- Code block format with tree structure
- Annotations for non-obvious directories
- 2-3 levels deep maximum (not complete tree)
- Length: 15-20 lines
Optional Sections (If Present)
- Quick Start: 10-15 lines, links to detailed setup
- AI Agent Best Practices: 10-15 lines, bullet points
- Git Workflow: 5-10 lines, links to detailed guide
- Common Tasks: 10-15 lines, grouped by category
8.3 Token Efficiency Validation
Line Count
- User CLAUDE.md: Under 50 lines
- Project CLAUDE.md: Under 150 lines
- Submodule CLAUDE.md: Under 200 lines
Token Budget
- Estimated tokens calculated (`wc -w filename` × 1.3)
- User: ~500 tokens per load
- Project: ~1,000-1,500 tokens per load
- Submodule: ~1,500-2,000 tokens per load
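The word-count heuristic above can be scripted directly. A sketch using the `wc -w` × 1.3 approximation and the per-level budgets from this checklist; the 1.3 multiplier is the standard's rough rule of thumb, not an exact tokenizer:

```python
"""Sketch of the token estimate and budget check from Section 8.3."""

BUDGETS = {"user": 500, "project": 1500, "submodule": 2000}

def estimate_tokens(text: str) -> int:
    """Approximate tokens as word count × 1.3, as recommended above."""
    return round(len(text.split()) * 1.3)

def check_budget(text: str, level: str) -> tuple[int, bool]:
    """Return (estimated tokens, within-budget?) for a CLAUDE.md level."""
    tokens = estimate_tokens(text)
    return tokens, tokens <= BUDGETS[level]

text = "word " * 1000  # stand-in for a ~1000-word CLAUDE.md
tokens, ok = check_budget(text, "project")
print(tokens, ok)  # 1300 True (within the 1,500-token project budget)
```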
Progressive Disclosure Applied
- No large code blocks (>5 lines embedded)
- No extensive examples (linked instead)
- No API documentation (linked instead)
- No detailed guides (linked instead)
- No redundancy with linked documentation
@-Mentions Avoided
- Zero @-mentions of documentation files
- All documentation references use markdown links
- Example: `[docs/API.md](docs/API.md)`, not `@docs/API.md`
8.4 Quality Validation
Writing Style
- Bullet points used over paragraphs
- Direct, imperative language throughout
- Short sentences (10-15 words average)
- Active voice consistently
- Bold used for emphasis (2-3 critical items only)
Content Appropriateness
- No code style guidelines (deferred to linters)
- No sensitive data (API keys, credentials)
- No task-specific instructions (use separate docs)
- No @-mentions of large files
- No theoretical guidance not specific to project
Navigation
- All links use relative paths (not absolute)
- All links functional (files exist)
- Clear hierarchy (sections properly nested)
- Easy to scan and find information
Testing
- Tested with AI agent in fresh Claude Code session
- AI agent successfully navigates to referenced resources
- AI agent follows critical instructions
- No confusion or errors during navigation
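Link validation from this checklist can be partially automated. A sketch that extracts plain `[text](path)` links and flags absolute paths or missing targets; the regex is a deliberate simplification (reference-style links and anchors are out of scope):

```python
"""Sketch of a relative-link validator for CLAUDE.md files."""
import re
from pathlib import Path

LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)")

def validate_links(markdown: str, root: str = ".") -> list[str]:
    """Return a list of problems; an empty list means all links pass."""
    problems = []
    for target in LINK_RE.findall(markdown):
        if target.startswith(("http://", "https://")):
            continue  # external URLs are out of scope here
        if target.startswith("/"):
            problems.append(f"absolute path: {target}")
        elif not (Path(root) / target).exists():
            problems.append(f"missing file: {target}")
    return problems

doc = "See [README](README.md) and [bad](/etc/passwd)."
print(validate_links(doc))
```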
8.5 Grade Assignment
Calculate total score:
1. **Count compliance items from Grade criteria:**
   - Grade A: 30 items (90-100 points)
   - Grade B: 25 items (80-89 points)
   - Grade C: 15 items (70-79 points)
   - Grade D: 10 items (60-69 points)
   - Grade F: <60 points
2. **Assign grade based on percentage:**
   - 90-100%: Grade A (Exemplary)
   - 80-89%: Grade B (Production Ready)
   - 70-79%: Grade C (Needs Improvement)
   - 60-69%: Grade D (Significant Issues)
   - <60%: Grade F (Does Not Meet Standards)
3. **Automatic Grade F disqualifiers:**
   - Over 500 lines
   - Contains sensitive data
   - Missing all required sections
   - 5000+ tokens per load
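The grade assignment above can be expressed as a small function, with the automatic disqualifiers checked before the percentage bands. A sketch:

```python
"""Sketch of the Section 8.5 grade assignment rules."""

def assign_grade(percent: float, *, lines: int = 0, tokens: int = 0,
                 has_sensitive_data: bool = False,
                 missing_all_sections: bool = False) -> str:
    """Return the letter grade for a CLAUDE.md compliance score."""
    # Automatic Grade F disqualifiers override the percentage.
    if lines > 500 or tokens >= 5000 or has_sensitive_data or missing_all_sections:
        return "F"
    if percent >= 90:
        return "A"
    if percent >= 80:
        return "B"
    if percent >= 70:
        return "C"
    if percent >= 60:
        return "D"
    return "F"

print(assign_grade(94, lines=98, tokens=1100))   # A
print(assign_grade(94, lines=547, tokens=6000))  # F (disqualified on length)
```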
Record grade in project documentation:
## CLAUDE.md Compliance
**Last Validated:** 2025-12-03
**Grade:** A (94% compliance)
**Line Count:** 98 lines (target: <150)
**Token Budget:** ~1,100 tokens per load (target: <1,500)
**Issues:** None
**Next Review:** 2026-01-03
9. Examples
9.1 Grade A Example: Minimal Project CLAUDE.md
File: PROJECT_ROOT/CLAUDE.md (Lines: 42, Tokens: ~450)
# Acme Project Management Platform
**Status:** Beta Testing (Week 3 of 4)
**Tech Stack:** TypeScript, React, PostgreSQL, Redis
**Public Launch:** March 11, 2026
---
## Essential Reading
**READ FIRST (in order):**
1. [PROJECT-PLAN.md](docs/PROJECT-PLAN.md) - Complete rollout strategy
2. [TASKLIST.md](docs/TASKLIST.md) - Current progress tracking
3. [ARCHITECTURE.md](docs/ARCHITECTURE.md) - System design
---
## Directory Structure
\`\`\`
acme-platform/
├── src/ # Application source (TypeScript)
├── docs/ # Documentation
├── tests/ # Test suites
└── scripts/ # Automation
\`\`\`
---
## Quick Start
**Common Tasks:**
- Run tests: `npm test`
- Start dev: `npm run dev` (localhost:3000)
- Build: `npm run build`
**Git Workflow:** See [CONTRIBUTING.md](CONTRIBUTING.md)
---
## AI Agent Best Practices
- Consult PROJECT-PLAN.md before major changes
- Update TASKLIST.md as tasks complete
- Use conventional commits (feat, fix, docs, refactor)
- Request human approval for architecture changes
---
**Last Updated:** 2025-12-03
**Lines:** 42 (Grade A: Under 100)
Why Grade A:
- Only 42 lines (well under 100-line target)
- All required sections present and concise
- Perfect progressive disclosure (links to detailed docs)
- Zero code examples or style guidelines
- Token-efficient writing (bullet points, short sentences)
- Clear navigation structure
- No @-mentions
- Tested and verified effective
Token cost: ~450 tokens per load × 100 turns = 45,000 tokens (22.5% of 200K context) ✅
9.2 Grade B Example: Standard Project CLAUDE.md
File: PROJECT_ROOT/CLAUDE.md (Lines: 128, Tokens: ~1,400)
# E-Commerce Backend API
**Status:** Production
**Phase:** Active Development (v2.3.0)
**Tech Stack:** Python 3.11, FastAPI, PostgreSQL, Redis, Docker
**Documentation:** See [PROJECT-PLAN.md](docs/project-management/PROJECT-PLAN.md)
---
## Essential Reading
**READ FIRST (in order):**
1. [PROJECT-PLAN.md](docs/project-management/PROJECT-PLAN.md) - Project roadmap
2. [ARCHITECTURE.md](docs/architecture/ARCHITECTURE.md) - System design
3. [API-REFERENCE.md](docs/api/API-REFERENCE.md) - Complete API docs
4. [DEPLOYMENT.md](docs/deployment/DEPLOYMENT.md) - Deployment procedures
5. [TASKLIST.md](docs/project-management/TASKLIST.md) - Current tasks
---
## Directory Structure
\`\`\`
ecommerce-api/
├── app/ # Application source code
│ ├── models/ # Database models (SQLAlchemy)
│ ├── routes/ # API endpoints
│ ├── services/ # Business logic
│ └── utils/ # Utility functions
├── tests/ # Test suites (pytest)
├── docs/ # Documentation
│ ├── api/ # API reference
│ ├── architecture/ # System design docs
│ └── guides/ # User guides
├── scripts/ # Automation scripts
└── docker/ # Docker configurations
\`\`\`
---
## Quick Start
**Setup:**
\`\`\`bash
# Install dependencies
uv sync
# Setup database
python scripts/setup_db.py
# Start dev server
uvicorn app.main:app --reload
\`\`\`
**Development:**
- Dev server: `uvicorn app.main:app --reload` (localhost:8000)
- API docs: http://localhost:8000/docs (Swagger UI)
- Run tests: `pytest`
- Coverage: `pytest --cov=app tests/`
**Code Quality:**
- Lint: `ruff check .`
- Format: `ruff format .`
- Type check: `pyright`
**Complete setup guide:** See [docs/guides/SETUP.md](docs/guides/SETUP.md)
---
## Git Workflow
**Branch naming:** `[type]/[short-description]`
- Types: feature, fix, docs, refactor, test
**Commit format:** `[type]([scope]): [subject]`
- Example: `feat(auth): Add JWT refresh token support`
**Before committing:**
- Run tests: `pytest`
- Run linting: `ruff check . && ruff format .`
- Update CHANGELOG.md if user-facing changes
**Complete workflow:** See [CONTRIBUTING.md](CONTRIBUTING.md)
---
## Common Tasks
**Database:**
- Create migration: `alembic revision --autogenerate -m "Description"`
- Run migrations: `alembic upgrade head`
- Rollback: `alembic downgrade -1`
**Testing:**
- Run all tests: `pytest`
- Run specific test: `pytest tests/test_auth.py`
- Watch mode: `pytest-watch`
**Deployment:**
- Build Docker image: `docker build -t ecommerce-api .`
- Run locally: `docker-compose up`
- Deploy to staging: `./scripts/deploy-staging.sh`
**Complete commands:** See [docs/guides/COMMANDS.md](docs/guides/COMMANDS.md)
---
## AI Agent Best Practices
- Read PROJECT-PLAN.md and ARCHITECTURE.md before major changes
- Update TASKLIST.md checkboxes as tasks complete
- Use conventional commits consistently
- Always run tests before committing
- Request human approval for:
- Database schema changes
- API endpoint modifications
- External service integrations
- Security-related changes
---
## Standards
**Code Quality:**
- Type hints required for all functions
- Docstrings required for public APIs (Google style)
- Test coverage target: 80%+
- PEP 8 naming conventions
**Configuration:** See .ruff.toml, pyproject.toml, .editorconfig
---
**Last Updated:** 2025-12-03
**Lines:** 128 (Grade B: Under 150)
**Token Budget:** ~1,400 tokens per load
Why Grade B:
- 128 lines (slightly over ideal 100, but under 150 limit)
- All required sections present
- Good progressive disclosure (most content linked)
- Token-efficient writing
- Some minor verbosity in Quick Start section (could link to setup guide instead)
- Directory structure slightly detailed (acceptable for complex project)
- 5 docs in Essential Reading (could reduce to 3-4)
Token cost: ~1,400 tokens per load × 100 turns = 140,000 tokens (70% of 200K context) ⚠️ (approaching limit)
Improvement opportunities:
- Reduce Essential Reading to 3-4 docs (PROJECT-PLAN, ARCHITECTURE, API-REFERENCE)
- Link Quick Start commands to SETUP.md instead of embedding
- Simplify directory structure (remove subdirectory descriptions)
9.3 Grade F Anti-Pattern Example
File: PROJECT_ROOT/CLAUDE.md (Lines: 547, Tokens: ~6,000+)
DO NOT CREATE FILES LIKE THIS - This is what to AVOID:
# My Awesome Project
This is my awesome project that does lots of amazing things. It's built with modern technologies and follows all the best practices. We've been working on this for 6 months and have implemented many features. The team is really excited about the direction we're going.
[... 20 more lines of narrative introduction ...]
## Project History
This project started in June 2024 when we realized we needed a better solution for managing our workflows. We evaluated several options including...
[... 50 lines of historical context ...]
## Complete Technology Stack
Here's every single technology we use:
- Frontend:
- React 18.2.0
- TypeScript 5.1.6
- Vite 4.4.5
- TailwindCSS 3.3.2
- [50+ more dependencies with versions]
- Backend:
- Node.js 20.5.0
- Express 4.18.2
- [40+ more dependencies with versions]
[... 80 lines of complete dependency list ...]
## Code Style Guidelines
**Important:** We have very specific code style requirements that must be followed at all times.
### JavaScript/TypeScript Style
1. **Indentation:** Use 2 spaces for indentation. Never use tabs.
2. **Semicolons:** Always use semicolons at the end of statements.
3. **Quotes:** Use single quotes for strings, except when using template literals.
4. **Variable Naming:**
- Use camelCase for variables and functions
- Use PascalCase for classes and components
- Use UPPER_SNAKE_CASE for constants
- Never use Hungarian notation
5. **Function Declaration:**
- Prefer arrow functions for anonymous functions
- Use function declarations for named functions
- Always use explicit return types in TypeScript
[... 150 more lines of detailed code style guidelines ...]
## Complete API Documentation
### User Endpoints
#### GET /api/users
Returns a list of users from the database.
**Parameters:**
- `page` (integer, optional) - The page number for pagination. Default: 1
- `limit` (integer, optional) - Number of results per page. Default: 10. Max: 100
- `sort` (string, optional) - Field to sort by. Options: 'name', 'email', 'created_at'
- `order` (string, optional) - Sort order. Options: 'asc', 'desc'. Default: 'asc'
- `search` (string, optional) - Search query for filtering users by name or email
**Response:**
\`\`\`json
{
"users": [
{
"id": "uuid-string",
"name": "John Doe",
"email": "john@example.com",
"created_at": "2025-01-15T10:30:00Z",
"updated_at": "2025-01-15T10:30:00Z",
"role": "user",
"status": "active"
}
],
"pagination": {
"total": 100,
"page": 1,
"limit": 10,
"pages": 10
}
}
\`\`\`
**Error Responses:**
400 Bad Request:
\`\`\`json
{
"error": "Invalid parameters",
"details": {
"page": "Must be a positive integer"
}
}
\`\`\`
401 Unauthorized:
\`\`\`json
{
"error": "Authentication required"
}
\`\`\`
[... 200 more lines of complete API endpoint documentation ...]
## Complete Setup Guide
To set up this project, follow these detailed steps:
1. **Clone the repository**
First, you need to clone the repository from GitHub. You can do this by running:
\`\`\`bash
git clone https://github.com/example/my-project.git
cd my-project
\`\`\`
2. **Install Node.js**
This project requires Node.js version 20.5.0 or higher. If you don't have Node.js installed, go to nodejs.org and download the installer for your operating system. Run the installer and follow the prompts.
[... 100 more lines of extremely detailed setup instructions ...]
## Every Possible Command
Here's every single command you might ever need:
### Development Commands
- `npm run dev` - Start development server
- `npm run dev:watch` - Start development server with file watching
- `npm run dev:debug` - Start development server with debugging enabled
- [50+ more variations of dev commands]
### Build Commands
- `npm run build` - Build for production
- `npm run build:staging` - Build for staging environment
- `npm run build:dev` - Build for development
- [30+ more build variations]
[... 80 more lines of every possible command ...]
## Team Members
Here's everyone who has ever worked on this project:
- John Doe (john@example.com) - Lead Developer - Joined June 2024
- Jane Smith (jane@example.com) - Frontend Developer - Joined July 2024
- [20+ more team members with details]
## Meeting Notes
### 2024-06-15 Kickoff Meeting
- Discussed project scope
- Decided on technology stack
- [30 lines of meeting notes]
### 2024-06-22 Architecture Review
[... 200 lines of historical meeting notes ...]
## Troubleshooting
Here's every possible problem you might encounter and how to fix it:
### Problem: npm install fails
This could happen for several reasons. First, check if you have Node.js installed...
[... 50 lines for each troubleshooting scenario ...]
---
**Last Updated:** 2025-12-03
**Lines:** 547 ❌ (5.5x over target)
Why Grade F (Automatic Failure):
1. **Catastrophic length:** 547 lines (5.5x over 100-line target for project CLAUDE.md)
2. **Token disaster:** ~6,000+ tokens per load
   - 100-turn session: 600,000+ tokens (3x the entire 200K context window)
   - Makes the system unusable
3. **Massive violations:**
   - Long narrative paragraphs instead of bullets
   - Complete dependency list (should be in package.json)
   - 150+ lines of code style guidelines (should be in linters)
   - 200+ lines of API documentation (should be in docs/API.md)
   - 100+ lines of setup instructions (should be in SETUP.md)
   - Complete command reference (should be in separate doc)
   - Team member list (should be in CONTRIBUTORS.md)
   - Historical meeting notes (irrelevant to AI agent)
   - Extensive troubleshooting (should be in docs/TROUBLESHOOTING.md)
4. **Zero progressive disclosure:**
   - Everything embedded instead of linked
   - No references to separate documentation files
   - Treats CLAUDE.md as encyclopedia instead of index
5. **Anti-patterns throughout:**
   - Paragraphs instead of bullets
   - Passive voice
   - Verbose explanations
   - Historical context (irrelevant)
   - Theoretical guidance
Impact on Claude Code:
- Context window fills up in 30-50 turns (instead of 100+)
- Performance degrades rapidly
- Costs 10-20x more tokens than necessary
- Critical information buried in noise
- AI agent confused by excessive content
Required action: Complete rewrite using Grade A template.
10. Migration Guide
10.1 Refactoring Existing CLAUDE.md Files
When to refactor: Current CLAUDE.md exceeds recommended limits
Step 1: Measure Current State
```bash
# Count lines
wc -l CLAUDE.md

# Estimate tokens (rough: words × 1.3)
wc -w CLAUDE.md

# Results interpretation:
# - Under 100 lines: Acceptable for project CLAUDE.md
# - 100-150 lines: Review for optimization opportunities
# - 150-250 lines: Refactoring recommended
# - Over 250 lines: Refactoring REQUIRED
```
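The interpretation thresholds map directly to a small helper. A sketch using the Step 1 values, with boundary counts assigned to the milder action:

```python
"""Sketch of the Step 1 line-count interpretation for a project CLAUDE.md."""

def refactor_action(line_count: int) -> str:
    """Return the recommended action for a given line count."""
    if line_count < 100:
        return "Acceptable"
    if line_count <= 150:
        return "Review for optimization opportunities"
    if line_count <= 250:
        return "Refactoring recommended"
    return "Refactoring REQUIRED"

for n in (42, 128, 200, 547):
    print(n, refactor_action(n))
```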
Step 2: Identify Content for Extraction
Audit current CLAUDE.md and identify sections to extract:
Code Style Guidelines → Extract to linter configs
❌ Current in CLAUDE.md (50+ lines):
## Code Style
- Use 2 spaces for indentation
- Always use semicolons
- [48 more style rules]
✅ After refactoring:
## Code Quality
- Linting: `npm run lint` (ESLint with auto-fix)
- Formatting: `npm run format` (Prettier)
- Config: See .eslintrc.js, .prettierrc
File location:
- Create `.eslintrc.js` with rules
- Create `.prettierrc` with formatting config
- Add pre-commit hooks to enforce automatically
Detailed Examples → Extract to examples/ directory
❌ Current in CLAUDE.md (30+ lines):
## Authentication Example
\`\`\`typescript
[50+ lines of implementation code]
\`\`\`
✅ After refactoring:
## Authentication
Example: [examples/authentication.ts](examples/authentication.ts:15-65)
Guide: [docs/guides/AUTHENTICATION.md](docs/guides/AUTHENTICATION.md)
File location:
- Create `examples/authentication.ts` with full example
- Create `docs/guides/AUTHENTICATION.md` with explanation
API Documentation → Extract to docs/API.md
❌ Current in CLAUDE.md (100+ lines):
## API Endpoints
### GET /users
[Detailed documentation]
### POST /users
[Detailed documentation]
[10+ more endpoints]
✅ After refactoring:
## API Documentation
Complete reference: [docs/API.md](docs/API.md)
Quick links:
- User endpoints: docs/api/users.md
- Auth endpoints: docs/api/auth.md
File location:
- Create `docs/API.md` with complete API documentation
- Organize by resource: `docs/api/users.md`, `docs/api/auth.md`, etc.
Historical Context → Remove (use git history)
❌ Current in CLAUDE.md (50+ lines):
## Project History
This project started in June 2024...
[48 more lines of history]
✅ After refactoring:
[REMOVE ENTIRELY]
Rationale: Git history provides complete project history. No need to duplicate in CLAUDE.md.
Personal Notes → Move to CLAUDE.local.md
❌ Current in CLAUDE.md:
## My Personal Preferences
- I prefer verbose explanations
- I like to see all options
[Personal preferences mixed with team conventions]
✅ After refactoring:
# Move to PROJECT_ROOT/CLAUDE.local.md (gitignored)
File location:
- Create `CLAUDE.local.md` in project root
- Add to `.gitignore`
- Personal preferences don't affect team
Step 3: Apply Progressive Disclosure
Replace embedded content with links:
Template pattern:
❌ BEFORE (embedded):
## [Section Title]
[50+ lines of detailed content]
✅ AFTER (linked):
## [Section Title]
See [docs/SECTION-GUIDE.md](docs/SECTION-GUIDE.md) for complete documentation.
**Quick reference:**
- [Key point 1]
- [Key point 2]
- [Key point 3]
Example transformation:
❌ BEFORE (embedded - 60 lines):
## Testing Strategy
We use Jest for unit testing and Playwright for E2E testing.
### Unit Testing
Run unit tests with:
\`\`\`bash
npm test
\`\`\`
Unit tests are located in \`src/**/__tests__/\`. We use the following conventions:
- Test files end with \`.test.ts\`
- Mock files in \`__mocks__/\` directory
- Coverage target: 80%+
[50 more lines of detailed testing documentation]
✅ AFTER (linked - 8 lines):
## Testing
**Quick commands:**
- Unit tests: `npm test`
- E2E tests: `npm run test:e2e`
- Coverage: `npm run test:coverage`
**Complete testing guide:** [docs/guides/TESTING.md](docs/guides/TESTING.md)
Token savings: 60 lines (~600 tokens) → 8 lines (~80 tokens) = 87% reduction
Step 4: Validate Refactoring
After refactoring, validate changes:
```bash
# 1. Check line count (should be under target)
wc -l CLAUDE.md

# 2. Test with AI agent in fresh Claude Code session
# - Start new session
# - Verify agent follows instructions
# - Check navigation links work
# - Confirm agent finds referenced documentation

# 3. Compare token costs
# Before: [previous token count]
# After: [new token count]
# Savings: [percentage reduction]
```
Example validation report:
## CLAUDE.md Refactoring Results
**Before:**
- Lines: 287
- Est. tokens: 3,200
- Grade: D (65%)
**After:**
- Lines: 94
- Est. tokens: 1,050
- Grade: A (92%)
**Improvements:**
- 67% line reduction (287 → 94)
- 67% token reduction (3,200 → 1,050)
- Grade improvement: D → A
**Extracted content:**
- Code style → .eslintrc.js, .prettierrc (50 lines)
- API docs → docs/API.md (120 lines)
- Examples → examples/ directory (40 lines)
- Historical notes → Removed (20 lines)
**Validation:**
- ✅ All links functional
- ✅ AI agent navigation successful
- ✅ Token budget under 1,500 (target met)
- ✅ Grade A compliance achieved
10.2 Splitting Large CLAUDE.md Files
For 500+ line CLAUDE.md files requiring complete restructure:
Step 1: Create Hierarchical Structure
project-root/
├── CLAUDE.md # NEW: 100-150 lines (high-level)
├── .coditect/
│ └── CLAUDE.md # NEW: 150-200 lines (framework details)
├── docs/
│ ├── SETUP.md # EXTRACTED: Setup instructions
│ ├── ARCHITECTURE.md # EXTRACTED: Architecture details
│ ├── API.md # EXTRACTED: API documentation
│ └── guides/
│ ├── TESTING.md # EXTRACTED: Testing guide
│ ├── DEPLOYMENT.md # EXTRACTED: Deployment guide
│ └── TROUBLESHOOTING.md # EXTRACTED: Troubleshooting
Step 2: Distribute Content by Level
Level 1: Project CLAUDE.md (100-150 lines)
# Project Name
**Status:** [Phase]
**Tech Stack:** [Core technologies]
**Documentation:** See [PROJECT-PLAN.md](docs/PROJECT-PLAN.md)
## Essential Reading
**READ FIRST (in order):**
1. [PROJECT-PLAN.md](docs/PROJECT-PLAN.md) - Project roadmap
2. [ARCHITECTURE.md](docs/ARCHITECTURE.md) - System design
3. [Framework Details](.coditect/CLAUDE.md) - CODITECT framework context
## Directory Structure
\`\`\`
[High-level structure only - 2 levels deep]
\`\`\`
## Quick Start
**Common tasks:**
- [3-5 most common commands]
**Full setup:** See [docs/SETUP.md](docs/SETUP.md)
## Git Workflow
**Quick commands:**
- [2-3 essential git commands]
**Complete workflow:** See [docs/GIT-WORKFLOW.md](docs/GIT-WORKFLOW.md)
## Framework Details
For CODITECT framework context, see [.coditect/CLAUDE.md](.coditect/CLAUDE.md)
Level 2: .coditect/CLAUDE.md (150-200 lines)
# CODITECT Framework Context
## Component Inventory
**Available components:**
- All agents, commands, skills, and scripts (see config/component-counts.json)
**Component references:**
- Agents: See [agents/README.md](agents/README.md)
- Commands: See [docs/SLASH-COMMANDS-REFERENCE.md](docs/SLASH-COMMANDS-REFERENCE.md)
- Skills: See [skills/README.md](skills/README.md)
## Component Activation
**Check status:** `python3 scripts/update-component-activation.py status [type] [name]`
**Activate:** `python3 scripts/update-component-activation.py activate [type] [name]`
**Details:** See [docs/COMPONENT-ACTIVATION-GUIDE.md](docs/COMPONENT-ACTIVATION-GUIDE.md)
## Multi-Session Integration
**Component discovery workflow:**
1. Search: Use Grep on agents/, skills/, commands/
2. Read: Read component definition file
3. Check: Read component-activation-status.json
4. Request: Provide activation command to human
**Complete guide:** See [docs/MULTI-SESSION-INTEGRATION.md](docs/MULTI-SESSION-INTEGRATION.md)
Level 3: Detailed Guides (Unlimited size)
Each extracted guide can be as detailed as needed:
- `docs/SETUP.md` - Complete setup instructions (200+ lines OK)
- `docs/ARCHITECTURE.md` - Detailed architecture (500+ lines OK)
- `docs/API.md` - Complete API reference (1000+ lines OK)
- `docs/guides/TESTING.md` - Testing strategies (300+ lines OK)
Step 3: Use Hierarchical References
In project CLAUDE.md:
## Framework Details
See [.coditect/CLAUDE.md](.coditect/CLAUDE.md) for CODITECT framework context.
In .coditect/CLAUDE.md:
## Detailed Documentation
For comprehensive guides, see:
- [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) - System architecture
- [docs/API.md](docs/API.md) - Complete API reference
- [docs/guides/](docs/guides/) - User guides and tutorials
Step 4: Test Hierarchical Loading
Verify both files load correctly:
1. Start a Claude Code session
2. Confirm project CLAUDE.md loads (high-level context)
3. Confirm `.coditect/CLAUDE.md` loads (framework context)
4. Confirm precedence works correctly (project overrides framework)
Benefits of hierarchical structure:
- Each file stays within token budget
- Clear separation of concerns
- Project context vs. framework context
- Detailed docs loaded on-demand only
10.3 Token Optimization Techniques
Technique 1: Bullet Points Instead of Paragraphs
❌ BEFORE (verbose paragraph - 45 words):
This project uses TypeScript for type safety and React for building user interfaces. We've chosen these technologies because they provide excellent developer experience and have strong community support. The combination allows us to build robust, maintainable applications.
✅ AFTER (bullet points - 15 words):
**Tech Stack:**
- TypeScript - Type safety
- React - UI framework
- Benefits: Developer experience, community support
Token savings: 45 words (~60 tokens) → 15 words (~20 tokens) = 67% reduction
Technique 2: Link Instead of Embed
❌ BEFORE (embedded example - 30 lines, ~300 tokens):
## Authentication Example
\`\`\`typescript
import { authenticateUser } from './auth';
export async function loginHandler(req, res) {
const { email, password } = req.body;
// [25 more lines of implementation]
}
\`\`\`
✅ AFTER (linked - 2 lines, ~20 tokens):
## Authentication
Example: [examples/auth.ts](examples/auth.ts:10-35)
Token savings: 300 tokens → 20 tokens = 93% reduction
Technique 3: Aggregate Documentation
❌ BEFORE (individual file links - 12 lines, ~150 tokens):
## API Documentation
User Endpoints:
- [GET /users](docs/api/users/get-users.md)
- [POST /users](docs/api/users/create-user.md)
- [PUT /users/:id](docs/api/users/update-user.md)
- [DELETE /users/:id](docs/api/users/delete-user.md)
Auth Endpoints:
- [POST /auth/login](docs/api/auth/login.md)
- [POST /auth/logout](docs/api/auth/logout.md)
- [POST /auth/refresh](docs/api/auth/refresh.md)
[5+ more categories with endpoints]
✅ AFTER (aggregated - 4 lines, ~50 tokens):
## API Documentation
Complete reference: [docs/API.md](docs/API.md)
Quick access: [Users](docs/api/users.md) | [Auth](docs/api/auth.md) | [Billing](docs/api/billing.md)
Token savings: 150 tokens → 50 tokens = 67% reduction
Technique 4: Remove Redundancy
❌ BEFORE (redundant with README.md - 20 lines, ~200 tokens):
## About This Project
This is a project management platform designed to help teams collaborate...
[Content duplicates README.md]
✅ AFTER (reference only - 2 lines, ~20 tokens):
## Project Overview
See [README.md](README.md) for complete project description.
Token savings: 200 tokens → 20 tokens = 90% reduction
Technique 5: Compress Command Lists
❌ BEFORE (verbose command descriptions - 15 lines, ~200 tokens):
## Common Commands
To start the development server, run:
```bash
npm run dev
```
This will start the server on port 3000.
To run the test suite, use:
```bash
npm test
```
This executes Jest with coverage reporting.
[More verbose command descriptions]
✅ AFTER (compact command list - 5 lines, ~60 tokens):
## Common Commands
- Dev server: `npm run dev` (localhost:3000)
- Tests: `npm test` (Jest with coverage)
- Build: `npm run build` (production bundle)
Token savings: 200 tokens → 60 tokens = 70% reduction
Cumulative Impact:
Applying all 5 techniques to a 300-line CLAUDE.md:
Before:
- Lines: 300
- Tokens: ~3,300
- 100-turn cost: 330,000 tokens (165% of 200K context) ❌
After optimization:
- Lines: 95
- Tokens: ~1,050
- 100-turn cost: 105,000 tokens (52.5% of 200K context) ✅
Total savings: 68% token reduction
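A quick way to see where your own file sits relative to these before/after figures (a sketch only; the 1.3 words-per-token multiplier is the heuristic from Appendix A, not exact tokenizer output):

```python
# Measure a CLAUDE.md file's size and estimate its per-load token cost.
from pathlib import Path

def measure(path="CLAUDE.md"):
    """Return line count, word count, and a heuristic token estimate for one load."""
    text = Path(path).read_text(encoding="utf-8")
    words = len(text.split())
    return {
        "lines": len(text.splitlines()),
        "words": words,
        "est_tokens_per_load": round(words * 1.3),  # heuristic multiplier from Appendix A
    }
```

Multiply `est_tokens_per_load` by your expected turn count to get the full session cost.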
11. References and Authority
11.1 Official Anthropic Documentation
Primary Sources:
- Official Claude Code documentation
- CLAUDE.md file discovery and loading behavior
- Hierarchical file organization
- Configuration options
- User vs. project vs. local settings
- File precedence rules
- Best practices for daily use
- Multi-session patterns
- Checkpoint systems
- /init command documentation
- `#` key dynamic updates
- Custom command creation
11.2 Engineering Blog Posts
Key Research Sources:
- Critical quote: "Never send an LLM to do a linter's job"
- Progressive disclosure architecture
- Token efficiency strategies
- Instruction-following limits (~150-200 instructions)
- Effective Harnesses for Long-Running Agents
  - Multi-session continuity patterns
  - claude-progress.txt chronological logging
  - feature_list.json structured state
  - init.sh environment setup
  - Critical quote: "The model is less likely to inappropriately change or overwrite JSON files compared to Markdown files"
- Effective Context Engineering for AI Agents
  - Progressive disclosure principles
  - Three-tier architecture (metadata → full content → resources)
  - Token optimization techniques
  - Just-in-time context loading
- Equipping Agents with Agent Skills
  - Agent Skills progressive disclosure implementation
  - YAML frontmatter requirements
  - Level 1 (metadata) vs. Level 2 (instructions) vs. Level 3 (resources)
  - Unbounded context potential
11.3 Official Blog Resources
- Official introduction to CLAUDE.md
- When and why to use CLAUDE.md
- Common patterns and anti-patterns
- How Anthropic Teams Use Claude Code
  - Internal Anthropic usage patterns
  - Real-world examples from Anthropic engineers
  - Lessons learned from production use
11.4 GitHub Official Repositories
- anthropics/anthropic-quickstarts
  - Official example projects
  - Production-quality CLAUDE.md examples
  - Best practices demonstrated in code
- Main Claude Code repository
  - Source code and implementation details
  - Issue tracker for known problems
11.5 Authoritative Community Sources
Best Practice Guides:
- Comprehensive community guide
- Real-world examples and anti-patterns
- Token efficiency case studies
- Apidog: Claude.md Best Practices
  - 5 essential best practices
  - Common mistakes to avoid
  - Migration strategies
- Community resource hub
- Curated examples and templates
Real-World Examples:
- ArthurClune/claude-md-examples
  - Production CLAUDE.md examples
  - Python, TypeScript, Rust projects
  - Infrastructure as code examples
- Practical templates
- Industry-specific examples
Technical Guides:
- Managing Claude Code's Context
  - Context window management handbook
  - Token budget optimization
  - Multi-session strategies
- Complete guide for developers
- Workflow patterns
- Productivity tips
- Claude Code Professional Guide
  - Frontend development focus
  - React/Next.js examples
  - Component-driven development
Advanced Topics:
- Writing CLAUDE.md for Mature Codebases
  - Enterprise patterns
  - Large-scale project strategies
  - Team collaboration
- Learning from Anthropic: Nested Skills
  - Progressive disclosure deep dive
  - Skill organization patterns
  - Scalable architecture
- Progressive Disclosure in Agent Skills
  - Architecture analysis
  - Implementation patterns
  - Performance implications
Tool & Configuration:
- Commands and shortcuts
- Configuration reference
- Quick reference guide
- Memory system architecture
- Context persistence strategies
- Session continuity patterns
11.6 CODITECT-Specific Standards
Related Standards:
- CODITECT-STANDARD-README-MD.md - README file standards
- CODITECT-STANDARD-AGENTS.md - Agent definition standards
- CODITECT-STANDARD-SKILLS.md - Skill definition standards
- CODITECT-STANDARD-COMMANDS.md - Slash command standards
HOW-TO Guides:
- HOW-TO-CREATE-CLAUDE-MD.md (to be created)
- HOW-TO-OPTIMIZE-TOKEN-BUDGET.md (to be created)
- HOW-TO-IMPLEMENT-PROGRESSIVE-DISCLOSURE.md (to be created)
Appendix A: Token Budget Calculator
A.1 Token Estimation Formulas
Basic Formula:
Estimated tokens = Words × 1.3
Detailed Formula (more accurate):
Tokens = (Words × 1.3) + (Code blocks × 1.5) + (Special characters × 0.1)
Claude Code Specific:
CLAUDE.md tokens per session = (CLAUDE.md tokens per load) × (Conversation turns)
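The formulas above can be sketched in a few lines. This is a sketch only; it uses the heuristic 1.3 and 1.5 multipliers from this appendix and omits the special-characters term, matching the worked examples in A.2.

```python
# Token-budget estimator implementing the basic and detailed formulas above.
def estimate_tokens(regular_words, code_words=0):
    """Estimated tokens for one CLAUDE.md load (special-character term omitted)."""
    return regular_words * 1.3 + code_words * 1.5

def session_cost(tokens_per_load, turns):
    """Total CLAUDE.md tokens consumed over a session of the given length."""
    return tokens_per_load * turns

# Example 1 from A.2: 320 regular words + 30 code words over a 50-turn session.
per_load = estimate_tokens(320, 30)   # 461.0 tokens per load
print(session_cost(per_load, 50))     # 23050.0
```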
A.2 Calculation Examples
Example 1: Simple CLAUDE.md
File: PROJECT_ROOT/CLAUDE.md
Lines: 50
Words: 350
Code blocks: 2 (30 words total)
Calculation:
- Regular words: 320 × 1.3 = 416 tokens
- Code blocks: 30 × 1.5 = 45 tokens
- Total per load: 461 tokens
50-turn session: 461 × 50 = 23,050 tokens (11.5% of 200K context) ✅
100-turn session: 461 × 100 = 46,100 tokens (23% of 200K context) ✅
Example 2: Bloated CLAUDE.md
File: PROJECT_ROOT/CLAUDE.md
Lines: 300
Words: 2,500
Code blocks: 10 (200 words total)
Calculation:
- Regular words: 2,300 × 1.3 = 2,990 tokens
- Code blocks: 200 × 1.5 = 300 tokens
- Total per load: 3,290 tokens
50-turn session: 3,290 × 50 = 164,500 tokens (82% of 200K context) ⚠️
100-turn session: 3,290 × 100 = 329,000 tokens (164.5% of 200K context) ❌
Example 3: Optimized CLAUDE.md
File: PROJECT_ROOT/CLAUDE.md
Lines: 95
Words: 680
Code blocks: 3 (40 words total)
Calculation:
- Regular words: 640 × 1.3 = 832 tokens
- Code blocks: 40 × 1.5 = 60 tokens
- Total per load: 892 tokens
50-turn session: 892 × 50 = 44,600 tokens (22% of 200K context) ✅
100-turn session: 892 × 100 = 89,200 tokens (44.6% of 200K context) ✅
A.3 Quick Reference Table
| CLAUDE.md Size | Words | Est. Tokens/Load | 50 Turns | 100 Turns | % of 200K Context (100 turns) | Verdict |
|---|---|---|---|---|---|---|
| 25 lines | 150 | ~200 | 10,000 | 20,000 | 10% | ✅ Excellent |
| 50 lines | 350 | ~450 | 22,500 | 45,000 | 22.5% | ✅ Very Good |
| 75 lines | 550 | ~700 | 35,000 | 70,000 | 35% | ✅ Good |
| 100 lines | 700 | ~900 | 45,000 | 90,000 | 45% | ✅ Acceptable |
| 150 lines | 1,100 | ~1,400 | 70,000 | 140,000 | 70% | ⚠️ High |
| 200 lines | 1,500 | ~1,900 | 95,000 | 190,000 | 95% | ⚠️ Very High |
| 250 lines | 1,900 | ~2,400 | 120,000 | 240,000 | 120% | ❌ Excessive |
| 300 lines | 2,300 | ~2,900 | 145,000 | 290,000 | 145% | ❌ Critical |
| 500 lines | 3,800 | ~4,900 | 245,000 | 490,000 | 245% | ❌ Catastrophic |
Color coding:
- ✅ Green (under 50% of context): Safe for long sessions
- ⚠️ Yellow (50-100% of context): Approaching limits
- ❌ Red (over 100% of context): Unacceptable
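The verdict bands above can be expressed directly in code (a sketch against the 200K-token context window assumed throughout this appendix):

```python
# Classify a CLAUDE.md's session cost into the color-coded bands above.
def verdict(tokens_per_load, turns=100, context_window=200_000):
    """Return the band for a session's CLAUDE.md cost as a % of the context window."""
    pct = tokens_per_load * turns / context_window * 100
    if pct < 50:
        return "safe"          # green: under 50% of context
    if pct <= 100:
        return "approaching"   # yellow: 50-100% of context
    return "unacceptable"      # red: over 100% of context

print(verdict(450))    # ~50-line file: "safe"
print(verdict(1400))   # ~150-line file: "approaching"
print(verdict(2900))   # ~300-line file: "unacceptable"
```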
A.4 Token Counting Tools
Use a tokenizer for a precise count:
```bash
# Install tiktoken (OpenAI tokenizer - close approximation)
pip install tiktoken

# Count tokens in CLAUDE.md
python3 -c "import tiktoken; enc = tiktoken.get_encoding('cl100k_base'); print(len(enc.encode(open('CLAUDE.md').read())))"
```
Note: Claude uses a proprietary tokenizer, but tiktoken provides a good approximation (±10%).
Appendix B: Progressive Disclosure Template
B.1 Three-Tier Template Structure
Tier 1: CLAUDE.md (100-150 lines)
# Project Name
**Status:** [Development Phase]
**Tech Stack:** [Core Technologies]
**Documentation:** See [PROJECT-PLAN.md](docs/PROJECT-PLAN.md)
---
## Essential Reading
**READ FIRST (in order):**
1. [PROJECT-PLAN.md](docs/PROJECT-PLAN.md) - [Brief description]
2. [ARCHITECTURE.md](docs/ARCHITECTURE.md) - [Brief description]
3. [Detailed Guide](docs/GUIDE.md) - [Brief description]
---
## Directory Structure
```
project-root/
├── src/ # [Description]
├── docs/ # [Description]
└── tests/ # [Description]
```
---
## Quick Start
**Common tasks:**
- [Command 1]: `[command]`
- [Command 2]: `[command]`
- [Command 3]: `[command]`
**Full setup:** See [docs/SETUP.md](docs/SETUP.md)
---
## Git Workflow
**Quick commands:**
- [Essential command 1]
- [Essential command 2]
**Complete workflow:** See [docs/GIT-WORKFLOW.md](docs/GIT-WORKFLOW.md)
---
## AI Agent Best Practices
- [Practice 1]
- [Practice 2]
- [Practice 3]
---
**Last Updated:** [Date]
**Lines:** [Count] (Target: <150)
Tier 2: Detailed Guides (200-500 lines each)
# [Topic] Guide
## Overview
[1-2 paragraph overview of topic]
## Prerequisites
- [Prerequisite 1]
- [Prerequisite 2]
## Step-by-Step Instructions
### Step 1: [Action]
[Detailed instructions]
### Step 2: [Action]
[Detailed instructions]
## Examples
### Example 1: [Scenario]
[Complete example with code]
### Example 2: [Scenario]
[Complete example with code]
## Advanced Topics
For advanced usage, see:
- [ADVANCED-TOPIC-1.md](ADVANCED-TOPIC-1.md)
- [ADVANCED-TOPIC-2.md](ADVANCED-TOPIC-2.md)
## Troubleshooting
[Common issues and solutions]
## References
- [External resource 1]
- [External resource 2]
Tier 3: Deep Resources (Unlimited size)
# [Advanced Topic] Reference
## Complete Specification
[Comprehensive documentation - no size limit]
## API Reference
[Complete API documentation]
## Real-World Examples
[Extensive examples with full context]
## Research and Background
[In-depth technical analysis]
B.2 Content Distribution Strategy
What goes in Tier 1 (CLAUDE.md):
- ✅ Project overview (5-10 lines)
- ✅ Essential reading links (3-5 docs)
- ✅ Directory structure (15-20 lines)
- ✅ Quick start commands (3-5 commands)
- ✅ Critical git workflow patterns (2-3 commands)
- ✅ AI agent best practices (3-7 items)
- ❌ Code examples (link instead)
- ❌ Detailed instructions (link instead)
- ❌ Complete API docs (link instead)
What goes in Tier 2 (docs/*.md):
- ✅ Step-by-step guides (200-500 lines)
- ✅ Comprehensive examples (50-100 lines each)
- ✅ Workflow documentation (100-300 lines)
- ✅ Architecture explanations (200-400 lines)
- ✅ API endpoint documentation (500-1000 lines)
- ❌ Raw specifications (link to Tier 3 instead)
- ❌ Research papers (link to Tier 3 instead)
What goes in Tier 3 (docs/reference/*.md, specs/):
- ✅ Complete specifications (unlimited)
- ✅ Research and background (unlimited)
- ✅ Extensive examples (unlimited)
- ✅ External resource links
- ✅ Generated documentation
B.3 Link Reference Pattern
Standard linking pattern:
## [Topic]
[1-2 sentence overview]
**Quick reference:**
- [Key point 1]
- [Key point 2]
- [Key point 3]
**Complete guide:** See [docs/[TOPIC]-GUIDE.md](docs/[TOPIC]-GUIDE.md)
Example implementation:
## Testing
Run tests with `npm test`. We use Jest for unit tests and Playwright for E2E.
**Quick reference:**
- Unit tests: `npm test` (Jest with coverage)
- E2E tests: `npm run test:e2e` (Playwright)
- Watch mode: `npm run test:watch`
**Complete testing guide:** See [docs/guides/TESTING.md](docs/guides/TESTING.md)
Document Control
Version: 1.0.0
Status: Production Standard
Created: December 3, 2025
Last Updated: December 3, 2025
Next Review: March 3, 2026
Owner: CODITECT Core Standards Team
Approvers: Technical Lead, Documentation Lead
Review Cycle: Quarterly
Changelog:
| Version | Date | Changes | Author |
|---|---|---|---|
| 1.0.0 | 2025-12-03 | Initial standard based on Anthropic research | CODITECT Documentation Team |
Related Documents:
- CLAUDE-MD-BEST-PRACTICES-RESEARCH.md - Research findings
- CODITECT-STANDARD-AGENTS.md - Agent standards
- CODITECT-STANDARD-SKILLS.md - Skill standards
- CODITECT-STANDARD-README-MD.md - README standards
Compliance:
- All CLAUDE.md files must achieve Grade B (80%) or higher
- Compliance deadline: 30 days from standard publication (January 2, 2026)
- Automated validation via CI/CD pipeline
- Manual quarterly audits
End of Standard