AI Curriculum Development
How to Use This Skill
- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
An expert skill for creating comprehensive, multi-level AI educational content, combining pedagogical rigor, integrated assessment, and NotebookLM optimization for adaptive learning.
When to Use This Skill
✅ Use this skill when:
- Multi-Level Content Creation: Developing educational materials for multiple skill levels simultaneously
- AI Domain Expertise: Creating content for machine learning, deep learning, NLP, computer vision, etc.
- Assessment Integration: Building quizzes, projects, and evaluation frameworks alongside content
- NotebookLM Optimization: Preparing content for AI-powered book/quiz/flashcard generation
- Pedagogical Framework: Applying Bloom's taxonomy, scaffolding, and mastery learning principles
- Learning Analytics: Implementing progress tracking and adaptive learning systems
- Large-Scale Curriculum: Developing comprehensive courses or certification programs
❌ Don't use this skill when:
- Simple single-level content creation (use regular content generation)
- Non-educational content development
- Basic documentation writing (not learning-focused)
- Quick tutorials or simple how-to guides
Core Capabilities
1. Multi-Level Content Architecture
Design and generate content that scales across skill levels:
Skill Level Framework:
beginner:
  cognitive_load: minimal
  learning_style: story-driven, analogical
  time_investment: 5-10 hours/week
  assessment: recognition, recall, basic application
intermediate:
  cognitive_load: moderate
  learning_style: project-based, hands-on
  time_investment: 10-15 hours/week
  assessment: application, analysis, evaluation
advanced:
  cognitive_load: high
  learning_style: research-oriented, optimization-focused
  time_investment: 15-25 hours/week
  assessment: synthesis, evaluation, complex problem-solving
expert:
  cognitive_load: very high
  learning_style: innovation-driven, theory-based
  time_investment: 20-40 hours/week
  assessment: creation, original research, contribution
Progressive Complexity Patterns:
- Concept Introduction: Analogy → Definition → Mathematical → Theoretical
- Code Examples: Pseudo-code → Guided implementation → Independent coding → Algorithm design
- Projects: Guided tutorials → Modified implementations → Original applications → Research contributions
2. Bloom's Taxonomy Integration
Structure learning objectives using systematic cognitive progression:
bloom_levels = {
    "remember": {
        "keywords": ["list", "identify", "recall", "recognize"],
        "assessments": ["multiple choice", "true/false", "matching"],
        "beginner_weight": 40,
        "expert_weight": 5
    },
    "understand": {
        "keywords": ["explain", "describe", "interpret", "summarize"],
        "assessments": ["short answer", "concept mapping"],
        "beginner_weight": 35,
        "expert_weight": 10
    },
    "apply": {
        "keywords": ["implement", "execute", "use", "demonstrate"],
        "assessments": ["coding exercises", "practical problems"],
        "beginner_weight": 20,
        "expert_weight": 25
    },
    "analyze": {
        "keywords": ["compare", "categorize", "examine", "break down"],
        "assessments": ["case studies", "algorithm analysis"],
        "beginner_weight": 5,
        "expert_weight": 25
    },
    "evaluate": {
        "keywords": ["critique", "assess", "judge", "recommend"],
        "assessments": ["peer review", "research critique"],
        "beginner_weight": 0,
        "expert_weight": 20
    },
    "create": {
        "keywords": ["design", "develop", "formulate", "produce"],
        "assessments": ["original projects", "research proposals"],
        "beginner_weight": 0,
        "expert_weight": 15
    }
}
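The table above pins weights only at the two endpoints. One way to derive intermediate and advanced distributions is linear interpolation between `beginner_weight` and `expert_weight` — an illustrative assumption, not part of the framework:

```python
# Endpoint weights (beginner_weight, expert_weight) copied from the table above.
bloom_weights = {
    "remember": (40, 5), "understand": (35, 10), "apply": (20, 25),
    "analyze": (5, 25), "evaluate": (0, 20), "create": (0, 15),
}

def bloom_distribution(skill_level):
    """Interpolate between beginner and expert weights, renormalized to 100."""
    t = {"beginner": 0.0, "intermediate": 1/3, "advanced": 2/3, "expert": 1.0}[skill_level]
    raw = {name: (1 - t) * b + t * e for name, (b, e) in bloom_weights.items()}
    total = sum(raw.values())
    return {name: round(100 * w / total, 1) for name, w in raw.items()}
```

Any monotone schedule works here; linear interpolation simply guarantees a smooth shift from recall-heavy to creation-heavy assessment.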
3. Assessment Framework Design
Create comprehensive evaluation systems aligned with learning objectives:
Assessment Types by Skill Level:
formative_assessment:
  beginner: ["concept checks", "guided exercises", "self-reflection"]
  intermediate: ["coding challenges", "mini-projects", "peer discussions"]
  advanced: ["research summaries", "optimization challenges", "case analyses"]
  expert: ["literature reviews", "original implementations", "theoretical proofs"]
summative_assessment:
  beginner: ["module quizzes", "guided projects", "concept demonstrations"]
  intermediate: ["independent projects", "algorithm implementations", "presentations"]
  advanced: ["research projects", "performance optimization", "system design"]
  expert: ["original research", "publication drafts", "innovation challenges"]
portfolio_assessment:
  all_levels: ["learning journals", "code repositories", "project documentation", "reflection essays"]
Adaptive Quiz Generation:
def generate_adaptive_quiz(topic, skill_level, bloom_distribution):
    """Generate skill-appropriate quiz with adaptive difficulty"""
    question_bank = {
        "beginner": {
            "remember": generate_recall_questions(topic),
            "understand": generate_comprehension_questions(topic),
            "apply": generate_simple_application_questions(topic)
        },
        "intermediate": {
            "understand": generate_detailed_explanation_questions(topic),
            "apply": generate_implementation_questions(topic),
            "analyze": generate_comparison_questions(topic)
        },
        "advanced": {
            "apply": generate_optimization_questions(topic),
            "analyze": generate_algorithmic_analysis_questions(topic),
            "evaluate": generate_critique_questions(topic)
        },
        "expert": {
            "analyze": generate_research_analysis_questions(topic),
            "evaluate": generate_peer_review_questions(topic),
            "create": generate_innovation_questions(topic)
        }
    }
    return build_adaptive_quiz(question_bank[skill_level], bloom_distribution)
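The `build_adaptive_quiz` helper is referenced but not defined. A minimal sketch of one plausible implementation, assuming the bank maps Bloom levels to lists of question dicts and the distribution gives a question count per level:

```python
import random

def build_adaptive_quiz(question_bank, bloom_distribution, seed=None):
    """Sample questions per Bloom level, then shuffle into one quiz.

    Hypothetical sketch: question format and sampling policy are assumptions.
    """
    rng = random.Random(seed)
    quiz = []
    for bloom_level, count in bloom_distribution.items():
        pool = question_bank.get(bloom_level, [])
        # Never request more questions than the pool contains
        quiz.extend(rng.sample(pool, min(count, len(pool))))
    rng.shuffle(quiz)
    return quiz
```

A fixed `seed` makes quizzes reproducible for grading review; omit it for per-learner variation.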
4. NotebookLM Content Optimization
Structure content for optimal AI processing and generation:
Metadata Enhancement:
content_metadata:
  # Learning Structure
  skill_level: [beginner|intermediate|advanced|expert]
  bloom_levels: [list of cognitive levels addressed]
  learning_objectives: [specific, measurable objectives]
  prerequisites: [required prior knowledge]

  # Content Organization
  module: [module number and name]
  week: [week number within module]
  topic: [specific topic/subtopic]
  estimated_time: [learning hours]
  difficulty_score: [1-5 scale]

  # Cross-References
  related_concepts: [connected topics]
  prerequisite_topics: [foundational concepts]
  follow_up_topics: [next learning steps]
  external_resources: [additional materials]

  # Assessment Integration
  formative_assessments: [embedded checks]
  summative_assessments: [module evaluations]
  project_connections: [related projects]

  # Accessibility
  learning_styles: [visual, auditory, kinesthetic, reading]
  accommodation_notes: [accessibility features]
  language_complexity: [reading level indicator]
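A lightweight validator can catch missing or malformed metadata before content is handed to NotebookLM. The required keys below mirror the metadata fields above; the function itself is a hypothetical sketch, not a NotebookLM API:

```python
REQUIRED_KEYS = {"skill_level", "bloom_levels", "learning_objectives", "prerequisites"}
VALID_LEVELS = {"beginner", "intermediate", "advanced", "expert"}

def validate_metadata(meta):
    """Return a list of problems; an empty list means the metadata is usable."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - meta.keys())]
    if meta.get("skill_level") not in VALID_LEVELS:
        problems.append(f"invalid skill_level: {meta.get('skill_level')!r}")
    score = meta.get("difficulty_score")
    if score is not None and not 1 <= score <= 5:
        problems.append(f"difficulty_score out of 1-5 range: {score}")
    return problems
```

Running this check in CI over every content file keeps the metadata contract enforced as the curriculum grows.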
Cross-Reference Optimization:
<!-- Knowledge Graph Connections -->
[concept: neural_networks] → [prerequisite: linear_algebra, calculus]
[concept: neural_networks] → [application: computer_vision, nlp]
[concept: neural_networks] → [advanced: transformer_architecture]
<!-- Skill Progression Links -->
[beginner: understand_neurons] → [intermediate: implement_perceptron]
[intermediate: implement_perceptron] → [advanced: design_custom_architecture]
[advanced: design_custom_architecture] → [expert: theoretical_analysis]
<!-- Assessment Connections -->
[concept: backpropagation] ↔ [quiz: gradient_calculation]
[concept: backpropagation] ↔ [project: neural_network_training]
[concept: backpropagation] ↔ [portfolio: optimization_comparison]
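The arrow notation above maps naturally onto a plain adjacency structure. A sketch (edge labels are taken from the examples; the traversal helper is hypothetical):

```python
# Knowledge graph as a dict: concept -> {edge_label: [related concepts]}
knowledge_graph = {
    "neural_networks": {
        "prerequisite": ["linear_algebra", "calculus"],
        "application": ["computer_vision", "nlp"],
        "advanced": ["transformer_architecture"],
    },
}

def prerequisites_of(graph, concept, seen=None):
    """Collect the transitive prerequisites of a concept."""
    seen = set() if seen is None else seen
    for pre in graph.get(concept, {}).get("prerequisite", []):
        if pre not in seen:
            seen.add(pre)
            prerequisites_of(graph, pre, seen)  # follow chains of prerequisites
    return seen
```

The same structure can back prerequisite verification and the cross_references.yaml export described later in this document.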
Content Generation Patterns
Pattern 1: Story-Driven Beginner Content
# Alice's Journey into Neural Networks
## Chapter 1: The Brain Inspiration
Alice wondered how computers could learn like humans. She discovered that
scientists created "artificial neurons" inspired by brain cells...
### Visual Analogy: The Neuron Factory
Imagine a factory where:
- **Inputs** = Raw materials (numbers) coming in
- **Weights** = Quality filters that determine importance
- **Activation** = Decision maker that says "produce" or "don't produce"
- **Output** = Final product (prediction)
### Simple Example: Email Spam Detection
Alice's first neural network job: decide if emails are spam
Pattern 2: Project-Based Intermediate Content
# Project: Build Your First Neural Network
## Learning Goals
- Implement a neural network from scratch using NumPy
- Train the network on the MNIST digit dataset
- Evaluate performance and analyze results
- Optimize hyperparameters for better accuracy
## Step-by-Step Implementation
### Part 1: Network Architecture Design
### Part 2: Forward Propagation Implementation
### Part 3: Backpropagation Algorithm
### Part 4: Training Loop and Optimization
### Part 5: Evaluation and Analysis
## Expected Outcomes
- Working neural network with 85%+ MNIST accuracy
- Understanding of gradient descent optimization
- Experience with debugging ML models
- Portfolio project for job applications
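For this project, Part 2 (forward propagation) can be sketched in a few lines of NumPy. Layer sizes and activation choices here are illustrative, not requirements of the project:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (784, 32)), np.zeros(32)  # input -> hidden
W2, b2 = rng.normal(0, 0.1, (32, 10)), np.zeros(10)   # hidden -> 10 digit classes

def forward(x):
    """Forward pass: ReLU hidden layer, numerically stable softmax output."""
    h = np.maximum(0, x @ W1 + b1)                           # ReLU activation
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)                 # per-row probabilities
```

Parts 3 and 4 then differentiate this function with respect to the weights and iterate gradient-descent updates over MNIST batches.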
Pattern 3: Research-Oriented Expert Content
# Research Frontier: Attention Mechanisms and Transformers
## Theoretical Foundations
### Mathematical Framework for Attention
- Query-Key-Value formulation: $\text{Attention}(Q,K,V) = \text{softmax}(\frac{QK^T}{\sqrt{d_k}})V$
- Multi-head attention extensions
- Positional encoding strategies
## Current Research Directions
### Open Problems
1. Attention pattern interpretability
2. Computational efficiency improvements
3. Long sequence handling limitations
4. Cross-modal attention mechanisms
## Innovation Challenge
Design a novel attention mechanism that addresses one of the current limitations.
Submit your approach as a research proposal following academic conference format.
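The attention formula in the Theoretical Foundations section translates directly into NumPy; this single-head sketch omits masking, batching, and positional encoding:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)            # row-wise softmax
    return weights @ V                                        # convex combination of V rows
```

Each output row is a weighted average of the value rows, with weights given by query-key similarity — the property most interpretability work on attention patterns starts from.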
Learning Analytics Integration
Progress Tracking Framework
class LearningAnalytics:
    def track_learner_progress(self, learner_id, activity_data):
        """Track and analyze learner progress across skill levels"""
        # Competency mapping
        competencies = self.map_activities_to_competencies(activity_data)
        # Skill level progression analysis
        current_level = self.assess_current_skill_level(competencies)
        # Learning path optimization
        next_activities = self.recommend_next_learning(current_level, competencies)
        # Intervention detection
        intervention_needed = self.detect_learning_struggles(activity_data)
        return {
            "current_skill_level": current_level,
            "mastered_competencies": competencies["mastered"],
            "in_progress_competencies": competencies["developing"],
            "recommended_activities": next_activities,
            "intervention_recommendations": intervention_needed
        }

    def generate_adaptive_content(self, learner_profile, topic):
        """Generate personalized content based on learner profile"""
        # Determine optimal difficulty level
        difficulty = self.calculate_optimal_difficulty(learner_profile)
        # Select appropriate teaching strategies
        strategies = self.select_teaching_strategies(learner_profile.learning_style)
        # Generate content with appropriate scaffolding
        content = self.create_scaffolded_content(topic, difficulty, strategies)
        return content
Best Practices
✅ Do This
- Start with Learning Objectives: Always begin with clear, measurable learning goals
- Apply Progressive Complexity: Ensure smooth progression between skill levels
- Integrate Assessments: Embed evaluation throughout the learning experience
- Use Multiple Modalities: Include visual, auditory, and kinesthetic learning elements
- Provide Scaffolding: Offer appropriate support that gradually decreases
- Test with Real Learners: Validate content effectiveness through user testing
- Optimize for NotebookLM: Structure content with rich metadata and cross-references
- Track Learning Analytics: Monitor progress and effectiveness continuously
❌ Avoid This
- Don't Skip Skill Level Analysis: Always consider the target learner's background
- Don't Create Isolated Content: Ensure connections between concepts and levels
- Don't Ignore Assessment Alignment: Make sure evaluations match learning objectives
- Don't Overwhelm Beginners: Manage cognitive load appropriately for each level
- Don't Neglect Advanced Learners: Provide sufficient challenge for expert levels
- Don't Forget Accessibility: Consider diverse learning needs and accommodations
- Don't Create Static Content: Build in adaptability and personalization features
Integration with AI Curriculum Project
Directory Structure Alignment
module[X]_[topic]/
├── content_sources/
│   ├── beginner/concepts/        # Story-driven, analogical content
│   ├── intermediate/projects/    # Hands-on, implementation-focused
│   ├── advanced/research/        # Paper-based, optimization-focused
│   └── expert/innovation/        # Original research, contribution-focused
├── assessments/
│   ├── adaptive_quizzes/         # Skill-level appropriate evaluations
│   ├── projects/                 # Authentic assessment scenarios
│   └── portfolios/               # Progressive skill documentation
└── analytics/
    ├── learning_objectives.yaml  # Bloom's taxonomy alignment
    ├── skill_progression.yaml    # Level advancement criteria
    └── cross_references.yaml     # Knowledge graph connections
Agent Integration
- ai-curriculum-specialist: Primary agent for comprehensive curriculum development
- assessment-creation-agent: Specialized assessment design and validation
- notebooklm-content-optimizer: Content formatting and metadata enhancement
- learning-analytics-specialist: Progress tracking and adaptive personalization
Command Integration
- /generate-module: Create complete module with all skill levels
- /create-assessment: Design adaptive evaluation framework
- /optimize-notebooklm: Format content for AI processing
- /analyze-learning: Generate progress reports and recommendations
Troubleshooting
"Content too complex for target skill level"
Problem: Generated content exceeds cognitive load capacity
Solution:
- Review Bloom's taxonomy distribution for skill level
- Add more scaffolding and prerequisite content
- Break complex concepts into smaller chunks
- Include more analogies and visual examples
"Assessment doesn't align with learning objectives"
Problem: Evaluation measures different skills than taught
Solution:
- Map each assessment item to specific learning objective
- Ensure Bloom's level alignment between content and assessment
- Use authentic assessment scenarios that mirror real applications
- Include multiple assessment types (formative, summative, portfolio)
"Cross-level progression unclear"
Problem: Learners can't understand how to advance between skill levels
Solution:
- Create explicit competency frameworks with clear advancement criteria
- Provide skill level assessment tools for self-evaluation
- Build bridge content that connects adjacent levels
- Include prerequisite verification and remediation
Quality Metrics
- Learning Objective Achievement: 90%+ learners meet stated objectives
- Skill Level Progression: 80%+ advance to next level within expected timeframe
- Content Engagement: 85%+ completion rates across all skill levels
- Assessment Validity: Strong correlation between performance and competency
- NotebookLM Optimization: Enhanced AI processing and generation capability
Multi-Context Window Support
This skill supports long-running curriculum development tasks across multiple context windows using Claude 4.5's enhanced state management capabilities.
State Tracking
Checkpoint State (JSON):
{
  "checkpoint_id": "curriculum_20251129_151500",
  "curriculum_project": "AI Fundamentals Course",
  "modules_created": [
    {
      "module_id": "module1_introduction",
      "skill_levels": ["beginner", "intermediate"],
      "assessments": ["quiz", "project"],
      "status": "complete"
    },
    {
      "module_id": "module2_neural_networks",
      "skill_levels": ["beginner"],
      "assessments": ["quiz"],
      "status": "in_progress"
    }
  ],
  "total_content_generated": {
    "word_count": 15000,
    "skill_levels_complete": 3,
    "assessments_created": 5,
    "notebooklm_optimized": true
  },
  "token_usage": 45000,
  "created_at": "2025-11-29T15:15:00Z"
}
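A checkpoint writer matching this schema might look like the sketch below; the `save_checkpoint` name and the `curriculum-latest.json` alias are assumptions chosen to line up with the Session Recovery steps in this section:

```python
import json
import time
from pathlib import Path

def save_checkpoint(state, directory=".coditect/checkpoints"):
    """Write a timestamped checkpoint plus a 'latest' alias for easy recovery."""
    path = Path(directory)
    path.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d_%H%M%S")
    state = {"checkpoint_id": f"curriculum_{stamp}", **state}
    payload = json.dumps(state, indent=2)
    target = path / f"curriculum-{stamp}.json"
    target.write_text(payload)
    (path / "curriculum-latest.json").write_text(payload)  # stable recovery path
    return target
```

Writing the alias alongside the timestamped file means a fresh context window never has to guess which checkpoint is current.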
Progress Notes (Markdown):
# AI Curriculum Development Progress - 2025-11-29
## Completed Modules
- Module 1: Introduction to AI (all skill levels)
- Beginner: Story-driven content (3500 words)
- Intermediate: Project-based content (4200 words)
- Advanced: Research-oriented content (3800 words)
- Assessments: Adaptive quiz + 2 projects
## In Progress
- Module 2: Neural Networks
- Beginner content: 80% complete
- Need: Intermediate/advanced levels + assessments
## Next Actions
- Complete Module 2 beginner content (remaining 700 words)
- Generate intermediate content with coding projects
- Create adaptive quiz with Bloom's taxonomy alignment
- Optimize all Module 2 content for NotebookLM
Session Recovery
When starting a fresh context window after curriculum development work:
- Load Checkpoint State: Read .coditect/checkpoints/curriculum-latest.json
- Review Progress Notes: Check curriculum-development-progress.md for module status
- Verify Content Files: Use Glob to locate generated content files
- Check Assessment Alignment: Review Bloom's taxonomy distribution
- Resume Generation: Continue from last completed skill level
Recovery Commands:
# 1. Check latest curriculum checkpoint
cat .coditect/checkpoints/curriculum-latest.json | jq '.modules_created'
# 2. Review progress notes
tail -30 curriculum-development-progress.md
# 3. Locate generated content
find content_sources/ -name "*.md" -mtime -1
# 4. Count word totals
wc -w content_sources/module*/*.md
# 5. Check assessment files
ls -lh assessments/adaptive_quizzes/
State Management Best Practices
Checkpoint Files (JSON Schema):
- Store in .coditect/checkpoints/curriculum-{timestamp}.json
- Track modules completed vs in-progress with granular status
- Record word counts and skill level distribution for scope verification
- Include assessment creation status and NotebookLM optimization flags
- Document learning objectives achieved per module
Progress Tracking (Markdown Narrative):
- Maintain curriculum-development-progress.md with module breakdowns
- Document pedagogical decisions (why certain learning paths were chosen)
- Note Bloom's taxonomy distribution for quality validation
- List cross-module dependencies for knowledge graph coherence
- Track NotebookLM optimization status per content piece
Git Integration:
- Create checkpoint after each module completion
- Commit content files with descriptive module/skill level tags
- Use conventional commits: feat(curriculum): Add Module 2 beginner content
- Tag major milestones: git tag curriculum-module2-complete
Progress Checkpoints
Natural Breaking Points:
- After completing each skill level within a module
- After creating all assessments for a module
- After NotebookLM optimization for batch of content
- After cross-module knowledge graph validation
- After generating learning analytics metadata
Checkpoint Creation Pattern:
# Automatic checkpoint creation at critical phases
if modules_completed > 0 or word_count > 10000:
    create_checkpoint({
        "modules": modules_status,
        "assessments": assessments_created,
        "content_stats": {
            "words": total_word_count,
            "skill_levels": levels_complete
        },
        "tokens": current_token_usage
    })
Example: Multi-Context Curriculum Development
Context Window 1: Module 1 Complete + Module 2 Start
{
  "checkpoint_id": "curriculum_module1_complete",
  "phase": "module1_complete",
  "modules_created": 1,
  "skill_levels_generated": 4,
  "assessments_created": 3,
  "word_count": 12000,
  "next_action": "Begin Module 2 beginner content",
  "token_usage": 35000
}
Context Window 2: Module 2 Completion
# Resume from checkpoint
cat .coditect/checkpoints/curriculum_module1_complete.json
# Continue Module 2 generation
# (Context restored in 3 minutes vs 20 minutes from scratch)
# Complete all skill levels and assessments
{
  "checkpoint_id": "curriculum_module2_complete",
  "phase": "module2_complete",
  "modules_created": 2,
  "total_word_count": 27000,
  "all_assessments_created": true,
  "notebooklm_optimized": true,
  "token_usage": 25000
}
Token Savings: 35000 (first context) + 25000 (second context) = 60000 total vs. 95000 without checkpoint = 37% reduction
See docs/CLAUDE-4.5-BEST-PRACTICES.md for complete multi-context patterns.
Success Output
When this skill completes successfully, you should see:
✅ SKILL COMPLETE: ai-curriculum-development
Completed:
- [x] Module content created across all skill levels (beginner/intermediate/advanced/expert)
- [x] Learning objectives aligned with Bloom's taxonomy
- [x] Adaptive assessments designed with appropriate difficulty distribution
- [x] NotebookLM metadata optimization complete
- [x] Cross-module knowledge graph connections documented
- [x] Learning analytics metadata generated
Outputs:
- content_sources/module[X]_[topic]/beginner/ - Story-driven analogical content (3500 words)
- content_sources/module[X]_[topic]/intermediate/ - Project-based hands-on content (4200 words)
- content_sources/module[X]_[topic]/advanced/ - Research-oriented content (3800 words)
- content_sources/module[X]_[topic]/expert/ - Innovation-driven theoretical content (3200 words)
- assessments/adaptive_quizzes/module[X]_quiz.json - Skill-appropriate evaluations
- assessments/projects/module[X]_project_[level].md - Authentic assessment scenarios
- analytics/learning_objectives.yaml - Bloom's taxonomy alignment
- analytics/skill_progression.yaml - Level advancement criteria
- analytics/cross_references.yaml - Knowledge graph connections
Module Completion: 100%
Skill Levels: 4 (beginner through expert)
Assessments: 8 (quizzes + projects)
Word Count: 14,700
NotebookLM Optimized: Yes
Completion Checklist
Before marking this skill as complete, verify:
- All skill levels created: beginner, intermediate, advanced, expert
- Bloom's taxonomy distribution appropriate per level (remember/understand weighted for beginner, create/evaluate for expert)
- Content word counts meet targets (3000-4000 words per level)
- Learning objectives documented in YAML with measurable verbs
- Adaptive assessments created with difficulty aligned to skill level
- NotebookLM metadata complete: skill_level, bloom_levels, learning_objectives, prerequisites
- Cross-references documented: prerequisite_topics, follow_up_topics, related_concepts
- Progressive complexity verified: smooth transition between levels
- Code examples included: pseudo-code (beginner) → implementation (advanced)
- Projects scaffolded: guided tutorials (beginner) → original research (expert)
- Learning analytics metadata generated
- Checkpoint created: .coditect/checkpoints/curriculum-[timestamp].json
Failure Indicators
This skill has FAILED if:
- ❌ Missing skill levels (less than 4 complete)
- ❌ Bloom's taxonomy distribution inappropriate for level
- ❌ Content too complex for beginner level (excessive jargon, no analogies)
- ❌ Content too simple for expert level (no theoretical depth, no research)
- ❌ Assessment misalignment: testing different skills than taught
- ❌ NotebookLM metadata missing or incomplete
- ❌ Cross-references broken or missing
- ❌ Cognitive load too high: beginner content causes overwhelm
- ❌ Cognitive load too low: expert learners unchallenged
- ❌ No scaffolding: learners can't progress between levels
- ❌ Learning objectives vague or unmeasurable
When NOT to Use
Do NOT use ai-curriculum-development when:
- Single-level content needed - Use standard content generation for one skill level only
- Non-educational content - Use appropriate documentation skill for API docs, system docs, etc.
- Quick tutorials - Use simple how-to guides for one-off tasks
- Domain outside AI/ML - This skill specialized for AI curriculum; use domain-appropriate skill
- No assessment needed - If evaluation not required, simpler content creation sufficient
- Existing curriculum modification - Use content editing tools for updates to existing materials
- Time constraint <2 hours - Multi-level curriculum requires significant time investment
Alternative Approaches:
- Single-level tutorial: Standard documentation for one audience
- API documentation: Use codi-documentation-writer for technical reference
- Video scripts: Different pedagogical approach, use media-specific skill
- Interactive coding exercises: Use platform-specific tooling (Jupyter, Observable)
Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Same content for all levels | No progression, inappropriate difficulty | Create distinct content per level with progressive complexity |
| Skipping learning objectives | Content lacks focus, unmeasurable outcomes | Start with Bloom's taxonomy-aligned objectives |
| Assessments created last | Misalignment between teaching and testing | Design assessments alongside content |
| Missing analogies for beginners | High cognitive load, learner frustration | Use real-world analogies and stories for complex concepts |
| No code examples for intermediate | Theory-practice gap, frustration | Provide hands-on implementation examples |
| Shallow expert content | Advanced learners unchallenged | Include theoretical depth, research papers, open problems |
| Ignoring prerequisite verification | Learners lack foundation, high dropout | Document prerequisites, provide prerequisite testing |
| No cross-module connections | Isolated knowledge, missing big picture | Explicitly link concepts across modules |
| Overlooking NotebookLM optimization | Poor AI-generated supplementary materials | Add rich metadata, cross-references, knowledge graph |
| Missing scaffolding | Learners can't advance between levels | Provide bridge content, skill progression criteria |
Principles
This skill embodies these CODITECT principles:
- Progressive Complexity - Content scales smoothly from beginner analogies to expert theory
- Bloom's Taxonomy Foundation - Learning objectives aligned with cognitive development stages
- Assessment Integration - Evaluation designed alongside content, not as afterthought
- Multi-Modal Learning - Visual, auditory, kinesthetic elements for diverse learning styles
- Scaffolding - Appropriate support that gradually decreases as learner advances
- Measurable Outcomes - Learning objectives specified with verifiable completion criteria
- NotebookLM Optimization - Rich metadata enables AI-powered supplementary generation
- Knowledge Graph - Explicit cross-references create interconnected understanding
Full Principles: CODITECT-STANDARD-AUTOMATION.md
Version History
- v1.0.0 - Multi-level content generation, Bloom's taxonomy, NotebookLM optimization