Quick Start - Automated Prompt Library Generation

TL;DR: Generate 90 prompt files across 15 categories in 3 sessions using automation + AI orchestration.

Created: 2025-11-29
Time Required: 5-8 hours total (across 3 sessions)
Quality: 90+/100 (matching Aviation 98/100 exemplar)


⚑ 30-Second Quick Start​

# Install dependencies
pip install pyyaml colorama tqdm

# Preview what will be generated
python3 generate-categories.py --priority high --dry-run

# Generate Session 1 (6 categories, 36 files)
python3 generate-categories.py --priority high

# Continue with AI orchestration for content
# (Use Task tool with orchestrator - see below)

πŸ“Š What You're Building​

Final Output (After 3 Sessions)​

prompts/
β”œβ”€β”€ 01-educational-content/
β”‚   β”œβ”€β”€ instructor-segments/ (6 files)
β”‚   β”œβ”€β”€ concept-explanations/ (6 files)
β”‚   β”œβ”€β”€ procedure-demonstrations/ (6 files)
β”‚   └── assessment-content/ (6 files)
β”‚
β”œβ”€β”€ 02-technical-training/
β”‚   β”œβ”€β”€ aviation/ βœ… COMPLETE (exemplar)
β”‚   β”œβ”€β”€ technology/ (6 files)
β”‚   └── professional/ (6 files)
β”‚
β”œβ”€β”€ 03-b-roll-footage/
β”‚   β”œβ”€β”€ establishing-shots/ (6 files)
β”‚   β”œβ”€β”€ detail-shots/ (6 files)
β”‚   └── transition-shots/ (6 files)
β”‚
β”œβ”€β”€ 04-animations/
β”‚   β”œβ”€β”€ diagrams/ (6 files)
β”‚   β”œβ”€β”€ processes/ (6 files)
β”‚   └── data-visualization/ (6 files)
β”‚
└── 05-specialized/
    β”œβ”€β”€ multilingual/ (6 files)
    β”œβ”€β”€ accessibility/ (6 files)
    └── interactive/ (6 files)

Total: 16 categories Γ— 6 files = 96 files (90 new + 6 Aviation existing)

🎯 3-Session Roadmap​

Session 1: High-Priority (2-3 hours)​

Goal: Generate 6 most important categories (36 files)

Step 1: Generate Framework (5 minutes)

# This creates directory structure + README.md + CLAUDE.md
python3 generate-categories.py --priority high

# Output:
# βœ… instructor-segments: 6/6 files (86.2 KB)
# βœ… concept-explanations: 6/6 files (87.1 KB)
# βœ… procedure-demonstrations: 6/6 files (88.5 KB)
# βœ… technology: 6/6 files (85.9 KB)
# βœ… diagrams: 6/6 files (86.7 KB)
# βœ… accessibility: 6/6 files (85.0 KB)

What's Created:

  • βœ… README.md (fully generated, user-facing guide)
  • βœ… CLAUDE.md (fully generated, AI automation patterns)
  • ⏸️ basic-template.md (placeholder - needs content)
  • ⏸️ advanced-template.md (placeholder - needs content)
  • ⏸️ examples.md (placeholder - needs content)
  • ⏸️ customization-guide.md (placeholder - needs content)

Step 2: Generate Template Content (2-3 hours using AI orchestration)

Option A: One Category at a Time (Highest Quality - Recommended)

For each of the 6 categories, use the Claude Code Task tool with a prompt like:

Use orchestrator subagent to complete instructor-segments category:

1. Read Aviation exemplar files:
- 02-technical-training/aviation/basic-template.md
- 02-technical-training/aviation/advanced-template.md
- 02-technical-training/aviation/examples.md
- 02-technical-training/aviation/customization-guide.md

2. Extract structural patterns and quality elements

3. Generate 4 files for instructor-segments:
- basic-template.md (5 simple 10-20 word prompts for HeyGen instructor content)
- advanced-template.md (5 production 150-300 word prompts with timing)
- examples.md (10 tested prompts with results, costs, lessons learned)
- customization-guide.md (adaptation guide for instructor content)

4. Use category config:
Platform: HeyGen
Cost: $0.016/sec
Use cases: Course intros, concept explanations, demos, assessments

5. Quality target: 90+/100 (matching Aviation 98/100)

Location: .coditect-docs/docs/06-curriculum/video-generation-toolset/prompts/01-educational-content/instructor-segments/

Repeat for remaining 5 high-priority categories (20-30 min each).

Option B: Batch All 6 Categories (Faster, 85%+ Quality)

Use orchestrator subagent to batch-generate template content for 6 high-priority categories:

Categories:
1. instructor-segments (HeyGen, $0.016/sec) - Talking head presentations
2. concept-explanations (Veo 3.1, $0.15-0.40/sec) - Abstract visualizations
3. procedure-demonstrations (Runway Gen-3, $0.025/sec) - Step-by-step how-tos
4. technology (Veo 3.1, $0.15-0.40/sec) - Software tutorials
5. diagrams (Veo 3.1, $0.15-0.40/sec) - Flowcharts, architecture
6. accessibility (HeyGen, $0.016/sec) - WCAG compliance content

For EACH category:
1. Extract patterns from Aviation exemplar
2. Generate 4 template files (basic, advanced, examples, customization)
3. Adapt for category-specific use cases and platforms
4. Quality target: 85+/100

Token budget: 120K
Output: 24 files (6 categories Γ— 4 files)

Location: .coditect-docs/docs/06-curriculum/video-generation-toolset/prompts/
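The per-second rates listed for each category above can be turned into a quick budget check before generating. A minimal sketch (the function name and rounding are illustrative, not part of the toolset; rates are taken from the category list):

```python
# Hypothetical cost estimator using the per-second rates listed above.
# Each rate is a (low, high) range in USD per second of generated video.
PLATFORM_RATES = {
    "HeyGen": (0.016, 0.016),
    "Runway Gen-3": (0.025, 0.025),
    "Veo 3.1": (0.15, 0.40),
}

def estimate_cost(platform: str, seconds: float) -> tuple[float, float]:
    """Return the (min, max) estimated cost in USD for a clip of the given length."""
    low, high = PLATFORM_RATES[platform]
    return (round(low * seconds, 2), round(high * seconds, 2))
```

For example, a 60-second Veo 3.1 clip lands somewhere in `estimate_cost("Veo 3.1", 60)`, i.e. $9.00-$24.00, while the same length on HeyGen is $0.96.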

Session 1 Output:

  • βœ… 36 files total (6 categories Γ— 6 files)
  • βœ… 6 README.md + 6 CLAUDE.md (fully generated)
  • βœ… 24 template files (AI-generated content)
  • βœ… Quality: 90+/100 (Option A) or 85+/100 (Option B)

Session 2: Medium-Priority (2-3 hours)​

Goal: Generate next 6 categories (36 files)

Step 1: Generate Framework (5 minutes)

python3 generate-categories.py --priority medium

Categories:

  • assessment-content
  • professional
  • establishing-shots
  • detail-shots
  • processes
  • data-visualization

Step 2: Generate Template Content (2-3 hours)

Use same approach as Session 1 (Option A or B).

Tip: Apply lessons learned from Session 1 for faster generation.

Session 2 Output:

  • βœ… 36 files total (6 categories Γ— 6 files)
  • βœ… Cumulative: 72/90 files (80% complete)

Session 3: Low-Priority + Master Index (1-2 hours)​

Goal: Complete remaining 3 categories + create navigation (18 files + index)

Step 1: Generate Framework (3 minutes)

python3 generate-categories.py --priority low

Categories:

  • transition-shots
  • multilingual
  • interactive

Step 2: Generate Template Content (45-60 minutes)

Use same approach as Session 1/2.

Step 3: Generate Master Index (15-30 minutes)

In Claude Code:

Create master index and navigation for complete prompt library:

1. Generate MASTER-INDEX.md with:
- Complete category catalog (16 categories including Aviation)
- Platform comparison matrix
- Cost estimation guide
- Use case cross-reference

2. Update parent README.md files:
- prompts/README.md (add completion status)
- Video generation toolset README.md
- CODITECT docs README.md

3. Generate CATEGORY-REFERENCE.md with:
- Related categories
- Workflow combinations
- Common multi-category use cases

Location: .coditect-docs/docs/06-curriculum/video-generation-toolset/prompts/

Session 3 Output:

  • βœ… 18 files (3 categories Γ— 6 files)
  • βœ… Master index and navigation
  • βœ… Cumulative: 90/90 files (100% complete)

βœ… Quality Checklist​

After each session, validate quality:

Per Category (6 files)​

☐ README.md exists (8-10 KB, user-facing guide)
☐ CLAUDE.md exists (15-18 KB, AI automation)
☐ basic-template.md has 5 simple prompts (10-20 words each)
☐ advanced-template.md has 5 production prompts (150-300 words each)
☐ examples.md has 10 tested prompts with results/costs/lessons
☐ customization-guide.md has adaptation guide (18-22 KB)

☐ All files reference correct primary platform
☐ All cost estimates match platform pricing
☐ All use cases specific to category
☐ All compliance notes appropriate
☐ No broken cross-references

☐ Quality score >= 85/100 (minimum acceptable)
☐ Quality score >= 90/100 (target)
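The file-presence and size items in this checklist are easy to script. A hedged sketch, assuming the six filenames and KB ranges above (the function and `EXPECTED` table are illustrative, not part of the toolset):

```python
from pathlib import Path

# Expected files per category with rough size ranges in KB, taken from the
# checklist above. None means no size check for that file.
EXPECTED = {
    "README.md": (8, 10),
    "CLAUDE.md": (15, 18),
    "basic-template.md": None,
    "advanced-template.md": None,
    "examples.md": None,
    "customization-guide.md": (18, 22),
}

def check_category(category_dir: str) -> list[str]:
    """Return a list of human-readable issues; an empty list means pass."""
    issues = []
    for name, size_range in EXPECTED.items():
        path = Path(category_dir) / name
        if not path.is_file():
            issues.append(f"missing: {name}")
            continue
        if size_range:
            kb = path.stat().st_size / 1024
            low, high = size_range
            if not (low <= kb <= high):
                issues.append(f"size out of range: {name} ({kb:.1f} KB)")
    return issues
```

Run it once per category directory after each session; the remaining checklist items (platform references, costs, cross-references) still need a manual or AI-assisted pass.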

Quality Scoring​

| Score | Status | Action |
|-------|--------|--------|
| 90-100 | EXCELLENT | Production-ready, publish |
| 85-89 | GOOD | Minor improvements optional |
| 75-84 | ACCEPTABLE | Functional, plan enhancements |
| <75 | NEEDS IMPROVEMENT | Regenerate with fixes |
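If you script your quality tracking, the bands above map directly to a small helper (a sketch; the function name is illustrative):

```python
def quality_status(score: int) -> str:
    """Map a 0-100 quality score to the status labels in the scoring table."""
    if score >= 90:
        return "EXCELLENT"
    if score >= 85:
        return "GOOD"
    if score >= 75:
        return "ACCEPTABLE"
    return "NEEDS IMPROVEMENT"
```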

πŸ”§ Troubleshooting​

Issue: Script Won't Run​

# Check Python version (need 3.7+)
python3 --version

# Install dependencies
pip install pyyaml colorama tqdm

# Verify config exists
ls category-config.yaml
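The three manual checks above can be bundled into one preflight sketch (the `preflight` function is hypothetical, not part of the toolset; note the pip package `pyyaml` is imported as `yaml`):

```python
import importlib.util
import sys
from pathlib import Path

def preflight(config_path: str = "category-config.yaml") -> list[str]:
    """Return a list of problems; an empty list means the script should run."""
    problems = []
    # Mirrors `python3 --version` (need 3.7+)
    if sys.version_info < (3, 7):
        problems.append(f"Python 3.7+ required, found {sys.version.split()[0]}")
    # Mirrors `pip install pyyaml colorama tqdm`
    for module in ("yaml", "colorama", "tqdm"):
        if importlib.util.find_spec(module) is None:
            problems.append(f"missing dependency: {module}")
    # Mirrors `ls category-config.yaml`
    if not Path(config_path).is_file():
        problems.append(f"config not found: {config_path}")
    return problems
```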

Issue: Low Quality Scores (<85)​

Cause: Not following Aviation exemplar patterns closely enough.

Fix:

# Review Aviation exemplar
cat 02-technical-training/aviation/basic-template.md

# Use Task tool for analysis
Use codebase-analyzer subagent to extract Aviation quality patterns:

Analyze what makes Aviation category 98/100 quality:
1. Structural elements
2. Content density
3. Prompt quality
4. Documentation completeness

Apply findings to regenerate low-scoring category.

Issue: Token Budget Exhausted​

Cause: Generating too many categories in one Task call.

Fix:

# Break into smaller batches (3 categories instead of 6)
# OR do one category at a time (Option A)

πŸ“š Key Files Reference​

Automation Framework​

| File | Purpose | Size |
|------|---------|------|
| automation-framework.md | Complete framework documentation | 30+ KB |
| category-config.yaml | Category definitions & metadata | 15 KB |
| generate-categories.py | Automated generation script | 20+ KB |
| quick-start-automation.md | This quick start guide | 12 KB |
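For orientation, a category entry in category-config.yaml looks roughly like this (field names are illustrative, not the actual schema; values are taken from the instructor-segments config earlier in this guide):

```yaml
# Hypothetical sketch of one category-config.yaml entry -- consult
# automation-framework.md for the real schema.
categories:
  instructor-segments:
    priority: high
    platform: HeyGen
    cost_per_sec: 0.016
    use_cases:
      - Course intros
      - Concept explanations
      - Demos
      - Assessments
```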

Aviation Exemplar (98/100 Quality)​

| File | Purpose | Size |
|------|---------|------|
| README.md | User-facing guide | 9.3 KB |
| CLAUDE.md | AI automation | 16.5 KB |
| basic-template.md | 5 simple prompts | 4.2 KB |
| advanced-template.md | 5 production prompts | 14.8 KB |
| examples.md | 10 tested prompts | 12+ KB |
| customization-guide.md | Adaptation guide | 19.2 KB |

🎯 Success Metrics​

Framework Complete When:

  • βœ… All 90 files generated (15 categories Γ— 6 files)
  • βœ… Average quality score >= 90/100
  • βœ… Master index created
  • βœ… All cross-references valid
  • βœ… No empty placeholders remaining

Current Status:

  • βœ… Aviation category: 100% complete (98/100 quality)
  • βœ… Automation framework: Complete
  • ⏸️ Remaining 15 categories: Ready for Session 1

πŸ’‘ Pro Tips​

Tip 1: Use Dry Run First​

# Always preview before generating
python3 generate-categories.py --priority high --dry-run

Tip 2: Generate One Category First​

# Test workflow with single category before batch
python3 generate-categories.py --category instructor-segments

Tip 3: Save Orchestrator Prompts​

# Save successful prompts for reuse
cat > orchestrator-prompt-template.txt <<EOF
Use orchestrator subagent to complete {CATEGORY} category...
EOF

Tip 4: Track Quality Scores​

# Keep log of quality scores per category
echo "instructor-segments: 92/100" >> quality-log.txt
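A quality-log.txt in this format is easy to summarize against the 90/100 target. A minimal sketch, assuming lines shaped like the `echo` above (the `summarize` function is illustrative):

```python
import re

def summarize(log_text: str, target: int = 90) -> dict:
    """Summarize quality-log.txt lines like 'instructor-segments: 92/100'."""
    scores = {
        m.group(1): int(m.group(2))
        for m in re.finditer(r"^([\w-]+):\s*(\d+)/100", log_text, re.MULTILINE)
    }
    below = sorted(name for name, s in scores.items() if s < target)
    avg = sum(scores.values()) / len(scores) if scores else 0.0
    return {"average": round(avg, 1), "below_target": below}
```

Categories in `below_target` are candidates for regeneration per the quality scoring table.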

Tip 5: Apply Lessons Learned​

  • Session 1 teaches optimal prompting patterns
  • Session 2 goes faster with learned patterns
  • Session 3 is fastest (3 categories in 1 hour possible)

πŸ“ž Need Help?​

Automation Issues:

  • Preview with python3 generate-categories.py --dry-run before re-running
  • See automation-framework.md and the Troubleshooting section above
Quality Issues:

  • Study Aviation exemplar
  • Use codebase-analyzer to extract patterns
  • Increase specificity in orchestrator prompts

Content Issues:

  • Review platform documentation
  • Validate cost estimates
  • Check use case alignment

Time to Complete: 5-8 hours (across 3 sessions)
Quality Target: 90+/100
Based on Exemplar: Aviation Category (98/100)
Created: 2025-11-29


Part of CODITECT Training Framework - Video Generation Toolset

Ready to start? Run: python3 generate-categories.py --priority high --dry-run