# Agent Team Assembly Strategy for Prompt Library Generation
**Task:** Extract prompts from 151,000+ words of research and create a categorized prompt library
**Created:** 2025-11-29
**Strategy:** Multi-agent orchestration with specialized roles
## 🎯 Optimal Agent Team (7 Agents + 1 Orchestrator)

### Phase 1: Research Extraction & Pattern Discovery

#### Agent 1: codebase-locator
**Role:** Initial reconnaissance - find WHERE prompts live
**Tools:** Grep, Glob, LS
**Why This Agent:**
- ✅ Specialized in finding locations - Designed specifically for "WHERE is X?" queries
- ✅ Efficient search patterns - Uses optimized Grep/Glob strategies
- ✅ Organized results - Returns structured file listings by category
- ✅ Fast execution - Haiku model for speed, perfect for discovery phase
**Specific Task:**
Search all 9 research documents for:
- Sections containing "prompt", "template", "example"
- Code blocks with video generation syntax
- Sections titled "Prompt Template", "Prompting", "Examples"
- Return file:line references for all matches
**Expected Output:**

```text
research-video-instructional-material-prompting-techniques.md:198-350 (Part 107 Prompt Templates)
research-video-instructional-material-prompting-techniques.md:651-800 (Master Style Reference)
google-veo-comprehensive-guide.md:859-975 (Veo Prompt Templates)
ai-video-generation-howto.md:450-600 (Tool-Specific Prompts)
...
```
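The search pass above can be sketched as a small script. The `research/` directory name, the keyword list, and the per-line matching are illustrative assumptions, not the agent's actual implementation:

```python
import re
from pathlib import Path

# Illustrative keyword set; the real agent may use richer Grep patterns.
KEYWORDS = re.compile(r"prompt|template|example", re.IGNORECASE)

def find_prompt_sections(root: str = "research") -> list[str]:
    """Return file:line references for lines matching prompt keywords."""
    hits: list[str] = []
    for doc in sorted(Path(root).glob("*.md")):
        lines = doc.read_text(encoding="utf-8").splitlines()
        for lineno, line in enumerate(lines, start=1):
            if KEYWORDS.search(line):
                hits.append(f"{doc.name}:{lineno}")
    return hits
```

Adjacent hits would still need to be merged into section ranges (e.g. `198-350`), which is left out of this sketch.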
#### Agent 2: codebase-pattern-finder

**Role:** Extract concrete prompt examples and identify patterns
**Tools:** Read, Write, Edit, Grep, Glob, LS, TodoWrite
**Why This Agent:**
- ✅ Pattern recognition specialist - Designed to find similar implementations
- ✅ Provides code details - Not just locations, but actual prompt content
- ✅ Auto-categorization - Can organize prompts by complexity (simple vs advanced)
- ✅ Confidence indicators - Scores pattern quality and relevance
**Specific Task:**
Based on locations from codebase-locator:
1. Read each identified section
2. Extract all prompt examples (code blocks, templates, syntax patterns)
3. Identify prompt structure patterns (anatomy, components, formatting)
4. Group similar prompts (instructor-led, B-roll, animations, etc.)
5. Assess prompt quality and completeness
**Expected Output:**

```json
{
  "instructor_prompts": [
    {
      "source": "RESEARCH...md:210-225",
      "pattern": "HeyGen avatar + professional tone",
      "confidence": 0.95,
      "example": "Medium shot of [instructor]...",
      "complexity": "basic"
    },
    ...
  ],
  "b_roll_prompts": [...],
  "animation_prompts": [...]
}
```
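Downstream agents might filter this catalog on the confidence scores. A minimal sketch, assuming the JSON shape shown above; the 0.8 threshold is an illustrative default, not a project standard:

```python
def high_confidence_prompts(catalog: dict, threshold: float = 0.8) -> dict:
    """Keep only prompts whose confidence score meets the threshold."""
    return {
        category: [p for p in prompts if p.get("confidence", 0.0) >= threshold]
        for category, prompts in catalog.items()
    }
```

Categories are preserved even when empty, so the taxonomy stays stable for later phases.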
#### Agent 3: codebase-analyzer

**Role:** Deep-dive technical analysis of prompt anatomy and best practices
**Tools:** Read, Write, Edit, Grep, Glob, LS, TodoWrite
**Why This Agent:**
- ✅ Understands HOW things work - Analyzes implementation details
- ✅ Architectural pattern recognition - Identifies prompt design patterns
- ✅ Technical documentation - Can explain complex technical concepts
- ✅ Integration analysis - Understands how prompts work with different tools (HeyGen, Veo, etc.)
**Specific Task:**
Analyze extracted prompts to understand:
1. Prompt anatomy (7-component structure from research)
2. Platform-specific requirements (Veo vs HeyGen vs Runway syntax)
3. Consistency techniques (Ingredients to Video, style references)
4. Quality factors (what makes a prompt effective)
5. Common mistakes and anti-patterns
6. Integration with production workflows
**Expected Output:**

```markdown
## Prompt Anatomy Analysis

### Universal Structure (7 Components):
1. Shot Type - Determines framing (MS, CU, WS, etc.)
2. Subject - Primary focus (instructor, drone, diagram)
3. Action - Movement/behavior
4. Environment - Setting/background
5. Style - Visual aesthetic
6. Camera Movement - Motion (dolly, pan, static)
7. Audio Description - Sound requirements

### Platform-Specific Variations:
- **HeyGen**: Requires avatar selection, voice clone ID, minimal shot direction
- **Veo**: Supports cinematic terminology, duration specification, "Ingredients to Video"
- **Runway**: Creative freedom, abstract descriptions acceptable
...
```
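The 7-component anatomy can be exercised with a small builder. A sketch only: the comma-joined rendering and the `Audio:` suffix are formatting assumptions, not a requirement of any platform named here:

```python
from dataclasses import dataclass

@dataclass
class PromptSpec:
    """One field per component of the universal 7-part anatomy."""
    shot_type: str
    subject: str
    action: str
    environment: str
    style: str
    camera_movement: str
    audio: str

    def render(self) -> str:
        # Join the components in anatomy order into one prompt string.
        return ", ".join([
            self.shot_type, self.subject, self.action,
            self.environment, self.style, self.camera_movement,
            f"Audio: {self.audio}",
        ])
```

Platform-specific variants (HeyGen, Veo, Runway) could then be separate `render` strategies over the same spec.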
### Phase 2: Content Organization & Documentation

#### Agent 4: documentation-librarian

**Role:** Organize extracted prompts into the category structure
**Tools:** Read, Write, Edit, Bash, Glob, Grep, LS, TodoWrite
**Why This Agent:**
- ✅ Documentation organization specialist - Designed for creating navigable systems
- ✅ Auto-indexing - Can generate indexes and navigation files automatically
- ✅ Cross-reference management - Links related content across categories
- ✅ Freshness monitoring - Ensures content stays up-to-date
**Specific Task:**
Organize extracted prompts into the following directory structure:

```text
/prompts/
  01-educational-content/
    instructor-segments/        <- all instructor-related prompts, grouped by
                                   difficulty (basic, intermediate, advanced)
    concept-explanations/
    procedure-demonstrations/
    assessment-content/
  02-technical-training/
    aviation/                   <- PRIORITY for Part 107
    technology/
    professional/
  03-b-roll-footage/
  04-animations/
  05-specialized/
```

Create a taxonomy and ensure no duplicate prompts across categories.
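A sketch of scaffolding this tree with `pathlib`; the taxonomy dictionary mirrors the directories named above (only the subdirectories actually shown in the tree are included):

```python
from pathlib import Path

TAXONOMY = {
    "01-educational-content": [
        "instructor-segments", "concept-explanations",
        "procedure-demonstrations", "assessment-content",
    ],
    "02-technical-training": ["aviation", "technology", "professional"],
    "03-b-roll-footage": [],
    "04-animations": [],
    "05-specialized": [],
}

def scaffold(root: str = "prompts") -> list:
    """Create every directory in the taxonomy, returning the created paths."""
    created = []
    for parent, children in TAXONOMY.items():
        # Parents without children still get a directory of their own.
        for leaf in children or [""]:
            path = Path(root, parent, leaf)
            path.mkdir(parents=True, exist_ok=True)
            created.append(path)
    return created
```

`exist_ok=True` makes the scaffold idempotent, so re-running the phase is safe.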
**Expected Output:**

- ✅ Prompt catalog organized (120+ prompts categorized)
- ✅ Cross-reference map created (related prompts linked)
- ✅ Duplicate detection complete (3 duplicates found, resolved)
- ✅ Priority ranking applied (aviation=high, general=medium, specialized=low)
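The duplicate-detection step might be implemented by hashing normalized prompt text; treating case and punctuation as insignificant is an assumption about what counts as a duplicate:

```python
import hashlib
import re

def find_duplicates(prompts: dict) -> list:
    """Return (category, prompt) pairs whose normalized text was seen before."""
    seen: dict = {}
    dupes: list = []
    for category, texts in prompts.items():
        for text in texts:
            # Collapse case and non-word characters before hashing.
            normalized = re.sub(r"\W+", " ", text.lower()).strip()
            key = hashlib.sha256(normalized.encode()).hexdigest()
            if key in seen:
                dupes.append((category, text))
            else:
                seen[key] = category
    return dupes
```

Near-duplicates (reworded prompts) would need fuzzier matching than exact hashing; this sketch only catches trivial variants.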
#### Agent 5: codi-documentation-writer

**Role:** Generate comprehensive README.md and CLAUDE.md for each category
**Tools:** Read, Write, Edit, Grep, Glob, TodoWrite, Bash
**Why This Agent:**
- ✅ Comprehensive technical writer - Creates API docs, user guides, system documentation
- ✅ Quality assurance expert - Ensures documentation meets standards
- ✅ Enterprise-grade quality - 40/40 scoring, ADR compliance
- ✅ Cross-document consistency - Maintains uniform style across all files
**Specific Task:**

For EACH of 17 directories (5 parent + 12 child):

1. **README.md**
   - Category overview and purpose
   - When to use these prompts
   - Best practices specific to this category
   - Quick reference tables
   - Related categories
   - 5-10 prompt examples with explanations
2. **CLAUDE.md**
   - AI agent automation guidance
   - Integration with CODITECT workflow
   - Quality standards for this prompt type
   - Common mistakes to avoid
   - Programmatic usage examples
**Expected Output:** 17 directories × 2 files = 34 documentation files (README + CLAUDE)
### Phase 3: Prompt Template Generation

#### Agent 6: educational-content-generator

**Role:** Create prompt examples and templates at multiple skill levels
**Tools:** Read, Write, Edit, Bash, Grep, Glob, TodoWrite
**Why This Agent:**
- ✅ Multi-level content creation - Generates beginner, intermediate, advanced versions
- ✅ Pedagogical expertise - Applies learning science principles
- ✅ NotebookLM optimization - Formats content for AI consumption
- ✅ Progressive difficulty - Ensures appropriate complexity scaling
**Specific Task:**

For each of 12 subcategories, create:

1. **basic-template.md**
   - Simple, beginner-friendly structure
   - Minimal parameters
   - Clear annotations explaining each component
   - Example: "Medium shot of instructor at desk"
2. **advanced-template.md**
   - Complex, production-ready structure
   - All optional parameters
   - Professional cinematography terms
   - Example: "Medium shot (f/2.8), professional instructor..."
3. **examples.md**
   - 5-10 real-world examples
   - Platform-specific variations (HeyGen, Veo, Runway)
   - Cost estimates
   - Quality tier recommendations
4. **customization-guide.md**
   - How to adapt templates for different subjects
   - Variable substitution patterns
   - Platform migration guide
**Expected Output:** 12 subcategories × 4 files = 48 template files
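Emitting the 48 files could look like the following sketch; the stub contents and title derivation are placeholders for whatever the generator actually writes (requires Python 3.9+ for `removesuffix`):

```python
from pathlib import Path

# The four per-subcategory files named in the task above.
TEMPLATE_FILES = [
    "basic-template.md", "advanced-template.md",
    "examples.md", "customization-guide.md",
]

def emit_templates(root: str, subcategories: list) -> int:
    """Write the four template stubs into each subcategory; return file count."""
    count = 0
    for sub in subcategories:
        for name in TEMPLATE_FILES:
            path = Path(root, sub, name)
            path.parent.mkdir(parents=True, exist_ok=True)
            title = name.removesuffix(".md").replace("-", " ").title()
            path.write_text(f"# {title}\n", encoding="utf-8")
            count += 1
    return count
```

The returned count gives the Phase 3 quality gate ("48 template files created") a direct check.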
#### Agent 7: ai-curriculum-specialist

**Role:** Validate educational prompts and ensure pedagogical soundness
**Tools:** Read, Write, Edit, Bash, Grep, Glob, TodoWrite
**Why This Agent:**
- ✅ Curriculum development expert - Master of educational architecture
- ✅ Learning objective alignment - Ensures prompts support learning goals
- ✅ Quality validation - Reviews content for pedagogical effectiveness
- ✅ Standards compliance - Checks against educational best practices
**Specific Task:**
Review all educational prompts (categories 01-educational-content):
1. Verify alignment with Mayer's Multimedia Learning Principles
2. Ensure appropriate cognitive load (6-minute video length guideline)
3. Validate accessibility considerations
4. Check for clear learning objectives
5. Assess engagement and retention strategies
6. Recommend improvements for educational effectiveness
**Expected Output:**

- ✅ 45 educational prompts reviewed
- ✅ 3 prompts flagged for cognitive overload (simplified)
- ✅ 2 prompts enhanced for accessibility (captions guidance added)
- ✅ All prompts aligned with learning science principles
- ✅ Quality score: 92/100 (enterprise-grade)
### Phase 4: Coordination & Quality Assurance

#### Agent 8: orchestrator (Coordinator)

**Role:** Coordinate all 7 agents across 4 phases
**Tools:** TodoWrite, Read, Write, Edit, Grep, Glob, Bash
**Why This Agent:**
- ✅ Multi-agent coordination specialist - Designed for complex workflows
- ✅ Token budget management - Prevents context collapse
- ✅ Recursive workflow execution - Handles cascading tasks
- ✅ Quality gate enforcement - Ensures standards compliance
**Specific Task:**

**Phase 1: Orchestrate Research Extraction**
1. Launch codebase-locator → get file:line references
2. Launch codebase-pattern-finder → extract prompts (depends on step 1)
3. Launch codebase-analyzer → analyze patterns (depends on step 2)
4. Quality gate: Verify 100+ prompts extracted

**Phase 2: Orchestrate Content Organization**
1. Launch documentation-librarian → organize prompts
2. Launch codi-documentation-writer → generate docs (depends on step 1)
3. Quality gate: Verify 34 documentation files created

**Phase 3: Orchestrate Template Generation**
1. Launch educational-content-generator → create templates
2. Launch ai-curriculum-specialist → validate (depends on step 1)
3. Quality gate: Verify 48 template files created

**Phase 4: Final QA & Delivery**
1. Comprehensive review of all 82 files
2. Cross-reference validation
3. Generate master index
4. Create continuation plan
**Expected Output:**

- ✅ Phase 1 Complete: 120+ prompts extracted and analyzed
- ✅ Phase 2 Complete: 34 documentation files generated
- ✅ Phase 3 Complete: 48 template files created
- ✅ Phase 4 Complete: 82 total files, all quality gates passed
- ✅ Deliverable: Production-ready prompt library
## 📊 Agent Selection Rationale

### Why NOT Use Other Agents?
**❌ research-agent**
- Too broad: Designed for web search and external research
- Not optimized: Would search web instead of analyzing local documents
- Better fit: codebase-pattern-finder for local pattern extraction
**❌ web-search-researcher**
- External focus: Searches web sources, not local files
- Redundant: We already have 151,000 words of research locally
- Use case: Would only use if researching NEW prompt techniques not in our docs
**❌ thoughts-analyzer**
- Wrong domain: Designed for strategic analysis of research notes
- Overkill: We need prompt extraction, not strategic synthesis
- Better fit: codebase-analyzer for technical prompt analysis
**❌ generative-ui-* agents**
- UI-specific: Designed for React/Vue/Svelte component generation
- Wrong domain: We're generating prompts for video, not UI code
## ⚡ Execution Strategy

### Sequential vs Parallel Execution
**Phase 1: SEQUENTIAL** (agents depend on each other)

```text
codebase-locator → codebase-pattern-finder → codebase-analyzer
  (find WHERE)         (extract WHAT)           (analyze HOW)
```

**Phase 2: PARALLEL** (agents work independently after Phase 1)

```text
documentation-librarian   <- organizes extracted prompts
codi-documentation-writer <- generates docs (can work simultaneously)
```

**Phase 3: SEQUENTIAL** (validation depends on generation)

```text
educational-content-generator → ai-curriculum-specialist
      (create templates)            (validate quality)
```
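The sequential/parallel split can be sketched with a thread pool; the `launch` callable is a stand-in for whatever agent-launching API the orchestrator actually uses:

```python
from concurrent.futures import ThreadPoolExecutor

def run_phase_1(launch):
    # Sequential: each agent consumes the previous agent's output.
    locations = launch("codebase-locator", None)
    prompts = launch("codebase-pattern-finder", locations)
    return launch("codebase-analyzer", prompts)

def run_phase_2(launch, prompts):
    # Parallel: both agents start from the same extracted prompts.
    with ThreadPoolExecutor(max_workers=2) as pool:
        organized = pool.submit(launch, "documentation-librarian", prompts)
        docs = pool.submit(launch, "codi-documentation-writer", prompts)
        return organized.result(), docs.result()
```

Quality gates would sit between the phase functions, checking each return value before the next launch.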
## 🎯 Success Criteria

### Quantitative Metrics
- ✅ 120+ prompts extracted from research
- ✅ 82 total files created (34 docs + 48 templates)
- ✅ 100% coverage across all 5 categories
- ✅ Zero duplicates across categories
- ✅ Quality score 90+ on all documentation
### Qualitative Metrics
- ✅ Production-ready templates (copy-paste usable)
- ✅ Clear navigation system (users find prompts in <2 minutes)
- ✅ Educational soundness (aligned with learning science)
- ✅ Platform compatibility (HeyGen, Veo, Runway variations)
- ✅ Integration guidance (works with CODITECT workflows)
## 💡 Key Insights

### Why This Team Composition is Optimal
- **Specialized Skills:** Each agent has a distinct, non-overlapping role
- **Complementary Tools:** Different tool access matches different needs
  - codebase-locator: Grep, Glob, LS (search)
  - codebase-pattern-finder: Read, Write, Edit (extract and organize)
  - codebase-analyzer: deep technical analysis (understand)
  - documentation-librarian: Bash, organization (structure)
  - codi-documentation-writer: comprehensive writing (document)
  - educational-content-generator: multi-level creation (teach)
  - ai-curriculum-specialist: pedagogical validation (quality)
- **Efficient Workflow:** Sequential where dependencies exist, parallel where possible
- **Quality Assurance:** Multiple validation checkpoints throughout the process
- **No Redundancy:** No overlapping agent capabilities
### Alternative Approaches (Less Optimal)
**❌ Single Agent Approach:** Use orchestrator for everything
- Problem: Token limit exceeded, context collapse, no specialization
- Better: Divide work among specialists
**❌ Too Many Agents:** Add research-agent, web-search-researcher, etc.
- Problem: Redundant work, coordination overhead, diminishing returns
- Better: Minimal effective team (7 agents is optimal)
**❌ Wrong Agent Mix:** Use generative-ui agents for prompts
- Problem: Wrong domain expertise, poor quality output
- Better: Use domain-specific agents (educational, documentation)
## 📋 Implementation Checklist

### Pre-Execution
- Verify all 9 research documents accessible
- Confirm directory structure created (17 directories)
- Allocate token budget (50K per phase = 200K total)
- Set up quality gates and acceptance criteria
### Phase 1 Execution
- Launch codebase-locator for file:line discovery
- Launch codebase-pattern-finder for prompt extraction
- Launch codebase-analyzer for technical analysis
- Quality gate: 100+ prompts extracted ✅
### Phase 2 Execution
- Launch documentation-librarian for organization
- Launch codi-documentation-writer for docs generation
- Quality gate: 34 documentation files created ✅
### Phase 3 Execution
- Launch educational-content-generator for templates
- Launch ai-curriculum-specialist for validation
- Quality gate: 48 template files created ✅
### Phase 4 Execution
- Orchestrator runs final QA review
- Cross-reference validation complete
- Master index generated
- Continuation plan created
- Quality gate: 82 total files, all standards met ✅
**Recommended Execution:** Use the orchestrator agent to coordinate this entire strategy automatically.
**Command:**

```python
Task(
    subagent_type="orchestrator",
    prompt="Execute the 4-phase prompt library generation workflow using the 7-agent team defined in agent-team-strategy.md"
)
```
**Expected Outcome:** Complete, production-ready prompt library with 82 files in 4-6 hours of agent work.
*Agent Team Strategy v1.0 - Optimized for prompt library generation from 151,000+ words of research*