# Interactive Project Builder
## How to Use This Skill
- Review the patterns and examples below
- Apply the relevant patterns to your implementation
- Follow the best practices outlined in this skill
## Purpose
This skill provides a fully interactive, general-purpose framework for analyzing any aspect of a codebase and generating comprehensive project plans. It dynamically adapts to user needs by:
- Asking what to analyze (presents choices + allows custom input)
- Inventorying the codebase (folders, files, metrics)
- Running appropriate analysis tools (lint, security, coverage, etc.)
- Generating deliverables (project-plan.md, tasklist-with-checkboxes.md)
## When to Use
- Starting a new analysis or improvement project
- Creating structured project plans from analysis results
- Building infrastructure for recurring analysis workflows
- Generating documentation from codebase exploration
## Instructions

### Phase 1: Interactive Discovery

**Objective:** Understand what the user wants to analyze and build.

#### Step 1: Present Analysis Type Options

Ask the user to select an analysis type. ALWAYS include an "Other (specify)" option:

```markdown
## What do you want to analyze and build a project for?

1. **Markdown Quality** - Lint errors, formatting, documentation quality
2. **Code Quality** - Python/JS lint, type coverage, test coverage
3. **Security Audit** - Vulnerability scan, dependency audit, secrets detection
4. **Documentation Coverage** - Missing docs, outdated content, broken links
5. **Performance Analysis** - Bottlenecks, optimization opportunities
6. **Architecture Review** - Patterns, dependencies, technical debt
7. **Other** - Describe your analysis type: ___________
```
#### Step 2: Present Scope Options

Ask the user to define the analysis scope:

```markdown
## What is the scope of analysis?

1. **Single Folder** - Analyze one specific directory
2. **Full Repository** - Analyze the entire codebase
3. **Multiple Submodules** - Analyze across git submodules
4. **Custom Selection** - I'll specify which folders: ___________
```
#### Step 3: Present Deliverable Options

Ask what outputs the user needs:

```markdown
## What deliverables do you need?

1. **PROJECT-PLAN + TASKLIST** - Standard planning documents with checkboxes
2. **Full Infrastructure** - Also create agents, commands, scripts, skills
3. **Automation Only** - Scripts to fix issues automatically
4. **Report Only** - Analysis report without an action plan
5. **Other** - Describe your deliverables: ___________
```
### Phase 2: Codebase Inventory

**Objective:** Build a complete map of the analysis scope.

#### Step 1: List All Folders

Use Bash/LS to inventory top-level directories:

```bash
ls -d */ | grep -v node_modules | grep -v backups
```

#### Step 2: Count Files Per Folder

For each folder, count the relevant files:

```bash
find <folder> -name "*.<extension>" -type f | wc -l
```
#### Step 3: Generate Inventory Table

Create a structured inventory:
| Folder | File Count | File Types | Purpose |
|---|---|---|---|
| agents/ | 68 | .md | Agent definitions |
| commands/ | 100 | .md | Slash commands |
| ... | ... | ... | ... |
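Steps 2-3 above can be automated in a single pass. A minimal sketch of what an inventory helper might look like (the function names and `SKIP_DIRS` filter are illustrative, not the actual `scripts/codebase-inventory.py`; the rendered table is trimmed to two columns):

```python
from collections import Counter
from pathlib import Path

# Directories excluded from the inventory (assumption: mirrors the grep -v filters above)
SKIP_DIRS = {"node_modules", "backups", ".git"}

def inventory(root: str, extension: str = ".md") -> dict[str, int]:
    """Count files with the given extension in each top-level folder under root."""
    counts: Counter[str] = Counter()
    for path in Path(root).rglob(f"*{extension}"):
        rel = path.relative_to(root)
        # Skip top-level files and excluded directories
        if len(rel.parts) > 1 and SKIP_DIRS.isdisjoint(path.parts):
            counts[rel.parts[0]] += 1
    return dict(counts)

def as_table(counts: dict[str, int]) -> str:
    """Render the counts as a markdown inventory table."""
    rows = [f"| {folder}/ | {n} |" for folder, n in sorted(counts.items())]
    return "\n".join(["| Folder | File Count |", "|---|---|", *rows])
```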
### Phase 3: Run Analysis

**Objective:** Execute the appropriate analysis tools based on the user's selection.

#### Analysis Type: Markdown Quality

```bash
markdownlint-cli2 "<folder>/**/*.md" 2>&1 | grep -c "MD[0-9]"
```

#### Analysis Type: Code Quality

```bash
# Python
ruff check <folder> --statistics
pylint <folder> --score=n

# TypeScript/JavaScript
eslint <folder> --format=json
```

#### Analysis Type: Security

```bash
# Dependency audit
pip-audit
npm audit

# Secret detection
trufflehog filesystem <folder>
```

#### Analysis Type: Documentation

```bash
# Check for README files
find <folder> -name "README.md" | wc -l

# Check for broken links
markdown-link-check <folder>/**/*.md
```
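Tool failures should not pass silently: check exit codes and capture stderr rather than assuming success. A hedged sketch of the kind of wrapper `scripts/analysis-runner.py` might use (the function name and return shape are assumptions):

```python
import shutil
import subprocess

def run_tool(cmd: list[str], timeout: int = 300) -> dict:
    """Run one analysis tool, capturing output and surfacing failures explicitly."""
    # Verify the tool exists before running it (principle #8: No Assumptions)
    if shutil.which(cmd[0]) is None:
        return {"ok": False, "error": f"{cmd[0]} is not installed"}
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return {"ok": False, "error": f"{cmd[0]} timed out after {timeout}s"}
    # Many linters exit non-zero when they find issues, so keep stdout either way
    return {
        "ok": proc.returncode == 0,
        "exit_code": proc.returncode,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
    }
```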
### Phase 4: Generate Deliverables

**Objective:** Create project planning documents from analysis results.

#### Step 1: Create project-plan.md

Use the template from `CODITECT-CORE-STANDARDS/TEMPLATES/PROJECT-PLAN-TEMPLATE.md`:

```markdown
# [ANALYSIS-TYPE] - Project Plan

**Product:** [Analysis Type] Analysis Project
**Repository:** [Repository Name]
**Status:** Planning Phase
**Last Updated:** [DATE]

## Executive Summary

[Summary of analysis findings and planned remediation]

## Analysis Results

### Inventory Summary

| Folder | Files | Issues | Priority |
|--------|-------|--------|----------|
| ... | ... | ... | ... |

### Issue Distribution

[Charts/tables showing issue types and counts]

## Implementation Roadmap

### Phase 1: [Name]

- [ ] Task 1
- [ ] Task 2

### Phase 2: [Name]

- [ ] Task 3
- [ ] Task 4

## Success Metrics

- [ ] Metric 1: [Target]
- [ ] Metric 2: [Target]
```
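Populating the template can be a straightforward substitution over its bracketed placeholders. A sketch, assuming the markers shown above (the helper name is illustrative):

```python
from datetime import date

def fill_plan(template: str, analysis_type: str, repo: str) -> str:
    """Substitute the bracketed placeholders in the PROJECT-PLAN template."""
    return (
        template
        .replace("[ANALYSIS-TYPE]", analysis_type.upper())   # e.g. "MARKDOWN QUALITY"
        .replace("[Analysis Type]", analysis_type.title())   # e.g. "Markdown Quality"
        .replace("[Repository Name]", repo)
        .replace("[DATE]", date.today().isoformat())
    )
```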
#### Step 2: Create tasklist-with-checkboxes.md

Use the template from `CODITECT-CORE-STANDARDS/TEMPLATES/TASKLIST-WITH-CHECKBOXES-TEMPLATE.md`:

```markdown
# [ANALYSIS-TYPE] - Task List with Checkboxes

## Legend

- [ ] Pending
- [x] Completed
- [>] In Progress
- [!] Blocked

## Phase 0: Discovery (Complete)

- [x] User interview complete
- [x] Analysis type selected
- [x] Scope defined
- [x] Inventory generated

## Phase 1: Analysis

- [ ] Run analysis tools
- [ ] Document findings
- [ ] Categorize by severity

## Phase 2: Planning

- [ ] Create task breakdown
- [ ] Assign priorities
- [ ] Estimate effort

## Phase 3: Implementation

[Tasks generated from analysis]
```
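The legend's markers make progress machine-readable. A sketch of counting task states from a tasklist file (the regex and state names are assumptions):

```python
import re

# Maps the legend's checkbox markers to state names
STATES = {" ": "pending", "x": "completed", ">": "in_progress", "!": "blocked"}

def count_tasks(tasklist_md: str) -> dict[str, int]:
    """Tally tasks in each state from '- [ ]' style checkbox lines."""
    counts = {state: 0 for state in STATES.values()}
    for marker in re.findall(r"^\s*- \[(.)\]", tasklist_md, flags=re.MULTILINE):
        if marker in STATES:
            counts[STATES[marker]] += 1
    return counts
```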
## Examples

### Example 1: Markdown Quality Analysis

**User Request:** "Analyze markdown quality in the docs folder"

**Discovery Responses:**

- Analysis Type: Markdown Quality
- Scope: Single Folder (docs/)
- Deliverables: PROJECT-PLAN + TASKLIST
**Inventory Output:**

```text
docs/: 370 .md files
docs/01-getting-started/: 12 files
docs/02-architecture/: 45 files
...
```

**Analysis Output:**

```text
Total Errors: 4,138
MD040: 1,200 (fenced code language)
MD036: 800 (emphasis as heading)
MD032: 600 (lists need blank lines)
```

**Deliverables Created:**

- `MARKDOWN-QUALITY-project-plan.md`
- `MARKDOWN-QUALITY-tasklist-with-checkboxes.md`
### Example 2: Security Audit

**User Request:** "Run a security audit on the entire repository"

**Discovery Responses:**

- Analysis Type: Security Audit
- Scope: Full Repository
- Deliverables: Full Infrastructure

**Infrastructure Created:**

- `agents/security-audit-specialist.md`
- `commands/security-audit.md`
- `skills/security-audit/SKILL.md`
- `scripts/run-security-audit.py`
- `SECURITY-AUDIT-project-plan.md`
- `SECURITY-AUDIT-tasklist-with-checkboxes.md`
## Integration

### Primary Agents

- `orchestrator` - Coordinates multi-agent analysis workflows
- `codebase-locator` - Finds files and directories
- `codebase-analyzer` - Analyzes code patterns
- `project-organizer` - Organizes project structure
- `codi-documentation-writer` - Creates documentation
### Related Commands

- `/build-project` - User-facing interface for this skill
- `/research-codebase` - Comprehensive codebase research
- `/create-plan` - Interactive plan creation

### Supporting Scripts

- `scripts/interactive-project-builder.py` - Main orchestration script
- `scripts/codebase-inventory.py` - Folder/file inventory
- `scripts/analysis-runner.py` - Run analysis tools
## Configuration

### Required Tools

Depending on the analysis type, ensure these are installed:

- Markdown: `markdownlint-cli2`
- Python: `ruff`, `pylint`, `mypy`
- JavaScript: `eslint`, `typescript`
- Security: `pip-audit`, `npm audit`, `trufflehog`
- Documentation: `markdown-link-check`
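Availability of these tools should be verified before Phase 3 runs (principle #8: No Assumptions). A minimal sketch; the `REQUIRED` mapping is an illustrative reading of the list above, and since `npm audit` is a subcommand, only the `npm` binary is checked:

```python
import shutil

# Tools required per analysis type (assumption: mirrors the Required Tools list)
REQUIRED = {
    "markdown": ["markdownlint-cli2"],
    "code-quality": ["ruff", "pylint", "mypy"],
    "security": ["pip-audit", "npm", "trufflehog"],
    "documentation": ["markdown-link-check"],
}

def missing_tools(analysis_type: str) -> list[str]:
    """Return the required tools that are not on PATH for this analysis type."""
    return [t for t in REQUIRED.get(analysis_type, []) if shutil.which(t) is None]
```

An empty return value means the analysis can proceed; otherwise, report the missing tools to the user before running anything.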
### Environment Variables

```bash
CODITECT_ANALYSIS_OUTPUT_DIR="./analysis-output"
CODITECT_TEMPLATE_DIR="./CODITECT-CORE-STANDARDS/TEMPLATES"
```
## References

### Level 3 Resources

- `FORMS.md` - Question templates and forms
- `REFERENCE.md` - Complete analysis tool reference
- `scripts/` - Automation utilities

### Templates

- `CODITECT-CORE-STANDARDS/TEMPLATES/PROJECT-PLAN-TEMPLATE.md`
- `CODITECT-CORE-STANDARDS/TEMPLATES/TASKLIST-WITH-CHECKBOXES-TEMPLATE.md`
## Success Output

When this skill completes successfully, output:

```text
✅ SKILL COMPLETE: interactive-project-builder

Completed:
- [x] User interview conducted (analysis type, scope, deliverables)
- [x] Codebase inventory generated (folders, files, metrics)
- [x] Analysis tools executed (results captured)
- [x] project-plan.md created
- [x] tasklist-with-checkboxes.md created
- [x] Infrastructure components generated (if requested)

Outputs:
- [ANALYSIS-TYPE]-project-plan.md
- [ANALYSIS-TYPE]-tasklist-with-checkboxes.md
- agents/[analysis]-specialist.md (if full infrastructure)
- commands/[analysis].md (if full infrastructure)
- skills/[analysis]/SKILL.md (if full infrastructure)
- scripts/[analysis]-runner.py (if automation requested)

Project Details:
- Analysis Type: [type]
- Scope: [scope description]
- Total Files Analyzed: XXX
- Issues Found: XXX
- Tasks Created: XXX
```
## Completion Checklist

Before marking this skill as complete, verify:
- User interview completed (all 3 discovery questions answered)
- Codebase inventory generated with file counts per folder
- Analysis tools ran successfully (no execution errors)
- project-plan.md created following template structure
- tasklist-with-checkboxes.md created with Phase 0 marked complete
- Infrastructure components created if requested (agents/commands/skills/scripts)
- Issue distribution documented (by severity/type/folder)
- Recommendations prioritized (P0/P1/P2)
- Success metrics defined with target values
## Failure Indicators

This skill has FAILED if:
- ❌ User interview incomplete (skipped discovery questions)
- ❌ Inventory missing folders (incomplete traversal)
- ❌ Analysis tool errors not handled (crashes without recovery)
- ❌ project-plan.md doesn't follow template (missing sections)
- ❌ TASKLIST has no checkboxes (not using correct format)
- ❌ Infrastructure components malformed (invalid YAML, syntax errors)
- ❌ No issue categorization (all marked same priority)
- ❌ Recommendations too vague ("fix issues" without specifics)
- ❌ No success metrics defined (can't measure completion)
## When NOT to Use

Do NOT use this skill when:

- **Requirements already clear** - Don't interview if you know exactly what to build
- **No codebase to analyze** - Can't inventory non-existent code
- **One-off quick task** - The overhead of a full project plan is not justified
- **Emergency hotfix** - No time for a discovery phase; just fix the bug
- **External project** - You don't control scope/requirements
- **Recurring analysis** - Use existing automation; don't rebuild each time
- **Simple single-file change** - Project planning is overkill for trivial changes

Use these alternatives instead:

- **Clear requirements**: Skip directly to Phase 2 (analysis)
- **No codebase**: Use `project-organization` to create the structure first
- **Quick task**: Just do the work; document afterwards if needed
- **Emergency**: Fix immediately; run a retrospective analysis later
- **External**: Adapt to their project structure and processes
- **Recurring**: Invoke existing scripts/agents; don't recreate them
- **Simple changes**: Direct edit with a commit message; no plan needed
## Anti-Patterns (Avoid)
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Skipping user interview | Build wrong thing, wasted effort | ALWAYS ask 3 discovery questions |
| No "Other" option | Forces user into predefined boxes | Include "Other (specify)" for flexibility |
| Incomplete inventory | Analysis misses files, inaccurate results | Traverse all folders in scope, verify counts |
| Ignoring analysis errors | Tool failures go unnoticed | Check exit codes, capture stderr |
| Generic PROJECT-PLAN | Copy-paste without customization | Populate with actual analysis results |
| Tasks without checkboxes | Can't track progress | Use `- [ ]` format for all tasks |
| No Phase 0 documentation | Discovery phase undocumented | Mark Phase 0 complete with user choices |
| Infrastructure without need | Creates unused components | Only build infrastructure if user requested |
| No success metrics | Can't tell when done | Define measurable targets for each goal |
| Deliverables in wrong location | Hard to find outputs | Follow standard paths (root for plans, categorized for infra) |
## Principles

This skill embodies these CODITECT principles:

### #1 Recycle → Extend → Re-Use → Create
- Reuse PROJECT-PLAN and TASKLIST templates
- Extend templates with analysis-specific sections
- Create new infrastructure only when existing doesn't fit
### #2 First Principles Thinking
- Understand WHY user wants analysis (problem to solve, not tool to run)
- Question scope assumptions: does full repo analysis add value?
- Design from user goals down, not tool capabilities up
### #3 Keep It Simple
- Start with 3 discovery questions, expand only if needed
- Use templates instead of creating from scratch
- Automate inventory and analysis execution
### #5 Eliminate Ambiguity
- Explicit analysis type (not "some kind of code check")
- Clear scope boundaries (specific folders, not "maybe these")
- Unambiguous deliverables (list exact file paths)
### #6 Clear, Understandable, Explainable
- PROJECT-PLAN explains WHY each task matters
- TASKLIST shows progress with visual checkboxes
- Discovery questions guide user through decision process
### #8 No Assumptions
- Don't assume user knows what they want; ask questions
- Don't assume tools installed; verify before execution
- Don't assume inventory complete; verify folder counts
### #11 Automate Everything
- Automated discovery question presentation
- Automated inventory generation via scripts
- Automated deliverable creation from templates
- Automated infrastructure generation when requested
**Full Standard:** CODITECT-STANDARD-AUTOMATION.md

**Version:** 1.1.0 | **Updated:** 2026-01-04 | **Quality Standard:** SKILL-QUALITY-STANDARD.md v1.0.0