1-2-3 Research to Project: Quick Start How-To
- Time Required: 15-45 minutes (depending on scope)
- Difficulty: Beginner
- Prerequisites: CODITECT Core installed, Claude Code CLI active
- Output: 9 markdown artifacts, 4-6 interactive dashboards, 15-25 follow-up prompts, and optionally a bootstrapped project
Overview
The CODITECT Research Pipeline (/deep-dive) evaluates any technology, framework, or concept and produces a complete analysis suite — then optionally hands off to /new-project to bootstrap a production-ready project from the findings.
What You Get:
| Phase | Output | Count |
|---|---|---|
| Research | Markdown artifacts (SDD, TDD, C4, ADRs, Quick Start, Impact, Exec Summary, Glossary, Diagrams) | 9 |
| Dashboards | Interactive JSX dashboards (Architecture, Strategy, Integration, Decision Brief, + 2 extended) | 4-6 |
| Ideation | Categorized follow-up research prompts | 15-25 |
| Genesis | Bootstrapped CODITECT project with submodule, tracks, ADRs | 1 (optional) |
Step 1: Gather Your Sources
Before running the pipeline, collect everything you have about the technology you want to evaluate.
What to Prepare
| Source Type | Examples | How It Helps |
|---|---|---|
| URLs | Documentation sites, blog posts, API references | Web crawler extracts structured context |
| GitHub repos | Source repos, example projects, SDKs | Architecture analysis, code patterns |
| Local documents | PDFs, prior research, whitepapers, specs | Enriches context with proprietary knowledge |
| Original documents | RFPs, requirement docs, internal memos | Sets the problem framing |
Example: Evaluating a QMS Work Order System
Topic: "Bioscience QMS Work Order System"
URLs: https://www.fda.gov/regulatory-information/search-fda-guidance-documents
https://www.iso.org/standard/59752.html
GitHub: https://github.com/coditect-ai/coditect-biosciences-qms-platform
Local: ./reference-docs/fda-21-cfr-part-11.pdf
Originals: ./prior-research/qms-requirements-brief.md
You do not need all source types. The pipeline works with as little as a topic name — it will auto-search the web for sources.
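If you do pass local paths, it helps to confirm they exist before launching a long pipeline run. A minimal pre-flight sketch (the helper name and structure are illustrative, not part of CODITECT):

```python
from pathlib import Path

def check_local_sources(paths):
    """Split the local files/dirs you plan to pass into (found, missing) lists."""
    found, missing = [], []
    for p in paths:
        (found if Path(p).exists() else missing).append(p)
    return found, missing

found, missing = check_local_sources([
    "./reference-docs/fda-21-cfr-part-11.pdf",
    "./prior-research/qms-requirements-brief.md",
])
if missing:
    print("Missing sources:", ", ".join(missing))
```

Anything reported missing should be fixed or dropped before you answer the intake prompts, since the pipeline can only warn about paths it cannot read.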
Step 2: Run the Pipeline
Option A: Interactive Mode (Recommended for First Run)
/deep-dive
The pipeline prompts you for everything:
What technology or concept to research?
> Bioscience QMS Work Order System
Do you have URLs for web research?
> Yes, I have URLs
[provide URLs when prompted]
Do you have local documents to include?
> Yes, local docs
[provide path when prompted]
What output mode?
> Extended (9+6) ← 9 markdown + 6 dashboards
Option B: Single Command (Everything Specified)
/deep-dive "Bioscience QMS Work Order System" \
--urls "https://www.fda.gov/guidance" "https://www.iso.org/standard/59752.html" \
--github "https://github.com/coditect-ai/coditect-biosciences-qms-platform" \
--docs ./reference-docs/ \
--originals ./prior-research/ \
--output ./research-output/bio-qms/ \
--extended
Option C: Minimal (Topic Only)
/deep-dive "Temporal.io workflow engine"
The pipeline auto-searches the web and generates all artifacts from public information.
Option D: Dry Run (Preview Without Executing)
/deep-dive "CopilotKit AG-UI" --dry-run
Shows the execution plan (which agents, which models, estimated tokens) without running anything.
What Happens Behind the Scenes
The pipeline dispatches 14 specialized agents in parallel batches:
Phase 0: Intake ─────────────────────── Collect sources, configure output
│
Phase 1: Research (9 artifacts) │
Batch 0: Web Crawler ──────────────── Crawl all URLs, repos, docs → context JSON
Batch 1: ┌ Quick Start Generator ─┐
│ Impact Analyzer ├── 3 agents in parallel
└ Exec Summary Writer ──┘
Batch 2: ┌ SDD Generator ────────┐
│ TDD Generator ├── 3 agents in parallel
└ C4 Modeler ──────────┘
Batch 3: ┌ ADR Generator ────────┐
│ Glossary Builder ├── 3 agents in parallel
└ Mermaid Creator ─────┘
Quality Gate 1 ────────────────────── Validate all 9 artifacts
│
Phase 2: Dashboards (4-6 JSX) │
Batch 4: 4 core dashboards ────────── Architecture, Strategy, Integration, Decision
Batch 5: 2 extended dashboards ─────── Competitive, Implementation (if --extended)
Quality Gate 2 ────────────────────── Validate all dashboards
│
Phase 3: Ideation ───────────────────── 15-25 follow-up prompts across 6 categories
Model routing optimizes cost: Haiku for glossary/mermaid/crawling (~40% of tokens), Sonnet for design docs/dashboards (~55%), Opus for critical analysis like impact/exec-summary/SDD (~5%).
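The routing split above lends itself to a quick back-of-the-envelope cost estimate. The per-million-token prices in this sketch are placeholders (real pricing varies by provider and date); only the 40/55/5 token shares come from the text above:

```python
# Share of total tokens per model, per the routing split described above.
ROUTING = {"haiku": 0.40, "sonnet": 0.55, "opus": 0.05}
# PLACEHOLDER $/1M-token prices for illustration only, not real pricing.
PRICE_PER_M = {"haiku": 1.0, "sonnet": 5.0, "opus": 25.0}

def estimate_cost(total_tokens):
    """Weighted cost of a run under the routing split and placeholder prices."""
    return sum(total_tokens * share / 1e6 * PRICE_PER_M[model]
               for model, share in ROUTING.items())

print(f"${estimate_cost(2_000_000):.2f}")  # a hypothetical 2M-token run
```

The point of the exercise: even though Opus is by far the most expensive model, routing only ~5% of tokens to it keeps its contribution to total cost bounded.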
Step 3: Use the Results (or Bootstrap a Project)
Option A: Use the Artifacts Directly
After the pipeline completes, your output directory contains:
research-output/bio-qms/
├── README.md # Index with quality scores
├── pipeline-report.json # Execution metrics and timing
├── research-context.json # Structured source material
│
├── 1-2-3-detailed-quick-start.md # Dense setup guide
├── coditect-impact.md # CODITECT integration analysis
├── executive-summary.md # CTO-level decision brief
├── sdd.md # System Design Document
├── tdd.md # Technical Design Document
├── c4-architecture.md # C4 model (Context → Code)
├── glossary.md # A-Z terminology
├── mermaid-diagrams.md # Architecture diagrams
│
├── adrs/ # 3-7 Architecture Decision Records
│ ├── ADR-001-adoption-decision.md
│ ├── ADR-002-integration-pattern.md
│ └── ...
│
├── dashboards/ # Interactive JSX dashboards
│ ├── tech-architecture-analyzer.jsx
│ ├── strategic-fit-dashboard.jsx
│ ├── coditect-integration-playbook.jsx
│ ├── executive-decision-brief.jsx
│ ├── competitive-comparison.jsx # (--extended)
│ └── implementation-planner.jsx # (--extended)
│
└── follow-up-prompts.md # 15-25 categorized next-step prompts
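A quick completeness check against the tree above can catch a partially failed run. This sketch hard-codes the eight Phase 1 markdown filenames listed in the output tree (the ninth artifact, the ADR set, lives in `adrs/` with a variable file count, so it is better checked as a directory):

```python
# The Phase 1 artifact filenames, as listed in the output tree above.
EXPECTED = {
    "1-2-3-detailed-quick-start.md", "coditect-impact.md", "executive-summary.md",
    "sdd.md", "tdd.md", "c4-architecture.md", "glossary.md", "mermaid-diagrams.md",
}

def missing_artifacts(present):
    """Given the filenames actually on disk, return the expected ones absent."""
    return sorted(EXPECTED - set(present))

# In practice you would pass [p.name for p in Path(outdir).glob("*.md")].
print(missing_artifacts(["sdd.md", "tdd.md"]))
```

If anything comes back missing, the quality-gate retry notes in pipeline-report.json are the place to look next.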
Key artifacts by audience:
| Audience | Start With |
|---|---|
| Executive / CTO | executive-summary.md → executive-decision-brief.jsx |
| Architect | sdd.md → c4-architecture.md → tdd.md |
| Engineer | 1-2-3-detailed-quick-start.md → tdd.md |
| Compliance | coditect-impact.md (compliance surface section) |
| Product | strategic-fit-dashboard.jsx → competitive-comparison.jsx |
Option B: Bootstrap a Project (Genesis)
Add --genesis to automatically hand off research artifacts to /new-project:
/deep-dive "Bioscience QMS Work Order System" --extended --genesis
This runs the full pipeline, then invokes:
/new-project "Bioscience QMS Work Order System" \
--from-research ./research-output/bio-qms/
The genesis handoff:
- Reads your executive summary, SDD, TDD, C4, and ADRs
- Skips the discovery interview (already answered by research)
- Creates a submodule with proper CODITECT structure
- Generates TRACK files, initial ADRs, and project configuration
- Registers the project in projects.db
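The registration step can be pictured as a simple database insert. The table and column names below are hypothetical, since the actual projects.db schema is not documented here; the sketch uses an in-memory SQLite database purely to illustrate the idea:

```python
import sqlite3

# HYPOTHETICAL schema for projects.db; real column names may differ.
conn = sqlite3.connect(":memory:")  # genesis would open the real projects.db path
conn.execute(
    "CREATE TABLE IF NOT EXISTS projects (name TEXT PRIMARY KEY, research_dir TEXT)"
)
conn.execute("INSERT INTO projects VALUES (?, ?)",
             ("bioscience-qms-work-order-system", "./research-output/bio-qms/"))
row = conn.execute("SELECT research_dir FROM projects WHERE name = ?",
                   ("bioscience-qms-work-order-system",)).fetchone()
print(row[0])
conn.close()
```

Because the project name keys the row, re-running genesis for the same topic would conflict rather than silently duplicate the registration.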
Option C: Promote Artifacts to Permanent Locations
If you want research artifacts to live in the CODITECT framework (not just in a staging directory), use the promotion workflow:
# Preview what would be promoted
/research-promote bio-qms --dry-run
# Execute promotion (moves artifacts per ADR-207 taxonomy)
/research-promote bio-qms
Promotion destinations (ADR-207):
| Artifact | Promoted To |
|---|---|
| SDD | internal/architecture/sdd/{topic}-sdd.md |
| TDD | internal/architecture/tdd/{topic}-tdd.md |
| C4 | internal/architecture/c4-models/{topic}/ |
| ADRs | internal/architecture/adrs/ADR-NNN-*.md |
| Executive Summary | internal/research/executive-summaries/ |
| Glossary | internal/research/glossaries/ |
| Quick Start | internal/research/quick-start-guides/ |
| Dashboards | internal/dashboards/research/ |
| Manifest | internal/research/manifests/ |
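The table above is effectively a mapping from artifact type to a destination path template. A sketch of how that resolution might look, assuming `{topic}` is a pre-slugified name (the function is illustrative, not the actual `/research-promote` implementation):

```python
# Destination templates copied from the ADR-207 table above (subset shown).
PROMOTION_MAP = {
    "sdd": "internal/architecture/sdd/{topic}-sdd.md",
    "tdd": "internal/architecture/tdd/{topic}-tdd.md",
    "c4":  "internal/architecture/c4-models/{topic}/",
}

def promoted_path(artifact, topic):
    """Resolve the permanent destination for one staged artifact."""
    return PROMOTION_MAP[artifact].format(topic=topic)

print(promoted_path("sdd", "bio-qms"))
```

Running the resolver over every staged artifact is what a `--dry-run` preview would surface before any files move.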
Common Workflows
Evaluate a New Technology
/deep-dive "Temporal.io" --extended
# Review executive-summary.md for Go/No-Go recommendation
# Check coditect-impact.md for integration feasibility
# Open executive-decision-brief.jsx for interactive analysis
Research and Build
/deep-dive "E-Signature Platform" --extended --genesis
# Pipeline runs → artifacts generated → project bootstrapped
# cd into new submodule and start building
Quick Competitive Analysis
/deep-dive "LangGraph vs CrewAI vs AutoGen" --extended
# Check competitive-comparison.jsx for feature matrix
# Review follow-up-prompts.md for deeper investigation areas
Enrich Existing Research
/deep-dive "FDA 21 CFR Part 11 Compliance" \
--docs ./existing-analysis/ \
--originals ./regulatory-docs/
# Builds on your existing work rather than starting from scratch
Command Reference
| Command | Aliases | Purpose |
|---|---|---|
| /research-pipeline | /deep-dive, /rp | Run the full research pipeline |
| /new-project --from-research | — | Bootstrap a project from research output |
| /research-promote | /promote | Move staging artifacts to permanent locations |
/deep-dive Flags
| Flag | Short | Description |
|---|---|---|
| --urls | -u | URLs for web research (space-separated) |
| --github | -g | GitHub repository URLs |
| --docs | -d | Path to local reference documents |
| --originals | -o | Path to prior research / original documents |
| --output | — | Output directory (default: auto-generated) |
| --extended | -x | 6 dashboards instead of 4 |
| --genesis | — | Hand off to /new-project after pipeline |
| --skip-phase | — | Skip a phase (1, 2, 3, or 4) |
| --model | -m | Override model routing (haiku, sonnet, opus) |
| --dry-run | -n | Preview plan without executing |
| --batch-size | — | Max parallel agents per batch (default: 3) |
Troubleshooting
| Issue | Solution |
|---|---|
| Pipeline takes too long | Use --skip-phase 2 to skip dashboards, or --model haiku for faster, lower-quality runs |
| URL unreachable | Pipeline skips unreachable URLs and logs a warning — other sources still used |
| Quality gate fails | Failed artifacts auto-retry once with Opus. Check pipeline-report.json for details |
| Out of token budget | Reduce --batch-size to 2, or use --model haiku for economy mode |
| Genesis fails | Run /new-project --from-research <path> manually after pipeline |
| Want to re-run one artifact | Use /deep-dive "topic" --skip-phase 2 --skip-phase 3 to regenerate only Phase 1 |
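When a quality gate fails, pipeline-report.json is the first place to look. The report schema is assumed here for illustration (inspect your actual report before relying on these keys); the sketch just filters for artifacts that did not pass:

```python
import json

# ASSUMED report shape, for illustration only; the real schema may differ.
sample = json.loads("""
{"artifacts": [
  {"name": "sdd.md", "quality_gate": "pass"},
  {"name": "glossary.md", "quality_gate": "fail"}
]}
""")

failed = [a["name"] for a in sample["artifacts"] if a["quality_gate"] != "pass"]
print(failed)
```

In practice you would `json.load()` the report from the output directory instead of the inline sample, then re-run only the phases covering the failed artifacts.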
Related
- Command: /research-pipeline — Full command reference
- Command: /new-project — Project bootstrapping
- Command: /research-promote — Artifact promotion
- ADR: ADR-206 — Pipeline architecture
- ADR: ADR-207 — Artifact organization
- Standard: CODITECT-STD-019 — Research pipeline standard
- Workflow: WF-RESEARCH-PIPELINE — Workflow reference
Guide Version: 1.0.0
Created: 2026-02-16
Author: Hal Casteel, CEO/CTO, AZ1.AI Inc.
Owner: AZ1.AI Inc.
Copyright 2026 AZ1.AI Inc.