1-2-3 Research to Project: Quick Start How-To

Time Required: 15-45 minutes (depending on scope)
Difficulty: Beginner
Prerequisites: CODITECT Core installed, Claude Code CLI active
Output: 9 markdown artifacts, 4-6 interactive dashboards, 15-25 follow-up prompts, and optionally a bootstrapped project


Overview

The CODITECT Research Pipeline (/deep-dive) evaluates any technology, framework, or concept and produces a complete analysis suite — then optionally hands off to /new-project to bootstrap a production-ready project from the findings.

What You Get:

| Phase | Output | Count |
|---|---|---|
| Research | Markdown artifacts (SDD, TDD, C4, ADRs, Quick Start, Impact, Exec Summary, Glossary, Diagrams) | 9 |
| Dashboards | Interactive JSX dashboards (Architecture, Strategy, Integration, Decision Brief, + 2 extended) | 4-6 |
| Ideation | Categorized follow-up research prompts | 15-25 |
| Genesis | Bootstrapped CODITECT project with submodule, tracks, ADRs | 1 (optional) |

Step 1: Gather Your Sources

Before running the pipeline, collect everything you have about the technology you want to evaluate.

What to Prepare

| Source Type | Examples | How It Helps |
|---|---|---|
| URLs | Documentation sites, blog posts, API references | Web crawler extracts structured context |
| GitHub repos | Source repos, example projects, SDKs | Architecture analysis, code patterns |
| Local documents | PDFs, prior research, whitepapers, specs | Enriches context with proprietary knowledge |
| Original documents | RFPs, requirement docs, internal memos | Sets the problem framing |

Example: Evaluating a QMS Work Order System

Topic:     "Bioscience QMS Work Order System"
URLs:      https://www.fda.gov/regulatory-information/search-fda-guidance-documents
           https://www.iso.org/standard/59752.html
GitHub:    https://github.com/coditect-ai/coditect-biosciences-qms-platform
Local:     ./reference-docs/fda-21-cfr-part-11.pdf
Originals: ./prior-research/qms-requirements-brief.md

You do not need all source types. The pipeline works with as little as a topic name — it will auto-search the web for sources.


Step 2: Run the Pipeline

Option A: Interactive (Guided Prompts)

/deep-dive

The pipeline prompts you for everything:

What technology or concept to research?
> Bioscience QMS Work Order System

Do you have URLs for web research?
> Yes, I have URLs
[provide URLs when prompted]

Do you have local documents to include?
> Yes, local docs
[provide path when prompted]

What output mode?
> Extended (9+6) ← 9 markdown + 6 dashboards

Option B: Single Command (Everything Specified)

/deep-dive "Bioscience QMS Work Order System" \
--urls "https://www.fda.gov/guidance" "https://www.iso.org/standard/59752.html" \
--github "https://github.com/coditect-ai/coditect-biosciences-qms-platform" \
--docs ./reference-docs/ \
--originals ./prior-research/ \
--output ./research-output/bio-qms/ \
--extended

Option C: Minimal (Topic Only)

/deep-dive "Temporal.io workflow engine"

The pipeline auto-searches the web and generates all artifacts from public information.

Option D: Dry Run (Preview Without Executing)

/deep-dive "CopilotKit AG-UI" --dry-run

Shows the execution plan (which agents, which models, estimated tokens) without running anything.

What Happens Behind the Scenes

The pipeline dispatches 14 specialized agents in parallel batches:

Phase 0: Intake ─────────────────────── Collect sources, configure output

Phase 1: Research (9 artifacts) │
Batch 0: Web Crawler ──────────────── Crawl all URLs, repos, docs → context JSON
Batch 1: ┌ Quick Start Generator ─┐
│ Impact Analyzer ├── 3 agents in parallel
└ Exec Summary Writer ──┘
Batch 2: ┌ SDD Generator ────────┐
│ TDD Generator ├── 3 agents in parallel
└ C4 Modeler ──────────┘
Batch 3: ┌ ADR Generator ────────┐
│ Glossary Builder ├── 3 agents in parallel
└ Mermaid Creator ─────┘
Quality Gate 1 ────────────────────── Validate all 9 artifacts

Phase 2: Dashboards (4-6 JSX) │
Batch 4: 4 core dashboards ────────── Architecture, Strategy, Integration, Decision
Batch 5: 2 extended dashboards ─────── Competitive, Implementation (if --extended)
Quality Gate 2 ────────────────────── Validate all dashboards

Phase 3: Ideation ───────────────────── 15-25 follow-up prompts across 6 categories

Model routing optimizes cost: Haiku for glossary/mermaid/crawling (~40% of tokens), Sonnet for design docs/dashboards (~55%), Opus for critical analysis like impact/exec-summary/SDD (~5%).
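The routing split above can be turned into a rough cost estimate. This sketch uses the 40/55/5 token split from this guide; the per-model prices are illustrative placeholders, not published pricing.

```python
# Rough cost model for the pipeline's token routing split.
# SPLIT comes from this guide; PRICE_PER_MTOK values are
# hypothetical $/M-token figures for illustration only.

SPLIT = {"haiku": 0.40, "sonnet": 0.55, "opus": 0.05}
PRICE_PER_MTOK = {"haiku": 1.0, "sonnet": 5.0, "opus": 25.0}  # placeholder prices

def estimated_cost(total_tokens: int) -> float:
    """Blend per-model prices by the routing split."""
    return sum(
        total_tokens * share / 1_000_000 * PRICE_PER_MTOK[model]
        for model, share in SPLIT.items()
    )

# Example: a 2M-token run under these placeholder prices
print(f"${estimated_cost(2_000_000):.2f}")  # → $8.80
```

Even with placeholder prices, the takeaway holds: routing the bulk of tokens to cheaper models keeps the expensive Opus calls to a small slice of the total.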


Step 3: Use the Results (or Bootstrap a Project)

Option A: Use the Artifacts Directly

After the pipeline completes, your output directory contains:

research-output/bio-qms/
├── README.md # Index with quality scores
├── pipeline-report.json # Execution metrics and timing
├── research-context.json # Structured source material

├── 1-2-3-detailed-quick-start.md # Dense setup guide
├── coditect-impact.md # CODITECT integration analysis
├── executive-summary.md # CTO-level decision brief
├── sdd.md # System Design Document
├── tdd.md # Technical Design Document
├── c4-architecture.md # C4 model (Context → Code)
├── glossary.md # A-Z terminology
├── mermaid-diagrams.md # Architecture diagrams

├── adrs/ # 3-7 Architecture Decision Records
│ ├── ADR-001-adoption-decision.md
│ ├── ADR-002-integration-pattern.md
│ └── ...

├── dashboards/ # Interactive JSX dashboards
│ ├── tech-architecture-analyzer.jsx
│ ├── strategic-fit-dashboard.jsx
│ ├── coditect-integration-playbook.jsx
│ ├── executive-decision-brief.jsx
│ ├── competitive-comparison.jsx # (--extended)
│ └── implementation-planner.jsx # (--extended)

└── follow-up-prompts.md # 15-25 categorized next-step prompts
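If you script against this output, a quick completeness check is useful before consuming artifacts downstream. This sketch hard-codes the top-level filenames from the tree above; adjust the list if your pipeline version emits different names.

```python
# Completeness check for a research-output directory.
# EXPECTED mirrors the top-level files in the tree shown above.
from pathlib import Path

EXPECTED = [
    "README.md",
    "pipeline-report.json",
    "research-context.json",
    "1-2-3-detailed-quick-start.md",
    "coditect-impact.md",
    "executive-summary.md",
    "sdd.md",
    "tdd.md",
    "c4-architecture.md",
    "glossary.md",
    "mermaid-diagrams.md",
    "follow-up-prompts.md",
]

def missing_artifacts(output_dir: str) -> list[str]:
    """Return expected artifact filenames not present in output_dir."""
    root = Path(output_dir)
    return [name for name in EXPECTED if not (root / name).exists()]

print(missing_artifacts("./research-output/bio-qms/"))
```

An empty list means the core markdown set is in place; ADRs and dashboards live in subdirectories and would need their own checks.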

Key artifacts by audience:

| Audience | Start With |
|---|---|
| Executive / CTO | executive-summary.md → executive-decision-brief.jsx |
| Architect | sdd.md → c4-architecture.md → tdd.md |
| Engineer | 1-2-3-detailed-quick-start.md → tdd.md |
| Compliance | coditect-impact.md (compliance surface section) |
| Product | strategic-fit-dashboard.jsx → competitive-comparison.jsx |

Option B: Bootstrap a Project (Genesis)

Add --genesis to automatically hand off research artifacts to /new-project:

/deep-dive "Bioscience QMS Work Order System" --extended --genesis

This runs the full pipeline, then invokes:

/new-project "Bioscience QMS Work Order System" \
--from-research ./research-output/bio-qms/

The genesis handoff:

  1. Reads your executive summary, SDD, TDD, C4, and ADRs
  2. Skips the discovery interview (already answered by research)
  3. Creates a submodule with proper CODITECT structure
  4. Generates TRACK files, initial ADRs, and project configuration
  5. Registers the project in projects.db

Option C: Promote Artifacts to Permanent Locations

If you want research artifacts to live in the CODITECT framework (not just in a staging directory), use the promotion workflow:

# Preview what would be promoted
/research-promote bio-qms --dry-run

# Execute promotion (moves artifacts per ADR-207 taxonomy)
/research-promote bio-qms

Promotion destinations (ADR-207):

| Artifact | Promoted To |
|---|---|
| SDD | internal/architecture/sdd/{topic}-sdd.md |
| TDD | internal/architecture/tdd/{topic}-tdd.md |
| C4 | internal/architecture/c4-models/{topic}/ |
| ADRs | internal/architecture/adrs/ADR-NNN-*.md |
| Executive Summary | internal/research/executive-summaries/ |
| Glossary | internal/research/glossaries/ |
| Quick Start | internal/research/quick-start-guides/ |
| Dashboards | internal/dashboards/research/ |
| Manifest | internal/research/manifests/ |
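The destination table above is mechanical enough to express in code. This sketch assumes `{topic}` substitution is a simple slug insert; the mapping keys are hypothetical shorthand names, not CLI identifiers.

```python
# Sketch of the ADR-207 promotion mapping shown in the table above.
# Keys are shorthand labels chosen for this example; paths come
# from the table. {topic} is assumed to be a plain slug substitution.
PROMOTION_MAP = {
    "sdd": "internal/architecture/sdd/{topic}-sdd.md",
    "tdd": "internal/architecture/tdd/{topic}-tdd.md",
    "c4": "internal/architecture/c4-models/{topic}/",
    "executive-summary": "internal/research/executive-summaries/",
    "glossary": "internal/research/glossaries/",
    "quick-start": "internal/research/quick-start-guides/",
    "dashboards": "internal/dashboards/research/",
    "manifest": "internal/research/manifests/",
}

def promoted_path(artifact: str, topic: str) -> str:
    """Resolve the permanent destination for a staged artifact."""
    return PROMOTION_MAP[artifact].format(topic=topic)

print(promoted_path("sdd", "bio-qms"))
# → internal/architecture/sdd/bio-qms-sdd.md
```

ADRs are omitted from the sketch because their `ADR-NNN-*.md` destinations depend on the next free ADR number, which the promotion command assigns.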

Common Workflows

Evaluate a New Technology

/deep-dive "Temporal.io" --extended
# Review executive-summary.md for Go/No-Go recommendation
# Check coditect-impact.md for integration feasibility
# Open executive-decision-brief.jsx for interactive analysis

Research and Build

/deep-dive "E-Signature Platform" --extended --genesis
# Pipeline runs → artifacts generated → project bootstrapped
# cd into new submodule and start building

Quick Competitive Analysis

/deep-dive "LangGraph vs CrewAI vs AutoGen" --extended
# Check competitive-comparison.jsx for feature matrix
# Review follow-up-prompts.md for deeper investigation areas

Enrich Existing Research

/deep-dive "FDA 21 CFR Part 11 Compliance" \
--docs ./existing-analysis/ \
--originals ./regulatory-docs/
# Builds on your existing work rather than starting from scratch

Command Reference

| Command | Aliases | Purpose |
|---|---|---|
| /research-pipeline | /deep-dive, /rp | Run the full research pipeline |
| /new-project --from-research | — | Bootstrap project from research output |
| /research-promote | /promote | Move staging artifacts to permanent locations |

/deep-dive Flags

| Flag | Short | Description |
|---|---|---|
| --urls | -u | URLs for web research (space-separated) |
| --github | -g | GitHub repository URLs |
| --docs | -d | Path to local reference documents |
| --originals | -o | Path to prior research / original documents |
| --output | | Output directory (default: auto-generated) |
| --extended | -x | Generate 6 dashboards instead of 4 |
| --genesis | | Hand off to /new-project after the pipeline |
| --skip-phase | | Skip a phase (1, 2, or 3) |
| --model | -m | Override model routing (haiku, sonnet, opus) |
| --dry-run | -n | Preview the plan without executing |
| --batch-size | | Max parallel agents per batch (default: 3) |
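For scripted or repeated runs, it can help to assemble the invocation programmatically. This is a hypothetical helper, not part of CODITECT; it only covers the flags listed in the reference table above.

```python
# Hypothetical helper: assemble a /deep-dive invocation string
# from options. Flag names follow the reference table above.
import shlex

def build_deep_dive(topic, urls=(), github=(), docs=None, originals=None,
                    output=None, extended=False, genesis=False, dry_run=False):
    """Build a shell-safe /deep-dive command line."""
    parts = ["/deep-dive", shlex.quote(topic)]
    if urls:
        parts += ["--urls", *map(shlex.quote, urls)]
    if github:
        parts += ["--github", *map(shlex.quote, github)]
    if docs:
        parts += ["--docs", docs]
    if originals:
        parts += ["--originals", originals]
    if output:
        parts += ["--output", output]
    if extended:
        parts.append("--extended")
    if genesis:
        parts.append("--genesis")
    if dry_run:
        parts.append("--dry-run")
    return " ".join(parts)

print(build_deep_dive("Temporal.io", extended=True, dry_run=True))
# → /deep-dive Temporal.io --extended --dry-run
```

Using `shlex.quote` keeps multi-word topics like "Bioscience QMS Work Order System" safe when the command is pasted into a shell-like prompt.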

Troubleshooting

| Issue | Solution |
|---|---|
| Pipeline takes too long | Use --skip-phase 2 to skip dashboards, or --model haiku for faster (lower-quality) output |
| URL unreachable | The pipeline skips unreachable URLs and logs a warning; other sources are still used |
| Quality gate fails | Failed artifacts auto-retry once with Opus; check pipeline-report.json for details |
| Out of token budget | Reduce --batch-size to 2, or use --model haiku for economy mode |
| Genesis fails | Run /new-project --from-research <path> manually after the pipeline |
| Want to re-run one artifact | Use /deep-dive "topic" --skip-phase 2 --skip-phase 3 to regenerate only Phase 1 |


Guide Version: 1.0.0
Created: 2026-02-16
Author: Hal Casteel, CEO/CTO AZ1.AI Inc.
Owner: AZ1.AI INC


Copyright 2026 AZ1.AI Inc.