
Research Pipeline

System Prompt

EXECUTION DIRECTIVE: When the user invokes /research-pipeline (or aliases /deep-dive, /rp), you MUST:

  1. IMMEDIATELY execute — no questions, no explanations first
  2. Follow the 4-phase pipeline defined below
  3. Use parallel agent dispatch for independent artifact generation
  4. Validate all outputs before presenting results

Usage

# Interactive mode (prompts for all inputs)
/research-pipeline

# Topic-only (interactive for additional inputs)
/research-pipeline "Temporal.io workflow engine"

# Full specification
/research-pipeline "Temporal.io" \
--urls "https://temporal.io/docs" "https://docs.temporal.io/develop" \
--github "https://github.com/temporalio/temporal" \
--docs ./reference-docs/ \
--originals ./prior-research/ \
--output ./research-output/temporal/ \
--extended \
--genesis

# Aliases
/deep-dive "Motia framework"
/rp "CopilotKit AG-UI protocol" --extended

Arguments

$ARGUMENTS - Research Topic (optional)

| Argument | Description |
| --- | --- |
| topic | Technology, framework, or concept to research |

Flags

| Flag | Short | Default | Description |
| --- | --- | --- | --- |
| --urls | -u | none | URLs for web research (space-separated) |
| --github | -g | none | GitHub repository URLs to analyze |
| --docs | -d | none | Path to local reference documents |
| --originals | -o | none | Path to prior research/original documents |
| --output | | auto-generated | Output directory for artifacts |
| --extended | -x | false | Generate 6 dashboards instead of 4 |
| --genesis | | false | Hand off to /new-project after completion |
| --venture | -v | false | Full venture lifecycle: genesis + submodule + TRACKs + registration + classify |
| --extend | | none | Extend research to additional market (e.g., --extend "Canada") |
| --strategic | | false | Generate GTM, marketing, and opportunity assessment docs |
| --skip-phase | | none | Skip specific phase (1, 2, 3, or 4) |
| --model | -m | auto | Override model routing (haiku, sonnet, opus) |
| --dry-run | -n | false | Preview pipeline plan without execution |
| --batch-size | | 3 | Max parallel agents per batch |

Pipeline Execution

Phase 0: Interactive Intake

When invoked, collect all required inputs. If flags are provided, use them. Otherwise, prompt interactively.

Step 0.1: Collect Topic

If no topic argument provided:

AskUserQuestion: "What technology, framework, or concept would you like to research?"

Step 0.2: Collect Sources

If no source flags provided, ask interactively:

questions = [
    {
        "question": "Do you have URLs for web research?",
        "header": "URLs",
        "options": [
            {"label": "Yes, I have URLs", "description": "Provide URLs for documentation, blogs, repos"},
            {"label": "No, auto-search", "description": "Pipeline will search the web automatically"},
        ],
    },
    {
        "question": "Do you have local documents to include?",
        "header": "Documents",
        "options": [
            {"label": "Yes, local docs", "description": "Provide path to reference documents"},
            {"label": "Yes, prior research", "description": "Provide path to prior research artifacts"},
            {"label": "No local docs", "description": "Research from web sources only"},
        ],
    },
    {
        "question": "What output mode do you want?",
        "header": "Mode",
        "options": [
            {"label": "Standard (9+4)", "description": "9 markdown artifacts + 4 JSX dashboards"},
            {"label": "Extended (9+6)", "description": "9 markdown + 6 JSX dashboards (adds competitive + implementation)"},
            {"label": "Full + Genesis", "description": "Extended mode + hand off to /new-project for project creation"},
        ],
    },
]

Step 0.3: Configure Output

# Auto-generate output directory if not specified
output_dir = args.output or f"analyze-new-artifacts/{slugify(topic)}/artifacts/"

# Create directory structure
mkdir -p {output_dir}/adrs
mkdir -p {output_dir}/dashboards
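slugify is referenced throughout this spec but never defined; a minimal sketch of the assumed behavior (the real helper may differ in Unicode handling or length caps):

```python
import re

def slugify(topic: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with hyphens, trim edges.

    Hypothetical helper -- the pipeline's actual slugify may differ.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower())
    return slug.strip("-")
```

For example, `slugify("Temporal.io workflow engine")` yields `temporal-io-workflow-engine`, matching the directory naming used below.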

Step 0.4: Generate Research Context

Write pipeline-config.json with all collected inputs for reproducibility.
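The schema of pipeline-config.json is not specified here; one plausible shape, with every field name an assumption:

```python
import json
from pathlib import Path

def write_pipeline_config(output_dir: str, topic: str, args: dict) -> Path:
    """Persist collected intake inputs so a run can be reproduced.

    Field names are illustrative -- the real schema may differ.
    """
    config = {
        "topic": topic,
        "urls": args.get("urls", []),
        "github": args.get("github", []),
        "docs": args.get("docs"),
        "originals": args.get("originals"),
        "extended": args.get("extended", False),
        "genesis": args.get("genesis", False),
    }
    path = Path(output_dir) / "pipeline-config.json"
    path.write_text(json.dumps(config, indent=2))
    return path
```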


Phase 1: Research Artifact Generation

Execute in parallel batches using Task agents. Each agent receives the research context and produces one artifact.
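The batch contract (bounded by --batch-size, default 3; each batch completes before the next begins so downstream agents can read upstream artifacts) can be sketched with plain callables standing in for Task dispatch:

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch_in_batches(tasks, batch_size=3):
    """Run agent tasks in batches of at most batch_size (--batch-size).

    Each batch fully completes before the next starts. `tasks` is a list
    of no-arg callables; real agent dispatch is assumed to behave similarly.
    """
    results = []
    for i in range(0, len(tasks), batch_size):
        batch = tasks[i:i + batch_size]
        with ThreadPoolExecutor(max_workers=batch_size) as pool:
            results.extend(pool.map(lambda t: t(), batch))
    return results
```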

Batch 0 — Foundation (Sequential, must complete first):

# Web research agent crawls all sources, produces structured context
context = Task(
    subagent_type="web-search-researcher",
    prompt=f"""Research "{topic}" comprehensively.

Sources to crawl:
- URLs: {urls}
- GitHub repos: {github_urls}
- Local documents: {doc_paths}

Produce a structured JSON research context covering:
1. Architecture and runtime model
2. Language/runtime support (TypeScript, Python priority)
3. State management, observability, operations
4. Security, multi-tenancy, isolation
5. AI/agent capabilities and orchestration model
6. Deployment/hosting models and ecosystem maturity
7. Compliance surface area

Output: {output_dir}/research-context.json
""",
)

Batch 1 — Core Analysis (3 parallel agents):

Launch these simultaneously after Batch 0 completes:

# Agent 1: Quick Start
Task(subagent_type="research-agent", model="haiku", prompt="""
Using research context at {output_dir}/research-context.json,
generate a dense quick-start guide following the v7.0 Artifact 1 spec.
... [full spec from system prompt Section 4.4, Artifact 1]
Output: {output_dir}/1-2-3-detailed-quick-start.md
""")

# Agent 2: CODITECT Impact
Task(subagent_type="research-agent", model="sonnet", prompt="""
Using research context at {output_dir}/research-context.json,
generate a CODITECT impact analysis following v7.0 Artifact 2 spec.
... [full spec from system prompt Section 4.4, Artifact 2]
Output: {output_dir}/coditect-impact.md
""")

# Agent 3: Executive Summary
Task(subagent_type="research-agent", model="sonnet", prompt="""
Using research context at {output_dir}/research-context.json,
generate an executive summary following v7.0 Artifact 3 spec.
... [full spec from system prompt Section 4.4, Artifact 3]
Output: {output_dir}/executive-summary.md
""")

Batch 2+3 — Design Docs + Reference Materials (6 parallel agents):

Launch all 6 simultaneously after Batch 1 completes:

# Batch 2: Design Documents (depend on Batch 0 + 1)
Task(subagent_type="software-design-document-specialist", model="sonnet",
     prompt="Generate SDD... Output: {output_dir}/sdd.md")
Task(subagent_type="research-agent", model="sonnet",
     prompt="Generate TDD... Output: {output_dir}/tdd.md")
Task(subagent_type="research-agent", model="sonnet",
     prompt="Generate C4 architecture... Output: {output_dir}/c4-architecture.md")

# Batch 3: Reference Materials (depend on Batch 0 only)
Task(subagent_type="research-agent", model="sonnet",
     prompt="Generate 3-7 ADRs... Output: {output_dir}/adrs/")
Task(subagent_type="research-agent", model="haiku",
     prompt="Generate glossary... Output: {output_dir}/glossary.md")
Task(subagent_type="research-agent", model="haiku",
     prompt="Generate mermaid diagrams... Output: {output_dir}/mermaid-diagrams.md")

Phase 1 Quality Gate:

After all Batch 1-3 agents complete, validate:

  • All 9 artifacts exist and have content
  • YAML frontmatter is valid
  • No empty sections
  • Cross-references are consistent

If validation fails for any artifact, retry that agent once with the Opus model.
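A minimal sketch of the gate's first three checks (artifact exists with content, frontmatter delimiters present, no empty `##` sections); the real validator is assumed to be stricter and to also verify cross-references:

```python
from pathlib import Path

def validate_artifact(path: Path) -> list[str]:
    """Return a list of problems found in one artifact (empty = pass)."""
    if not path.exists() or path.stat().st_size == 0:
        return [f"{path.name}: missing or empty"]
    problems = []
    text = path.read_text()
    # YAML frontmatter: an opening and a closing '---' delimiter
    if not text.startswith("---") or text.count("---") < 2:
        problems.append(f"{path.name}: missing YAML frontmatter")
    # Every '## ' section must have a non-empty body
    for section in text.split("\n## ")[1:]:
        body = section.partition("\n")[2].strip()
        if not body:
            title = section.splitlines()[0]
            problems.append(f"{path.name}: empty section '{title}'")
    return problems
```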


Phase 2: Visualization (JSX Dashboards)

Step 2.1: Aggregate Phase 1 artifacts

Read all 9 markdown artifacts and structure into a unified data object for dashboard generation.
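A minimal sketch of this aggregation step, assuming raw markdown keyed by filename stem is enough for the dashboard generators (the real aggregator likely parses sections into a richer structure):

```python
from pathlib import Path

def aggregate_artifacts(output_dir: str) -> dict:
    """Collect every Phase 1 markdown artifact (including ADRs in
    subdirectories) into one dict for dashboard generation."""
    data = {}
    for md in sorted(Path(output_dir).glob("**/*.md")):
        data[md.stem] = md.read_text()
    return data
```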

Step 2.2: Generate dashboards (parallel)

# Core dashboards (always)
Task(subagent_type="research-agent", model="sonnet",
     prompt="Generate tech-architecture-analyzer.jsx... Output: {output_dir}/dashboards/")
Task(subagent_type="research-agent", model="sonnet",
     prompt="Generate strategic-fit-dashboard.jsx...")
Task(subagent_type="research-agent", model="sonnet",
     prompt="Generate coditect-integration-playbook.jsx...")
Task(subagent_type="research-agent", model="sonnet",
     prompt="Generate executive-decision-brief.jsx...")

# Extended dashboards (if --extended)
if extended:
    Task(subagent_type="research-agent", model="sonnet",
         prompt="Generate competitive-comparison.jsx...")
    Task(subagent_type="research-agent", model="sonnet",
         prompt="Generate implementation-planner.jsx...")

JSX Design System Rules (MANDATORY):

All dashboards MUST follow these rules:

  • Background: #FFFFFF, #F8FAFC, #F1F5F9 (light mode ONLY)
  • Text: #111827 (primary), #374151 (secondary) — NEVER light gray on white
  • Container: max-w-6xl mx-auto
  • Tabs via useState, expandable accordions, text filter for large tables
  • Single file, all data inline, default export
  • Only useState, useCallback, useMemo from React
  • Only lucide-react@0.263.1 for icons
  • No dark backgrounds, no pie charts, no text < 14px

Phase 3: Ideation (Follow-up Prompts)

Generate 15-25 categorized prompts across 6 categories:

Task(subagent_type="research-agent", model="sonnet", prompt="""
Based on all research artifacts generated for "{topic}",
generate 15-25 follow-up research prompts across these categories:

1. Architecture Deep-Dives (3-5 prompts)
2. Compliance & Regulatory (2-4 prompts)
3. Multi-Agent Orchestration (2-4 prompts)
4. Competitive & Market Intelligence (2-3 prompts)
5. Product Feature Extraction (2-4 prompts)
6. Risk & Mitigation (2-3 prompts)

Each prompt must be self-contained with CODITECT context.
Output: {output_dir}/follow-up-prompts.md
""")

Phase 4: Project Genesis (Optional)

If the --genesis flag is set:

# Transform research artifacts into /new-project discovery brief
project_brief = {
    "source": "research-pipeline",
    "topic": topic,
    "executive_summary": read(f"{output_dir}/executive-summary.md"),
    "sdd": read(f"{output_dir}/sdd.md"),
    "tdd": read(f"{output_dir}/tdd.md"),
    "c4": read(f"{output_dir}/c4-architecture.md"),
    "adrs": read_dir(f"{output_dir}/adrs/"),
    "impact": read(f"{output_dir}/coditect-impact.md"),
}

# Write project brief for /new-project consumption
write(f"{output_dir}/project-brief.json", project_brief)

# Invoke /new-project with --from-research flag
# This skips the discovery interview and proceeds to submodule creation
invoke("/new-project", f"{topic} --from-research {output_dir}")

Phase 4b: Market Extension (Optional)

If the --extend "<market>" flag is set, run a secondary research phase targeting that market:

if args.extend:
    market = args.extend  # e.g., "Canada"

    # Step 4b.1: Secondary web research targeting the new market
    Task(subagent_type="web-search-researcher", model="sonnet", prompt=f"""
Research "{topic}" specifically for the {market} market.
Produce: {output_dir}/{slugify(market)}-research-context.json
Cover: market size, regulations, key players, cultural considerations,
bilateral trade, regulatory bodies, language requirements.
""")

    # Step 4b.2: Update all existing artifacts with market-specific data
    for artifact in all_artifacts:
        Task(subagent_type="research-agent", model="sonnet", prompt=f"""
Read {artifact} and the {market} research context.
Update with {market}-specific data: regulatory, market size,
competitive landscape, compliance requirements.
Preserve all existing content. ADD new sections, don't replace.
""")

    # Step 4b.3: Generate market-specific ADR (e.g., cross-border strategy)
    Task(subagent_type="research-agent", model="sonnet", prompt=f"""
Generate an ADR for {market} expansion strategy.
Output: {output_dir}/adrs/ADR-00N-{slugify(market)}-expansion-strategy.md
""")

Phase 4c: Strategic Document Generation (Optional)

If the --strategic flag is set (or implied by --venture), generate business strategy documents:

if args.strategic or args.venture:
    # Step 4c.1: GTM Strategy
    Task(subagent_type="research-agent", model="sonnet", prompt=f"""
Using all research artifacts for "{topic}",
generate a comprehensive Go-to-Market strategy.
Include: phased entry, pricing, team, budget, timeline.
Output: {output_dir}/us-canada-gtm-strategy.md (or market-appropriate name)
""")

    # Step 4c.2: Marketing Strategy (AI adoption gap)
    Task(subagent_type="research-agent", model="sonnet", prompt=f"""
Generate an AI adoption marketing strategy targeting businesses
that haven't formally deployed AI.
Output: {output_dir}/ai-adoption-marketing-strategy.md
""")

    # Step 4c.3: Opportunity Assessment
    Task(subagent_type="research-agent", model="sonnet", prompt=f"""
Generate a combined opportunity assessment with TAM/SAM/SOM,
financial projections, competitive landscape, investment thesis.
Output: {output_dir}/opportunity-assessment.md
""")

Phase 5: Generate Pipeline Report + README

# Generate pipeline-report.json with execution metrics
report = {
    "topic": topic,
    "started_at": start_time,
    "completed_at": end_time,
    "duration_seconds": elapsed,
    "phases_completed": [0, 1, 2, 3],
    "artifacts_generated": artifact_count,
    "agents_dispatched": agent_count,
    "tokens_consumed": total_tokens,
    "quality_score": avg_quality,
    "model_usage": {"haiku": n, "sonnet": n, "opus": n},
    "errors": error_list,
}

# Generate README.md index
# Lists all artifacts with descriptions and quality scores
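A sketch of the README index generator; the quality mapping and table layout are assumptions, and the per-artifact descriptions (which the real index would pull from frontmatter) are omitted:

```python
from pathlib import Path

def write_readme_index(output_dir: str, quality: dict) -> None:
    """Emit a README.md table listing each artifact with its quality score.

    `quality` maps filename -> score (percent); hypothetical interface.
    """
    lines = ["# Research Artifacts", "", "| Artifact | Quality |", "| --- | --- |"]
    for md in sorted(Path(output_dir).glob("*.md")):
        if md.name == "README.md":
            continue
        lines.append(f"| [{md.name}]({md.name}) | {quality.get(md.name, '-')}% |")
    (Path(output_dir) / "README.md").write_text("\n".join(lines) + "\n")
```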

Phase 6: Venture Lifecycle (Optional — --venture)

If the --venture flag is set, execute the full post-genesis venture lifecycle. This automates every step that was previously manual:

if args.venture:
    project_name = slugify(topic)  # e.g., "chamber-of-commerce-canada"
    venture_path = f"submodules/ventures/{project_name}"

    # Step 6.1: Create GitHub repository
    Bash(f"gh repo create coditect-ai/{project_name} --private --description '{topic}'")

    # Step 6.2: Initialize as venture submodule
    Bash(f"""
cd {ROLLOUT_MASTER}
git submodule add https://github.com/coditect-ai/{project_name}.git {venture_path}
cd {venture_path}
""")

    # Step 6.3: Create CODITECT symlinks
    Bash(f"""
cd {venture_path}
ln -s ~/.coditect .coditect
ln -s .coditect .claude
""")

    # Step 6.4: Create standard directory structure
    Task(subagent_type="project-structure-optimizer", prompt=f"""
Create CODITECT standard directory structure at {venture_path}:
docs/{{guides,reference,adrs,research,session-logs}}
src/{{components,services,utils}}
tests/{{unit,integration}}
scripts/, config/, data/{{raw,processed}}
internal/{{analysis,project/plans/tracks}}
""")

    # Step 6.5: Copy research artifacts to docs/research/
    Bash(f"cp -r {output_dir}/* {venture_path}/docs/research/")

    # Step 6.6: Generate research-aware README
    Task(subagent_type="research-agent", model="sonnet", prompt=f"""
Using the executive summary and research context,
generate a project README.md with:
- Project overview from executive summary
- Key market data (TAM/SAM/SOM, competitors, recommendation)
- Quick start for contributors
- Link to docs/research/ artifacts
Output: {venture_path}/README.md
""")

    # Step 6.7: Generate TRACK files (NOT TASKLIST.md)
    # Invoke /new-project with --from-research --tracks
    Task(subagent_type="orchestrator", prompt=f"""
Generate CODITECT TRACK files from research artifacts:
- Derive tracks from SDD sections, TDD APIs, ADR topics
- Create TRACK-R-RESEARCH.md (auto-complete from pipeline)
- Create TRACK-V-VALIDATION.md (from exec summary recommendations)
- Create TRACK-D-DEVELOPMENT.md (from SDD/TDD)
- Create TRACK-G-GTM.md (from GTM/marketing strategy, if --strategic)
- Create TRACK-O-OPERATIONS.md (from compliance ADRs)
- Create MASTER-TRACK-INDEX.md with phase timeline
Output: {venture_path}/internal/project/plans/tracks/
""")

    # Step 6.8: Generate PROJECT-PLAN.md
    Task(subagent_type="software-design-document-specialist", prompt=f"""
Generate PROJECT-PLAN.md from research artifacts.
Include phased development plan, milestones, go/no-go gates.
Output: {venture_path}/docs/project-management/PROJECT-PLAN.md
""")

    # Step 6.9: Initial commit and push
    Bash(f"""
cd {venture_path}
git add -A
git commit -m "feat: Initial project setup — research complete, TRACK files generated"
# Verify remote before push
REMOTE=$(git remote get-url origin)
if echo "$REMOTE" | grep -q "github.com/coditect-ai/"; then
    git push -u origin main
else
    echo "ERROR: Remote is not coditect-ai. Aborting push."
    exit 1
fi
""")

    # Step 6.10: Register in projects.db
    # Invoke /register-project programmatically
    Bash(f"""
cd {CODITECT_CORE}
source .venv/bin/activate
python3 scripts/project_registration.py register \\
    --name "{topic}" \\
    --path "{venture_path}" \\
    --github "coditect-ai/{project_name}" \\
    --type submodule
""")

    # Step 6.11: Create session log inception
    # Invoke /session-log --new-session
    invoke("/session-log", f'--new-session "{topic} — Research Pipeline Genesis" --project {project_id}')

    # Step 6.12: Frontmatter classification
    invoke("/classify", f"{venture_path}/")

    # Step 6.13: Post-classification commit
    Bash(f"""
cd {venture_path}
git add -A
git commit -m "chore: Standardize CODITECT frontmatter"
git push
""")

    # Step 6.14: Update parent submodule pointer
    Bash(f"""
cd {ROLLOUT_MASTER}
git add {venture_path}
git commit -m "chore(submodules): Add {project_name} venture submodule"
""")

    # Step 6.15: Generate provenance manifest
    write(f"{venture_path}/docs/provenance/genesis-manifest.json", {
        "pipeline_version": "3.0.0",
        "topic": topic,
        "flags": ["--venture", "--extended", ...],
        "phases_executed": [0, 1, 2, 3, 4, 5, 6],
        "artifacts_generated": artifact_count,
        "tracks_generated": track_count,
        "project_registered": True,
        "session_log_created": True,
        "classification_complete": True,
        "created_at": timestamp,
    })

Venture Lifecycle Summary:

| Step | Action | Duration |
| --- | --- | --- |
| 6.1-6.3 | GitHub repo + submodule + symlinks | ~30s |
| 6.4-6.5 | Directory structure + artifact copy | ~10s |
| 6.6-6.8 | README + TRACKs + PROJECT-PLAN | ~3-5 min (agents) |
| 6.9 | Initial commit + push | ~15s |
| 6.10-6.11 | Registration + session log | ~5s |
| 6.12-6.13 | Classification + commit | ~2-3 min |
| 6.14-6.15 | Parent pointer + provenance | ~10s |

Total Phase 6: ~6-9 minutes (vs ~2 hours manual)


Success Output

RESEARCH PIPELINE COMPLETE: {topic}

Phase 0: Intake .......................... done (3 sources collected)
Phase 1: Research (9 artifacts) .......... done (4 batches, 10 agents)
Phase 2: Visualization (6 dashboards) .... done (6 agents)
Phase 3: Ideation (22 prompts) ........... done (1 agent)
Phase 4: Genesis ......................... done (project brief written)
Phase 4b: Market Extension ............... done (Canada — 34 sources) # --extend
Phase 4c: Strategic Docs ................. done (3 GTM/marketing docs) # --strategic
Phase 5: Report + README ................. done
Phase 6: Venture Lifecycle ............... done (15 steps automated) # --venture

Artifacts: {output_dir}/
Markdown: 12 files (9 core + 3 strategic)
Dashboards: 6 files (Tech Arch, Strategic Fit, Integration, Exec Brief, Competitive, Implementation)
Prompts: 22 follow-up prompts across 6 categories
TRACKs: 5 files (R, V, D, G, O) + MASTER-TRACK-INDEX.md
Metadata: README.md, research-context.json, pipeline-report.json, genesis-manifest.json

Venture: submodules/ventures/{project-name}/
GitHub: coditect-ai/{project-name} (private)
Registered: projects.db (UUID assigned)
Session: Log created at SSOT location
Classified: 27 files with CODITECT frontmatter

Quality: 85% average | Duration: 18m 45s | Tokens: ~65,000
Models used: Haiku (3), Sonnet (18), Opus (2 retries)

Completion Checklist

  • Topic and sources collected
  • Output directory created
  • research-context.json generated
  • All 9 markdown artifacts generated
  • All dashboards generated (4 or 6)
  • Follow-up prompts generated
  • Quality gate passed
  • pipeline-report.json written
  • README.md index generated
  • Market extension complete (if --extend)
  • Strategic docs generated (if --strategic or --venture)
  • Venture submodule created (if --venture)
  • TRACK files generated (if --venture)
  • Project registered in projects.db (if --venture)
  • Session log inception created (if --venture)
  • Frontmatter classified (if --venture)
  • Parent submodule pointer updated (if --venture)
  • Session log updated

Error Handling

| Error | Action |
| --- | --- |
| No topic provided | Enter interactive mode |
| URL unreachable | Skip URL, log warning, continue |
| Agent fails | Retry once with Opus, then skip with warning |
| Quality < 70% | Re-run failed artifacts |
| All agents fail | Save partial results, report error |
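The "Agent fails" row can be sketched as a wrapper around agent dispatch; `agent_task` is a hypothetical stand-in that accepts an optional model override:

```python
def run_with_retry(agent_task, escalate_model="opus"):
    """Retry a failed agent once on the escalation model, then skip
    with a warning rather than failing the whole pipeline."""
    try:
        return agent_task()
    except Exception as first_error:
        print(f"WARN: agent failed ({first_error}); retrying with {escalate_model}")
        try:
            return agent_task(model=escalate_model)
        except Exception as second_error:
            print(f"WARN: retry failed ({second_error}); skipping artifact")
            return None
```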

  • ADR: ADR-206
  • Standard: CODITECT-STANDARD-RESEARCH-PIPELINE
  • Workflow: WF-RESEARCH-PIPELINE
  • Genesis: /new-project (accepts --from-research)
  • Agents: research-web-crawler, research-quick-start-generator, research-impact-analyzer, research-exec-summary-writer, research-sdd-generator, research-tdd-generator, research-c4-modeler, research-adr-generator, research-glossary-builder, research-mermaid-creator, research-artifact-aggregator, research-dashboard-generator, research-ideation-generator, research-quality-validator

Command Version: 2.0.0 | Created: 2026-02-16 | Updated: 2026-02-18 | Author: Hal Casteel, CEO/CTO AZ1.AI Inc. | Owner: AZ1.AI INC


Copyright 2026 AZ1.AI Inc.