
CODITECT Research Artifact Organization — Taxonomy & Information Architecture Proposal

Status: Proposed — 2026-02-16
Author: Claude (Sonnet 4.5)
Audience: CODITECT Contributors, Documentation Librarians, Framework Architects


Executive Summary

CODITECT has accumulated 98 research directories (6.1GB, 1,229+ markdown files, 384 PDFs) in a gitignored staging area (analyze-new-artifacts/). Only a single assessment document per topic is promoted to permanent storage (internal/analysis/), leaving SDDs, TDDs, ADRs, C4 diagrams, JSX dashboards, and source materials orphaned.

This proposal establishes:

  1. 6-category research taxonomy with clear inclusion criteria
  2. 4-stage artifact lifecycle (Staging → Analysis → Integration → Archive)
  3. Promotion criteria matrix defining when/how artifacts move between stages
  4. Permanent directory structure replacing ad-hoc organization
  5. Manifest system tracking research lineage and outputs
  6. Integration with ADR-206 pipeline (autonomous research workflow)

Impact:

  • 87% reduction in staging clutter (98 dirs → 13 permanent topics)
  • Zero loss of valuable artifacts — systematic promotion replaces deletion
  • Searchable knowledge base — all research findable via manifests and indexes
  • Pipeline integration — ADR-206 outputs directly to permanent locations

1. Problem Statement

1.1 Current State Analysis

Staging Area (analyze-new-artifacts/):

  • 98 directories containing:
    • 1,229 markdown files
    • 384 PDF papers
    • 70 JSX dashboards
    • ~200+ ADRs (duplicates, drafts, production)
    • 6.1GB total size
  • Naming inconsistency: coditect-, CODITECT-, mixed case, duplicates (*-DUPLICATE)
  • No lifecycle management: Files stay forever or get manually deleted
  • No manifest tracking: Unknown provenance, no lineage

Permanent Storage:

  • internal/analysis/ (20 directories) — assessment documents only
  • internal/research/ (7 directories) — strategic research
  • Gap: No permanent home for SDDs, TDDs, ADRs, dashboards, source materials

Workflow Breakdown:

Research Session → Staging Area (gitignored) → ??? → Lost Forever

Single Assessment Doc → internal/analysis/

What Gets Lost:

  • Software Design Documents (SDDs)
  • Technical Design Documents (TDDs)
  • Architecture Decision Records (ADRs)
  • C4 architecture diagrams
  • JSX dashboards (React components)
  • Glossaries and reference materials
  • Mermaid diagrams
  • Source materials (PDFs, transcripts, blog posts)
  • Follow-up prompts and ideation

1.2 ADR-206 Context

The ADR-206 Autonomous Research Pipeline (approved 2026-02-16) defines a 4-phase multi-agent workflow producing:

Phase 1 Outputs (9 artifacts):

  1. 1-2-3-detailed-quick-start.md
  2. coditect-impact.md
  3. executive-summary.md
  4. sdd.md
  5. tdd.md
  6. c4-architecture.md
  7. adrs/ (3-7 ADRs)
  8. glossary.md
  9. mermaid-diagrams.md

Phase 2 Outputs (4-6 dashboards):

  1. tech-architecture-analyzer.jsx
  2. strategic-fit-dashboard.jsx
  3. coditect-integration-playbook.jsx
  4. executive-decision-brief.jsx
  5. competitive-comparison.jsx (extended)
  6. implementation-planner.jsx (extended)

Phase 3 Outputs:

  • follow-up-prompts.md (15-25 prompts across 6 categories)

Phase 4:

  • Handoff to /new-project for project genesis

Problem: ADR-206 defines artifact CREATION but NOT artifact PROMOTION to permanent storage. Without this proposal, ADR-206 outputs would remain in analyze-new-artifacts/ indefinitely.


2. Research Taxonomy (6 Categories)

2.1 Taxonomy Design Principles

  1. Purpose-driven — Categories reflect research intent, not format
  2. Mutually exclusive — Each research topic belongs to exactly one category
  3. Lifecycle-aware — Categories have distinct promotion criteria
  4. Scalable — Can accommodate 100+ research topics without restructuring

2.2 Category Definitions

Category 1: Technology Evaluation

Purpose: Assess external tools, frameworks, APIs, or platforms for CODITECT integration.

Characteristics:

  • Focuses on specific technology (e.g., Agent Labs, CopilotKit, Codex)
  • Produces integration feasibility assessment
  • Results in Go/No-Go decision (often captured as ADR)
  • Time-bound (evaluation period: 1-7 days)

Example Topics:

  • agent-labs, copilotkit, motia, openclaw, codex
  • opencode, agent-zero, unified-studio, bookmarks-buku
  • moltbot, clawdrop, paperbanana, plugins

Typical Artifacts:

  • SDD (integration design)
  • TDD (API specifications)
  • ADRs (adoption decision)
  • Executive summary
  • CODITECT impact analysis
  • JSX dashboards (competitive comparison, tech architecture analyzer)

Promotion Criteria (see Section 3):

  • Executive summary → internal/analysis/{topic}/
  • ADRs → internal/architecture/adrs/ (if adopted) or internal/architecture/adrs/rejected/
  • SDD/TDD → internal/architecture/integrations/{topic}/ (if adopted)

Category 2: Academic Research

Purpose: Extract patterns, algorithms, or methodologies from academic papers for CODITECT implementation.

Characteristics:

  • Source: arXiv, academic journals, conference proceedings
  • Focuses on research findings, not commercial products
  • Often multi-paper synthesis
  • Results in implementation roadmap or capability enhancement

Example Topics:

  • scaling-agent-systems (arXiv 2512.08296v2)
  • recursive-llms, continual-learning, context-graph
  • parallel-agent-reinforcement-learning
  • ambiguity-and-intent, ai-guardrails
  • agentic-paradigms-healthcare, prompt-repetition
  • consequence-aware-autonomous-execution

Typical Artifacts:

  • Paper annotations (PDF + markdown conversion via UDOM)
  • Executive summary (paper synthesis)
  • Implementation recommendations
  • ADRs (if pattern adopted)
  • Glossary (domain terminology)
  • Mermaid diagrams (visual synthesis)

Promotion Criteria:

  • Executive summary → internal/analysis/academic/{topic}/
  • Paper PDFs + markdown → internal/research/academic/{topic}/papers/
  • ADRs → internal/architecture/adrs/ (if pattern implemented)
  • Implementation recommendations → internal/architecture/academic-patterns/

Category 3: Competitive Intelligence

Purpose: Monitor competitors, market positioning, and differentiation strategies.

Characteristics:

  • Focus: competitor products, pricing, features, market share
  • Sources: competitor websites, demos, press releases, investor updates
  • Time-sensitive (market changes frequently)
  • Results in positioning strategy or feature prioritization

Example Topics:

  • anthropic-cowork-impact, palantir, kimi-2.5
  • microsoft-fabric, ibm-ai, eric-schmidt-ai
  • dylan-davis (thought leader analysis)

Typical Artifacts:

  • Competitive comparison matrix
  • Feature gap analysis
  • Pricing analysis
  • Market positioning recommendations
  • Executive brief (board-level)
  • JSX dashboard (competitive-comparison.jsx)

Promotion Criteria:

  • Executive summary → internal/research/market-research/competitors/{company}/
  • Competitive matrix → internal/research/market-research/competitive-analysis/
  • Dashboards → internal/research/market-research/dashboards/
  • Note: Competitive intel ages quickly — auto-archive after 12 months unless refreshed
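The 12-month freshness rule can be enforced mechanically rather than by memory. A minimal sketch, assuming each manifest carries a `research_date` field in `YYYY-MM-DD` form (field name per Section 5.2); `is_stale` is an illustrative helper, not an existing script:

```python
from datetime import date

STALE_AFTER_DAYS = 365  # competitive intel ages out after 12 months


def is_stale(research_date: str, today=None) -> bool:
    """Return True if the research is past the freshness window."""
    today = today or date.today()
    return (today - date.fromisoformat(research_date)).days > STALE_AFTER_DAYS
```

A periodic job could flag stale topics for refresh-or-archive review instead of archiving them silently.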

Category 4: Business & Market Research

Purpose: Validate market opportunity, business models, pricing, and go-to-market strategies.

Characteristics:

  • Focus: TAM/SAM/SOM, pricing models, customer economics, ROI
  • Sources: market research reports, case studies, financial models
  • Results in business case or GTM strategy
  • Board-level decision support

Example Topics:

  • product-market-fit, tiny-seed, runway-mentor-deck
  • financial-model, value-proposition, use-cases
  • hi-tech-ventures, roblox-market (market sizing analogs)

Typical Artifacts:

  • Business case documents
  • Financial models (Excel/JSON)
  • Executive summaries (1-page)
  • Market sizing analysis
  • Pricing strategy
  • Customer success economics
  • JSX dashboards (strategic-fit-dashboard.jsx, executive-decision-brief.jsx)

Promotion Criteria:

  • Business cases → internal/research/business/cases/{topic}/
  • Financial models → internal/research/business/models/{topic}/
  • Market sizing → internal/research/market-research/sizing/
  • Executive summaries → internal/research/business/executive-briefs/

Category 5: Domain Research

Purpose: Deep-dive into specific industries (healthcare, biotech, legal, security) for domain-specific feature development.

Characteristics:

  • Focus: industry workflows, compliance, regulations, terminology
  • Results in domain-specific product features or compliance requirements
  • Often drives Track G (DMS Product) or Track O-AA (PCF Business) work

Example Topics:

  • bioscience-workorders, LIMS-sample-receiving, healthcare
  • regulatory-frameworks, ai-risk-management, legal-contracts
  • license-management, c3pao-accreditation, zero-trust-security

Typical Artifacts:

  • Domain glossary (terminology)
  • Workflow diagrams (Mermaid)
  • Compliance requirements (checklists)
  • SDD (domain-specific features)
  • TDD (data models for domain)
  • ADRs (compliance decisions)

Promotion Criteria:

  • Glossaries → internal/research/domain/{industry}/glossary.md
  • Workflow diagrams → internal/research/domain/{industry}/workflows/
  • Compliance requirements → internal/research/domain/{industry}/compliance/
  • SDD/TDD → internal/architecture/domain/{industry}/
  • ADRs → internal/architecture/adrs/ (tagged with domain)

Category 6: Process & Internal Research

Purpose: Improve CODITECT's own processes, infrastructure, or framework capabilities.

Characteristics:

  • Focus: internal tools, developer experience, automation
  • Results in framework enhancements or process improvements
  • Often drives Track H (Framework) or Track C (DevOps) work

Example Topics:

  • system-prompt, master-prompt, conflict-avoidance
  • decision-rights, new-project-initiation, method-for-analysis
  • core-process-framework, docker-registry, docusaurus-search
  • new-installation-errors, ai-applicability-occupations

Typical Artifacts:

  • Process documentation
  • System prompt updates
  • Framework enhancement proposals
  • Infrastructure analysis
  • Developer guides
  • ADRs (framework decisions)

Promotion Criteria:

  • Process docs → internal/process/{topic}/
  • System prompts → .coditect/prompts/{topic}/ (if adopted)
  • Framework proposals → internal/architecture/framework-enhancements/
  • Infrastructure analysis → internal/research/infrastructure/
  • ADRs → internal/architecture/adrs/

2.3 Special Categories (Non-Research Content)

Media & Transcripts

Nature: Raw source materials, not research outputs.

Content: YouTube transcripts, podcast transcripts, conference recordings.

Examples:

  • davos-economic-forum, AI-HOUSE-DAVOS, a16z-youtube
  • lex-clips, all-in-podcast, nate-b-jones-youtube
  • internal-meeting-notes

Lifecycle:

  • Stay in staging during active research
  • Move to internal/research/source-materials/transcripts/{topic}/ if referenced by permanent analysis
  • Otherwise delete after research completion (regenerable via YouTube API)

UI/UX & Visual

Nature: Screenshots, images, design mockups.

Content: Browser screenshots, profile images, UI analysis.

Examples:

  • screenshots, browser-analysis, browser-screenshots
  • analyze-ui-images, profile-images, ui-ux-agent-design

Lifecycle:

  • Visual references for design work
  • Move to internal/research/design/{topic}/ if part of design system research
  • Otherwise delete (transient reference materials)

3. Artifact Lifecycle (4 Stages)

3.1 Lifecycle Model

Stage 1: STAGING

Stage 2: ANALYSIS

Stage 3: INTEGRATION

Stage 4: ARCHIVE

3.2 Stage 1: STAGING

Location: analyze-new-artifacts/{topic}/

Status: Gitignored (ephemeral, not tracked)

Purpose: Active research workspace for artifact generation.

Duration: 1-30 days (typical: 1-7 days)

Contents:

  • Source materials (PDFs, transcripts, web crawls)
  • Draft artifacts (SDDs, TDDs, ADRs)
  • Generated outputs (dashboards, diagrams)
  • Work-in-progress notes

Activities:

  • Research pipeline execution (ADR-206)
  • Manual research sessions
  • Multi-agent artifact generation
  • Quality validation iterations

Exit Criteria:

  • Research complete (all planned artifacts generated)
  • Quality gate passed (validation score ≥0.7)
  • Promotion decision made (see Section 3.4)

Automation:

  • ADR-206 pipeline auto-creates staging directories
  • Manifest auto-generated on pipeline completion
  • Quality validation run before promotion

3.3 Stage 2: ANALYSIS

Location: internal/analysis/{category}/{topic}/

Status: Git-tracked (permanent record)

Purpose: Refined assessment documents for future reference.

Duration: Indefinite (permanent unless superseded)

Contents (Promoted from Staging):

  • Executive summary (primary artifact)
  • Assessment document (integration feasibility, impact analysis)
  • Key findings summary
  • Recommendations
  • References to source materials
  • Manifest (tracking research lineage)

NOT Included (See Stage 3 for these):

  • SDDs/TDDs (go to internal/architecture/)
  • ADRs (go to internal/architecture/adrs/)
  • Dashboards (go to stage-specific locations)
  • Source materials (stay in staging or move to source-materials/)

Promotion Criteria from Staging:

  • ✅ Research concluded with actionable findings
  • ✅ Assessment document written (not just pipeline outputs)
  • ✅ Quality validated (no placeholder content)
  • ✅ References verified (links to sources work)
  • ✅ Manifest created (see Section 5)

Example Structure:

internal/analysis/technology-evaluation/agent-labs/
├── agent-labs-scaling-assessment-2026-02-16.md # Primary assessment
├── executive-summary.md # Promoted from pipeline
├── recommendations.md # Actionable next steps
├── MANIFEST.md # Research lineage
└── README.md # Navigation document

3.4 Stage 3: INTEGRATION

Location: Varies by artifact type (see Section 4)

Status: Git-tracked (production architecture/docs)

Purpose: Artifacts integrated into CODITECT's production architecture or documentation.

Duration: Indefinite (permanent, versioned)

Contents (Promoted from Analysis):

  • SDDs → internal/architecture/integrations/{topic}/ or domain/{industry}/
  • TDDs → internal/architecture/integrations/{topic}/ or domain/{industry}/
  • ADRs → internal/architecture/adrs/ (numbered, indexed)
  • C4 diagrams → internal/architecture/c4-diagrams/integrations/{topic}/
  • Business cases → internal/research/business/cases/{topic}/
  • Financial models → internal/research/business/models/{topic}/
  • Domain glossaries → internal/research/domain/{industry}/glossary.md
  • Dashboards → (depends on adoption — see Section 4.6)

Promotion Criteria from Analysis:

  • ✅ Decision made (Go decision in ADR or executive summary)
  • ✅ Artifact ready for production reference (no draft content)
  • ✅ Properly formatted (follows CODITECT standards)
  • ✅ Numbered/indexed appropriately (ADRs, SDDs)
  • ✅ Cross-referenced (links from analysis docs)

Integration Tracking:

  • Analysis document updated with "Integration Status" section
  • Manifest updated with integration locations
  • ADR recorded for any architecture decisions

3.5 Stage 4: ARCHIVE

Location: internal/research/archive/{category}/{topic}/

Status: Git-tracked (historical record)

Purpose: Historical research that is no longer relevant but should be preserved for reference.

Duration: Permanent (never deleted)

Archive Triggers:

  • Superseded by newer research (e.g., competitive intel updated)
  • Technology deprecated (e.g., evaluated tool no longer exists)
  • Decision reversed (e.g., Go decision later changed to No-Go)
  • Time-sensitive research aged out (e.g., 12-month-old market data)

Archive Process:

  1. Move entire analysis directory to archive location
  2. Add status: archived to frontmatter
  3. Add superseded_by: or archive_reason: field
  4. Update manifest with archive date and reason
  5. Create redirect in original location (README pointing to archive)
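The five archive steps above can be sketched as one operation, so a move never happens without its notice and redirect. This is a sketch, not an existing script; paths and the `ARCHIVE-NOTICE.md` fields follow the conventions in this section:

```python
import shutil
from pathlib import Path


def archive_topic(analysis_dir: Path, archive_dir: Path, reason: str) -> None:
    """Move an analysis directory to the archive, leaving a redirect behind."""
    archive_dir.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(analysis_dir), str(archive_dir))  # 1. move the whole directory

    # 2-4. record status, reason, and date where the archive lives
    notice = archive_dir / "ARCHIVE-NOTICE.md"
    notice.write_text(f"---\nstatus: archived\narchive_reason: {reason}\n---\n")

    # 5. redirect in the original location
    analysis_dir.mkdir(parents=True)
    (analysis_dir / "README.md").write_text(f"Archived. See `{archive_dir}`.\n")
```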

Example:

internal/research/archive/competitive-intelligence/palantir-2025/
├── palantir-competitive-analysis-2025-06-15.md # Original assessment
├── MANIFEST.md # Preserved lineage
└── ARCHIVE-NOTICE.md # Why archived, when, by whom

3.6 Lifecycle Decision Matrix

| Artifact Type | Staging → Analysis | Analysis → Integration | Analysis → Archive |
|---------------|--------------------|------------------------|--------------------|
| Executive Summary | Always | Never (stays in analysis) | If superseded |
| Assessment Doc | Always | Never (stays in analysis) | If superseded |
| SDD | Sometimes* | If Go decision | N/A |
| TDD | Sometimes* | If Go decision | N/A |
| ADR | Sometimes* | If decision finalized | If decision reversed |
| C4 Diagrams | Sometimes* | If Go decision | N/A |
| Business Case | Always | Always | If superseded |
| Financial Model | Always | Always | If superseded |
| Glossary | Sometimes* | If domain research | N/A |
| Dashboards | Rarely** | Rarely** | N/A |
| Source Materials | Rarely*** | Never | N/A |

*Promoted to analysis only if referenced by the assessment document.
**Dashboards typically stay in staging unless adopted for production use.
***Source materials are referenced by URL/path, not copied to analysis.
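The decision matrix above can be encoded as a lookup table, so promotion tooling and reviewers share one source of truth. A sketch under that assumption (rule strings are abbreviated from the table; `PROMOTION_RULES` is illustrative, not an existing module):

```python
# (staging→analysis, analysis→integration, analysis→archive) per artifact type
PROMOTION_RULES = {
    "executive-summary": ("always", "never", "if-superseded"),
    "assessment-doc":    ("always", "never", "if-superseded"),
    "sdd":               ("if-referenced", "if-go-decision", "n/a"),
    "tdd":               ("if-referenced", "if-go-decision", "n/a"),
    "adr":               ("if-referenced", "if-finalized", "if-reversed"),
    "business-case":     ("always", "always", "if-superseded"),
    "dashboard":         ("rarely", "rarely", "n/a"),
}


def promotion_rule(artifact_type: str, transition: str) -> str:
    """Look up the matrix cell for an artifact type and lifecycle transition."""
    idx = {"analysis": 0, "integration": 1, "archive": 2}[transition]
    return PROMOTION_RULES[artifact_type][idx]
```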


4. Permanent Directory Structure

4.1 Top-Level Organization

internal/
├── analysis/ # Stage 2: Refined assessments
│ ├── technology-evaluation/
│ ├── academic/
│ ├── competitive-intelligence/
│ ├── business-market/
│ ├── domain/
│ └── process-internal/

├── architecture/ # Stage 3: Integrated architecture
│ ├── adrs/
│ ├── integrations/
│ ├── domain/
│ ├── c4-diagrams/
│ └── academic-patterns/

├── research/ # Stage 3: Integrated research
│ ├── business/
│ ├── market-research/
│ ├── domain/
│ ├── infrastructure/
│ ├── source-materials/
│ └── archive/ # Stage 4: Historical

└── process/ # Stage 3: Internal processes
└── {topic}/

4.2 Analysis Directory Structure

Pattern: internal/analysis/{category}/{topic}/

Files (Required):

  • {topic}-assessment-YYYY-MM-DD.md — Primary assessment document
  • MANIFEST.md — Research lineage and artifacts tracking
  • README.md — Directory navigation

Files (Optional):

  • executive-summary.md — Promoted from pipeline
  • recommendations.md — Actionable next steps
  • integration-status.md — Tracking integration progress
  • key-findings.md — Summary of critical insights

Example:

internal/analysis/technology-evaluation/agent-labs/
├── agent-labs-scaling-assessment-2026-02-16.md
├── executive-summary.md
├── recommendations.md
├── MANIFEST.md
└── README.md

4.3 Architecture Integration Structure

SDDs/TDDs for Integrations:

internal/architecture/integrations/{topic}/
├── SDD-{topic}-integration.md
├── TDD-{topic}-integration.md
├── c4-diagrams/
│ ├── context.md
│ ├── container.md
│ └── component.md
└── README.md

SDDs/TDDs for Domain Features:

internal/architecture/domain/{industry}/
├── SDD-{industry}-domain-features.md
├── TDD-{industry}-data-models.md
├── workflows/
└── README.md

ADRs:

internal/architecture/adrs/
├── ADR-XXX-{topic}-adoption.md # Integration decisions
├── ADR-XXX-{topic}-rejection.md # No-Go decisions
└── rejected/
└── ADR-XXX-{topic}-superseded.md # Reversed decisions

Academic Patterns:

internal/architecture/academic-patterns/
├── {paper-topic}-implementation-guide.md
├── {paper-topic}-coditect-adaptation.md
└── README.md

4.4 Research Integration Structure

Business Research:

internal/research/business/
├── cases/
│ └── {topic}/
│ ├── business-case.md
│ ├── executive-summary.md
│ └── MANIFEST.md
├── models/
│ └── {topic}/
│ ├── {topic}_v{X}.json
│ ├── {topic}_v{X}.xlsx
│ └── assumptions.md
└── executive-briefs/
└── {topic}-brief-YYYY-MM-DD.md

Market Research:

internal/research/market-research/
├── competitors/
│ └── {company}/
│ ├── competitive-analysis-YYYY-MM-DD.md
│ ├── feature-comparison.md
│ └── MANIFEST.md
├── competitive-analysis/
│ ├── feature-matrix-YYYY-MM-DD.md
│ └── positioning-strategy-YYYY-MM-DD.md
├── sizing/
│ ├── TAM-SAM-SOM-analysis-YYYY-MM-DD.md
│ └── market-opportunity-assessment-YYYY-MM-DD.md
└── dashboards/
└── {topic}-competitive-comparison.jsx

Domain Research:

internal/research/domain/{industry}/
├── glossary.md
├── workflows/
│ ├── {workflow-name}.md
│ └── {workflow-name}.mermaid
├── compliance/
│ ├── requirements-checklist.md
│ └── regulatory-framework.md
└── papers/
├── {paper-id}.pdf
├── {paper-id}.md
└── {paper-id}.udom.json

Source Materials:

internal/research/source-materials/
├── papers/
│ └── {research-topic}/
│ ├── {paper-id}.pdf
│ ├── {paper-id}.md
│ └── {paper-id}.udom.json
├── transcripts/
│ └── {source}/
│ └── {episode-title}.txt
└── README.md

4.5 Process Documentation Structure

internal/process/{topic}/
├── {topic}-process-guide.md
├── {topic}-workflow.mermaid
├── {topic}-templates/
└── README.md

4.6 Dashboard Lifecycle

Staging Location:

analyze-new-artifacts/{topic}/dashboards/
├── tech-architecture-analyzer.jsx
├── strategic-fit-dashboard.jsx
├── coditect-integration-playbook.jsx
└── executive-decision-brief.jsx

Promotion Decision:

| Dashboard Type | Promotion Path | Criteria |
|----------------|----------------|----------|
| Tech Architecture Analyzer | Rarely promoted | Research-specific; stays in staging |
| Strategic Fit Dashboard | Rarely promoted | Research-specific; stays in staging |
| Integration Playbook | `internal/research/market-research/dashboards/` | If reusable for future integrations |
| Executive Decision Brief | `internal/research/business/executive-briefs/` | If board-level artifact |
| Competitive Comparison | `internal/research/market-research/dashboards/` | If reusable for competitor tracking |
| Implementation Planner | Rarely promoted | Project-specific; stays in staging |

Rationale: JSX dashboards are often research-specific and don't need permanent storage. Only promote if:

  • Reusable across multiple research topics
  • Part of executive reporting framework
  • Adopted for production use in CODITECT UI

5. Manifest System

5.1 Purpose

The manifest system provides:

  • Lineage tracking — What research produced what outputs
  • Artifact inventory — Complete list of generated artifacts
  • Promotion history — Where artifacts moved and when
  • Source attribution — Original materials referenced
  • Integration status — Which artifacts were integrated into CODITECT

5.2 Manifest Format

File: MANIFEST.md (one per research topic)

Location: Co-located with assessment document in internal/analysis/{category}/{topic}/

Template:

---
title: "Research Manifest: {Topic}"
research_date: 'YYYY-MM-DD'
research_category: [technology-evaluation|academic|competitive-intelligence|business-market|domain|process-internal]
pipeline_version: [manual|adr-206-v1.0]
status: [active|integrated|archived]
integration_status: [pending|partial|complete|rejected]
---

# Research Manifest: {Topic}

## Metadata

| Field | Value |
|-------|-------|
| **Research Date** | YYYY-MM-DD |
| **Researcher** | Claude (Model) / Human Name |
| **Category** | {Category} |
| **Pipeline** | [Manual / ADR-206 v1.0] |
| **Staging Location** | `analyze-new-artifacts/{topic}/` |
| **Analysis Location** | `internal/analysis/{category}/{topic}/` |
| **Status** | [Active / Integrated / Archived] |

---

## Research Objective

[1-2 sentences: What question was this research trying to answer?]

---

## Source Materials

| Type | Source | Location |
|------|--------|----------|
| PDF Paper | {Title} | `{path or URL}` |
| Transcript | {YouTube/Podcast} | `{path or URL}` |
| GitHub Repo | {Repo Name} | `{URL}` |
| Web Article | {Title} | `{URL}` |

---

## Generated Artifacts

### Phase 1: Research Artifacts

| Artifact | Staging Path | Promoted To | Status |
|----------|-------------|-------------|--------|
| Executive Summary | `staging/{topic}/executive-summary.md` | `analysis/{category}/{topic}/` | ✅ Promoted |
| Assessment Document | `staging/{topic}/assessment.md` | `analysis/{category}/{topic}/` | ✅ Promoted |
| SDD | `staging/{topic}/sdd.md` | `architecture/integrations/{topic}/` | ✅ Integrated |
| TDD | `staging/{topic}/tdd.md` | `architecture/integrations/{topic}/` | ✅ Integrated |
| C4 Architecture | `staging/{topic}/c4-architecture.md` | `architecture/c4-diagrams/integrations/{topic}/` | ✅ Integrated |
| Glossary | `staging/{topic}/glossary.md` | — | ❌ Not promoted |
| Mermaid Diagrams | `staging/{topic}/mermaid-diagrams.md` | — | ❌ Not promoted |

### Phase 2: Dashboards

| Dashboard | Staging Path | Promoted To | Status |
|-----------|-------------|-------------|--------|
| Tech Architecture Analyzer | `staging/{topic}/dashboards/tech-architecture-analyzer.jsx` | — | ❌ Not promoted |
| Strategic Fit Dashboard | `staging/{topic}/dashboards/strategic-fit-dashboard.jsx` | — | ❌ Not promoted |
| Integration Playbook | `staging/{topic}/dashboards/coditect-integration-playbook.jsx` | `research/market-research/dashboards/` | ✅ Promoted |
| Executive Decision Brief | `staging/{topic}/dashboards/executive-decision-brief.jsx` | — | ❌ Not promoted |

### Phase 3: ADRs

| ADR | Staging Path | Promoted To | Status |
|-----|-------------|-------------|--------|
| ADR-XXX-{topic}-adoption | `staging/{topic}/adrs/ADR-001-*.md` | `architecture/adrs/ADR-XXX-{topic}-adoption.md` | ✅ Integrated |
| ADR-XXX-{topic}-integration-pattern | `staging/{topic}/adrs/ADR-002-*.md` | `architecture/adrs/ADR-XXX-{topic}-integration-pattern.md` | ✅ Integrated |

---

## Integration Summary

### Decisions Made

- **Go/No-Go:** [Go | No-Go | Conditional]
- **Decision ADR:** [ADR-XXX reference]
- **Decision Date:** YYYY-MM-DD

### Integrated Components

- ✅ SDD integrated into `internal/architecture/integrations/{topic}/`
- ✅ TDD integrated into `internal/architecture/integrations/{topic}/`
- ✅ 2 ADRs added to architecture decision log
- ❌ Dashboards not adopted (research-specific)

### Remaining in Staging

- Glossary (topic-specific, not reusable)
- Mermaid diagrams (duplicates C4 diagrams)
- 4 dashboards (research-specific, not production)

---

## Follow-up Actions

- [ ] Complete implementation per SDD/TDD (Track: X, Task: X.X.X)
- [ ] Update integration status when implementation complete
- [ ] Archive staging directory after 90 days (2026-05-XX)

---

## References

- **Analysis Document:** `{path}`
- **ADRs:** `internal/architecture/adrs/ADR-XXX-*.md`
- **Integration Location:** `internal/architecture/integrations/{topic}/`

---

**Manifest Version:** 1.0.0
**Last Updated:** YYYY-MM-DD
**Maintainer:** [Agent/Human Name]
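Because the frontmatter enumerates its legal values, a promotion hook can reject malformed manifests before they reach the master index. A minimal sketch (field names and allowed values come from the template above; `validate_frontmatter` is illustrative):

```python
ALLOWED = {
    "research_category": {"technology-evaluation", "academic", "competitive-intelligence",
                          "business-market", "domain", "process-internal"},
    "status": {"active", "integrated", "archived"},
    "integration_status": {"pending", "partial", "complete", "rejected"},
}


def validate_frontmatter(fm: dict) -> list:
    """Return a list of validation errors; an empty list means the manifest is valid."""
    errors = []
    for field, allowed in ALLOWED.items():
        if fm.get(field) not in allowed:
            errors.append(f"{field}: {fm.get(field)!r} not in {sorted(allowed)}")
    return errors
```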

5.3 Manifest Automation

ADR-206 Pipeline Integration:

The research pipeline orchestrator (ADR-206) auto-generates manifests on completion:

from datetime import datetime

def generate_manifest(pipeline_result: PipelineResult, config: PipelineConfig):
    """Auto-generate manifest from pipeline execution."""
    manifest = {
        "research_date": datetime.now().isoformat(),
        "researcher": "Claude (Opus 4.6)",  # or model used
        "category": config.category,
        "pipeline": "ADR-206 v1.0",
        "staging_location": config.output_dir,
        "source_materials": config.sources,
        "artifacts": {
            "phase1": pipeline_result.research_artifacts,
            "phase2": pipeline_result.dashboards,
            "phase3": pipeline_result.prompts,
        },
        "status": "active",
        "integration_status": "pending",
    }
    write_manifest(manifest, config.output_dir / "MANIFEST.md")

Manual Research:

For manual research sessions (not using ADR-206 pipeline), use command:

/generate-manifest --topic "{topic}" --category "{category}"

Interactive prompts collect:

  • Source materials
  • Generated artifacts (manual listing or auto-scan staging dir)
  • Research objective
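The auto-scan option could classify staged files by suffix to pre-fill the manifest's artifact tables. A sketch under that assumption (`scan_staging` is illustrative, not the actual command implementation):

```python
from pathlib import Path


def scan_staging(staging_dir: Path) -> dict:
    """Group staged files by kind for the manifest's Generated Artifacts tables."""
    kinds = {".md": "markdown", ".jsx": "dashboards", ".pdf": "sources"}
    found = {label: [] for label in kinds.values()}
    for path in sorted(staging_dir.rglob("*")):
        if path.suffix in kinds:
            found[kinds[path.suffix]].append(str(path.relative_to(staging_dir)))
    return found
```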

5.4 Master Manifest Index

Location: internal/research/RESEARCH-MANIFEST-INDEX.md

Purpose: Searchable catalog of all research with status tracking.

Format:

# CODITECT Research Manifest Index

**Last Updated:** YYYY-MM-DD
**Total Research Topics:** XXX
**Active:** XX | **Integrated:** XX | **Archived:** XX

---

## Technology Evaluation

| Topic | Date | Status | Integration | Analysis Path | Manifest |
|-------|------|--------|-------------|---------------|----------|
| agent-labs | 2026-02-16 | Integrated | Complete | `analysis/technology-evaluation/agent-labs/` | [MANIFEST](../analysis/technology-evaluation/agent-labs/MANIFEST.md) |
| copilotkit | 2026-01-22 | Active | Partial | `analysis/technology-evaluation/copilotkit/` | [MANIFEST](../analysis/technology-evaluation/copilotkit/MANIFEST.md) |

## Academic Research

| Topic | Date | Status | Integration | Analysis Path | Manifest |
|-------|------|--------|-------------|---------------|----------|
| scaling-agent-systems | 2026-02-16 | Integrated | Complete | `analysis/academic/scaling-agent-systems/` | [MANIFEST](../analysis/academic/scaling-agent-systems/MANIFEST.md) |

[... continues for all 6 categories ...]

Automation:

# Auto-update master index
python3 scripts/research/update-manifest-index.py

# Search manifests
python3 scripts/research/search-manifests.py --keyword "multi-agent"
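The index updater presumably just walks internal/analysis/ and emits one row per MANIFEST.md it finds. A sketch of that core loop, assuming the two-level {category}/{topic} layout from Section 4.2 (`collect_manifests` is illustrative, not the script's actual API):

```python
from pathlib import Path


def collect_manifests(analysis_root: Path) -> list:
    """Gather one index row per MANIFEST.md under internal/analysis/{category}/{topic}/."""
    rows = []
    for manifest in sorted(analysis_root.glob("*/*/MANIFEST.md")):
        rows.append({
            "category": manifest.parent.parent.name,  # e.g. "academic"
            "topic": manifest.parent.name,            # e.g. "scaling-agent-systems"
            "manifest": str(manifest),
        })
    return rows
```

Rendering these rows into the six category tables (and the Active/Integrated/Archived counts) would then be a formatting pass over the collected list.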

6. Promotion Criteria & Decision Matrix

6.1 Promotion Decision Tree

Research Complete in Staging
├─→ Assessment Document Written?
│     ├─→ YES ──→ Promote to Analysis
│     └─→ NO ───→ Research incomplete; stay in Staging

Analysis Complete
├─→ Go Decision Made?
│     ├─→ YES ──→ Promote SDDs/TDDs/ADRs to Integration
│     └─→ NO ───→ Stay in Analysis (assessment only)

Time-Sensitive Research?
├─→ Age > 12 months?
│     ├─→ YES ──→ Archive (if not refreshed)
│     └─→ NO ───→ Stay in Analysis
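The decision tree above can be collapsed into a single dispatch function for tooling. A sketch (the ordering of the archive and integration checks is an assumption where the tree leaves it ambiguous; `next_stage` is illustrative):

```python
def next_stage(assessment_written: bool, go_decision: bool,
               age_months: int, time_sensitive: bool) -> str:
    """Encode the Section 6.1 promotion decision tree."""
    if not assessment_written:
        return "staging"          # research incomplete
    if time_sensitive and age_months > 12:
        return "archive"          # aged out without a refresh
    if go_decision:
        return "integration"      # promote SDDs/TDDs/ADRs
    return "analysis"             # assessment only
```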

6.2 Promotion Criteria by Artifact Type

Executive Summary

Criteria:

  • ✅ 1-2 pages (500-1000 words)
  • ✅ Contains: objective, findings, recommendation, impact
  • ✅ Standalone (readable without other artifacts)
  • ✅ No placeholder content

Promotion Path: Staging → Analysis (always)

Location: internal/analysis/{category}/{topic}/executive-summary.md


Assessment Document

Criteria:

  • ✅ 5-15 pages (2000-8000 words)
  • ✅ Synthesizes research findings (not just pipeline output)
  • ✅ Contains analysis, not just summaries
  • ✅ References source materials
  • ✅ Includes recommendations with rationale
  • ✅ Quality validated (no broken links, no TODO markers)

Promotion Path: Staging → Analysis (always, if research complete)

Location: internal/analysis/{category}/{topic}/{topic}-assessment-YYYY-MM-DD.md


SDD (Software Design Document)

Criteria:

  • ✅ Follows CODITECT SDD template
  • ✅ Contains C4 context/container/component views
  • ✅ Describes integration architecture
  • ✅ Referenced by ADR with Go decision
  • ✅ Reviewed for technical accuracy

Promotion Path:

  1. Staging → Analysis (if referenced by assessment)
  2. Analysis → Integration (if Go decision)

Location (Analysis): internal/analysis/{category}/{topic}/sdd.md Location (Integration): internal/architecture/integrations/{topic}/SDD-{topic}-integration.md


TDD (Technical Design Document)

Criteria:

  • ✅ Follows CODITECT TDD template
  • ✅ Contains data models, API specs, sequence diagrams
  • ✅ Describes integration technical details
  • ✅ Referenced by ADR with Go decision
  • ✅ Reviewed for technical accuracy

Promotion Path:

  1. Staging → Analysis (if referenced by assessment)
  2. Analysis → Integration (if Go decision)

Location (Analysis): internal/analysis/{category}/{topic}/tdd.md Location (Integration): internal/architecture/integrations/{topic}/TDD-{topic}-integration.md


ADR (Architecture Decision Record)

Criteria:

  • ✅ Follows ADR template (Context, Decision, Consequences, Alternatives)
  • ✅ Status finalized (Proposed → Accepted | Rejected)
  • ✅ Numbered (ADR-XXX assigned)
  • ✅ Indexed in ADR index
  • ✅ Cross-referenced by assessment document

Promotion Path:

  1. Staging → Analysis (draft ADRs, if referenced)
  2. Analysis → Integration (when status finalized)

Location (Analysis): internal/analysis/{category}/{topic}/adrs/ADR-draft-*.md Location (Integration - Accepted): internal/architecture/adrs/ADR-XXX-{topic}.md Location (Integration - Rejected): internal/architecture/adrs/rejected/ADR-XXX-{topic}-rejected.md


Business Case

Criteria:

  • ✅ Contains: market opportunity, financial projections, ROI, risks
  • ✅ Financial model attached (Excel/JSON)
  • ✅ Executive summary (1-page)
  • ✅ Board-ready (no draft content)

Promotion Path: Staging → Analysis → Integration (always, if complete)

Location (Analysis): internal/analysis/business-market/{topic}/business-case.md Location (Integration): internal/research/business/cases/{topic}/business-case.md


Glossary

Criteria:

  • ✅ Domain-specific terminology (10+ terms)
  • ✅ Definitions accurate and sourced
  • ✅ Reusable for future research in same domain

Promotion Path: Staging → Integration (only if reusable)

Location (Integration): internal/research/domain/{industry}/glossary.md

Note: Most glossaries stay in staging (research-specific, not reusable).


Dashboards (JSX)

Criteria:

  • ✅ Reusable across multiple research topics
  • ✅ Part of executive reporting framework
  • ✅ OR: Adopted for production use in CODITECT UI

Promotion Path: Staging → Integration (rarely)

Location (Integration): internal/research/market-research/dashboards/{dashboard-name}.jsx

Note: Most dashboards stay in staging (research-specific).


Source Materials (PDFs, Transcripts)

Criteria:

  • ✅ Directly referenced by analysis document
  • ✅ Not easily regenerable (e.g., proprietary paper, internal transcript)

Promotion Path: Staging → Integration (rarely)

Location (Integration): internal/research/source-materials/{type}/{topic}/

Note: Most source materials stay in staging and are referenced by URL/path. Only promote if:

  • Proprietary/paywalled (can't re-access)
  • Internal materials (transcripts of internal meetings)
  • Critical reference for understanding analysis

6.3 Promotion Workflow

Automated (ADR-206 Pipeline):

async def promote_artifacts(pipeline_result: PipelineResult, config: PipelineConfig):
    """Auto-promote artifacts based on pipeline completion."""

    # Stage 1: Always promote to Analysis
    promote_to_analysis(
        artifacts=["executive-summary.md", "assessment.md"],
        destination=f"internal/analysis/{config.category}/{config.topic}/",
    )

    # Stage 2: Conditional promotion to Integration
    if config.genesis_handoff:  # --genesis flag set
        # Go decision implied by handoff to /new-project
        promote_to_integration(
            artifacts=["sdd.md", "tdd.md", "c4-architecture.md", "adrs/"],
            destination=f"internal/architecture/integrations/{config.topic}/",
        )
    else:
        # Manual promotion after decision
        log_promotion_pending(artifacts=["sdd.md", "tdd.md", "adrs/"])

Manual (Post-Research):

# Promote assessment to analysis
/research-promote --topic "{topic}" --to analysis

# Promote SDDs/TDDs/ADRs to integration (after Go decision)
/research-promote --topic "{topic}" --to integration --artifacts sdd,tdd,adrs

# Archive old research
/research-archive --topic "{topic}" --reason "superseded by {new-topic}"

7. Naming Conventions

7.1 Directory Naming

Pattern: {category}-{topic} (kebab-case, lowercase)

Examples:

internal/analysis/technology-evaluation/agent-labs/
internal/analysis/academic/scaling-agent-systems/
internal/analysis/competitive-intelligence/palantir/
internal/analysis/business-market/product-market-fit/
internal/analysis/domain/bioscience-workorders/
internal/analysis/process-internal/system-prompt/

Rules:

  • Use kebab-case (hyphens, not underscores or camelCase)
  • Lowercase only (no UPPERCASE)
  • No coditect- prefix (redundant in CODITECT repo)
  • No version numbers in directory names (use file versioning)
  • No dates in directory names (use file naming for dates)
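The rules above can be captured in a small normalization helper. This is a sketch only; the function name normalize_topic_dir is a hypothetical illustration, not part of the proposed tooling:

```python
import re

def normalize_topic_dir(name: str) -> str:
    """Normalize a staging directory name per the naming rules:
    kebab-case, lowercase, no coditect- prefix, no versions or dates."""
    slug = name.strip().lower()
    # Underscores and spaces become hyphens
    slug = re.sub(r"[_\s]+", "-", slug)
    # Drop the redundant coditect- prefix
    slug = re.sub(r"^coditect-", "", slug)
    # Drop trailing version numbers (use file versioning instead)
    slug = re.sub(r"-v\d+(\.\d+)*$", "", slug)
    # Drop trailing dates (use file naming for dates)
    slug = re.sub(r"-\d{4}-\d{2}-\d{2}$", "", slug)
    # Collapse repeated hyphens
    return re.sub(r"-{2,}", "-", slug).strip("-")
```

A migration script could run this over all 98 staging directory names and flag any that change, surfacing the `coditect-`/`CODITECT-` inconsistencies automatically.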

7.2 File Naming

Assessment Documents:

{topic}-assessment-YYYY-MM-DD.md

Executive Summaries:

executive-summary.md  (standard name, no date)

SDDs/TDDs:

SDD-{topic}-integration.md
TDD-{topic}-integration.md
SDD-{industry}-domain-features.md
TDD-{industry}-data-models.md

ADRs:

ADR-XXX-{topic}-{decision-summary}.md

Business Cases:

{topic}-business-case-YYYY-MM-DD.md

Financial Models:

{topic}_v{X}.{Y}.json
{topic}_v{X}.{Y}.xlsx

Dashboards:

{dashboard-name}.jsx  (kebab-case)

Manifests:

MANIFEST.md  (standard name, all research topics)

7.3 Frontmatter Standards

All Documents (Required Fields):

---
title: "Document Title"
type: [assessment|sdd|tdd|adr|business-case|reference]
audience: contributor
status: [draft|active|integrated|archived]
created: 'YYYY-MM-DD'
updated: 'YYYY-MM-DD'
summary: "One-line AI agent summary"
tags:
- research
- {category}
- {topic}
---

Research-Specific Fields:

research_date: 'YYYY-MM-DD'
research_category: [technology-evaluation|academic|competitive-intelligence|business-market|domain|process-internal]
pipeline_version: [manual|adr-206-v1.0]
integration_status: [pending|partial|complete|rejected]
superseded_by: 'path/to/newer-doc.md' # If archived
archive_reason: 'Brief explanation' # If archived
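A minimal frontmatter check against this schema might look as follows. This is a sketch using only the standard library; the field names mirror the schema above, while the function itself (check_frontmatter) is an assumption, not existing CODITECT tooling:

```python
import re

REQUIRED_FIELDS = {"title", "type", "audience", "status",
                   "created", "updated", "summary", "tags"}
VALID_STATUS = {"draft", "active", "integrated", "archived"}

def check_frontmatter(text: str) -> list:
    """Return a list of problems with a document's YAML frontmatter."""
    problems = []
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return ["missing frontmatter block"]
    # Top-level keys: lines with a colon, not indented, not list items
    keys = {
        line.split(":", 1)[0].strip()
        for line in match.group(1).splitlines()
        if ":" in line and not line.startswith((" ", "-"))
    }
    for field in sorted(REQUIRED_FIELDS - keys):
        problems.append(f"missing required field: {field}")
    status = re.search(r"^status:\s*(\S+)", match.group(1), re.MULTILINE)
    if status and status.group(1) not in VALID_STATUS:
        problems.append(f"invalid status: {status.group(1)}")
    return problems
```

Wired into the promotion scripts, a non-empty result would block promotion until the frontmatter is fixed.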

8. Integration with ADR-206 Pipeline

8.1 ADR-206 Pipeline Outputs

Current Output (ADR-206):

{output-dir}/
├── README.md
├── research-context.json
├── 1-2-3-detailed-quick-start.md
├── coditect-impact.md
├── executive-summary.md
├── sdd.md
├── tdd.md
├── c4-architecture.md
├── glossary.md
├── mermaid-diagrams.md
├── follow-up-prompts.md
├── adrs/
│   ├── ADR-001-*.md
│   └── ADR-00N-*.md
├── dashboards/
│   ├── tech-architecture-analyzer.jsx
│   ├── strategic-fit-dashboard.jsx
│   ├── coditect-integration-playbook.jsx
│   └── executive-decision-brief.jsx
└── pipeline-report.json

Enhanced Output (with this proposal):

{output-dir}/
├── README.md
├── MANIFEST.md # ⬅️ NEW: Auto-generated manifest
├── research-context.json
├── 1-2-3-detailed-quick-start.md
├── coditect-impact.md
├── executive-summary.md
├── sdd.md
├── tdd.md
├── c4-architecture.md
├── glossary.md
├── mermaid-diagrams.md
├── follow-up-prompts.md
├── adrs/
│   ├── ADR-001-*.md
│   └── ADR-00N-*.md
├── dashboards/
│   ├── tech-architecture-analyzer.jsx
│   ├── strategic-fit-dashboard.jsx
│   ├── coditect-integration-playbook.jsx
│   └── executive-decision-brief.jsx
├── pipeline-report.json
└── PROMOTION-CHECKLIST.md # ⬅️ NEW: Next steps for user

8.2 Pipeline Enhancement: Auto-Promotion

New Pipeline Phase 4.5: Artifact Promotion (Optional)

# Phase 4: Genesis (optional) — EXISTING
if config.genesis:
    await self.handoff_to_new_project(artifacts, dashboards, prompts)

# Phase 4.5: Promotion (optional) — NEW
if config.auto_promote:
    await self.promote_to_analysis(artifacts, config)

    if config.genesis:  # If genesis flag set, Go decision implied
        await self.promote_to_integration(artifacts, config)

New CLI Flags:

/research-pipeline "{topic}" \
  --urls https://example.com \
  --github https://github.com/org/repo \
  --genesis \        # Handoff to /new-project (Go decision)
  --auto-promote     # Auto-promote to analysis + integration

Promotion Logic:

async def promote_to_analysis(artifacts: dict, config: PipelineConfig):
    """Auto-promote core artifacts to analysis."""
    analysis_dir = Path(f"internal/analysis/{config.category}/{config.topic}")
    analysis_dir.mkdir(parents=True, exist_ok=True)

    # Always promote these to analysis
    promote_files = [
        "executive-summary.md",
        "coditect-impact.md",  # Rename to assessment
    ]

    for file in promote_files:
        src = Path(config.output_dir) / file
        dst = analysis_dir / file
        shutil.copy2(src, dst)

    # Create README and MANIFEST
    create_analysis_readme(analysis_dir, config)
    create_manifest(analysis_dir, artifacts, config)


async def promote_to_integration(artifacts: dict, config: PipelineConfig):
    """Auto-promote SDDs/TDDs/ADRs to integration (if Go decision)."""
    integration_dir = Path(f"internal/architecture/integrations/{config.topic}")
    integration_dir.mkdir(parents=True, exist_ok=True)

    # Promote SDDs/TDDs
    shutil.copy2(
        Path(config.output_dir) / "sdd.md",
        integration_dir / f"SDD-{config.topic}-integration.md",
    )
    shutil.copy2(
        Path(config.output_dir) / "tdd.md",
        integration_dir / f"TDD-{config.topic}-integration.md",
    )

    # Promote ADRs (renumber each draft with the next available ADR-XXX)
    for adr_draft in (Path(config.output_dir) / "adrs").glob("*.md"):
        adr_num = get_next_adr_number()
        adr_name = re.sub(r"^ADR-\d+", f"ADR-{adr_num:03d}", adr_draft.stem)
        shutil.copy2(adr_draft, Path("internal/architecture/adrs") / f"{adr_name}.md")
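The get_next_adr_number helper referenced above could scan the ADR directory for the highest assigned number. This is a sketch under the assumptions that accepted ADRs live in internal/architecture/adrs/ and follow the ADR-XXX-{topic} pattern from Section 7.2:

```python
import re
from pathlib import Path

def get_next_adr_number(adr_dir: str = "internal/architecture/adrs") -> int:
    """Return the next free ADR number by scanning existing ADR-XXX-*.md files."""
    numbers = []
    for path in Path(adr_dir).glob("ADR-*.md"):
        match = re.match(r"ADR-(\d+)", path.stem)
        if match:
            numbers.append(int(match.group(1)))
    # Empty index starts at ADR-001
    return max(numbers, default=0) + 1
```

Because the scan happens at promotion time, two pipelines promoting concurrently could race for the same number; a lock file or a sequential promotion queue would be a reasonable safeguard.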

8.3 Pipeline Enhancement: Category Auto-Detection

Problem: User must specify --category flag manually.

Solution: Auto-detect category based on research inputs.

def auto_detect_category(config: PipelineConfig) -> str:
    """Auto-detect research category from inputs."""

    # GitHub repo URL → technology evaluation
    if config.github:
        return "technology-evaluation"

    # arXiv paper URL → academic
    if any("arxiv.org" in url for url in config.urls):
        return "academic"

    # Competitor domain → competitive intelligence
    competitor_domains = ["anthropic.com", "openai.com", "microsoft.com"]
    if any(domain in url for url in config.urls for domain in competitor_domains):
        return "competitive-intelligence"

    # Financial model keywords → business-market
    if "financial" in config.topic.lower() or "market" in config.topic.lower():
        return "business-market"

    # Default: prompt user
    return ask_user_category()

9. Migration Plan for Existing 98 Directories

9.1 Migration Strategy

Approach: Semi-automated with human review.

Phases:

  1. Categorize — Auto-assign categories to 98 directories
  2. Assess — Human review: promote vs. delete vs. archive
  3. Promote — Move valuable artifacts to permanent locations
  4. Archive — Move historical research to archive
  5. Delete — Remove staging directories (after 90-day grace period)

9.2 Phase 1: Categorization (Automated)

Script: scripts/research/categorize-staging.py

Algorithm:

def categorize_directory(dir_name: str, contents: list) -> str:
    """Auto-assign category based on directory name and contents (filenames)."""

    # Technology evaluation patterns
    tech_keywords = ["api", "tool", "framework", "sdk", "integration", "plugin"]
    if any(kw in dir_name.lower() for kw in tech_keywords):
        return "technology-evaluation"

    # Academic research patterns
    if "research" in dir_name and any(f.endswith((".pdf", ".tex")) for f in contents):
        return "academic"

    # Competitive intelligence patterns
    competitor_names = ["anthropic", "openai", "microsoft", "palantir", "kimi"]
    if any(comp in dir_name.lower() for comp in competitor_names):
        return "competitive-intelligence"

    # Business patterns
    business_keywords = ["financial", "market", "business", "runway", "seed"]
    if any(kw in dir_name.lower() for kw in business_keywords):
        return "business-market"

    # Domain patterns
    domain_keywords = ["healthcare", "biotech", "legal", "regulatory", "compliance"]
    if any(kw in dir_name.lower() for kw in domain_keywords):
        return "domain"

    # Process patterns
    process_keywords = ["system-prompt", "docker", "installation", "process"]
    if any(kw in dir_name.lower() for kw in process_keywords):
        return "process-internal"

    # Default: manual review needed
    return "uncategorized"

Output: staging-categorization-report.json

{
  "categorized": {
    "technology-evaluation": 24,
    "academic": 18,
    "competitive-intelligence": 15,
    "business-market": 12,
    "domain": 11,
    "process-internal": 8
  },
  "uncategorized": 10,
  "total": 98,
  "details": [
    {
      "directory": "coditect-agent-labs-research",
      "category": "technology-evaluation",
      "confidence": 0.95,
      "rationale": "GitHub repo URL present, tool evaluation artifacts"
    },
    ...
  ]
}

9.3 Phase 2: Assessment (Human Review)

For each categorized directory, decide:

| Decision | Action | Criteria |
| --- | --- | --- |
| Promote | Move to internal/analysis/{category}/ | Valuable findings, reusable |
| Archive | Move to internal/research/archive/{category}/ | Historical value, no current relevance |
| Delete | Remove from staging (after grace period) | No value, regenerable, duplicates |

Review Tool:

/research-review --category technology-evaluation

# Interactive prompts:
# - Show directory contents
# - Show any existing assessment docs
# - Suggest promotion path
# - User decides: promote / archive / delete

9.4 Phase 3: Promotion (Semi-Automated)

For directories marked "promote":

/research-promote \
  --source "analyze-new-artifacts/{topic}" \
  --category "{category}" \
  --artifacts "executive-summary.md,sdd.md,tdd.md,adrs/"

# Actions:
# 1. Create internal/analysis/{category}/{topic}/
# 2. Copy specified artifacts
# 3. Generate MANIFEST.md
# 4. Create README.md
# 5. Update RESEARCH-MANIFEST-INDEX.md
# 6. Add to git

Validation:

  • Check frontmatter present and valid
  • Verify no broken links
  • Confirm no TODO/FIXME markers
  • Lint markdown
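The first three validation checks could be scripted roughly as follows. This is a sketch; the function name validate_for_promotion is an assumption, and the link check only covers relative markdown links (markdown linting would be delegated to an existing linter):

```python
import re
from pathlib import Path

def validate_for_promotion(doc_path: str) -> list:
    """Run pre-promotion checks: frontmatter, broken links, TODO/FIXME markers."""
    path = Path(doc_path)
    text = path.read_text(encoding="utf-8")
    errors = []

    # Frontmatter present
    if not text.startswith("---\n"):
        errors.append("missing frontmatter")

    # No unresolved work markers
    for marker in ("TODO", "FIXME"):
        if marker in text:
            errors.append(f"contains {marker} marker")

    # Relative markdown links must resolve on disk
    for target in re.findall(r"\]\(([^)#]+)\)", text):
        if not target.startswith(("http://", "https://", "mailto:")):
            if not (path.parent / target).exists():
                errors.append(f"broken link: {target}")

    return errors
```

An empty list means the document passes; any entries would be printed by /research-promote before it aborts.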

9.5 Phase 4: Archive (Manual)

For directories marked "archive":

/research-archive \
  --source "analyze-new-artifacts/{topic}" \
  --category "{category}" \
  --reason "Superseded by {new-topic}"

# Actions:
# 1. Create internal/research/archive/{category}/{topic}/
# 2. Copy all artifacts
# 3. Add ARCHIVE-NOTICE.md with reason
# 4. Update MANIFEST.md with archive status
# 5. Update RESEARCH-MANIFEST-INDEX.md
# 6. Add to git

9.6 Phase 5: Deletion (Automated with Grace Period)

For directories marked "delete":

# Mark for deletion (grace period: 90 days)
/research-mark-delete --topic "{topic}" --reason "No value, regenerable"

# Moves to internal grace period directory
mv analyze-new-artifacts/{topic} analyze-new-artifacts/.pending-deletion/{topic}

# After 90 days, auto-delete (cron job)
# scripts/research/cleanup-expired-deletions.sh

Grace Period Purpose:

  • Allow recovery if deletion was mistake
  • User can un-mark: /research-unmark-delete --topic "{topic}"
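The cron job's core logic could be sketched in Python as follows. Assumptions: each pending directory's marking time is approximated by its modification time, and the .pending-deletion/ layout matches the mv command shown above:

```python
import shutil
import time
from pathlib import Path

GRACE_PERIOD_SECONDS = 90 * 24 * 3600  # 90-day grace period

def cleanup_expired_deletions(pending_root: str, now: float = None) -> list:
    """Delete directories under .pending-deletion/ whose grace period expired.
    Returns the names of the directories removed."""
    now = time.time() if now is None else now
    removed = []
    root = Path(pending_root)
    if not root.exists():
        return removed
    for topic_dir in root.iterdir():
        if topic_dir.is_dir() and now - topic_dir.stat().st_mtime > GRACE_PERIOD_SECONDS:
            shutil.rmtree(topic_dir)
            removed.append(topic_dir.name)
    return removed
```

A more robust variant would read the marking timestamp from the deletion notice file rather than trusting mtime, which can shift if the directory is touched during the grace period.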

9.7 Migration Timeline

| Phase | Duration | Effort | Status |
| --- | --- | --- | --- |
| 1. Categorization | 1 day | Automated script | Not started |
| 2. Assessment | 5 days | Human review (98 dirs × 20 min avg) | Not started |
| 3. Promotion | 2 days | Semi-automated (30 dirs × 15 min) | Not started |
| 4. Archive | 1 day | Manual (20 dirs × 10 min) | Not started |
| 5. Deletion | 90 days | Automated (grace period) | Not started |
| Total | 99 days | ~40 hours human effort | Not started |

10. Comparison of Alternatives

10.1 Alternative 1: Keep Current Ad-Hoc System

Description: Continue using analyze-new-artifacts/ as permanent storage, no organization.

Pros:

  • Zero migration effort
  • No new tooling needed

Cons:

  • ❌ No searchability (6.1GB of unsearchable files)
  • ❌ No lifecycle management (files never deleted)
  • ❌ No integration with ADR-206 pipeline
  • ❌ No distinction between staging and permanent
  • ❌ Duplicates and inconsistency continue

Verdict: Rejected — scales poorly, loses knowledge.


10.2 Alternative 2: Git-Track Staging Area

Description: Remove .gitignore from analyze-new-artifacts/, track everything.

Pros:

  • All research preserved in git history
  • No promotion workflow needed

Cons:

  • ❌ 6.1GB in git (PDFs, large artifacts)
  • ❌ No distinction between staging and permanent
  • ❌ Clutters git history with work-in-progress
  • ❌ Slows down clones/pulls

Verdict: Rejected — git is not a storage system for ephemeral files.


10.3 Alternative 3: External Knowledge Base (Notion, Confluence)

Description: Store research in external wiki/knowledge base, reference from CODITECT.

Pros:

  • Better search than git
  • Rich media support (embeds, images)
  • Collaboration features

Cons:

  • ❌ Separates research from code (no single source of truth)
  • ❌ Requires external service (vendor lock-in)
  • ❌ No integration with ADR-206 pipeline
  • ❌ Markdown export/import overhead

Verdict: Rejected — CODITECT is markdown-first; keep research co-located with architecture.


10.4 Alternative 4: Database-Backed Knowledge Graph

Description: Store research in SQLite/PostgreSQL with semantic search and graph relationships.

Pros:

  • Best searchability (full-text search, semantic vectors)
  • Rich relationships (research → ADRs → code)
  • Query flexibility

Cons:

  • ❌ Over-engineered for current scale (~100 research topics)
  • ❌ Requires migration from markdown to database
  • ❌ Loses markdown-first simplicity
  • ❌ Requires additional tooling/maintenance

Verdict: Rejected — overkill for current needs. Markdown + manifests + grep is sufficient.


10.5 Alternative 5: This Proposal (Markdown + Manifests)

Description: Markdown files in git, organized by category, tracked by manifests, lifecycle-managed.

Pros:

  • ✅ Markdown-first (CODITECT standard)
  • ✅ Git-tracked (version history)
  • ✅ Searchable (grep, semantic search on markdown)
  • ✅ Scalable (can handle 1000+ research topics)
  • ✅ Integrates with ADR-206 pipeline
  • ✅ Clear lifecycle (staging → analysis → integration → archive)

Cons:

  • Requires migration effort (40 hours)
  • Requires new tooling (promotion scripts, review commands)

Verdict: RECOMMENDED — best balance of simplicity, scalability, and integration.


11. Success Metrics

11.1 Quantitative Metrics

| Metric | Baseline (Now) | Target (Post-Migration) |
| --- | --- | --- |
| Staging Clutter | 98 directories | ≤13 active research topics |
| Searchable Research | 20% (analysis only) | 100% (all permanent) |
| Promotion Time | Manual (hours) | Automated (seconds) |
| Artifact Loss Rate | 80% (only assessment promoted) | 0% (systematic promotion) |
| Research Findability | Low (no index) | High (manifest index) |
| Git Repo Size | +6.1GB (if tracked) | +200MB (selective promotion) |
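The baseline figures above can be re-measured at any point with a short script. This is a sketch; staging_metrics is a hypothetical helper, not existing CODITECT tooling:

```python
from pathlib import Path

def staging_metrics(staging_root: str) -> dict:
    """Measure staging clutter: directory count, markdown/PDF files, total size."""
    root = Path(staging_root)
    files = [p for p in root.rglob("*") if p.is_file()]
    return {
        "directories": sum(1 for p in root.iterdir() if p.is_dir()),
        "markdown_files": sum(1 for p in files if p.suffix == ".md"),
        "pdf_files": sum(1 for p in files if p.suffix == ".pdf"),
        "total_bytes": sum(p.stat().st_size for p in files),
    }
```

Running it against analyze-new-artifacts/ before and after migration gives the before/after numbers for this table without manual counting.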

11.2 Qualitative Metrics

| Metric | Success Criteria |
| --- | --- |
| Discoverability | Any contributor can find research via manifest index |
| Reusability | SDDs/TDDs/ADRs reused across projects |
| Integration | ADR-206 pipeline outputs directly to permanent locations |
| Lifecycle Clarity | Clear rules for promotion/archival (no ambiguity) |
| Maintainability | Manifest system self-documenting, low maintenance |

12. Implementation Roadmap

12.1 Phase 1: Foundation (Week 1)

Tasks:

  • Create directory structure (internal/analysis/{6 categories}/)
  • Create manifest template (MANIFEST.md)
  • Create master manifest index (RESEARCH-MANIFEST-INDEX.md)
  • Write promotion scripts (/research-promote, /research-archive)
  • Write categorization script (scripts/research/categorize-staging.py)

Deliverables:

  • Empty directory structure ready for migration
  • Manifest system operational
  • Automation scripts tested

12.2 Phase 2: Pilot Migration (Week 2)

Tasks:

  • Categorize 10 sample directories (2 per category)
  • Manually promote 5 directories to test workflow
  • Archive 3 directories to test archive workflow
  • Delete 2 directories to test deletion workflow
  • Iterate on tooling based on learnings

Deliverables:

  • 10 research topics migrated and validated
  • Workflow refined and documented
  • Migration guide for remaining 88 directories

12.3 Phase 3: Bulk Migration (Week 3-4)

Tasks:

  • Auto-categorize remaining 88 directories
  • Human review: assess each directory (promote/archive/delete)
  • Batch promote 25 directories
  • Batch archive 15 directories
  • Mark 48 directories for deletion (grace period)

Deliverables:

  • 40 research topics migrated (25 promoted + 15 archived)
  • 48 directories marked for deletion
  • Manifest index populated

12.4 Phase 4: ADR-206 Integration (Week 5)

Tasks:

  • Enhance ADR-206 pipeline with manifest generation
  • Add --auto-promote flag to pipeline
  • Add --category auto-detection
  • Test full pipeline: research → staging → analysis → integration
  • Document new workflow in ADR-206

Deliverables:

  • ADR-206 pipeline fully integrated with promotion system
  • End-to-end automation: research → permanent storage
  • Documentation updated

12.5 Phase 5: Grace Period & Cleanup (Week 6-18)

Tasks:

  • Monitor 90-day grace period for marked deletions
  • Allow recovery of mis-marked directories
  • Auto-delete expired directories (after 90 days)
  • Final validation: all valuable research promoted

Deliverables:

  • Staging area cleaned (13 active topics only)
  • Zero artifact loss
  • System stable and maintainable

13. Risks & Mitigations

13.1 Risk: Accidental Deletion of Valuable Research

Likelihood: Medium Impact: High

Mitigation:

  • 90-day grace period before deletion
  • Manual review required before marking for deletion
  • Recovery command: /research-unmark-delete
  • Manifest system tracks all artifacts (even deleted ones)

13.2 Risk: Inconsistent Categorization

Likelihood: Medium Impact: Medium

Mitigation:

  • Auto-categorization script with confidence scores
  • Human review of all categorizations
  • Re-categorization command: /research-recategorize
  • Category definitions clearly documented (Section 2)

13.3 Risk: Promotion Overhead Discourages Use

Likelihood: Medium Impact: High

Mitigation:

  • ADR-206 pipeline auto-promotes with --auto-promote flag
  • Semi-automated promotion (not fully manual)
  • Batch promotion for bulk operations
  • Clear promotion criteria (no ambiguity)

13.4 Risk: Manifest System Becomes Stale

Likelihood: Medium Impact: Medium

Mitigation:

  • Manifests auto-generated by ADR-206 pipeline
  • Validation checks on manifest updates
  • Master index auto-updated by scripts
  • Cron job: detect manifests needing updates
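The staleness detector in the last bullet could compare each MANIFEST.md against its siblings' modification times. A sketch, assuming the internal/analysis/{category}/{topic}/MANIFEST.md layout from Section 7:

```python
from pathlib import Path

def find_stale_manifests(analysis_root: str) -> list:
    """Return topic directories whose MANIFEST.md is older than a sibling file."""
    stale = []
    for manifest in Path(analysis_root).glob("*/*/MANIFEST.md"):
        manifest_mtime = manifest.stat().st_mtime
        siblings = [p for p in manifest.parent.rglob("*")
                    if p.is_file() and p != manifest]
        if any(p.stat().st_mtime > manifest_mtime for p in siblings):
            stale.append(str(manifest.parent))
    return stale
```

The cron job would print or open issues for each returned directory, prompting a manifest refresh rather than editing it automatically.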

14. Appendices

14.1 Appendix A: Migration Script Pseudocode

def migrate_research_topic(topic: str, decision: str):
    """Migrate a single research topic."""

    if decision == "promote":
        # Step 1: Detect category
        category = auto_detect_category(topic)

        # Step 2: Create analysis directory
        analysis_dir = f"internal/analysis/{category}/{topic}"
        os.makedirs(analysis_dir, exist_ok=True)

        # Step 3: Copy artifacts
        artifacts_to_promote = [
            "executive-summary.md",
            "coditect-impact.md",
            "sdd.md",   # if exists
            "tdd.md",   # if exists
            "adrs/",    # if exists
        ]
        for artifact in artifacts_to_promote:
            if exists(f"analyze-new-artifacts/{topic}/{artifact}"):
                copy(f"analyze-new-artifacts/{topic}/{artifact}", analysis_dir)

        # Step 4: Generate manifest
        generate_manifest(analysis_dir, topic, category)

        # Step 5: Create README
        create_analysis_readme(analysis_dir, topic)

        # Step 6: Update master index
        update_manifest_index(topic, category, "active")

        # Step 7: Git add
        git_add(analysis_dir)

    elif decision == "archive":
        # Similar process, but to archive location
        pass

    elif decision == "delete":
        # Move to grace period directory
        os.rename(
            f"analyze-new-artifacts/{topic}",
            f"analyze-new-artifacts/.pending-deletion/{topic}",
        )
        create_deletion_notice(topic, grace_period_days=90)

14.2 Appendix B: Manifest Query Examples

# Find all research on multi-agent systems
grep -r "multi-agent" internal/analysis/*/*/MANIFEST.md

# Find all integrated research
grep -l "integration_status: complete" internal/analysis/*/*/MANIFEST.md

# Find all research from 2026-02
grep -l "research_date: '2026-02" internal/analysis/*/*/MANIFEST.md

# Find all ADR-206 pipeline research
grep -l "pipeline_version: adr-206" internal/analysis/*/*/MANIFEST.md

# Semantic search (if vector index available)
python3 scripts/research/semantic-search.py "How to scale agent orchestration?"

14.3 Appendix C: Example Promoted Directory

Scenario: Agent Labs research completed, Go decision made, artifacts promoted.

Before (Staging):

analyze-new-artifacts/coditect-agent-labs-research/
├── 2512.08296v2.pdf
├── I Built an Open-Source Rig That Measures Multi-Agent Architectures.md
├── artifacts/
│   ├── 1-2-3-detailed-quick-start.md
│   ├── coditect-impact.md
│   ├── executive-summary.md
│   ├── sdd.md
│   ├── tdd.md
│   ├── c4-architecture.md
│   ├── glossary.md
│   ├── mermaid-diagrams.md
│   ├── adrs/
│   │   ├── ADR-001-agent-labs-adoption.md
│   │   ├── ADR-002-integration-pattern.md
│   │   └── ADR-003-agent-orchestration-mapping.md
│   └── dashboards/
│       ├── tech-architecture-analyzer.jsx
│       ├── strategic-fit-dashboard.jsx
│       └── coditect-integration-playbook.jsx
└── README.md

After (Analysis):

internal/analysis/technology-evaluation/agent-labs/
├── agent-labs-scaling-assessment-2026-02-16.md # Human-written synthesis
├── executive-summary.md # Promoted from staging
├── recommendations.md # Next steps
├── MANIFEST.md # Research lineage
└── README.md # Navigation

After (Integration):

internal/architecture/integrations/agent-labs/
├── SDD-agent-labs-integration.md
├── TDD-agent-labs-integration.md
├── c4-diagrams/
│   ├── context.md
│   ├── container.md
│   └── component.md
└── README.md

internal/architecture/adrs/
├── ADR-207-agent-labs-adoption.md # Renumbered from ADR-001
├── ADR-208-agent-labs-integration-pattern.md # Renumbered from ADR-002
└── ADR-209-agent-orchestration-mapping.md # Renumbered from ADR-003

Staging (Remaining):

analyze-new-artifacts/coditect-agent-labs-research/
├── 2512.08296v2.pdf # Source material (stays)
├── I Built an Open-Source Rig....md # Source material (stays)
├── artifacts/glossary.md # Not promoted (topic-specific)
├── artifacts/mermaid-diagrams.md # Not promoted (duplicates C4)
└── artifacts/dashboards/ # Not promoted (research-specific)

15. Conclusion

This proposal establishes a comprehensive research artifact organization system for CODITECT with:

  1. 6-category taxonomy distinguishing technology evaluation, academic, competitive, business, domain, and process research
  2. 4-stage lifecycle providing clear progression from staging → analysis → integration → archive
  3. Promotion criteria matrix defining exactly when/how artifacts move between stages
  4. Permanent directory structure replacing ad-hoc organization with clear categorization
  5. Manifest system tracking research lineage, artifacts, and integration status
  6. ADR-206 integration enabling end-to-end automation from research → permanent storage

Impact:

  • 87% reduction in staging clutter (98 → 13 directories)
  • Zero loss of valuable artifacts (systematic promotion)
  • 100% searchability (manifest index + git-tracked)
  • End-to-end automation (ADR-206 pipeline → permanent locations)

Next Steps:

  1. Review and approve this proposal
  2. Begin Phase 1 implementation (directory structure + tooling)
  3. Pilot migration on 10 sample directories
  4. Bulk migration of remaining 88 directories
  5. ADR-206 pipeline enhancement with auto-promotion

Timeline: 18 weeks (40 hours active work + 90-day grace period)


Document Status: Proposed — 2026-02-16
Author: Claude (Sonnet 4.5)
Reviewers: TBD
Approval Required: CODITECT Framework Architect, Research Team Lead


Version: 1.0.0
Last Updated: 2026-02-16
Compliance: CODITECT CLAUDE.md Standard v1.0.0