
# Master Project Build Orchestration Prompt with TRACK Integration

Document ID: MST-PRJ-001
Version: 3.0
Created: 2026-02-03
Purpose: Orchestrate complete FP&A Platform project initialization with TRACK task assignment
CODITECT Integration: TRACK Task Assignment Process v3.0


## 1. Executive Summary

This master prompt orchestrates the complete initialization and build of the AI-First Open-Source FP&A Platform project. It provides a comprehensive, executable workflow for:

  1. Project Scaffold Generation — Complete directory structure with STD-NCS-001 compliance
  2. Artifact Migration — Rename and organize existing artifacts per naming convention
  3. TRACK Task Generation — Automated task creation by type with role/sprint assignment
  4. Dependency Graph — Build order and critical path identification
  5. Resource Allocation — Map tasks to roles, sprints, and capacity
  6. CI/CD Pipeline — Validation and enforcement automation

Total Coverage: 127+ artifacts across 12 categories
Estimated Hours: 634 (38 artifacts complete, 63 planned)


## 2. TRACK Task Types Reference

TRACK (Task Routing And Classification for Knowledge-work) is CODITECT's intelligent task assignment system that automatically routes work based on artifact type, skill requirements, and sprint capacity.

### 2.1 Task Type Definitions

| Task Type | Prefix Mapping | Typical Effort | Required Skills |
|-----------|----------------|----------------|-----------------|
| SPECIFICATION | SPEC, REQ | 6-12 hours | Product, Technical Writing |
| ARCHITECTURE | ADR, ARC, C4, EVT, SEQ | 4-8 hours | Solution Architect |
| RESEARCH | ANL, RES, PRM, MOD, MKT | 3-8 hours | Domain Expert, Analyst |
| DATABASE | SQL, ERD | 8-16 hours | Data Engineer |
| INFRASTRUCTURE | CFG | 4-12 hours | DevOps Engineer |
| DEVELOPMENT | WFL, API, SCR | 8-24 hours | Software Engineer |
| COMPLIANCE | CMP, RGU, GOV, AUD, CTL | 6-12 hours | Compliance Officer |
| SECURITY | SEC, THR | 6-10 hours | Security Engineer |
| OPERATIONS | DRP, MON, OPS | 4-8 hours | SRE, DevOps |
| FINANCIAL | FIN, BUS, PRC | 4-6 hours | Finance Analyst |
| DOCUMENTATION | DOC, GDE, REF, PLB | 4-8 hours | Technical Writer |
| TEMPLATE | TPL | 3-6 hours | Domain Expert |
| TRAINING | TRN | 6-16 hours | Training Specialist |
| PROJECT | PRJ, INV, MST, TRK, RPT | 2-8 hours | Project Manager |
| STANDARD | STD | 4-8 hours | Platform Lead |

### 2.2 TRACK Task Schema

```yaml
task:
  id: "TRK-{PREFIX}-{CATEGORY}-{SEQ}"
  artifact_id: "{PREFIX}-{CATEGORY}-{SEQ}"
  artifact_name: "{PREFIX}-{CATEGORY}-{SEQ}-{descriptor}.{ext}"
  task_type: "{TRACK_TYPE}"

  metadata:
    priority: "P0|P1|P2|P3"  # P0=Critical, P3=Low
    effort_hours: 4-24
    sprint: "S01-S12"
    assigned_role: "{ROLE}"

  dependencies:
    - "{ARTIFACT_ID}"  # Must complete before this task

  acceptance_criteria:
    - "Complies with STD-NCS-001"
    - "Passes validation script"
    - "Reviewed by {ROLE}"

  outputs:
    - path: "{DIRECTORY}/{ARTIFACT_NAME}"
      type: "{EXTENSION}"
```

## 3. Master Orchestration Prompt

PROMPT: FP&A Platform Project Build Initialization

You are a CODITECT Project Orchestrator Agent. Your role is to initialize and orchestrate the complete build of the AI-First FP&A Platform. Execute the following phases in sequence:

## PHASE 1: PROJECT SCAFFOLD GENERATION

### Step 1.1: Create Directory Structure

Execute this bash script to create the complete project scaffold:

```bash
#!/bin/bash
# FP&A Platform Project Scaffold Generator
# Compliant with STD-NCS-001 v3.0

PROJECT_ROOT="${1:-fpa-platform}"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)

echo "🚀 Creating FP&A Platform scaffold at: $PROJECT_ROOT"

# Create main directory structure
# Create main directory structure
# NOTE: continuation lines must have no leading whitespace — the backslash-joined
# result must be a single word for bash brace expansion to work
mkdir -p "$PROJECT_ROOT"/{\
00-project/{inventory,track,standards,scripts},\
01-specs,\
02-architecture/{adrs,c4,events,data},\
03-modules/{core,planning,operations,reporting,ai},\
04-compliance/{us,br},\
05-integration/{erp,banking,platform},\
06-technical/{sql,config,workflows,scripts},\
07-financial,\
08-security,\
09-operations,\
10-documentation/{user,admin,api,developer},\
11-templates/{sox,implementation,testing},\
12-training}

# Create .gitkeep files in empty directories
find "$PROJECT_ROOT" -type d -empty -exec touch {}/.gitkeep \;

# Count directories
DIR_COUNT=$(find "$PROJECT_ROOT" -type d | wc -l)
echo "✅ Created $DIR_COUNT directories"

# Create project manifest
cat > "$PROJECT_ROOT/00-project/PRJ-MAN-001-project-manifest.yaml" << 'MANIFEST'
project:
  id: "FPA-PLATFORM-2026"
  name: "AI-First Open-Source FP&A Platform"
  codename: "Project Lighthouse"
  version: "1.0.0"

organization:
  name: "CODITECT"
  domain: "coditect.io"

timeline:
  kickoff: "2026-02-03"
  mvp_target: "2026-04-15"
  ga_target: "2026-06-30"

standards:
  naming_convention: "STD-NCS-001-v3.0"
  track_version: "3.0"

repositories:
  primary: "github.com/coditect/fpa-platform"
  artifacts: "github.com/coditect/fpa-platform-artifacts"
MANIFEST

echo "✅ Project manifest created"

```

### Step 1.2: Generate Initial Files

Create these mandatory project files:

  1. README.md — Project overview with quick start
  2. CHANGELOG.md — Version history (start with v0.1.0)
  3. CONTRIBUTING.md — Contribution guidelines
  4. .gitignore — Standard exclusions

## PHASE 2: ARTIFACT MIGRATION

### Step 2.1: Execute Migration Mapping

Apply the complete migration mapping from STD-NCS-001 Section 7. Create this migration script:

```bash
#!/bin/bash
# migrate-to-std-ncs-001.sh
# Migrate existing artifacts to STD-NCS-001 naming convention

SOURCE_DIR="${1:-.}"
TARGET_DIR="${2:-./migrated}"
LOG_FILE="migration-$(date +%Y%m%d-%H%M%S).log"

log() { echo "[$(date +%H:%M:%S)] $1" | tee -a "$LOG_FILE"; }

log "Starting migration from $SOURCE_DIR to $TARGET_DIR"

# Create target directories
mkdir -p "$TARGET_DIR"/{00-project,01-specs,02-architecture/{adrs,c4,events},03-modules,04-compliance/{us,br},05-integration,06-technical/{sql,config,workflows},07-financial,08-security,09-operations,10-documentation}

# Migration mapping (legacy -> standard)
declare -A MIGRATIONS=(
  # Analysis Outputs
  ["01-EXECUTIVE-ANALYSIS.md"]="00-project/ANL-PRJ-001-executive-analysis.md"
  ["02-CODITECT-IMPACT-ANALYSIS.md"]="00-project/ANL-PRJ-002-coditect-impact.md"
  ["03-RESEARCH-PROMPTS-TIER1.md"]="00-project/PRM-PRJ-003-research-tier1.md"
  ["04-RESEARCH-PROMPTS-TIER2.md"]="00-project/PRM-PRJ-004-research-tier2.md"
  ["05-CODITECT-PRODUCT-IDEAS.md"]="00-project/ANL-PRJ-005-product-ideas.md"
  ["06-STRATEGIC-RECOMMENDATION.md"]="00-project/ANL-PRJ-006-strategic-recommendation.md"

  # Specifications
  ["FPA-PRD.md"]="01-specs/SPEC-FPA-001-product-requirements.md"
  ["FPA-SDD.md"]="01-specs/SPEC-FPA-002-software-design.md"
  ["FPA-TDD.md"]="01-specs/SPEC-FPA-003-technical-design.md"
  ["FPA-TEST-STRATEGY.md"]="01-specs/SPEC-FPA-004-test-strategy.md"

  # Architecture
  ["ARCHITECTURE-DECISIONS.md"]="02-architecture/adrs/ADR-ARC-000-decisions-index.md"
  ["C4-ARCHITECTURE-DIAGRAMS.md"]="02-architecture/c4/C4-ARC-001-architecture-diagrams.md"
  ["EVENT-CATALOG.md"]="02-architecture/events/EVT-ARC-001-event-catalog.md"

  # Research Prompts
  ["MODULE-RESEARCH-PROMPTS.md"]="03-modules/PRM-MOD-000-module-prompts-index.md"
  ["US-COMPLIANCE-PROMPTS.md"]="04-compliance/us/PRM-CMP-001-us-compliance.md"
  ["BR-COMPLIANCE-PROMPTS.md"]="04-compliance/br/PRM-CMP-002-br-compliance.md"
  ["INTEGRATION-RESEARCH-PROMPTS.md"]="05-integration/PRM-INT-000-integration-prompts-index.md"
  ["MISSING-ARTIFACT-PROMPTS.md"]="00-project/PRM-PRJ-001-missing-artifacts.md"

  # Compliance
  ["BR-REGULATORY-UPDATES-2025-2026.md"]="04-compliance/br/RGU-BR-001-regulatory-updates-2026.md"

  # Technical
  ["FPA-COMPLETE-SCHEMA.sql"]="06-technical/sql/SQL-DB-001-complete-schema.sql"
  ["airbyte-connectors.yaml"]="06-technical/config/CFG-INT-001-airbyte-connectors.yaml"
  ["dagster-pipelines.yaml"]="06-technical/config/CFG-INT-002-dagster-pipelines.yaml"
  ["openfga-authorization.yaml"]="06-technical/config/CFG-SEC-001-openfga-authorization.yaml"
  ["langgraph-workflows.yaml"]="06-technical/workflows/WFL-AI-001-langgraph-workflows.yaml"
  ["terraform-infrastructure.yaml"]="06-technical/config/CFG-INF-001-terraform-infrastructure.yaml"

  # Financial Models
  ["01-ROI-Calculator.xlsx"]="07-financial/FIN-ROI-001-customer-calculator.xlsx"
  ["02-Month-End-Close-Savings.xlsx"]="07-financial/FIN-OPS-001-close-savings.xlsx"
  ["03-13-Week-Cash-Flow.xlsx"]="07-financial/FIN-FCT-001-cash-flow-13week.xlsx"
  ["04-Budget-vs-Actual.xlsx"]="07-financial/FIN-VAR-001-budget-vs-actual.xlsx"
  ["05-Platform-Pricing.xlsx"]="07-financial/FIN-PRC-001-platform-pricing.xlsx"
  ["06-Implementation-Cost-Model.xlsx"]="07-financial/FIN-PRJ-001-implementation-cost.xlsx"

  # Security & Operations
  ["SECURITY-SPECIFICATION.md"]="08-security/SEC-SYS-001-security-specification.md"
  ["DATA-CLASSIFICATION-MATRIX.md"]="08-security/GOV-SEC-001-data-classification.md"
  ["DISASTER-RECOVERY-PLAN.md"]="09-operations/DRP-OPS-001-disaster-recovery.md"
  ["MONITORING-ALERTING-SPEC.md"]="09-operations/MON-OBS-001-monitoring-alerting.md"

  # Documentation
  ["DATA-DICTIONARY.md"]="10-documentation/DOC-DB-001-data-dictionary.md"
  ["AI-MODEL-CARDS.md"]="10-documentation/DOC-AI-001-model-cards.md"
  ["PROMPT-ENGINEERING-PLAYBOOK.md"]="10-documentation/PLB-AI-001-prompt-engineering.md"

  # Inventory
  ["00-DOCUMENT-INVENTORY-v2.md"]="00-project/INV-PRJ-000-artifact-inventory.md"
  ["00-DOCUMENT-INVENTORY.md"]="00-project/INV-PRJ-000-artifact-inventory-legacy.md"
  ["ARTIFACT-GAP-ANALYSIS-v2.md"]="00-project/ANL-PRJ-002-artifact-gap-analysis.md"
)

# Execute migrations
MIGRATED=0
SKIPPED=0

for legacy in "${!MIGRATIONS[@]}"; do
  standard="${MIGRATIONS[$legacy]}"
  source_file=$(find "$SOURCE_DIR" -name "$legacy" -type f 2>/dev/null | head -1)

  if [[ -n "$source_file" ]]; then
    target_file="$TARGET_DIR/$standard"
    mkdir -p "$(dirname "$target_file")"
    cp "$source_file" "$target_file"
    log "✅ Migrated: $legacy -> $standard"
    ((MIGRATED++))
  else
    log "⏭️ Skipped (not found): $legacy"
    ((SKIPPED++))
  fi
done

log "Migration complete: $MIGRATED migrated, $SKIPPED skipped"
log "See $LOG_FILE for details"
```

### Step 2.2: Validate Migration

Run validation to ensure 100% compliance:

```python
#!/usr/bin/env python3
"""Post-migration validation for STD-NCS-001"""

import re
import json
from pathlib import Path
from datetime import datetime

# Prefix and category segments allow a trailing digit (e.g. the C4 prefix)
NAMING_PATTERN = re.compile(
    r'^[A-Z][A-Z0-9]{1,3}-[A-Z][A-Z0-9]{1,3}-[0-9]{3}-[a-z0-9]+(-[a-z0-9]+)*\.'
    r'(md|xlsx|yaml|json|sql|py|pptx|pdf|docx|html)$'
)

def validate_directory(root: Path) -> dict:
    results = {'valid': [], 'invalid': [], 'total': 0}

    for f in root.rglob('*'):
        if f.is_file() and not f.name.startswith('.'):
            results['total'] += 1
            if NAMING_PATTERN.match(f.name):
                results['valid'].append(str(f.relative_to(root)))
            else:
                results['invalid'].append(str(f.relative_to(root)))

    results['compliance_rate'] = len(results['valid']) / max(results['total'], 1) * 100
    results['timestamp'] = datetime.now().isoformat()

    return results

if __name__ == '__main__':
    import sys
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path('.')
    results = validate_directory(target)

    print("📊 Validation Results:")
    print(f"  Total files: {results['total']}")
    print(f"  Valid: {len(results['valid'])} ({results['compliance_rate']:.1f}%)")
    print(f"  Invalid: {len(results['invalid'])}")

    if results['invalid']:
        print("\n❌ Non-compliant files:")
        for f in results['invalid'][:20]:
            print(f"  - {f}")
        if len(results['invalid']) > 20:
            print(f"  ... and {len(results['invalid']) - 20} more")

    # Output JSON report
    with open('RPT-VAL-001-validation-report.json', 'w') as f:
        json.dump(results, f, indent=2)
    print("\n📄 Report saved to RPT-VAL-001-validation-report.json")
```

## PHASE 3: TRACK TASK GENERATION

### Step 3.1: Generate TRACK Tasks from Artifacts

For each artifact, generate a TRACK task using this template:

```yaml
# TRK-{SPRINT}-{SEQ}-tasks.yaml
# TRACK Task Assignment File
# Generated: {TIMESTAMP}

sprint:
  id: "S01"
  name: "Foundation Sprint"
  start_date: "2026-02-03"
  end_date: "2026-02-17"
  capacity_hours: 80

tasks:
  # Specifications (TRACK Type: SPECIFICATION)
  - id: "TRK-SPEC-FPA-001"
    artifact: "SPEC-FPA-001-product-requirements.md"
    type: SPECIFICATION
    priority: P0
    effort_hours: 8
    assigned_role: product_manager
    status: complete
    dependencies: []
    acceptance_criteria:
      - "5 personas defined"
      - "45+ requirements documented"
      - "MoSCoW prioritization complete"

  - id: "TRK-SPEC-FPA-002"
    artifact: "SPEC-FPA-002-software-design.md"
    type: SPECIFICATION
    priority: P0
    effort_hours: 10
    assigned_role: solution_architect
    status: complete
    dependencies: ["TRK-SPEC-FPA-001"]
    acceptance_criteria:
      - "12 microservices defined"
      - "API contracts documented"
      - "Data flows diagrammed"

  # Architecture (TRACK Type: ARCHITECTURE)
  - id: "TRK-ADR-ARC-001"
    artifact: "ADR-ARC-000-decisions-index.md"
    type: ARCHITECTURE
    priority: P0
    effort_hours: 4
    assigned_role: solution_architect
    status: complete
    dependencies: ["TRK-SPEC-FPA-002"]
    acceptance_criteria:
      - "15 ADRs documented"
      - "Alternatives analyzed"
      - "Impact assessment complete"

  # Database (TRACK Type: DATABASE)
  - id: "TRK-SQL-DB-001"
    artifact: "SQL-DB-001-complete-schema.sql"
    type: DATABASE
    priority: P0
    effort_hours: 12
    assigned_role: data_engineer
    status: complete
    dependencies: ["TRK-SPEC-FPA-003"]
    acceptance_criteria:
      - "All tables defined"
      - "RLS policies implemented"
      - "Indexes optimized"

  # Infrastructure (TRACK Type: INFRASTRUCTURE)
  - id: "TRK-CFG-INT-001"
    artifact: "CFG-INT-001-airbyte-connectors.yaml"
    type: INFRASTRUCTURE
    priority: P1
    effort_hours: 6
    assigned_role: devops_engineer
    status: complete
    dependencies: ["TRK-SQL-DB-001"]
    acceptance_criteria:
      - "9+ connectors configured"
      - "Sync schedules defined"
      - "Error handling documented"
```
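Before importing a sprint file, it is worth confirming that every `dependencies` entry resolves to a task defined somewhere. A minimal sketch over an in-memory task list (the IDs come from the template above; the check itself is an assumption, not a built-in TRACK feature):

```python
# Hypothetical in-memory task list mirroring the sprint YAML template
tasks = [
    {"id": "TRK-SPEC-FPA-001", "dependencies": []},
    {"id": "TRK-SPEC-FPA-002", "dependencies": ["TRK-SPEC-FPA-001"]},
    {"id": "TRK-SQL-DB-001", "dependencies": ["TRK-SPEC-FPA-003"]},  # defined in another sprint file
]

def undefined_dependencies(tasks):
    """Return {task_id: [deps not defined by any task in this list]}."""
    known = {t["id"] for t in tasks}
    out = {}
    for t in tasks:
        missing = [d for d in t.get("dependencies", []) if d not in known]
        if missing:
            out[t["id"]] = missing
    return out

print(undefined_dependencies(tasks))
# {'TRK-SQL-DB-001': ['TRK-SPEC-FPA-003']}
```

A non-empty result is not necessarily an error — as here, a dependency may live in an earlier sprint file — but it flags IDs that deserve a manual look before the board import.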

### Step 3.2: TRACK Assignment Algorithm

```python
#!/usr/bin/env python3
"""TRACK Task Assignment Generator"""

from dataclasses import dataclass
from enum import Enum
from typing import List

import yaml

class TrackType(Enum):
    SPECIFICATION = "SPECIFICATION"
    ARCHITECTURE = "ARCHITECTURE"
    RESEARCH = "RESEARCH"
    DATABASE = "DATABASE"
    INFRASTRUCTURE = "INFRASTRUCTURE"
    DEVELOPMENT = "DEVELOPMENT"
    COMPLIANCE = "COMPLIANCE"
    SECURITY = "SECURITY"
    OPERATIONS = "OPERATIONS"
    FINANCIAL = "FINANCIAL"
    DOCUMENTATION = "DOCUMENTATION"
    TEMPLATE = "TEMPLATE"
    TRAINING = "TRAINING"
    PROJECT = "PROJECT"
    STANDARD = "STANDARD"

PREFIX_TO_TRACK = {
    'SPEC': TrackType.SPECIFICATION, 'REQ': TrackType.SPECIFICATION,
    'ADR': TrackType.ARCHITECTURE, 'ARC': TrackType.ARCHITECTURE,
    'C4': TrackType.ARCHITECTURE, 'EVT': TrackType.ARCHITECTURE,
    'SEQ': TrackType.ARCHITECTURE, 'ERD': TrackType.DATABASE,
    'ANL': TrackType.RESEARCH, 'RES': TrackType.RESEARCH,
    'PRM': TrackType.RESEARCH, 'MOD': TrackType.RESEARCH,
    'MKT': TrackType.RESEARCH,
    'SQL': TrackType.DATABASE,
    'CFG': TrackType.INFRASTRUCTURE, 'WFL': TrackType.DEVELOPMENT,
    'API': TrackType.DEVELOPMENT, 'SCR': TrackType.DEVELOPMENT,
    'CMP': TrackType.COMPLIANCE, 'RGU': TrackType.COMPLIANCE,
    'GOV': TrackType.COMPLIANCE, 'AUD': TrackType.COMPLIANCE,
    'CTL': TrackType.COMPLIANCE,
    'SEC': TrackType.SECURITY, 'THR': TrackType.SECURITY,
    'DRP': TrackType.OPERATIONS, 'MON': TrackType.OPERATIONS,
    'OPS': TrackType.OPERATIONS,
    'FIN': TrackType.FINANCIAL, 'BUS': TrackType.FINANCIAL,
    'PRC': TrackType.FINANCIAL,
    'DOC': TrackType.DOCUMENTATION, 'GDE': TrackType.DOCUMENTATION,
    'REF': TrackType.DOCUMENTATION, 'PLB': TrackType.DOCUMENTATION,
    'TPL': TrackType.TEMPLATE, 'TRN': TrackType.TRAINING,
    'PRJ': TrackType.PROJECT, 'INV': TrackType.PROJECT,
    'MST': TrackType.PROJECT, 'TRK': TrackType.PROJECT,
    'RPT': TrackType.PROJECT, 'STD': TrackType.STANDARD,
}

ROLE_MAPPING = {
    TrackType.SPECIFICATION: ['product_manager', 'technical_writer'],
    TrackType.ARCHITECTURE: ['solution_architect'],
    TrackType.RESEARCH: ['analyst', 'domain_expert'],
    TrackType.DATABASE: ['data_engineer'],
    TrackType.INFRASTRUCTURE: ['devops_engineer'],
    TrackType.DEVELOPMENT: ['software_engineer'],
    TrackType.COMPLIANCE: ['compliance_officer'],
    TrackType.SECURITY: ['security_engineer'],
    TrackType.OPERATIONS: ['sre', 'devops_engineer'],
    TrackType.FINANCIAL: ['finance_analyst'],
    TrackType.DOCUMENTATION: ['technical_writer'],
    TrackType.TEMPLATE: ['domain_expert'],
    TrackType.TRAINING: ['training_specialist'],
    TrackType.PROJECT: ['project_manager'],
    TrackType.STANDARD: ['platform_lead'],
}

EFFORT_ESTIMATES = {
    TrackType.SPECIFICATION: (6, 12),
    TrackType.ARCHITECTURE: (4, 8),
    TrackType.RESEARCH: (3, 8),
    TrackType.DATABASE: (8, 16),
    TrackType.INFRASTRUCTURE: (4, 12),
    TrackType.DEVELOPMENT: (8, 24),
    TrackType.COMPLIANCE: (6, 12),
    TrackType.SECURITY: (6, 10),
    TrackType.OPERATIONS: (4, 8),
    TrackType.FINANCIAL: (4, 6),
    TrackType.DOCUMENTATION: (4, 8),
    TrackType.TEMPLATE: (3, 6),
    TrackType.TRAINING: (6, 16),
    TrackType.PROJECT: (2, 8),
    TrackType.STANDARD: (4, 8),
}

@dataclass
class TrackTask:
    id: str
    artifact: str
    track_type: TrackType
    priority: str
    effort_hours: int
    assigned_role: str
    status: str
    dependencies: List[str]
    acceptance_criteria: List[str]

def generate_task(artifact_name: str, priority: str = "P1") -> TrackTask:
    """Generate TRACK task from artifact name."""
    prefix, category, seq = artifact_name.split('-')[:3]

    track_type = PREFIX_TO_TRACK.get(prefix, TrackType.PROJECT)
    min_effort, max_effort = EFFORT_ESTIMATES[track_type]
    roles = ROLE_MAPPING[track_type]

    return TrackTask(
        id=f"TRK-{prefix}-{category}-{seq}",
        artifact=artifact_name,
        track_type=track_type,
        priority=priority,
        effort_hours=(min_effort + max_effort) // 2,
        assigned_role=roles[0],
        status="pending",
        dependencies=[],
        acceptance_criteria=[
            "Complies with STD-NCS-001",
            "Passes validation script",
            f"Reviewed by {roles[0]}",
        ],
    )

def export_sprint_yaml(tasks: List[TrackTask], sprint_id: str) -> str:
    """Export tasks to TRACK YAML format."""
    sprint_data = {
        'sprint': {
            'id': sprint_id,
            'tasks': [
                {
                    'id': t.id,
                    'artifact': t.artifact,
                    'type': t.track_type.value,
                    'priority': t.priority,
                    'effort_hours': t.effort_hours,
                    'assigned_role': t.assigned_role,
                    'status': t.status,
                    'dependencies': t.dependencies,
                    'acceptance_criteria': t.acceptance_criteria,
                }
                for t in tasks
            ],
        }
    }
    return yaml.dump(sprint_data, default_flow_style=False, sort_keys=False)
```
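To illustrate the lookup chain (prefix → TRACK type → role → midpoint effort), here is a stripped-down, self-contained version using only a small subset of the tables; the artifact name is one from the migration mapping:

```python
# Subset of PREFIX_TO_TRACK / ROLE_MAPPING / EFFORT_ESTIMATES for illustration
PREFIX_TO_TRACK = {"SPEC": "SPECIFICATION", "SQL": "DATABASE", "CFG": "INFRASTRUCTURE"}
ROLE = {"SPECIFICATION": "product_manager", "DATABASE": "data_engineer",
        "INFRASTRUCTURE": "devops_engineer"}
EFFORT = {"SPECIFICATION": (6, 12), "DATABASE": (8, 16), "INFRASTRUCTURE": (4, 12)}

def task_summary(artifact_name: str) -> dict:
    """Derive task id, type, role, and midpoint effort from an artifact name."""
    prefix, category, seq = artifact_name.split("-")[:3]
    track_type = PREFIX_TO_TRACK.get(prefix, "PROJECT")  # unknown prefixes fall back to PROJECT
    lo, hi = EFFORT[track_type]
    return {
        "id": f"TRK-{prefix}-{category}-{seq}",
        "type": track_type,
        "role": ROLE[track_type],
        "effort_hours": (lo + hi) // 2,
    }

print(task_summary("SQL-DB-001-complete-schema.sql"))
# {'id': 'TRK-SQL-DB-001', 'type': 'DATABASE', 'role': 'data_engineer', 'effort_hours': 12}
```

The midpoint effort is only a seeding heuristic; the sprint file remains the place to record the reviewed estimate.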

## PHASE 4: DEPENDENCY GRAPH CONSTRUCTION

### Step 4.1: Build Order Definition

```yaml
# Artifact dependency graph for build order
dependency_graph:
  # Level 0: Foundation (no dependencies)
  level_0:
    - STD-NCS-001-naming-convention.md
    - SPEC-FPA-001-product-requirements.md

  # Level 1: Architecture (depends on L0)
  level_1:
    - SPEC-FPA-002-software-design.md
    - SPEC-FPA-003-technical-design.md
    - ADR-ARC-000-decisions-index.md

  # Level 2: Design (depends on L1)
  level_2:
    - SPEC-FPA-004-test-strategy.md
    - C4-ARC-001-architecture-diagrams.md
    - EVT-ARC-001-event-catalog.md
    - SQL-DB-001-complete-schema.sql

  # Level 3: Implementation (depends on L2)
  level_3:
    - CFG-INT-001-airbyte-connectors.yaml
    - CFG-INT-002-dagster-pipelines.yaml
    - CFG-SEC-001-openfga-authorization.yaml
    - WFL-AI-001-langgraph-workflows.yaml
    - CFG-INF-001-terraform-infrastructure.yaml

  # Level 4: Compliance & Security (depends on L3)
  level_4:
    - PRM-CMP-001-us-compliance.md
    - PRM-CMP-002-br-compliance.md
    - RGU-BR-001-regulatory-updates-2026.md
    - SEC-SYS-001-security-specification.md
    - GOV-SEC-001-data-classification.md

  # Level 5: Operations & Documentation (depends on L4)
  level_5:
    - DRP-OPS-001-disaster-recovery.md
    - MON-OBS-001-monitoring-alerting.md
    - DOC-DB-001-data-dictionary.md
    - DOC-AI-001-model-cards.md
    - PLB-AI-001-prompt-engineering.md
```
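The level structure is a topological ordering of the dependency edges. A sketch of deriving such levels automatically by repeatedly peeling off artifacts whose prerequisites are all satisfied (Kahn-style); the edge subset here is illustrative, not the full graph:

```python
# Hypothetical dependency edges (artifact -> prerequisites), a small subset of the graph
deps = {
    "SPEC-FPA-001": [],
    "SPEC-FPA-002": ["SPEC-FPA-001"],
    "SQL-DB-001": ["SPEC-FPA-002"],
    "CFG-INT-001": ["SQL-DB-001"],
}

def build_levels(deps):
    """Group artifacts into levels: level N depends only on levels < N."""
    remaining = dict(deps)
    levels = []
    while remaining:
        # Ready = every prerequisite has already been placed in an earlier level
        ready = sorted(a for a, ds in remaining.items()
                       if all(d not in remaining for d in ds))
        if not ready:
            raise ValueError("dependency cycle detected")
        levels.append(ready)
        for a in ready:
            del remaining[a]
    return levels

print(build_levels(deps))
# [['SPEC-FPA-001'], ['SPEC-FPA-002'], ['SQL-DB-001'], ['CFG-INT-001']]
```

Running this over the full edge set would also catch cycles, which a hand-maintained level list cannot.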

### Step 4.2: Critical Path Analysis

```python
# Critical path calculation
CRITICAL_PATH = [
    ("SPEC-FPA-001", 8),   # PRD
    ("SPEC-FPA-002", 10),  # SDD
    ("SPEC-FPA-003", 10),  # TDD
    ("SQL-DB-001", 12),    # Database
    ("CFG-INT-001", 6),    # Airbyte
    ("WFL-AI-001", 8),     # LangGraph
    ("MON-OBS-001", 6),    # Monitoring
]

CRITICAL_PATH_HOURS = sum(h for _, h in CRITICAL_PATH)  # 60 hours
CRITICAL_PATH_WEEKS = CRITICAL_PATH_HOURS / 40          # 1.5 weeks
```
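Summing the list assumes strictly serial work. When some artifacts can proceed in parallel, the critical path is the longest dependency chain, which can be computed recursively; a sketch with an illustrative edge subset (hours taken from the list above, edges assumed):

```python
# Hypothetical dependency edges and hours; with parallel execution the critical
# path is the longest chain, not the sum of all task hours.
deps = {
    "SPEC-FPA-001": [],
    "SPEC-FPA-002": ["SPEC-FPA-001"],
    "SQL-DB-001": ["SPEC-FPA-002"],
    "CFG-INT-001": ["SQL-DB-001"],
    "WFL-AI-001": ["SQL-DB-001"],
}
hours = {"SPEC-FPA-001": 8, "SPEC-FPA-002": 10, "SQL-DB-001": 12,
         "CFG-INT-001": 6, "WFL-AI-001": 8}

def earliest_finish(task, deps, hours, memo):
    """Longest-path finish time for `task`, memoized so each chain is walked once."""
    if task not in memo:
        memo[task] = hours[task] + max(
            (earliest_finish(d, deps, hours, memo) for d in deps[task]), default=0)
    return memo[task]

memo = {}
print(max(earliest_finish(t, deps, hours, memo) for t in deps))
# 38  (8 + 10 + 12 + 8: CFG-INT-001 and WFL-AI-001 overlap, so only the longer counts)
```

Here 60 summed hours collapse to a 38-hour critical path once the two Level 3 artifacts run concurrently.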

## PHASE 5: SPRINT PLANNING

### Step 5.1: Sprint Allocation

| Sprint | Duration | Focus | Tasks | Hours |
|--------|----------|-------|-------|-------|
| S01 | Weeks 1-2 | Foundation | Specs, ADRs | 68 |
| S02 | Weeks 3-4 | Core Modules | GL, COA, JRN | 54 |
| S03 | Weeks 5-6 | Operations | Recon, Close | 46 |
| S04 | Weeks 7-8 | AI Agents | Orchestrator, Anomaly | 62 |
| S05 | Weeks 9-10 | Compliance | US/BR Compliance | 78 |
| S06 | Weeks 11-12 | Integration | ERP, Banking | 56 |
| S07 | Weeks 13-14 | Documentation | Guides, API Ref | 48 |
| S08 | Weeks 15-16 | Training | Materials, Workshops | 38 |

### Step 5.2: Resource Allocation

```yaml
resource_allocation:
  product_manager:
    capacity_hours_per_sprint: 40
    assigned_types: [SPECIFICATION, PROJECT]

  solution_architect:
    capacity_hours_per_sprint: 60
    assigned_types: [ARCHITECTURE, RESEARCH]

  data_engineer:
    capacity_hours_per_sprint: 80
    assigned_types: [DATABASE, INFRASTRUCTURE]

  devops_engineer:
    capacity_hours_per_sprint: 60
    assigned_types: [INFRASTRUCTURE, OPERATIONS]

  compliance_officer:
    capacity_hours_per_sprint: 40
    assigned_types: [COMPLIANCE, SECURITY]

  technical_writer:
    capacity_hours_per_sprint: 40
    assigned_types: [DOCUMENTATION, TRAINING]
```
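Sprint loading can be sanity-checked against these capacities before committing a sprint; a minimal sketch (the task list is hypothetical, the capacities come from the YAML above):

```python
# Per-sprint capacities from the resource_allocation YAML (subset)
capacity = {"product_manager": 40, "solution_architect": 60, "data_engineer": 80}

# Hypothetical sprint assignment: (task_id, role, effort_hours)
tasks = [
    ("TRK-SPEC-FPA-001", "product_manager", 8),
    ("TRK-SPEC-FPA-002", "solution_architect", 10),
    ("TRK-ADR-ARC-001", "solution_architect", 4),
    ("TRK-SQL-DB-001", "data_engineer", 12),
]

def utilization(tasks, capacity):
    """Hours assigned per role as a fraction of sprint capacity."""
    load = {}
    for _, role, hours in tasks:
        load[role] = load.get(role, 0) + hours
    return {role: round(load.get(role, 0) / cap, 2) for role, cap in capacity.items()}

print(utilization(tasks, capacity))
# {'product_manager': 0.2, 'solution_architect': 0.23, 'data_engineer': 0.15}
```

Any role above 1.0 signals an over-committed sprint and a candidate for rebalancing or the secondary role listed in ROLE_MAPPING.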

## PHASE 6: CI/CD PIPELINE SETUP

### Step 6.1: GitHub Actions Workflow

```yaml
# .github/workflows/artifact-validation.yaml
name: Artifact Validation

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  validate-naming:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Run naming convention validation
        run: |
          python scripts/validate-naming.py . --recursive

      - name: Upload validation report
        uses: actions/upload-artifact@v4
        with:
          name: validation-report
          path: RPT-VAL-001-validation-report.json

  track-sync:
    runs-on: ubuntu-latest
    needs: validate-naming
    steps:
      - uses: actions/checkout@v4

      - name: Sync TRACK tasks
        run: |
          python scripts/track-sync.py --sprint current

      - name: Update sprint board
        env:
          TRACK_API_KEY: ${{ secrets.TRACK_API_KEY }}
        run: |
          python scripts/track-update.py --board fpa-platform
```

### Step 6.2: Pre-commit Hook

```bash
#!/bin/bash
# .git/hooks/pre-commit
# Validate artifact naming before commit

# Prefix and category segments allow a trailing digit (e.g. the C4 prefix)
PATTERN='^[A-Z][A-Z0-9]{1,3}-[A-Z][A-Z0-9]{1,3}-[0-9]{3}-[a-z0-9]+(-[a-z0-9]+)*\.(md|xlsx|yaml|json|sql|py|pptx|pdf)$'

violations=0

for file in $(git diff --cached --name-only --diff-filter=ACM); do
  filename=$(basename "$file")

  # Skip hidden files and special files
  [[ "$filename" =~ ^\..*$ ]] && continue
  [[ "$filename" =~ ^(README|CHANGELOG|CONTRIBUTING|LICENSE).*$ ]] && continue

  if ! echo "$filename" | grep -qE "$PATTERN"; then
    echo "❌ Non-compliant: $file"
    echo "   Expected pattern: [PREFIX]-[CATEGORY]-[SEQ]-[descriptor].[ext]"
    ((violations++))
  fi
done

if [ "$violations" -gt 0 ]; then
  echo ""
  echo "⛔ $violations naming convention violation(s) found."
  echo "   See STD-NCS-001 for naming requirements."
  exit 1
fi

echo "✅ All files comply with STD-NCS-001"
```

## PHASE 7: EXECUTION CHECKLIST

Execute these steps in order:

1. **Clone/Create Repository**

   ```bash
   git clone https://github.com/coditect/fpa-platform.git
   cd fpa-platform
   ```

2. **Run Scaffold Generator** (Phase 1, Step 1.1)

3. **Copy Existing Artifacts** to the source directory

4. **Run Migration Script** (Phase 2, Step 2.1)

5. **Run Validation** (Phase 2, Step 2.2)
   - Target: 100% compliance rate

6. **Generate TRACK Tasks** (Phase 3)
   - Create TRK-S01-001-tasks.yaml

7. **Set up CI/CD** (Phase 6)
   - Add GitHub Actions workflow
   - Install pre-commit hook

8. **Create Sprint Board**
   - Import tasks to TRACK
   - Assign team members

9. **Kickoff Meeting**
   - Review dependency graph
   - Confirm sprint allocation

## PHASE 8: SUCCESS CRITERIA

| Metric | Target | Validation |
|--------|--------|------------|
| Naming Compliance | 100% | validate-naming.py |
| Directory Structure | Complete | tree fpa-platform |
| TRACK Integration | All tasks | Sprint board populated |
| CI/CD Pipeline | Functional | Workflow runs green |
| Documentation | README, CONTRIBUTING | Files exist |

END OF MASTER ORCHESTRATION PROMPT


---

## 4. TRACK Quick Reference

### 4.1 Task Status Flow

```
┌─────────┐     ┌─────────────┐     ┌───────────┐     ┌──────────┐
│ PENDING │────►│ IN_PROGRESS │────►│ IN_REVIEW │────►│ COMPLETE │
└─────────┘     └─────────────┘     └───────────┘     └──────────┘
     │                 │                  │
     │                 ▼                  ▼
     │           ┌──────────┐       ┌──────────┐
     └──────────►│ BLOCKED  │       │ REJECTED │
                 └──────────┘       └──────────┘
```


### 4.2 Priority Definitions

| Priority | SLA | Description |
|----------|-----|-------------|
| **P0** | 24 hours | Critical path blocker |
| **P1** | 3 days | High priority, sprint commitment |
| **P2** | 1 week | Standard priority |
| **P3** | 2 weeks | Nice to have |

### 4.3 Sprint Velocity Tracking

```yaml
velocity:
  S01:
    planned: 68
    completed: 68
    velocity: 1.0
  S02:
    planned: 54
    completed: 48
    velocity: 0.89
  average_velocity: 0.95
  projected_completion: "2026-06-15"
```
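A sketch of the arithmetic behind these figures, using the planned hours from the Phase 5 sprint table; the projection method (remaining sprints divided by average velocity) is an assumption, and the 0.95 in the report reflects rounding of the per-sprint figures:

```python
# Velocity = completed hours / planned hours, per sprint (numbers from the table above)
planned = [68, 54]     # S01, S02
completed = [68, 48]

avg_velocity = sum(c / p for c, p in zip(completed, planned)) / len(planned)

# Remaining planned sprints S03-S08, hours from the Phase 5 sprint allocation table
remaining_planned = [46, 62, 78, 56, 48, 38]
sprints_remaining = len(remaining_planned) / avg_velocity  # assumed projection method

print(round(avg_velocity, 2), round(sprints_remaining, 1))
# 0.94 6.4
```

So six planned sprints stretch to roughly 6.4 at the observed pace, which is what pushes the projected completion past the nominal end of S08.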

## 5. Document Control

| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 3.0 | 2026-02-03 | CODITECT | TRACK v3.0 integration, complete orchestration workflow |
| 2.0 | 2026-02-03 | CODITECT | Added sprint planning, CI/CD pipeline |
| 1.0 | 2026-02-03 | CODITECT | Initial orchestration prompt |

MST-PRJ-001 v3.0 — Master Project Build Orchestration Prompt
CODITECT Platform Engineering
Effective: 2026-02-03