
CODITECT Workflow Definitions


Version: 1.0.0
Last Updated: December 12, 2025
Audience: All users, automation architects
Scope: AI/ML, Data Engineering, Automation, Analytics, Infrastructure


Overview

This document defines 50 production-ready workflows across AI/ML Development, Data Engineering, Automation & Integration, Analytics & Reporting, and Infrastructure & DevOps categories. Each workflow includes triggers, complexity ratings, QA integration requirements, component dependencies, and step-by-step execution plans.

Relationship to COOKBOOK:

  • COOKBOOK - Quick-start recipes for common tasks (15 recipes)
  • WORKFLOW-DEFINITIONS - Detailed automation workflows for specialized domains (50 workflows)

Usage:

  1. Identify the workflow matching your goal
  2. Review complexity and duration
  3. Check dependencies (agents/commands needed)
  4. Follow step-by-step execution plan
  5. Apply QA integration as specified

Quick Reference

Category | Workflows | Coverage
AI/ML Development | 10 | Model training, evaluation, deployment, monitoring
Data Engineering | 10 | ETL, data quality, migration, real-time streaming
Automation & Integration | 10 | API integration, webhooks, scheduled jobs, error handling
Analytics & Reporting | 10 | Dashboards, KPI tracking, forecasting, anomaly detection
Infrastructure & DevOps | 10 | Server provisioning, CI/CD, monitoring, disaster recovery

AI/ML Development

1. model-training-pipeline

  • Description: Complete supervised learning model training pipeline from data ingestion to model artifact storage with experiment tracking and hyperparameter optimization.
  • Trigger: /train-model or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: ml-engineer, data-scientist, testing-specialist
    • Commands: /run-experiment, /validate-model, /save-model
  • Steps:
    1. Data validation - data-scientist - Verify training data quality, schema, and distributions
    2. Feature engineering - ml-engineer - Create/transform features, handle missing values, encode categoricals
    3. Train/test split - ml-engineer - Stratified split with reproducible random seed
    4. Model training - ml-engineer - Train model with hyperparameter optimization (grid/random/bayesian)
    5. Experiment tracking - ml-engineer - Log metrics, parameters, and artifacts to MLflow or Weights & Biases
    6. Model validation - testing-specialist - Validate on holdout set, check for overfitting
    7. Model artifact save - ml-engineer - Serialize model and metadata to versioned storage
    8. Quality review - testing-specialist - Final accuracy/F1/AUC review against acceptance criteria
  • Tags: ml, training, supervised-learning, mlops
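
Step 3 above (stratified split with a reproducible seed) can be sketched in plain Python. This is an illustration only; in practice a library routine such as scikit-learn's `train_test_split(..., stratify=labels, random_state=seed)` would be used.

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.2, seed=42):
    """Return (train_idx, test_idx) preserving per-class proportions."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_test = round(len(idxs) * test_frac)
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)

labels = ["a"] * 80 + ["b"] * 20
train_idx, test_idx = stratified_split(labels, test_frac=0.2, seed=7)
```

Because each class is shuffled and split separately, a 20% test fraction takes 16 of the 80 "a" examples and 4 of the 20 "b" examples, keeping the class ratio intact on both sides.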

2. dataset-preparation

  • Description: Automated dataset preparation including data collection, cleaning, labeling, augmentation, and splitting for ML model training.
  • Trigger: /prepare-dataset or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: data-scientist, ml-engineer
    • Commands: /clean-data, /augment-data, /split-data
  • Steps:
    1. Data collection - data-scientist - Aggregate raw data from sources (API, database, files)
    2. Data cleaning - data-scientist - Handle missing values, remove duplicates, fix inconsistencies
    3. Data labeling - data-scientist - Apply labels (manual or semi-automated)
    4. Data augmentation - ml-engineer - Generate synthetic samples if needed (SMOTE, image transforms)
    5. Statistical analysis - data-scientist - Descriptive stats, distribution checks, outlier detection
    6. Dataset splitting - ml-engineer - Train/validation/test split with stratification
    7. Metadata documentation - data-scientist - Document schema, statistics, and versioning
    8. Validation - data-scientist - Verify split ratios, class balance, and data integrity
  • Tags: ml, data-prep, preprocessing, etl

3. feature-engineering-pipeline

  • Description: Systematic feature engineering including selection, transformation, encoding, scaling, and dimensionality reduction with feature importance analysis.
  • Trigger: /engineer-features or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: data-scientist, ml-engineer
    • Commands: /select-features, /transform-features, /scale-features
  • Steps:
    1. Feature discovery - data-scientist - Identify candidate features from raw data
    2. Feature encoding - data-scientist - One-hot, label, target encoding for categoricals
    3. Feature scaling - ml-engineer - StandardScaler, MinMaxScaler, or RobustScaler
    4. Feature transformation - data-scientist - Log, sqrt, polynomial, interaction features
    5. Feature selection - ml-engineer - Mutual info, LASSO, tree-based importance
    6. Dimensionality reduction - ml-engineer - PCA, t-SNE, UMAP if high-dimensional
    7. Feature validation - data-scientist - Check for leakage, multicollinearity, variance
    8. Feature documentation - data-scientist - Document feature definitions and transformations
  • Tags: ml, feature-engineering, preprocessing
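
Step 3 (feature scaling) reduces to a few lines of arithmetic per column; the sketch below shows z-score standardization. In a real pipeline scikit-learn's `StandardScaler` would be fit on training data only, to avoid the leakage checked for in step 7.

```python
import math

def standard_scale(column):
    """Z-score scaling: subtract the mean, divide by the population std."""
    mean = sum(column) / len(column)
    var = sum((x - mean) ** 2 for x in column) / len(column)
    std = math.sqrt(var) or 1.0  # guard against zero-variance features
    return [(x - mean) / std for x in column]

scaled = standard_scale([10.0, 20.0, 30.0])
```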

4. model-evaluation

  • Description: Comprehensive model evaluation using cross-validation, multiple metrics, confusion matrices, ROC/PR curves, and performance comparison against baselines.
  • Trigger: /evaluate-model or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: ml-engineer, testing-specialist
    • Commands: /cross-validate, /compute-metrics, /plot-curves
  • Steps:
    1. Cross-validation - ml-engineer - K-fold CV with stratification
    2. Metrics computation - ml-engineer - Accuracy, precision, recall, F1, AUC-ROC, AUC-PR
    3. Confusion matrix - ml-engineer - Analyze TP, FP, TN, FN distributions
    4. ROC/PR curves - ml-engineer - Plot and analyze curve shapes
    5. Baseline comparison - ml-engineer - Compare against dummy classifier/regressor
    6. Error analysis - testing-specialist - Identify patterns in misclassifications
    7. Performance report - testing-specialist - Generate comprehensive evaluation report
    8. Acceptance review - testing-specialist - Validate against business requirements
  • Tags: ml, evaluation, validation, metrics
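
Steps 2-3 connect directly: once the confusion-matrix counts are known, the headline metrics fall out of four formulas. A minimal sketch (libraries such as scikit-learn compute these from raw predictions):

```python
def classification_metrics(tp, fp, tn, fn):
    """Derive core binary-classification metrics from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

m = classification_metrics(tp=80, fp=10, tn=100, fn=20)
```

Note that accuracy alone can look healthy while recall is poor, which is why step 8 validates the full set against business requirements.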

5. ab-testing-setup

  • Description: Design and implement A/B test for model comparison including experimental design, traffic splitting, statistical power analysis, and significance testing.
  • Trigger: /setup-ab-test or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: ml-engineer, data-scientist, testing-specialist
    • Commands: /design-experiment, /split-traffic, /analyze-results
  • Steps:
    1. Hypothesis definition - data-scientist - Define null/alternative hypotheses
    2. Power analysis - data-scientist - Calculate required sample size for significance
    3. Experimental design - ml-engineer - Design randomization and stratification strategy
    4. Traffic splitting - ml-engineer - Implement 50/50 or weighted split with consistent hashing
    5. Metric instrumentation - ml-engineer - Track primary and secondary metrics
    6. Guardrail setup - testing-specialist - Define acceptable metric boundaries
    7. Statistical testing - data-scientist - Run t-test, chi-square, or Mann-Whitney U
    8. Results analysis - data-scientist - Interpret p-values, effect sizes, confidence intervals
  • Tags: ml, ab-testing, experimentation, statistics
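
The consistent-hashing split in step 4 can be sketched as follows: hashing a user ID together with an experiment name gives every user a stable bucket, so the same user always sees the same variant across sessions. The experiment name and percentage below are illustrative.

```python
import hashlib

def assign_variant(user_id, experiment="model-ab-1", treatment_pct=50):
    """Deterministically bucket a user into control or treatment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable 0-99 bucket per user+experiment
    return "treatment" if bucket < treatment_pct else "control"

variant = assign_variant("user-123")
```

Salting the hash with the experiment name ensures independent experiments assign users independently rather than reusing the same buckets.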

6. model-deployment

  • Description: Deploy trained ML model to production including containerization, API endpoint creation, load balancing, versioning, and rollback capability.
  • Trigger: /deploy-model or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: devops-engineer, ml-engineer, security-specialist, testing-specialist
    • Commands: /containerize-model, /deploy-to-production, /setup-monitoring
  • Steps:
    1. Model serialization - ml-engineer - Pickle/joblib/ONNX export with version tag
    2. Containerization - devops-engineer - Create Docker image with model and dependencies
    3. API creation - ml-engineer - FastAPI/Flask endpoint with input validation
    4. Load testing - testing-specialist - Stress test with expected QPS
    5. Security review - security-specialist - Check for vulnerabilities, secrets exposure
    6. Deployment - devops-engineer - Deploy to Kubernetes/GCP/AWS with blue-green strategy
    7. Smoke testing - testing-specialist - Verify deployment with sample requests
    8. Rollback plan - devops-engineer - Document rollback procedure and triggers
  • Tags: ml, deployment, mlops, production


7. model-monitoring-drift-detection

  • Description: Set up continuous monitoring for deployed models including data drift detection, concept drift detection, performance degradation alerts, and automated retraining triggers.
  • Trigger: /monitor-model or scheduled
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: ml-engineer, devops-engineer
    • Commands: /setup-drift-detection, /configure-alerts, /setup-retraining
  • Steps:
    1. Baseline capture - ml-engineer - Capture training data statistics (mean, std, distributions)
    2. Data drift detection - ml-engineer - PSI, KL divergence, Kolmogorov-Smirnov tests
    3. Concept drift detection - ml-engineer - Monitor prediction distribution shifts
    4. Performance monitoring - ml-engineer - Track accuracy, latency, throughput metrics
    5. Alert configuration - devops-engineer - Set thresholds for drift and performance degradation
    6. Visualization dashboard - devops-engineer - Grafana/Tableau dashboards for metrics
    7. Retraining triggers - ml-engineer - Define conditions for automated retraining
    8. Incident response - devops-engineer - Document escalation and remediation procedures
  • Tags: ml, monitoring, drift-detection, mlops
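
Step 2's PSI check can be sketched in plain Python: bin the baseline, compare live bin frequencies against baseline frequencies, and sum the divergence terms. A common rule of thumb (an assumption, not part of this workflow) treats PSI below 0.1 as stable and above 0.25 as significant drift.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between baseline and live samples."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1  # clamp values outside the baseline range
        return [(c + 1e-6) / (len(sample) + 1e-6 * bins) for c in counts]
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]
drift_score = psi(baseline, [x + 0.5 for x in baseline])
```

The small smoothing constant keeps the log term finite when a bin is empty in one sample but not the other.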

8. prompt-engineering-workflow

  • Description: Systematic prompt engineering for LLMs including prompt design, few-shot examples, chain-of-thought prompting, evaluation, and version control.
  • Trigger: /engineer-prompt or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: ml-engineer, testing-specialist
    • Commands: /test-prompt, /optimize-prompt, /version-prompt
  • Steps:
    1. Task definition - ml-engineer - Define clear input/output specification
    2. Prompt design - ml-engineer - Write initial prompt with instructions and constraints
    3. Few-shot examples - ml-engineer - Add 3-5 representative examples
    4. Chain-of-thought - ml-engineer - Add reasoning steps if complex task
    5. Prompt testing - testing-specialist - Test on diverse examples, edge cases
    6. Iterative refinement - ml-engineer - Adjust based on failure modes
    7. Prompt versioning - ml-engineer - Version control in git with metadata
    8. Performance evaluation - testing-specialist - Measure accuracy, cost, latency
  • Tags: ml, llm, prompt-engineering, nlp

9. model-fine-tuning

  • Description: Fine-tune pre-trained models on custom datasets including transfer learning, learning rate scheduling, early stopping, and model comparison.
  • Trigger: /fine-tune-model or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: ml-engineer, data-scientist
    • Commands: /load-pretrained, /fine-tune, /save-checkpoint
  • Steps:
    1. Base model selection - ml-engineer - Choose pre-trained model (BERT, ResNet, etc.)
    2. Data preparation - data-scientist - Format data for fine-tuning
    3. Layer freezing - ml-engineer - Freeze early layers, unfreeze later layers
    4. Learning rate schedule - ml-engineer - Use learning rate warmup and decay
    5. Fine-tuning - ml-engineer - Train with small LR, monitor validation loss
    6. Early stopping - ml-engineer - Stop when validation loss plateaus
    7. Checkpoint saving - ml-engineer - Save best model based on validation metric
    8. Comparison - data-scientist - Compare fine-tuned vs. base model performance
  • Tags: ml, fine-tuning, transfer-learning, deep-learning

10. rag-pipeline-setup

  • Description: Build end-to-end Retrieval-Augmented Generation pipeline including document ingestion, chunking, embedding, vector store, retrieval, and LLM integration.
  • Trigger: /setup-rag or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: ml-engineer, backend-architect, testing-specialist
    • Commands: /ingest-documents, /create-embeddings, /setup-vector-store
  • Steps:
    1. Document ingestion - ml-engineer - Load PDFs, docs, web pages
    2. Text chunking - ml-engineer - Split into overlapping chunks (512-1024 tokens)
    3. Embedding generation - ml-engineer - Use OpenAI/Cohere/sentence-transformers
    4. Vector store setup - backend-architect - Configure Pinecone/Weaviate/ChromaDB
    5. Indexing - ml-engineer - Store embeddings with metadata in vector DB
    6. Retrieval testing - ml-engineer - Test semantic search with sample queries
    7. LLM integration - ml-engineer - Combine retrieved context with LLM (GPT-4, Claude)
    8. End-to-end testing - testing-specialist - Verify accuracy of generated answers
  • Tags: ml, rag, llm, vector-search, nlp
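
Step 2's overlapping chunker is simple to sketch. The version below counts list elements rather than real tokens; a production pipeline would chunk on tokenizer output (e.g. tiktoken counts) and often on sentence boundaries.

```python
def chunk_tokens(tokens, chunk_size=512, overlap=64):
    """Split a token list into chunks that overlap by `overlap` tokens."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break  # final chunk already reached the end of the document
    return chunks

chunks = chunk_tokens(list(range(1000)), chunk_size=300, overlap=50)
```

The overlap means a fact that straddles a chunk boundary still appears whole in at least one chunk, at the cost of some duplicated embeddings.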

Data Engineering

1. etl-pipeline-creation

  • Description: Design and implement Extract-Transform-Load pipeline with error handling, incremental loading, idempotency, and monitoring for batch data processing.
  • Trigger: /create-etl or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: data-engineer, backend-architect, testing-specialist
    • Commands: /extract-data, /transform-data, /load-data
  • Steps:
    1. Source analysis - data-engineer - Identify data sources (databases, APIs, files)
    2. Extract logic - data-engineer - Implement extraction with pagination, rate limiting
    3. Transform logic - data-engineer - Data cleaning, type conversion, business rules
    4. Load strategy - backend-architect - Choose full/incremental, upsert/replace
    5. Error handling - data-engineer - Implement retry logic, dead letter queue
    6. Idempotency - data-engineer - Ensure rerunnable without duplicates
    7. Testing - testing-specialist - Unit tests for each stage, integration tests
    8. Monitoring - data-engineer - Log progress, errors, row counts, duration
  • Tags: data-engineering, etl, pipeline, batch-processing
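
The idempotency requirement in step 6 is usually met by upserting on a natural key, so a rerun overwrites rather than duplicates. A minimal sketch with SQLite's `ON CONFLICT` clause (the `orders` table is illustrative):

```python
import sqlite3

def load_rows(conn, rows):
    """Upsert rows keyed on a natural key so reruns create no duplicates."""
    conn.executemany(
        """INSERT INTO orders (order_id, amount) VALUES (?, ?)
           ON CONFLICT(order_id) DO UPDATE SET amount = excluded.amount""",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT PRIMARY KEY, amount REAL)")
batch = [("A-1", 10.0), ("A-2", 25.5)]
load_rows(conn, batch)
load_rows(conn, batch)  # rerun of the same batch: row count is unchanged
```

The same pattern (`INSERT ... ON CONFLICT DO UPDATE`) exists in PostgreSQL; other warehouses use `MERGE`.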

2. data-quality-checks

  • Description: Implement comprehensive data quality validation including schema validation, null checks, range checks, uniqueness constraints, and referential integrity.
  • Trigger: /check-data-quality or scheduled
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: data-engineer, testing-specialist
    • Commands: /validate-schema, /check-constraints, /generate-report
  • Steps:
    1. Schema validation - data-engineer - Check column names, data types, order
    2. Completeness checks - data-engineer - Identify null values, missing records
    3. Uniqueness checks - data-engineer - Validate primary keys, unique constraints
    4. Range checks - data-engineer - Validate numeric ranges, date ranges
    5. Format checks - data-engineer - Regex validation for emails, phone numbers, etc.
    6. Referential integrity - data-engineer - Check foreign key relationships
    7. Statistical profiling - data-engineer - Compute min, max, mean, stddev, percentiles
    8. Report generation - testing-specialist - Generate data quality scorecard
  • Tags: data-engineering, data-quality, validation, testing
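
Steps 2, 4, and 5 can be sketched as a single pass over records that emits an issue list; frameworks such as Great Expectations generalize this. The field names and thresholds below are illustrative, not prescribed by the workflow.

```python
import re

def quality_report(rows):
    """Run null, range, and format checks over dict records."""
    issues = []
    email_re = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
    for i, row in enumerate(rows):
        if row.get("id") is None:
            issues.append((i, "id", "null"))
        age = row.get("age")
        if age is not None and not 0 <= age <= 130:
            issues.append((i, "age", "out_of_range"))
        email = row.get("email")
        if email and not email_re.match(email):
            issues.append((i, "email", "bad_format"))
    return issues

rows = [
    {"id": 1, "age": 34, "email": "a@example.com"},
    {"id": None, "age": 200, "email": "not-an-email"},
]
issues = quality_report(rows)
```

The issue tuples feed naturally into the step-8 scorecard: group by column and rule, then report counts per check.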

3. schema-management

  • Description: Database schema version control, migration generation, rollback capability, and schema documentation for evolving data models.
  • Trigger: /manage-schema or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: database-architect, data-engineer
    • Commands: /generate-migration, /apply-migration, /rollback-migration
  • Steps:
    1. Schema design - database-architect - Design new tables, columns, indexes
    2. Migration generation - data-engineer - Generate SQL migration (Alembic, Flyway, Liquibase)
    3. Migration review - database-architect - Review for performance, safety
    4. Backward compatibility - database-architect - Ensure safe migration (add before remove)
    5. Testing - data-engineer - Test migration on copy of production data
    6. Migration application - data-engineer - Apply migration with transaction
    7. Rollback testing - data-engineer - Verify rollback script works
    8. Documentation - database-architect - Update schema docs, ERD diagrams
  • Tags: data-engineering, schema, migration, database

4. data-migration

  • Description: Migrate data between systems/databases including extraction, transformation, validation, incremental sync, and cutover planning.
  • Trigger: /migrate-data or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: data-engineer, database-architect, testing-specialist
    • Commands: /extract-source, /transform-data, /validate-migration
  • Steps:
    1. Source assessment - data-engineer - Analyze source schema, volume, constraints
    2. Target design - database-architect - Design target schema, mappings
    3. Migration strategy - database-architect - Choose big-bang vs. phased approach
    4. Extract logic - data-engineer - Export from source with checkpointing
    5. Transform logic - data-engineer - Map source to target schema
    6. Load logic - data-engineer - Bulk load with batch processing
    7. Validation - testing-specialist - Row count, checksum, sampling validation
    8. Cutover plan - database-architect - Document cutover steps, rollback, downtime
  • Tags: data-engineering, migration, database, etl

5. api-data-integration

  • Description: Integrate external API as data source including authentication, pagination, rate limiting, error handling, and incremental sync.
  • Trigger: /integrate-api or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: backend-architect, data-engineer
    • Commands: /configure-api-client, /sync-api-data, /handle-errors
  • Steps:
    1. API exploration - backend-architect - Review API docs, authentication, endpoints
    2. Authentication setup - backend-architect - Implement OAuth, API key, JWT
    3. Client implementation - backend-architect - Create API client with retry logic
    4. Pagination handling - data-engineer - Implement cursor/offset pagination
    5. Rate limiting - backend-architect - Respect API rate limits with backoff
    6. Incremental sync - data-engineer - Track last sync timestamp, fetch new records
    7. Error handling - data-engineer - Handle 4xx/5xx errors, network failures
    8. Testing - backend-architect - Mock API responses, test edge cases
  • Tags: data-engineering, api, integration, etl
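
Step 4's cursor pagination loop has a standard shape: request a page, follow the returned cursor, stop when it is exhausted. The sketch below uses a stand-in page function rather than a real HTTP client, and the `items`/`next_cursor` response keys are an assumed API shape.

```python
def fetch_all(fetch_page, page_size=100):
    """Walk a cursor-paginated API until the cursor is exhausted."""
    records, cursor = [], None
    while True:
        page = fetch_page(cursor=cursor, limit=page_size)
        records.extend(page["items"])
        cursor = page.get("next_cursor")
        if cursor is None:
            break
    return records

# Stand-in for a real API client: serves 250 items in cursor pages.
DATA = list(range(250))
def fake_page(cursor, limit):
    start = cursor or 0
    items = DATA[start:start + limit]
    nxt = start + limit if start + limit < len(DATA) else None
    return {"items": items, "next_cursor": nxt}

records = fetch_all(fake_page, page_size=100)
```

A real client would add the retry and backoff logic from steps 3 and 5 inside `fetch_page`, keeping the pagination loop unchanged.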

6. data-warehouse-management

  • Description: Manage data warehouse including star/snowflake schema design, fact/dimension tables, SCD handling, and OLAP optimization.
  • Trigger: /manage-warehouse or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: database-architect, data-engineer
    • Commands: /design-schema, /create-tables, /optimize-queries
  • Steps:
    1. Dimensional modeling - database-architect - Identify facts, dimensions, measures
    2. Schema design - database-architect - Design star or snowflake schema
    3. SCD strategy - database-architect - Choose SCD Type 1, 2, or 3 for dimensions
    4. Fact table design - database-architect - Additive, semi-additive, non-additive measures
    5. Dimension table design - database-architect - Slowly changing dimensions
    6. Indexing strategy - database-architect - Bitmap indexes, partitioning
    7. Aggregation tables - data-engineer - Pre-aggregate for performance
    8. Query optimization - database-architect - Optimize common OLAP queries
  • Tags: data-engineering, data-warehouse, olap, dimensional-modeling

7. real-time-streaming-pipeline

  • Description: Build real-time data streaming pipeline using Kafka/Kinesis including producers, consumers, stream processing, and exactly-once semantics.
  • Trigger: /setup-streaming or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: data-engineer, backend-architect, devops-engineer
    • Commands: /setup-kafka, /create-producer, /create-consumer
  • Steps:
    1. Streaming platform setup - devops-engineer - Deploy Kafka/Kinesis cluster
    2. Topic/stream creation - data-engineer - Create topics with partitions
    3. Producer implementation - backend-architect - Implement event producers
    4. Serialization - data-engineer - Choose Avro, Protobuf, JSON with schema registry
    5. Consumer implementation - backend-architect - Implement consumer groups
    6. Stream processing - data-engineer - Kafka Streams/Flink for transformations
    7. Exactly-once semantics - data-engineer - Implement idempotent producers, transactions
    8. Monitoring - devops-engineer - Monitor lag, throughput, error rates
  • Tags: data-engineering, streaming, kafka, real-time

8. data-governance-setup

  • Description: Implement data governance framework including data catalog, lineage tracking, access control, PII detection, and compliance policies.
  • Trigger: /setup-governance or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: data-engineer, security-specialist, database-architect
    • Commands: /create-catalog, /track-lineage, /configure-access
  • Steps:
    1. Data catalog - data-engineer - Deploy Amundsen/DataHub/Alation
    2. Metadata management - data-engineer - Document tables, columns, owners, descriptions
    3. Lineage tracking - data-engineer - Track data flow from source to consumption
    4. PII detection - security-specialist - Scan for sensitive data (SSN, credit cards)
    5. Access control - security-specialist - Implement RBAC, column-level security
    6. Data classification - data-engineer - Tag data as public, internal, confidential
    7. Compliance policies - security-specialist - GDPR, HIPAA, CCPA compliance
    8. Audit logging - security-specialist - Log all data access and changes
  • Tags: data-engineering, governance, compliance, security

9. backup-and-recovery

  • Description: Implement automated backup and disaster recovery including full/incremental backups, point-in-time recovery, backup testing, and restoration procedures.
  • Trigger: /setup-backup or scheduled
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: devops-engineer, database-architect, security-specialist
    • Commands: /configure-backup, /test-restore, /schedule-backup
  • Steps:
    1. Backup strategy - database-architect - Define RPO/RTO requirements
    2. Full backup - devops-engineer - Schedule daily/weekly full backups
    3. Incremental backup - devops-engineer - Schedule hourly/daily incremental backups
    4. Backup storage - devops-engineer - Store backups in S3/GCS with versioning
    5. Encryption - security-specialist - Encrypt backups at rest and in transit
    6. Retention policy - devops-engineer - Define retention (daily 7d, weekly 30d, monthly 1y)
    7. Restore testing - devops-engineer - Monthly restore test to verify backups
    8. Documentation - devops-engineer - Document backup and restore procedures
  • Tags: data-engineering, backup, disaster-recovery, devops

10. data-performance-optimization

  • Description: Optimize data pipeline and query performance including indexing, partitioning, caching, query tuning, and infrastructure scaling.
  • Trigger: /optimize-data-performance or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: database-architect, data-engineer, devops-engineer
    • Commands: /analyze-queries, /create-indexes, /partition-tables
  • Steps:
    1. Performance profiling - database-architect - Identify slow queries, bottlenecks
    2. Query analysis - database-architect - Analyze execution plans (EXPLAIN)
    3. Index optimization - database-architect - Create, modify, or remove indexes
    4. Partitioning - database-architect - Partition large tables by date/range
    5. Materialized views - database-architect - Pre-compute expensive aggregations
    6. Caching layer - data-engineer - Implement Redis/Memcached for hot data
    7. Query tuning - database-architect - Rewrite inefficient queries
    8. Infrastructure scaling - devops-engineer - Vertical/horizontal scaling if needed
  • Tags: data-engineering, performance, optimization, database

Automation & Integration

1. workflow-automation-setup

  • Description: Design and implement business process automation including workflow orchestration, task scheduling, dependency management, and error recovery.
  • Trigger: /automate-workflow or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: automation-specialist, backend-architect, testing-specialist
    • Commands: /create-workflow, /schedule-tasks, /setup-orchestration
  • Steps:
    1. Process mapping - automation-specialist - Document current manual process
    2. Workflow design - automation-specialist - Design DAG with tasks and dependencies
    3. Orchestration setup - backend-architect - Configure Airflow/Prefect/Temporal
    4. Task implementation - backend-architect - Implement each task as function/script
    5. Dependency management - automation-specialist - Define task dependencies, triggers
    6. Error handling - backend-architect - Implement retry logic, alerting
    7. Testing - testing-specialist - Test workflow end-to-end with mock data
    8. Monitoring - automation-specialist - Track execution status, duration, failures
  • Tags: automation, workflow, orchestration, airflow

2. api-integration-framework

  • Description: Build reusable API integration framework with authentication, request/response handling, rate limiting, circuit breaker, and monitoring.
  • Trigger: /build-api-framework or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: backend-architect, automation-specialist
    • Commands: /create-api-client, /setup-retry, /configure-circuit-breaker
  • Steps:
    1. Client abstraction - backend-architect - Create base API client class
    2. Authentication - backend-architect - Support OAuth, API key, JWT, basic auth
    3. Request builder - backend-architect - Fluent interface for building requests
    4. Response parsing - backend-architect - Parse JSON/XML, handle errors
    5. Retry logic - automation-specialist - Exponential backoff, max retries
    6. Circuit breaker - automation-specialist - Prevent cascading failures
    7. Rate limiting - backend-architect - Client-side rate limiter
    8. Logging/monitoring - automation-specialist - Log requests, responses, errors
  • Tags: automation, api, integration, framework
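
Step 6's circuit breaker can be sketched as a small state machine: count consecutive failures, refuse calls once a threshold is reached, and allow a trial call after a cooldown (the "half-open" state). Thresholds below are illustrative; production systems typically use a library such as `pybreaker` or resilience features of the HTTP client.

```python
import time

class CircuitBreaker:
    """Open after `threshold` consecutive failures; half-open after cooldown."""

    def __init__(self, threshold=3, reset_after=30.0):
        self.threshold, self.reset_after = threshold, reset_after
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open")
            self.opened_at = None  # half-open: allow one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

While open, callers fail fast instead of queueing behind a dead dependency, which is what prevents the cascading failures mentioned above.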

3. webhook-event-handler

  • Description: Implement webhook receiver with signature verification, event processing, idempotency, queuing, and failure recovery.
  • Trigger: /setup-webhook or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: backend-architect, security-specialist
    • Commands: /create-endpoint, /verify-signature, /process-event
  • Steps:
    1. Endpoint creation - backend-architect - Create POST endpoint for webhook
    2. Signature verification - security-specialist - Verify HMAC/JWT signature
    3. Event parsing - backend-architect - Parse webhook payload
    4. Idempotency - backend-architect - Use event ID to prevent duplicate processing
    5. Queue integration - backend-architect - Push to queue (SQS, RabbitMQ) for async processing
    6. Event processing - backend-architect - Implement business logic for event types
    7. Error handling - backend-architect - Return 200, handle errors asynchronously
    8. Monitoring - automation-specialist - Track webhook deliveries, processing time, errors
  • Tags: automation, webhook, event-driven, integration
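
Step 2's HMAC verification is short but easy to get wrong: the comparison must be constant-time to avoid timing attacks. A minimal sketch (the secret and payload are illustrative; real providers each define their own signature header format):

```python
import hashlib
import hmac

def verify_signature(secret, payload, received_sig):
    """Constant-time check of an HMAC-SHA256 webhook signature."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_sig)

secret = b"whsec_demo"  # hypothetical shared secret from the provider
payload = b'{"event": "order.created", "id": "evt_1"}'
sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
ok = verify_signature(secret, payload, sig)
```

Verify against the raw request body bytes, before any JSON parsing, since re-serialization can change whitespace and break the signature.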

4. scheduled-job-orchestration

  • Description: Set up cron-based or interval-based job scheduling with dependency management, concurrency control, and execution history.
  • Trigger: /schedule-jobs or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: devops-engineer, automation-specialist
    • Commands: /create-schedule, /configure-concurrency, /monitor-jobs
  • Steps:
    1. Job inventory - automation-specialist - List all jobs, schedules, dependencies
    2. Scheduler selection - devops-engineer - Choose cron, Airflow, Celery Beat, APScheduler
    3. Schedule definition - automation-specialist - Define cron expressions or intervals
    4. Dependency management - automation-specialist - Define job dependencies (job B after job A)
    5. Concurrency control - devops-engineer - Prevent overlapping executions
    6. Execution logging - automation-specialist - Log start time, end time, status, output
    7. Alerting - devops-engineer - Alert on failure, long duration, missed schedule
    8. History retention - devops-engineer - Store execution history for 90 days
  • Tags: automation, scheduling, cron, orchestration


5. error-handling-framework

  • Description: Build comprehensive error handling system with retry strategies, circuit breakers, dead letter queues, and error analytics.
  • Trigger: /setup-error-handling or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: backend-architect, automation-specialist
    • Commands: /configure-retry, /setup-dlq, /track-errors
  • Steps:
    1. Error classification - backend-architect - Classify errors (transient, permanent, user)
    2. Retry strategy - automation-specialist - Exponential backoff, max attempts, jitter
    3. Circuit breaker - automation-specialist - Open circuit after threshold, half-open retry
    4. Dead letter queue - backend-architect - Route failed messages to DLQ
    5. Error logging - backend-architect - Structured logging with context
    6. Alerting - automation-specialist - Alert on error rate spikes, circuit open
    7. Error analytics - automation-specialist - Track error trends, common failures
    8. Recovery procedures - automation-specialist - Document manual intervention steps
  • Tags: automation, error-handling, reliability, resilience
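
Step 2's retry policy (exponential backoff with jitter, capped delay, bounded attempts) can be sketched as a small wrapper; libraries such as `tenacity` provide the same behavior declaratively. The demo injects a no-op `sleep` so it runs instantly.

```python
import random
import time

def retry(fn, max_attempts=5, base_delay=0.5, cap=30.0, sleep=time.sleep):
    """Retry transient failures with exponential backoff and full jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # treat as permanent after exhausting retries
            # full jitter: uniform delay up to the capped exponential bound
            sleep(random.uniform(0, min(cap, base_delay * 2 ** attempt)))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = retry(flaky, sleep=lambda _: None)  # no real sleeping in the demo
```

Jitter matters when many clients fail at once: without it they all retry on the same schedule and re-overload the dependency in synchronized waves.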

6. notification-system-setup

  • Description: Implement multi-channel notification system (email, SMS, Slack, webhook) with templating, scheduling, and delivery tracking.
  • Trigger: /setup-notifications or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: backend-architect, automation-specialist, testing-specialist
    • Commands: /configure-channels, /create-templates, /send-notification
  • Steps:
    1. Channel setup - backend-architect - Configure email (SMTP), SMS (Twilio), Slack (webhook)
    2. Template system - backend-architect - Create reusable templates with variables
    3. Notification queue - backend-architect - Queue notifications for async delivery
    4. Delivery logic - backend-architect - Implement send logic for each channel
    5. Retry logic - automation-specialist - Retry failed deliveries
    6. Delivery tracking - automation-specialist - Track sent, delivered, failed, bounced
    7. User preferences - backend-architect - Allow users to configure notification preferences
    8. Testing - testing-specialist - Test each channel with sample notifications
  • Tags: automation, notification, email, sms, integration
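Step 2's template system can be sketched with the standard library's `string.Template`; the template name and variables here are hypothetical:

```python
from string import Template

TEMPLATES = {
    "order_shipped": Template(
        "Hi $name, your order $order_id shipped on $ship_date."
    ),
}

def render(template_name, **variables):
    """Render a named template.

    `substitute` raises KeyError if any placeholder is missing, which
    surfaces broken notifications at send time rather than delivering
    half-filled messages.
    """
    return TEMPLATES[template_name].substitute(**variables)

msg = render(
    "order_shipped", name="Ada", order_id="A-1001", ship_date="2025-12-12"
)
```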

7. multi-system-sync

  • Description: Synchronize data across multiple systems with conflict resolution, eventual consistency, change detection, and sync monitoring.
  • Trigger: /sync-systems or scheduled
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: data-engineer, backend-architect, automation-specialist
    • Commands: /detect-changes, /resolve-conflicts, /sync-data
  • Steps:
    1. Sync strategy - data-engineer - Define master source, sync direction, frequency
    2. Change detection - data-engineer - Track changes via timestamps, version fields, CDC
    3. Delta extraction - data-engineer - Extract only changed records
    4. Conflict detection - data-engineer - Detect conflicting updates in same record
    5. Conflict resolution - backend-architect - Apply resolution rules (latest wins, manual review)
    6. Sync execution - automation-specialist - Apply changes to target systems
    7. Validation - data-engineer - Verify record counts, checksums, sampling
    8. Monitoring - automation-specialist - Track sync lag, failures, conflicts
  • Tags: automation, sync, integration, data-engineering
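Steps 3 and 5 can be sketched with timestamp-based delta extraction and "latest wins" resolution. The `id`/`updated_at` field names are assumptions for illustration; real systems often use version fields or CDC instead:

```python
from datetime import datetime, timezone

def extract_delta(records, last_sync):
    """Keep only records modified since the previous sync run."""
    return [r for r in records if r["updated_at"] > last_sync]

def resolve_latest_wins(source_rec, target_rec):
    """'Latest wins' conflict resolution: keep the newer version."""
    if source_rec["updated_at"] >= target_rec["updated_at"]:
        return source_rec
    return target_rec

last_sync = datetime(2025, 12, 11, tzinfo=timezone.utc)
records = [
    {"id": 1, "updated_at": datetime(2025, 12, 10, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2025, 12, 12, tzinfo=timezone.utc)},
]
delta = extract_delta(records, last_sync)  # only record 2 changed
```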

8. batch-processing-pipeline

  • Description: Implement large-scale batch processing with chunking, parallel execution, checkpointing, and progress tracking.
  • Trigger: /process-batch or scheduled
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: data-engineer, backend-architect, automation-specialist
    • Commands: /chunk-data, /process-parallel, /track-progress
  • Steps:
    1. Input preparation - data-engineer - Load batch data, validate format
    2. Chunking - data-engineer - Split data into chunks (1000-10000 records)
    3. Parallel execution - backend-architect - Process chunks in parallel (threads/processes)
    4. Processing logic - backend-architect - Apply transformations, business rules
    5. Error handling - backend-architect - Isolate failed records, continue processing
    6. Checkpointing - data-engineer - Save progress after each chunk
    7. Aggregation - data-engineer - Aggregate results from all chunks
    8. Progress tracking - automation-specialist - Track processed/failed/remaining records
  • Tags: automation, batch-processing, parallel, etl
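Steps 2, 3, 5, and 7 can be sketched with chunked parallel processing via `concurrent.futures`; the doubling transformation is a placeholder business rule:

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(items, size):
    """Split input into fixed-size chunks (step 2)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process_chunk(chunk):
    """Apply the transformation, isolating failed records (step 5)."""
    ok, failed = [], []
    for record in chunk:
        try:
            ok.append(record * 2)  # stand-in for real business rules
        except Exception:
            failed.append(record)  # continue past bad records
    return ok, failed

def run_batch(items, chunk_size=1000, workers=4):
    results, failures = [], []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves chunk order, simplifying aggregation (step 7)
        for ok, failed in pool.map(process_chunk, chunked(items, chunk_size)):
            results.extend(ok)
            failures.extend(failed)
            # a real pipeline would persist a checkpoint here (step 6)
    return results, failures

results, failures = run_batch(list(range(10)), chunk_size=3, workers=2)
```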

9. queue-management-system

  • Description: Set up message queue system with topic-based routing, consumer groups, priority queues, and visibility timeout management.
  • Trigger: /setup-queue or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: backend-architect, devops-engineer
    • Commands: /create-queue, /configure-consumers, /monitor-queue
  • Steps:
    1. Queue selection - backend-architect - Choose RabbitMQ, SQS, Redis, or Kafka
    2. Queue creation - devops-engineer - Create queues with appropriate configuration
    3. Topic routing - backend-architect - Implement topic-based message routing
    4. Priority queues - backend-architect - Separate queues or priority field
    5. Consumer groups - backend-architect - Implement consumer groups for scaling
    6. Visibility timeout - backend-architect - Set appropriate timeout for message processing
    7. Dead letter queue - backend-architect - Route failed messages after max retries
    8. Monitoring - devops-engineer - Track queue depth, processing time, error rate
  • Tags: automation, queue, messaging, rabbitmq, sqs
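Step 4's single-queue priority variant can be sketched in-process with `queue.PriorityQueue`; a broker-backed deployment would use the equivalent feature of RabbitMQ or SQS instead:

```python
import itertools
import queue

# Tie-breaker counter keeps FIFO order among messages of equal priority
_counter = itertools.count()

def enqueue(q, message, priority=10):
    """Lower number = higher priority."""
    q.put((priority, next(_counter), message))

def dequeue(q):
    priority, _, message = q.get()
    return message

q = queue.PriorityQueue()
enqueue(q, "routine-report")
enqueue(q, "payment-failed", priority=1)  # jumps ahead of routine work
```

Without the counter, two messages with equal priority would be compared by payload, which fails for non-comparable types and scrambles FIFO order.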

10. monitoring-automation-setup

  • Description: Implement automated monitoring with health checks, metric collection, alerting rules, and incident automation.
  • Trigger: /setup-monitoring or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: devops-engineer, automation-specialist
    • Commands: /configure-health-checks, /setup-alerts, /create-dashboards
  • Steps:
    1. Health check implementation - devops-engineer - Implement /health endpoints
    2. Metric collection - devops-engineer - Instrument code with Prometheus, StatsD
    3. Log aggregation - devops-engineer - Configure ELK, Loki, or CloudWatch Logs
    4. Alerting rules - automation-specialist - Define alert conditions (error rate, latency, uptime)
    5. Alert routing - automation-specialist - Configure PagerDuty, Opsgenie, or email
    6. Dashboard creation - devops-engineer - Build Grafana/Datadog dashboards
    7. Incident automation - automation-specialist - Auto-restart, auto-scale, runbook automation
    8. Testing - devops-engineer - Test alerts by simulating failures
  • Tags: automation, monitoring, observability, alerting

Analytics & Reporting

1. dashboard-creation

  • Description: Build interactive dashboards with KPIs, charts, filters, drill-down capability, and automatic refresh using BI tools.
  • Trigger: /create-dashboard or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: recommended, review: recommended
  • Dependencies:
    • Agents: data-analyst, data-engineer
    • Commands: /connect-data-source, /design-visualizations, /publish-dashboard
  • Steps:
    1. Requirements gathering - data-analyst - Identify KPIs, metrics, audience
    2. Data source connection - data-engineer - Connect to database, data warehouse, API
    3. Data modeling - data-engineer - Create views, aggregations for dashboard
    4. Chart selection - data-analyst - Choose appropriate chart types (line, bar, pie, scatter)
    5. Dashboard design - data-analyst - Arrange charts, add filters, date pickers
    6. Drill-down setup - data-analyst - Enable drill-down from summary to detail
    7. Auto-refresh - data-engineer - Configure automatic data refresh schedule
    8. Publishing - data-analyst - Publish dashboard, set permissions
  • Tags: analytics, dashboard, bi, visualization

2. kpi-tracking-system

  • Description: Set up automated KPI tracking with data collection, calculation, trend analysis, target monitoring, and alerting on deviations.
  • Trigger: /track-kpis or scheduled
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: data-analyst, data-engineer
    • Commands: /define-kpis, /calculate-metrics, /setup-alerts
  • Steps:
    1. KPI definition - data-analyst - Define KPIs with formulas, data sources
    2. Data collection - data-engineer - Extract required data for KPI calculation
    3. Calculation logic - data-engineer - Implement KPI calculation (daily, weekly, monthly)
    4. Historical tracking - data-engineer - Store KPI values over time
    5. Target setting - data-analyst - Define target values, acceptable ranges
    6. Trend analysis - data-analyst - Calculate moving averages, trend lines
    7. Alerting - data-engineer - Alert when KPI crosses threshold
    8. Reporting - data-analyst - Generate KPI reports, dashboards
  • Tags: analytics, kpi, metrics, tracking
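Steps 6 and 7 can be sketched as a trailing moving average plus a deviation check; the signup numbers and 10% tolerance are hypothetical:

```python
def moving_average(values, window):
    """Trailing moving average for trend analysis (step 6)."""
    if len(values) < window:
        return None
    return sum(values[-window:]) / window

def breaches_target(current, target, tolerance=0.10):
    """Alert when the KPI deviates more than `tolerance` from target (step 7)."""
    return abs(current - target) / target > tolerance

daily_signups = [120, 130, 125, 90, 80, 70, 65]
ma7 = moving_average(daily_signups, 7)    # 7-day trailing average
alert = breaches_target(ma7, target=120)  # well below target -> alert
```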

3. ad-hoc-analysis-workflow

  • Description: Enable data analysts to perform ad-hoc analysis with data access, SQL/Python notebooks, visualization tools, and sharing capabilities.
  • Trigger: /ad-hoc-analysis or manual
  • Complexity: simple
  • Duration: 5-15m
  • QA Integration: validation: recommended, review: none
  • Dependencies:
    • Agents: data-analyst
    • Commands: /launch-notebook, /query-data, /export-results
  • Steps:
    1. Tool selection - data-analyst - Choose Jupyter, Databricks, Mode, or SQL client
    2. Data access - data-analyst - Connect to data sources with read-only credentials
    3. Exploratory analysis - data-analyst - Write SQL queries or Python/R code
    4. Visualization - data-analyst - Create charts using matplotlib, plotly, seaborn
    5. Interpretation - data-analyst - Analyze results, identify insights
    6. Documentation - data-analyst - Document findings, methodology
    7. Sharing - data-analyst - Export to PDF, share notebook, or present
    8. Iteration - data-analyst - Refine analysis based on feedback
  • Tags: analytics, ad-hoc, analysis, exploration

4. automated-reporting

  • Description: Automate report generation and distribution with scheduling, templating, data refresh, export to multiple formats, and email delivery.
  • Trigger: /automate-report or scheduled
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: data-analyst, automation-specialist
    • Commands: /create-report-template, /schedule-report, /distribute-report
  • Steps:
    1. Report design - data-analyst - Design report layout, sections, metrics
    2. Template creation - data-analyst - Create template in BI tool, Jupyter, or LaTeX
    3. Data query - data-analyst - Define SQL/API queries for report data
    4. Parameterization - automation-specialist - Add date ranges, filters as parameters
    5. Scheduling - automation-specialist - Set daily, weekly, monthly schedule
    6. Export - automation-specialist - Export to PDF, Excel, CSV
    7. Distribution - automation-specialist - Email to recipients, upload to shared drive
    8. Monitoring - automation-specialist - Track report generation, delivery success
  • Tags: analytics, reporting, automation, scheduling

5. data-visualization-library

  • Description: Build reusable visualization component library with consistent styling, interactivity, responsiveness, and accessibility.
  • Trigger: /create-viz-library or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: recommended, review: recommended
  • Dependencies:
    • Agents: data-analyst, frontend-react-typescript-expert
    • Commands: /create-components, /style-charts, /test-visualizations
  • Steps:
    1. Chart inventory - data-analyst - Identify commonly used chart types
    2. Library selection - frontend-react-typescript-expert - Choose D3, Chart.js, Recharts, Plotly
    3. Component creation - frontend-react-typescript-expert - Create React/Vue components for each chart
    4. Styling - frontend-react-typescript-expert - Apply consistent color scheme, fonts
    5. Interactivity - frontend-react-typescript-expert - Add tooltips, zoom, pan, click events
    6. Responsiveness - frontend-react-typescript-expert - Make charts responsive to screen size
    7. Accessibility - frontend-react-typescript-expert - Add ARIA labels, keyboard navigation
    8. Documentation - data-analyst - Document usage, props, examples
  • Tags: analytics, visualization, component-library, frontend

6. cohort-analysis

  • Description: Perform cohort analysis to understand user behavior over time including cohort definition, metric calculation, retention analysis, and visualization.
  • Trigger: /analyze-cohorts or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: data-analyst, data-engineer
    • Commands: /define-cohorts, /calculate-retention, /visualize-cohorts
  • Steps:
    1. Cohort definition - data-analyst - Define cohorts (by signup week, feature usage, etc.)
    2. Event data extraction - data-engineer - Extract user events from database
    3. Cohort assignment - data-analyst - Assign users to cohorts based on criteria
    4. Retention calculation - data-analyst - Calculate retention by cohort (Day 1, 7, 30, 90)
    5. Metric aggregation - data-engineer - Calculate revenue, engagement per cohort
    6. Cohort comparison - data-analyst - Compare cohorts to identify trends
    7. Visualization - data-analyst - Create cohort retention heatmap
    8. Insights - data-analyst - Identify high-value cohorts, retention drivers
  • Tags: analytics, cohort, retention, user-behavior
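Steps 3 and 4 can be sketched as building a retention matrix from raw activity events. The `(user_id, cohort_week, active_week)` schema is an assumption for illustration:

```python
from collections import defaultdict

def retention_matrix(events):
    """Build {cohort_week: {week_offset: retained_user_count}}.

    Counting distinct users per (cohort, offset) cell means repeat
    activity within a week is not double-counted.
    """
    cohorts = defaultdict(lambda: defaultdict(set))
    for user_id, cohort_week, active_week in events:
        offset = active_week - cohort_week
        if offset >= 0:
            cohorts[cohort_week][offset].add(user_id)
    return {
        cohort: {off: len(users) for off, users in sorted(cells.items())}
        for cohort, cells in sorted(cohorts.items())
    }

events = [
    ("u1", 0, 0), ("u2", 0, 0), ("u1", 0, 1),  # cohort week 0
    ("u3", 1, 1), ("u3", 1, 2),                # cohort week 1
]
matrix = retention_matrix(events)
```

Rendering each cell as a percentage of the cohort's week-0 count gives the heatmap described in step 7.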

7. funnel-analysis

  • Description: Analyze conversion funnels to identify drop-off points including funnel definition, step tracking, conversion calculation, and optimization recommendations.
  • Trigger: /analyze-funnel or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: data-analyst, data-engineer
    • Commands: /define-funnel, /track-conversions, /identify-dropoffs
  • Steps:
    1. Funnel definition - data-analyst - Define funnel steps (e.g., view → add to cart → checkout → purchase)
    2. Event tracking - data-engineer - Ensure events are tracked for each step
    3. Data extraction - data-engineer - Extract user journey data
    4. Conversion calculation - data-analyst - Calculate conversion rate for each step
    5. Drop-off analysis - data-analyst - Identify steps with highest drop-off
    6. Segmentation - data-analyst - Segment by user attributes (device, source, location)
    7. Visualization - data-analyst - Create funnel chart showing drop-offs
    8. Recommendations - data-analyst - Suggest improvements for low-converting steps
  • Tags: analytics, funnel, conversion, optimization
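Steps 4 and 5 can be sketched as step-to-step conversion rates over hypothetical counts for the view → add to cart → checkout → purchase funnel:

```python
def funnel_conversion(step_counts):
    """Step-to-step conversion rates; the lowest rate marks the leak."""
    rates = []
    for prev, curr in zip(step_counts, step_counts[1:]):
        rates.append(curr / prev if prev else 0.0)
    return rates

# Hypothetical counts: view -> add to cart -> checkout -> purchase
counts = [10_000, 2_500, 1_000, 800]
rates = funnel_conversion(counts)   # view->cart is the biggest drop-off
overall = counts[-1] / counts[0]    # end-to-end conversion
```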

8. attribution-modeling

  • Description: Build marketing attribution model to understand channel contribution including multi-touch attribution, model comparison, and ROI calculation.
  • Trigger: /model-attribution or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: data-analyst, data-scientist, data-engineer
    • Commands: /extract-touchpoints, /apply-model, /calculate-roi
  • Steps:
    1. Touchpoint extraction - data-engineer - Extract all user touchpoints (ads, emails, organic)
    2. Conversion tracking - data-engineer - Link touchpoints to conversions
    3. Model selection - data-scientist - Choose model (first-touch, last-touch, linear, time-decay, data-driven)
    4. Attribution calculation - data-scientist - Apply attribution model to assign credit
    5. Model comparison - data-scientist - Compare results across different models
    6. ROI calculation - data-analyst - Calculate ROI by channel
    7. Visualization - data-analyst - Visualize attribution results
    8. Recommendations - data-analyst - Optimize marketing spend based on attribution
  • Tags: analytics, attribution, marketing, roi
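Steps 3 and 4 can be sketched by comparing two of the simpler models on one hypothetical conversion path; data-driven models require considerably more machinery:

```python
def last_touch(touchpoints):
    """All credit to the final touchpoint before conversion."""
    return {touchpoints[-1]: 1.0}

def linear(touchpoints):
    """Equal credit to every touchpoint in the path."""
    share = 1.0 / len(touchpoints)
    credit = {}
    for channel in touchpoints:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

# Hypothetical path for one converting user
path = ["paid_search", "email", "organic", "email"]
by_last_touch = last_touch(path)  # email gets everything
by_linear = linear(path)          # credit split across all touches
```

Running both models over the same conversions (step 5) shows how sensitive channel ROI is to the attribution choice: here email's credit halves when moving from last-touch to linear.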

9. forecasting-pipeline

  • Description: Build time series forecasting pipeline including data preparation, model selection, training, validation, and automated forecast generation.
  • Trigger: /forecast or scheduled
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: data-scientist, data-engineer
    • Commands: /prepare-timeseries, /train-forecast-model, /generate-forecast
  • Steps:
    1. Data preparation - data-engineer - Extract historical time series data
    2. Stationarity check - data-scientist - Check for stationarity, apply differencing if needed
    3. Seasonality detection - data-scientist - Detect seasonal patterns, trends
    4. Model selection - data-scientist - Choose ARIMA, Prophet, LSTM, or XGBoost
    5. Train/test split - data-scientist - Time-based split (train on historical, test on recent)
    6. Model training - data-scientist - Train forecasting model
    7. Validation - data-scientist - Validate on test set, calculate MAPE, RMSE
    8. Forecast generation - data-scientist - Generate forecasts for next N periods
  • Tags: analytics, forecasting, time-series, prediction
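Step 7's validation metrics can be sketched directly; the actual/forecast series are hypothetical:

```python
import math

def mape(actual, forecast):
    """Mean absolute percentage error (undefined when actuals contain zero)."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error, in the same units as the series."""
    return math.sqrt(
        sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)
    )

actual = [100.0, 110.0, 120.0]
forecast = [90.0, 110.0, 132.0]
error_pct = mape(actual, forecast)   # scale-free, easy to communicate
error_abs = rmse(actual, forecast)   # penalizes large misses more heavily
```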

10. anomaly-detection-system

  • Description: Implement automated anomaly detection for metrics and KPIs using statistical methods, ML models, alerting, and root cause analysis.
  • Trigger: /detect-anomalies or scheduled
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: data-scientist, data-engineer, data-analyst
    • Commands: /train-anomaly-detector, /detect-anomalies, /alert-anomalies
  • Steps:
    1. Baseline creation - data-scientist - Calculate normal ranges from historical data
    2. Method selection - data-scientist - Choose statistical (Z-score, IQR) or ML (Isolation Forest, Autoencoders)
    3. Model training - data-scientist - Train anomaly detection model
    4. Real-time detection - data-engineer - Apply model to incoming data
    5. Anomaly scoring - data-scientist - Score anomalies by severity
    6. Alerting - data-engineer - Alert on high-severity anomalies
    7. Root cause analysis - data-analyst - Investigate potential causes
    8. Feedback loop - data-scientist - Incorporate feedback to reduce false positives
  • Tags: analytics, anomaly-detection, monitoring, ml
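Step 2's statistical option can be sketched with a Z-score detector; the metric history and 3-sigma threshold are illustrative defaults:

```python
import statistics

def zscore_anomalies(history, observations, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the
    historical mean; each hit carries its score for severity ranking
    (step 5)."""
    mean = statistics.fmean(history)
    std = statistics.pstdev(history)
    return [
        (x, abs(x - mean) / std)
        for x in observations
        if std and abs(x - mean) / std > threshold
    ]

history = [100, 102, 98, 101, 99, 100, 103, 97]
anomalies = zscore_anomalies(history, [101, 160])  # only 160 flagged
```

Z-scores assume roughly normal data; heavy-tailed or seasonal metrics are why the workflow also lists IQR and ML-based methods.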

Infrastructure & DevOps

1. server-provisioning

  • Description: Automate server provisioning with infrastructure-as-code, configuration management, security hardening, and idempotency.
  • Trigger: /provision-server or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: devops-engineer, security-specialist
    • Commands: /create-terraform, /apply-configuration, /verify-setup
  • Steps:
    1. Requirements definition - devops-engineer - Define OS, specs, network, storage
    2. IaC template - devops-engineer - Create Terraform/CloudFormation template
    3. Network setup - devops-engineer - Configure VPC, subnets, security groups
    4. Server creation - devops-engineer - Provision EC2/GCE/Azure VM
    5. Configuration management - devops-engineer - Apply Ansible/Chef/Puppet configuration
    6. Security hardening - security-specialist - Disable root login, configure firewall, install updates
    7. Verification - devops-engineer - Verify SSH access, services running
    8. Documentation - devops-engineer - Document access, credentials, configurations
  • Tags: infrastructure, devops, provisioning, iac

2. container-orchestration

  • Description: Deploy containerized applications with Kubernetes including cluster setup, deployment manifests, service discovery, and auto-scaling.
  • Trigger: /deploy-k8s or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: devops-engineer, backend-architect
    • Commands: /create-cluster, /deploy-app, /configure-scaling
  • Steps:
    1. Cluster provisioning - devops-engineer - Provision GKE/EKS/AKS cluster
    2. Namespace creation - devops-engineer - Create namespaces for environments
    3. Deployment manifest - backend-architect - Create Kubernetes Deployment YAML
    4. Service creation - backend-architect - Create Service for internal/external access
    5. ConfigMap/Secret - devops-engineer - Store configuration and secrets
    6. Ingress setup - devops-engineer - Configure Ingress for external traffic
    7. Auto-scaling - devops-engineer - Configure HPA based on CPU/memory
    8. Monitoring - devops-engineer - Deploy Prometheus, Grafana for monitoring
  • Tags: infrastructure, kubernetes, containers, orchestration

3. cicd-pipeline-setup

  • Description: Build CI/CD pipeline with automated testing, building, deployment, rollback capability, and deployment gates.
  • Trigger: /setup-cicd or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: devops-engineer, testing-specialist
    • Commands: /create-pipeline, /configure-stages, /setup-deployment
  • Steps:
    1. Pipeline design - devops-engineer - Define stages (test, build, deploy)
    2. CI configuration - devops-engineer - Configure GitHub Actions/GitLab CI/Jenkins
    3. Test stage - testing-specialist - Run unit, integration, e2e tests
    4. Build stage - devops-engineer - Build Docker image, tag with commit SHA
    5. Push to registry - devops-engineer - Push to Docker Hub/GCR/ECR
    6. Deploy to staging - devops-engineer - Auto-deploy to staging environment
    7. Manual approval - devops-engineer - Require approval for production deploy
    8. Deploy to production - devops-engineer - Blue-green or canary deployment
  • Tags: infrastructure, cicd, deployment, automation

4. monitoring-observability-stack

  • Description: Deploy full observability stack with metrics (Prometheus), logs (Loki), traces (Jaeger), and dashboards (Grafana).
  • Trigger: /setup-observability or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: devops-engineer
    • Commands: /deploy-prometheus, /deploy-loki, /deploy-jaeger, /create-dashboards
  • Steps:
    1. Prometheus deployment - devops-engineer - Deploy Prometheus for metrics
    2. Exporter setup - devops-engineer - Configure node exporter, app exporters
    3. Loki deployment - devops-engineer - Deploy Loki for log aggregation
    4. Promtail setup - devops-engineer - Configure Promtail to ship logs
    5. Jaeger deployment - devops-engineer - Deploy Jaeger for distributed tracing
    6. Application instrumentation - devops-engineer - Add OpenTelemetry to apps
    7. Grafana deployment - devops-engineer - Deploy Grafana for visualization
    8. Dashboard creation - devops-engineer - Create dashboards for RED metrics (rate, errors, duration)
  • Tags: infrastructure, monitoring, observability, metrics

5. log-aggregation-pipeline

  • Description: Set up centralized logging with log collection, parsing, indexing, searching, and retention policies.
  • Trigger: /setup-logging or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: recommended
  • Dependencies:
    • Agents: devops-engineer
    • Commands: /deploy-elk, /configure-shippers, /setup-retention
  • Steps:
    1. Stack selection - devops-engineer - Choose ELK, Loki, CloudWatch Logs
    2. Log shipper deployment - devops-engineer - Deploy Filebeat/Fluentd/Promtail
    3. Log parsing - devops-engineer - Configure grok patterns for parsing
    4. Index creation - devops-engineer - Create Elasticsearch indexes
    5. Retention policy - devops-engineer - Set retention (hot 7d, warm 30d, cold 90d)
    6. Search setup - devops-engineer - Configure Kibana for log searching
    7. Alerting - devops-engineer - Create alerts on error patterns
    8. Dashboard - devops-engineer - Build log volume, error rate dashboards
  • Tags: infrastructure, logging, elk, observability

6. backup-automation

  • Description: Automate infrastructure and database backups with scheduling, encryption, versioning, and restore testing.
  • Trigger: /automate-backups or scheduled
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: devops-engineer, database-architect, security-specialist
    • Commands: /configure-backup, /schedule-backup, /test-restore
  • Steps:
    1. Backup strategy - devops-engineer - Define RPO/RTO, full/incremental schedule
    2. Database backup - database-architect - Configure automated database backups
    3. File system backup - devops-engineer - Configure file system snapshots
    4. Encryption - security-specialist - Encrypt backups at rest
    5. Storage - devops-engineer - Store in S3/GCS with versioning
    6. Scheduling - devops-engineer - Set daily/weekly/monthly schedules
    7. Restore testing - devops-engineer - Monthly restore test
    8. Monitoring - devops-engineer - Alert on backup failures
  • Tags: infrastructure, backup, disaster-recovery, automation

7. scaling-policies

  • Description: Implement auto-scaling policies with horizontal and vertical scaling, load-based triggers, and cost optimization.
  • Trigger: /configure-scaling or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: devops-engineer, cloud-architect, testing-specialist
    • Commands: /setup-autoscaling, /configure-triggers, /test-scaling
  • Steps:
    1. Baseline metrics - devops-engineer - Establish normal CPU, memory, request load
    2. Scaling strategy - cloud-architect - Choose horizontal (add instances) vs. vertical (bigger instances)
    3. Trigger configuration - devops-engineer - Set CPU/memory thresholds (scale at 70%)
    4. Horizontal scaling - devops-engineer - Configure ASG/GCE MIG with min/max instances
    5. Vertical scaling - cloud-architect - Configure VPA for Kubernetes
    6. Cooldown periods - devops-engineer - Set cooldown to prevent flapping
    7. Load testing - testing-specialist - Test scaling behavior under load
    8. Cost optimization - cloud-architect - Right-size instances, use spot instances
  • Tags: infrastructure, scaling, autoscaling, optimization

8. security-hardening

  • Description: Apply security best practices to infrastructure including OS hardening, firewall configuration, secret management, and compliance scanning.
  • Trigger: /harden-security or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: security-specialist, devops-engineer
    • Commands: /audit-security, /apply-hardening, /scan-vulnerabilities
  • Steps:
    1. OS hardening - security-specialist - Disable unnecessary services, apply patches
    2. Firewall configuration - security-specialist - Configure iptables/Security Groups (deny all, allow specific)
    3. SSH hardening - security-specialist - Disable root login, use key-based auth, change port
    4. Secret management - security-specialist - Use Vault, AWS Secrets Manager, or GCP Secret Manager
    5. TLS/SSL - security-specialist - Configure HTTPS with valid certificates
    6. Intrusion detection - security-specialist - Deploy OSSEC, fail2ban
    7. Vulnerability scanning - security-specialist - Run Nessus, OpenVAS, Trivy
    8. Compliance - security-specialist - Verify CIS benchmarks, SOC 2 compliance
  • Tags: infrastructure, security, hardening, compliance

9. cost-optimization

  • Description: Analyze and optimize cloud infrastructure costs with rightsizing, reserved instances, spot instances, and resource cleanup.
  • Trigger: /optimize-costs or manual
  • Complexity: moderate
  • Duration: 15-30m
  • QA Integration: validation: recommended, review: recommended
  • Dependencies:
    • Agents: cloud-architect, devops-engineer
    • Commands: /analyze-costs, /rightsize-instances, /cleanup-resources
  • Steps:
    1. Cost analysis - cloud-architect - Analyze current spending by service, environment
    2. Rightsizing - cloud-architect - Identify over-provisioned instances
    3. Reserved instances - cloud-architect - Purchase RIs for predictable workloads (1-3 year)
    4. Spot instances - devops-engineer - Use spot instances for batch, non-critical workloads
    5. Storage optimization - cloud-architect - Move infrequently accessed data to cold storage
    6. Resource cleanup - devops-engineer - Delete unused instances, snapshots, load balancers
    7. Tagging - devops-engineer - Tag resources for cost allocation
    8. Budget alerts - devops-engineer - Set budget alerts in AWS/GCP
  • Tags: infrastructure, cost-optimization, cloud, finops

10. disaster-recovery-plan

  • Description: Implement comprehensive disaster recovery strategy with backup sites, failover procedures, RTO/RPO targets, and regular DR drills.
  • Trigger: /setup-disaster-recovery or manual
  • Complexity: complex
  • Duration: 30m+
  • QA Integration: validation: required, review: required
  • Dependencies:
    • Agents: cloud-architect, devops-engineer, database-architect
    • Commands: /configure-failover, /setup-replication, /test-dr
  • Steps:
    1. RTO/RPO definition - cloud-architect - Define recovery time and point objectives
    2. Backup site - cloud-architect - Provision standby environment (hot, warm, or cold)
    3. Data replication - database-architect - Configure cross-region database replication
    4. Failover mechanism - devops-engineer - Implement DNS failover, load balancer failover
    5. Runbook creation - devops-engineer - Document step-by-step recovery procedures
    6. Automated failover - devops-engineer - Configure automated failover for critical services
    7. DR testing - devops-engineer - Conduct quarterly DR drills
    8. Post-mortem - cloud-architect - Document lessons learned from drills
  • Tags: infrastructure, disaster-recovery, failover, business-continuity

Workflow Usage Guidelines

Complexity Ratings

  • Simple (5-15m): Single-step or few-step workflows with minimal dependencies
  • Moderate (15-30m): Multi-step workflows with some coordination required
  • Complex (30m+): Multi-agent workflows requiring careful orchestration

QA Integration

  • Validation Required: Automated validation must pass before completion
  • Validation Recommended: Manual validation suggested but not blocking
  • Review Required: Human review mandatory before production deployment
  • Review Recommended: Peer review suggested for quality assurance
  • Review None: No review needed for low-risk changes

Triggering Workflows

  • Command-based: Use /command-name in Claude Code
  • Manual: Invoke via Task tool with specific agent
  • Scheduled: Configure cron/Airflow for automated execution

Customizing Workflows

  1. Copy workflow definition as template
  2. Adjust steps to match your requirements
  3. Add/remove agents based on team availability
  4. Modify complexity/duration based on experience
  5. Document customizations in project-specific workflow docs

Integration with CODITECT Framework

Workflow → Agent Mapping

All workflows reference agents from the CODITECT agent catalog. Verify agent availability:

python3 scripts/update-component-activation.py list --type agent

Workflow → Command Mapping

Commands referenced in workflows are defined in commands/ directory. Activate as needed:

python3 scripts/update-component-activation.py activate command COMMAND_NAME

Workflow Execution Pattern

  1. User states a goal: "I need to [goal]"
  2. Consult WORKFLOW-DEFINITIONS.md to find the matching workflow
  3. Activate the required agents and commands
  4. Execute the step-by-step workflow
  5. Validate by applying the QA integration requirements
  6. Complete the workflow and document outcomes

Appendix: Workflow Categories

AI/ML Development (10 workflows)

Focus: End-to-end ML lifecycle from training to production monitoring

Data Engineering (10 workflows)

Focus: Data pipelines, quality, governance, and optimization

Automation & Integration (10 workflows)

Focus: System integration, workflow automation, and reliability

Analytics & Reporting (10 workflows)

Focus: Data analysis, visualization, and business intelligence

Infrastructure & DevOps (10 workflows)

Focus: Cloud infrastructure, deployment, monitoring, and reliability


Version History:

  • v1.0.0 (Dec 12, 2025) - Initial 50 workflow definitions

Maintained by: CODITECT Framework Team License: Proprietary - AZ1.AI INC