Project Management Skills

**Version:** 1.0
**Last Updated:** 2025-11-27
**Category:** Project Planning & Task Management
**Target Users:** AI agents, project managers, development teams


Overview

This document defines comprehensive skills for managing CODITECT projects using PROJECT-PLAN.md and TASKLIST-WITH-CHECKBOXES.md workflows. These skills enable AI agents to autonomously plan, track, and report on project progress across the 46-submodule ecosystem.


Table of Contents

  1. Skill: Project Plan Creation
  2. Skill: TASKLIST Management
  3. Skill: Progress Tracking
  4. Skill: Milestone Management
  5. Skill: Epic & Feature Planning
  6. Skill: Status Reporting
  7. Skill: YAML Metadata Management
  8. Skill: Cross-Project Coordination

Skill: Project Plan Creation

Description

Create comprehensive PROJECT-PLAN.md documents following CODITECT standards with executive summary, architecture overview, implementation phases, budget breakdown, and quality gates.

Inputs

  • Project Name (string): Official project name
  • Project Description (string): Brief purpose and scope
  • GitHub Repository (URL): Git repository location
  • Category (enum): cloud, core, dev, ops, gtm, market, legal, docs
  • Team (array): List of team members with roles
  • Budget (number, optional): Total project budget in USD
  • Timeline (object): Start date, end date, key milestones

Process

  1. Read Template

    # Use master PROJECT-PLAN.md as reference
    cat docs/project-management/PROJECT-PLAN.md
  2. Generate Executive Summary

    • Current status overview table
    • Key milestones table
    • Budget & investment table
    • Revenue targets (if applicable)
  3. Create Architecture Section

    • Purpose statement
    • Strategic context
    • System architecture diagram (Mermaid or reference to diagrams/)
    • Technology stack
  4. Define Implementation Phases

    • Phase 0: Planning & Setup
    • Phase 1: Core Development
    • Phase 2: Testing & QA
    • Phase 3: Deployment & Launch
    • Phase 4+: Post-launch iterations
  5. Establish Quality Gates

    • Per-phase acceptance criteria
    • Testing requirements
    • Documentation requirements
    • Security review checkpoints
  6. Create Risk Management Section

    • Identified risks with severity
    • Mitigation strategies
    • Contingency plans
  7. Define Success Metrics

    • KPIs for project success
    • Performance benchmarks
    • User adoption targets

Outputs

  • File: PROJECT-PLAN.md (typically 40-80KB)
  • Format: Markdown with YAML frontmatter (optional)
  • Location: Project root directory

Example

# Project Name - Project Plan

**Document Version:** 1.0
**Last Updated:** 2025-11-27
**Document Owner:** Hal Casteel
**Current Phase:** Planning
**Status:** ACTIVE

## Executive Summary

This PROJECT-PLAN.md provides comprehensive orchestration for [Project Name]...

### Current Status Overview

| Metric | Current State | Target |
|--------|---------------|--------|
| **Project Start Date** | 2025-11-27 | - |
| **Current Phase** | Planning | - |
| **Completion** | 0% | 100% |
...

Quality Checks

  • Executive summary complete with status tables
  • All phases defined with clear objectives
  • Quality gates established for each phase
  • Risk assessment complete
  • Budget breakdown included (if applicable)
  • Timeline with dependencies mapped
  • Cross-references to related documents
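
A check like the list above can be partially automated. A minimal sketch, assuming the section headings named in the process steps (the heading names are illustrative, not a fixed CODITECT schema):

```python
import re

# Headings this guide's process steps expect in a PROJECT-PLAN.md.
# These names are assumptions drawn from the steps above.
REQUIRED_SECTIONS = [
    "Executive Summary",
    "Architecture",
    "Implementation Phases",
    "Quality Gates",
    "Risk Management",
    "Success Metrics",
]

def check_project_plan(content):
    """Return a list of quality-check failures for a PROJECT-PLAN.md string."""
    errors = []
    # Collect all markdown headings, any level.
    headings = [m.group(1).strip()
                for m in re.finditer(r'^#+\s+(.*)$', content, re.MULTILINE)]
    for section in REQUIRED_SECTIONS:
        if not any(section.lower() in h.lower() for h in headings):
            errors.append(f"Missing section: {section}")
    # The executive summary should contain at least one markdown table.
    if '|' not in content:
        errors.append("No tables found (status tables expected)")
    return errors
```

An empty result means the plan passes these structural checks; anything returned should be fixed before review.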

Skill: TASKLIST Management

Description

Create and maintain TASKLIST-WITH-CHECKBOXES.md files with comprehensive YAML frontmatter, organized task lists, and automatic progress tracking.

Inputs

  • Project Metadata (object): From PROJECT-PLAN.md or manual input
  • Existing TASKLIST.md (file, optional): For conversion from old format
  • Milestones (array): From project planning
  • Epics (array): Logical groupings of features
  • Initial Tasks (array, optional): Seed tasks to start with

Process

  1. Initialize YAML Frontmatter

    ---
    project_name: "Auth System"
    project_description: "User authentication and authorization system"
    github_repo: "https://github.com/coditect-ai/auth-system"
    category: "cloud"
    status: "active"
    created_date: "2025-11-27"
    updated_date: "2025-11-27"
    version: "1.0"
  2. Define Milestones

    milestones:
      - name: "Phase 1: Foundation"
        target_date: "2025-12-15"
        status: "in_progress"
        description: "Core infrastructure and database setup"
  3. Define Epics

    epics:
      - name: "JWT Authentication"
        priority: "P0"
        status: "in_progress"
        description: "Complete JWT token system"
        tasks: [1, 2, 3, 4, 5]
  4. Suggest Features

    features:
      suggested:
        - "Real-time WebSocket notifications"
        - "Advanced search and filtering"
      planned:
        - "User authentication and authorization"
      completed:
        - "Database schema design"
  5. Generate Task Statistics

    stats:
      total_tasks: 23
      completed: 5
      in_progress: 3
      pending: 15
      completion_percentage: 21.7
  6. Create Task List

    ## Phase 1: Foundation

    ### Epic: JWT Authentication (#auth #phase-1)

    - [x] 1. Design authentication flow (P0, 4h) #auth #design
    - [~] 2. Implement JWT generation (P0, 6h) #auth #backend
    - [ ] 3. Create OAuth2 integration (P1, 8h) #auth #oauth

Outputs

  • File: TASKLIST-WITH-CHECKBOXES.md (typically 10-40KB)
  • Format: Markdown with YAML frontmatter
  • Location: Project root directory

Checkbox Syntax

  • [ ] = Pending
  • [~] = In Progress
  • [x] = Completed

Task Format

- [status] ID. Task description (Priority, Xh) #tag1 #tag2
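
Lines in this format can be parsed with a single regular expression; a sketch (the field names in the returned dict are illustrative):

```python
import re

# Matches e.g.: - [x] 2. Implement JWT generation (P0, 6h) #auth #backend
TASK_RE = re.compile(
    r'- \[(?P<status>[ x~])\] '                    # checkbox: pending/done/in progress
    r'(?P<id>\d+)\. '                              # numeric task ID
    r'(?P<desc>.+?) '                              # description (non-greedy)
    r'\((?P<priority>P[0-3]), (?P<effort>\d+)h\)'  # priority and hour estimate
    r'(?P<tags>(?: #[\w-]+)*)'                     # zero or more #tags
)

def parse_task(line):
    """Parse one task line into a dict, or return None if it doesn't match."""
    m = TASK_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    d['id'] = int(d['id'])
    d['effort_hours'] = int(d.pop('effort'))
    d['tags'] = d['tags'].split()
    return d
```

Lines that don't follow the convention return `None`, which makes the same parser usable as a lint check.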

Quality Checks

  • YAML frontmatter complete and valid
  • All milestones have target dates
  • Epics reference actual task IDs
  • Features categorized (suggested/planned/completed)
  • Statistics auto-calculated or accurate
  • Tasks grouped by phase and epic
  • All tasks have priority (P0-P3)
  • Effort estimates provided (Xh)
  • Tags used for filtering (#phase-1, #backend, etc.)

Automation

Convert Existing TASKLIST.md:

python3 scripts/convert-tasklist-to-checkboxes.py --project-path ./

Update Statistics:

import yaml
import re

def update_stats(tasklist_path):
    with open(tasklist_path, 'r') as f:
        content = f.read()

    total = len(re.findall(r'- \[[ x~]\]', content))
    completed = len(re.findall(r'- \[x\]', content))
    in_progress = len(re.findall(r'- \[~\]', content))
    pending = len(re.findall(r'- \[ \]', content))

    percentage = (completed / total * 100) if total > 0 else 0

    # Update YAML frontmatter...

Skill: Progress Tracking

Description

Monitor and report task completion progress across projects using checkbox analysis, milestone tracking, and statistical reporting.

Inputs

  • TASKLIST-WITH-CHECKBOXES.md (file): Task list to analyze
  • Reporting Period (string, optional): "daily", "weekly", "monthly", "sprint"
  • Include Submodules (boolean): Whether to aggregate submodule progress

Process

  1. Parse Checkbox Status

    import re

    def analyze_tasklist(content):
        total = len(re.findall(r'- \[[ x~]\]', content))
        completed = len(re.findall(r'- \[x\]', content))
        in_progress = len(re.findall(r'- \[~\]', content))
        pending = len(re.findall(r'- \[ \]', content))

        return {
            'total': total,
            'completed': completed,
            'in_progress': in_progress,
            'pending': pending,
            'completion_pct': (completed / total * 100) if total > 0 else 0
        }
  2. Calculate Milestone Progress

    def milestone_progress(tasks, milestone_name):
        milestone_tasks = [t for t in tasks if milestone_name in t['tags']]
        completed = len([t for t in milestone_tasks if t['status'] == 'x'])
        total = len(milestone_tasks)
        return (completed / total * 100) if total > 0 else 0
  3. Generate Velocity Metrics

    def calculate_velocity(completed_tasks_by_week):
        # Average tasks per week
        avg = sum(completed_tasks_by_week) / len(completed_tasks_by_week)

        # Trend (increasing/decreasing/stable)
        recent = completed_tasks_by_week[-2:]
        earlier = completed_tasks_by_week[-4:-2]

        if sum(recent) > sum(earlier) * 1.1:
            trend = "increasing"
        elif sum(recent) < sum(earlier) * 0.9:
            trend = "decreasing"
        else:
            trend = "stable"

        return {'avg_per_week': avg, 'trend': trend}
  4. Identify Blockers

    def find_blockers(content):
        # Tasks marked with ⚠️ or containing "blocked" in description
        blocker_pattern = r'- \[ \].*(?:⚠️|blocked|blocker)'
        blockers = re.findall(blocker_pattern, content, re.IGNORECASE)
        return blockers

Outputs

Progress Report (Markdown):

# Progress Report - Project Name
**Period:** Nov 20-27, 2025
**Generated:** 2025-11-27 10:00 AM

## Summary
- **Total Tasks:** 127
- **Completed:** 78 (61.4%)
- **In Progress:** 12 (9.4%)
- **Pending:** 37 (29.1%)

## Milestone Progress
- Phase 1: Foundation - 85% complete (17/20 tasks)
- Phase 2: Development - 45% complete (23/51 tasks)
- Phase 3: Testing - 0% complete (0/38 tasks)

## Velocity
- **Average:** 8.5 tasks/week
- **Trend:** Increasing (+15% vs last 2 weeks)

## Blockers
- ⚠️ Task #34: Awaiting API specification (blocked 3 days)
- ⚠️ Task #67: Database migration issues (blocked 1 day)

## Recommendations
- Phase 1 nearly complete, prepare for Phase 2 kickoff
- Address blockers before next sprint planning

Quality Checks

  • All tasks counted correctly
  • Completion percentage accurate
  • Milestone progress calculated
  • Velocity trend identified
  • Blockers highlighted
  • Recommendations actionable

Skill: Milestone Management

Description

Define, track, and manage project milestones with target dates, dependencies, and completion criteria.

Inputs

  • Milestone Name (string): Descriptive milestone name
  • Target Date (date): Expected completion date
  • Description (string): What defines milestone completion
  • Associated Tasks (array): Task IDs that comprise this milestone
  • Dependencies (array, optional): Other milestones that must complete first

Process

  1. Define Milestone in YAML

    milestones:
      - name: "Beta Testing Launch"
        target_date: "2025-12-10"
        status: "in_progress"
        description: "Beta environment deployed with 50-100 test users"
        dependencies: ["Core Development Complete"]
        success_criteria:
          - "Beta environment accessible to users"
          - "50+ users onboarded"
          - "Monitoring and analytics operational"
          - "Support documentation published"
  2. Link Tasks to Milestone

    ## Milestone: Beta Testing Launch

    - [x] 1. Setup beta environment (P0, 8h) #beta #infrastructure
    - [~] 2. Onboard first 50 users (P0, 16h) #beta #onboarding
    - [ ] 3. Configure monitoring (P1, 4h) #beta #ops
  3. Track Milestone Progress

    def milestone_status(milestone):
        tasks = get_tasks_for_milestone(milestone['name'])
        completed = len([t for t in tasks if t['status'] == 'x'])
        total = len(tasks)

        percentage = (completed / total * 100) if total > 0 else 0

        if percentage == 100:
            status = "complete"
        elif percentage > 0:
            status = "in_progress"
        else:
            status = "pending"

        return {
            'percentage': percentage,
            'status': status,
            'tasks_complete': completed,
            'tasks_total': total
        }
  4. Check Dependencies

    def can_start_milestone(milestone, all_milestones):
        for dep_name in milestone.get('dependencies', []):
            dep = next((m for m in all_milestones if m['name'] == dep_name), None)
            if not dep or dep['status'] != 'complete':
                return False
        return True

Outputs

Milestone Dashboard:

## Milestone Dashboard

| Milestone | Target Date | Progress | Status | Blockers |
|-----------|-------------|----------|--------|----------|
| Core Dev Complete | Nov 19 | 100% | ✅ Complete | - |
| Beta Launch | Dec 10 | 67% | ⚡ In Progress | - |
| Pilot Phase 1 | Dec 24 | 0% | ⏸️ Pending | Awaiting Beta |
| Public Launch | Mar 11 | 0% | 📅 Scheduled | Awaiting Pilot |
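
A dashboard table like this can be generated from milestone records. A minimal sketch, assuming milestone dicts with the fields shown in the YAML examples above and an illustrative status-to-label mapping:

```python
# Display labels per status; the emoji labels are assumptions
# matching the example dashboard, not a fixed convention.
STATUS_LABELS = {
    'complete': '✅ Complete',
    'in_progress': '⚡ In Progress',
    'pending': '⏸️ Pending',
    'scheduled': '📅 Scheduled',
}

def render_milestone_dashboard(milestones):
    """Render a list of milestone dicts as a markdown table."""
    lines = [
        "| Milestone | Target Date | Progress | Status | Blockers |",
        "|-----------|-------------|----------|--------|----------|",
    ]
    for m in milestones:
        lines.append(
            f"| {m['name']} | {m['target_date']} | {m['progress']}% "
            f"| {STATUS_LABELS.get(m['status'], m['status'])} "
            f"| {m.get('blockers') or '-'} |"
        )
    return "\n".join(lines)
```

The output can be pasted into a status report or written into a dashboard file as-is.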

Quality Checks

  • Target dates realistic and achievable
  • Success criteria clearly defined
  • Dependencies identified and tracked
  • Tasks properly linked to milestone
  • Progress tracking automated
  • Blocker escalation process defined

Skill: Epic & Feature Planning

Description

Organize work into logical epics (large features) and break them down into implementable tasks with clear acceptance criteria.

Inputs

  • Epic Name (string): High-level feature name
  • Priority (enum): P0, P1, P2, P3
  • Description (string): What the epic delivers
  • User Stories (array, optional): User-facing value statements
  • Acceptance Criteria (array): Definition of done

Process

  1. Define Epic

    epics:
      - name: "Real-Time Collaboration"
        priority: "P1"
        status: "planned"
        description: "Enable multiple users to work on same project simultaneously"
        user_stories:
          - "As a developer, I want to see my teammate's changes in real-time"
          - "As a project manager, I want to see who is working on what"
        acceptance_criteria:
          - "Multiple users can edit same project without conflicts"
          - "Changes propagate within 2 seconds"
          - "User presence indicators show active editors"
          - "Conflict resolution UI handles simultaneous edits"
        estimated_effort: "40-60 hours"
        tasks: [45, 46, 47, 48, 49, 50, 51, 52]
  2. Break Down into Tasks

    ### Epic: Real-Time Collaboration (#realtime #collab)

    - [ ] 45. Design WebSocket message protocol (P1, 4h) #realtime #design
    - [ ] 46. Implement WebSocket server (P0, 8h) #realtime #backend
    - [ ] 47. Create client connection manager (P0, 6h) #realtime #frontend
    - [ ] 48. Build user presence tracking (P1, 6h) #realtime #frontend
    - [ ] 49. Implement operational transformation (P0, 12h) #realtime #backend
    - [ ] 50. Add conflict resolution UI (P1, 8h) #realtime #frontend
    - [ ] 51. Write integration tests (P1, 6h) #realtime #testing
    - [ ] 52. Performance test with 100+ concurrent users (P2, 4h) #realtime #testing
  3. Prioritize Features

    def prioritize_features(features):
        # RICE scoring: Reach × Impact × Confidence / Effort
        scored = []
        for feature in features:
            score = (
                feature['reach'] *
                feature['impact'] *
                feature['confidence']
            ) / feature['effort']
            scored.append({'feature': feature, 'rice_score': score})

        return sorted(scored, key=lambda x: x['rice_score'], reverse=True)
  4. Track Epic Progress

    def epic_progress(epic, tasks):
        epic_tasks = [t for t in tasks if t['id'] in epic['tasks']]
        completed = len([t for t in epic_tasks if t['status'] == 'x'])
        total = len(epic_tasks)

        return {
            'completed': completed,
            'total': total,
            'percentage': (completed / total * 100) if total > 0 else 0
        }

Outputs

Epic Status Board:

## Epic Status

### In Progress
- **Real-Time Collaboration** (P1) - 25% complete (2/8 tasks)
  - WebSocket server implemented ✅
  - Client connection manager implemented ✅
  - User presence tracking in progress 🔄
  - Estimated completion: 2 weeks

### Planned
- **Advanced Analytics Dashboard** (P1) - 0% complete (0/12 tasks)
- **Mobile App Support** (P2) - 0% complete (0/15 tasks)

### Completed
- **User Authentication System** (P0) - 100% complete (10/10 tasks) ✅
- **Database Migration Tools** (P1) - 100% complete (7/7 tasks) ✅

Quality Checks

  • Epic has clear user value proposition
  • Acceptance criteria measurable
  • Tasks comprehensively cover epic scope
  • Effort estimates provided
  • Priority assigned based on RICE or similar
  • Dependencies identified
  • Progress trackable via task checkboxes

Skill: Status Reporting

Description

Generate comprehensive status reports for stakeholders combining progress metrics, milestone tracking, risk assessment, and next steps.

Inputs

  • Reporting Period (object): Start date, end date
  • Stakeholder Audience (enum): executive, team, customer
  • Report Type (enum): daily, weekly, sprint, milestone
  • Include Financials (boolean): Whether to show budget status

Process

  1. Collect Metrics

    metrics = {
        'tasks': analyze_tasklist(tasklist_content),
        'milestones': calculate_milestone_progress(milestones),
        'velocity': calculate_velocity(historical_data),
        'blockers': find_blockers(tasklist_content),
        'risks': assess_risks(project_plan),
        'budget': calculate_burn_rate(expenses) if include_financials else None
    }
  2. Format for Audience

    Executive Report:

    • High-level progress summary
    • Milestone status
    • Budget vs. actual
    • Top 3 risks
    • Executive decision needed (if any)

    Team Report:

    • Detailed task completion
    • Sprint velocity
    • Blockers with owners
    • Next sprint priorities
    • Technical decisions needed

    Customer Report:

    • Feature delivery status
    • Known issues
    • Upcoming releases
    • Support metrics
  3. Generate Report

Example Executive Report:

# CODITECT Rollout - Executive Status Report
**Week of:** Nov 20-27, 2025
**Phase:** Beta Testing (Week 2 of 4)
**Overall Status:** 🟢 On Track

## Executive Summary
Beta testing is progressing ahead of schedule with 78 users onboarded (target: 50-100). Core platform stability at 99.2% uptime. On track for the Dec 10 Beta Launch milestone.

## Progress
- **Tasks Completed This Week:** 23 (target: 20)
- **Sprint Velocity:** +15% vs. last 2 weeks
- **Overall Completion:** 61.4% (78/127 tasks)

## Milestones
| Milestone | Target | Status | Notes |
|-----------|--------|--------|-------|
| Beta Launch | Dec 10 | ⚡ 67% | Ahead of schedule |
| Pilot Phase 1 | Dec 24 | 📅 Pending | Ready to start on time |

## Budget
- **Spent to Date:** $1.11M (43% of total)
- **Burn Rate:** $145K/month (on budget)
- **Forecast:** On track for $2.57M total

## Top Risks
1. 🟡 **Medium:** Scaling concerns for 100+ concurrent users
   - Mitigation: Load testing scheduled for next week
2. 🟢 **Low:** Beta user engagement lower than expected
   - Mitigation: Enhanced onboarding materials deployed

## Decisions Needed
- None at this time

## Next Week
- Complete Phase 1 tasks (5 remaining)
- Begin Phase 2 planning
- Conduct load testing with 100+ concurrent users

Example Team Report:

# Sprint 5 Status - Development Team
**Sprint:** Nov 20-27, 2025
**Sprint Goal:** Complete Phase 1, prepare Phase 2

## Sprint Progress
- **Completed:** 23 tasks (target: 20) ✅
- **In Progress:** 12 tasks
- **Pending:** 37 tasks

## Completed This Sprint
- ✅ WebSocket server implementation
- ✅ User presence tracking
- ✅ Database migration scripts
- ✅ API rate limiting
- ✅ Monitoring dashboard setup

## Active Blockers
- ⚠️ **Task #34:** Awaiting OAuth provider approval (blocked 3 days)
  - **Owner:** Jane
  - **Action:** Escalated to partnerships team
- ⚠️ **Task #67:** Database migration performance issues (blocked 1 day)
  - **Owner:** Bob
  - **Action:** Optimizing query indexes

## Velocity Metrics
- **Current Sprint:** 23 tasks/week
- **Average (last 4 sprints):** 20 tasks/week
- **Trend:** +15% improving

## Next Sprint Priorities
1. Address remaining 2 blockers
2. Begin Phase 2 API development (5 tasks)
3. Complete integration testing (3 tasks)
4. Load testing preparation (2 tasks)

## Technical Decisions Needed
- Database: Stick with PostgreSQL or evaluate MongoDB for real-time features?
- Frontend: Migrate to Next.js 14 or stay on 13?

Outputs

  • Format: Markdown or HTML
  • Frequency: Configurable (daily/weekly/sprint/milestone)
  • Distribution: Email, Slack, dashboard
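
Distribution can be scripted. A sketch of posting a report summary to a Slack incoming webhook (the webhook URL is a placeholder; this assumes Slack's standard incoming-webhook JSON payload, with `text` in mrkdwn format):

```python
import json
import urllib.request

def build_slack_payload(title, summary_lines):
    """Build the JSON body for a Slack incoming webhook message."""
    text = f"*{title}*\n" + "\n".join(f"• {line}" for line in summary_lines)
    return {"text": text}

def post_report(webhook_url, title, summary_lines):
    """POST the report summary to a Slack incoming webhook.

    webhook_url is a placeholder, e.g. https://hooks.slack.com/services/...
    """
    body = json.dumps(build_slack_payload(title, summary_lines)).encode()
    req = urllib.request.Request(
        webhook_url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The same payload builder works for any channel that accepts markdown-ish text; only the transport changes for email or a dashboard.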

Quality Checks

  • Metrics accurate and up-to-date
  • Audience-appropriate level of detail
  • Actionable next steps identified
  • Risks and blockers highlighted
  • Decisions clearly flagged
  • Visual formatting (tables, emojis) used effectively

Skill: YAML Metadata Management

Description

Parse, validate, and update YAML frontmatter in PROJECT-PLAN.md and TASKLIST-WITH-CHECKBOXES.md files programmatically.

Inputs

  • File Path (string): Path to markdown file with YAML frontmatter
  • Updates (object): Key-value pairs to update
  • Validation Schema (object, optional): YAML structure to validate against

Process

  1. Parse YAML Frontmatter

    import yaml
    import re

    def parse_yaml_frontmatter(file_path):
        with open(file_path, 'r') as f:
            content = f.read()

        # Extract YAML between --- markers
        yaml_match = re.search(r'^---\n(.*?)\n---', content, re.DOTALL | re.MULTILINE)

        if not yaml_match:
            return None, content

        yaml_str = yaml_match.group(1)
        metadata = yaml.safe_load(yaml_str)

        # Get markdown content after frontmatter
        md_content = content[yaml_match.end():]

        return metadata, md_content
  2. Validate Metadata

    def validate_metadata(metadata, schema):
        errors = []

        # Check required fields
        for field in schema.get('required', []):
            if field not in metadata:
                errors.append(f"Missing required field: {field}")

        # Check field types
        for field, expected_type in schema.get('types', {}).items():
            if field in metadata:
                actual_type = type(metadata[field]).__name__
                if actual_type != expected_type:
                    errors.append(f"Field '{field}' should be {expected_type}, got {actual_type}")

        # Check enum values
        for field, allowed_values in schema.get('enums', {}).items():
            if field in metadata and metadata[field] not in allowed_values:
                errors.append(f"Field '{field}' must be one of {allowed_values}")

        return errors
  3. Update Metadata

    def update_metadata(file_path, updates):
        metadata, md_content = parse_yaml_frontmatter(file_path)

        if metadata is None:
            raise ValueError("No YAML frontmatter found")

        # Apply updates
        for key, value in updates.items():
            metadata[key] = value

        # Update timestamp
        from datetime import datetime
        metadata['updated_date'] = datetime.now().strftime('%Y-%m-%d')

        # Rebuild file
        yaml_str = yaml.dump(metadata, default_flow_style=False, sort_keys=False)
        new_content = f"---\n{yaml_str}---\n{md_content}"

        with open(file_path, 'w') as f:
            f.write(new_content)
  4. Recalculate Statistics

    def recalculate_stats(file_path):
        metadata, md_content = parse_yaml_frontmatter(file_path)

        # Count checkboxes
        total = len(re.findall(r'- \[[ x~]\]', md_content))
        completed = len(re.findall(r'- \[x\]', md_content))
        in_progress = len(re.findall(r'- \[~\]', md_content))
        pending = len(re.findall(r'- \[ \]', md_content))

        # Update stats
        metadata['stats'] = {
            'total_tasks': total,
            'completed': completed,
            'in_progress': in_progress,
            'pending': pending,
            'completion_percentage': round((completed / total * 100), 1) if total > 0 else 0
        }

        # Write back
        update_metadata(file_path, {'stats': metadata['stats']})

Outputs

  • Updated File: Markdown file with modified YAML frontmatter
  • Validation Report: List of any errors or warnings

Quality Checks

  • YAML syntax valid
  • Required fields present
  • Field types correct
  • Enum values within allowed set
  • Statistics accurate
  • Timestamps updated

Skill: Cross-Project Coordination

Description

Coordinate tasks, milestones, and dependencies across multiple projects in the 46-submodule CODITECT ecosystem.

Inputs

  • Master TASKLIST (file): Main project tasklist
  • Submodule TASKLISTs (array of files): Individual project tasklists
  • Dependency Map (object): Cross-project dependencies
  • Coordination Level (enum): "portfolio", "category", "specific_projects"

Process

  1. Discover All Projects

    from pathlib import Path

    def find_all_projects(base_path):
        submodules_dir = Path(base_path) / 'submodules'
        projects = []

        for category_dir in submodules_dir.iterdir():
            if not category_dir.is_dir():
                continue

            for project_dir in category_dir.iterdir():
                tasklist_path = project_dir / 'TASKLIST-WITH-CHECKBOXES.md'
                if tasklist_path.exists():
                    metadata, _ = parse_yaml_frontmatter(tasklist_path)
                    projects.append({
                        'path': project_dir,
                        'category': category_dir.name,
                        'metadata': metadata,
                        'tasklist': tasklist_path
                    })

        return projects
  2. Build Dependency Graph

    def build_dependency_graph(projects):
        graph = {}

        for project in projects:
            project_name = project['metadata']['project_name']
            dependencies = project['metadata'].get('dependencies', [])

            graph[project_name] = {
                'depends_on': dependencies,
                'project': project,
                'status': project['metadata']['status']
            }

        return graph
  3. Check for Circular Dependencies

    def detect_circular_deps(graph):
        def has_cycle(node, visited, rec_stack):
            visited.add(node)
            rec_stack.add(node)

            # Dependencies outside the graph are treated as leaf nodes
            for dep in graph.get(node, {}).get('depends_on', []):
                if dep not in visited:
                    if has_cycle(dep, visited, rec_stack):
                        return True
                elif dep in rec_stack:
                    return True

            rec_stack.remove(node)
            return False

        # Share one visited set across starts so each node is explored once
        visited = set()
        for node in graph:
            if node not in visited:
                if has_cycle(node, visited, set()):
                    return True
        return False
  4. Generate Coordination Report

    def coordination_report(projects, graph):
        # Group by category
        by_category = {}
        for project in projects:
            cat = project['category']
            if cat not in by_category:
                by_category[cat] = []
            by_category[cat].append(project)

        # Calculate aggregate progress
        report = {
            'total_projects': len(projects),
            'by_category': {},
            'blocked_projects': [],
            'ready_to_start': [],
            'in_progress': []
        }

        for cat, cat_projects in by_category.items():
            total_tasks = sum(p['metadata']['stats']['total_tasks'] for p in cat_projects)
            completed_tasks = sum(p['metadata']['stats']['completed'] for p in cat_projects)

            report['by_category'][cat] = {
                'projects': len(cat_projects),
                'completion': round((completed_tasks / total_tasks * 100), 1) if total_tasks > 0 else 0
            }

        # Identify ready-to-start projects
        for name, node in graph.items():
            if node['status'] == 'pending':
                deps_complete = all(
                    graph[dep]['status'] == 'completed'
                    for dep in node['depends_on']
                )
                if deps_complete:
                    report['ready_to_start'].append(name)

        return report

Outputs

Portfolio Dashboard:

# CODITECT Portfolio Status
**Generated:** 2025-11-27

## Overview
- **Total Projects:** 46
- **Active:** 23
- **Completed:** 8
- **Pending:** 15

## Progress by Category

| Category | Projects | Completion |
|----------|----------|------------|
| Core | 3 | 78% |
| Cloud | 4 | 62% |
| Development | 10 | 45% |
| Operations | 5 | 31% |
| GTM | 8 | 12% |
| Market | 6 | 8% |
| Legal | 4 | 5% |
| Docs | 6 | 89% |

## Ready to Start
- **coditect-license-manager** - All dependencies complete
- **coditect-installer** - Awaiting core completion (2 tasks remaining)

## Cross-Project Blockers
- **coditect-cloud-backend** blocked by **coditect-cloud-infra** (infrastructure not deployed)
- **coditect-dev-context** blocked by **coditect-core** (API changes in progress)

## Coordination Recommendations
1. Prioritize completing core dependencies to unblock 5 pending projects
2. Coordinate cloud infrastructure deployment for week of Dec 1
3. Schedule cross-team sync for dev tools integration
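
The category table above can be rendered directly from the `by_category` section of the coordination report. A minimal sketch, assuming the dict shape produced by `coordination_report`:

```python
def render_category_table(by_category):
    """Render the per-category progress section of the portfolio dashboard.

    `by_category` follows the shape produced by coordination_report:
    {'core': {'projects': 3, 'completion': 78.0}, ...}
    """
    lines = [
        "| Category | Projects | Completion |",
        "|----------|----------|------------|",
    ]
    # Sorted for stable, diff-friendly output
    for cat in sorted(by_category):
        entry = by_category[cat]
        lines.append(f"| {cat.title()} | {entry['projects']} | {entry['completion']}% |")
    return "\n".join(lines)
```

Regenerating the dashboard from the report data keeps the portfolio view in sync with the underlying TASKLIST statistics.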

Quality Checks

  • All projects discovered
  • Dependencies mapped correctly
  • No circular dependencies detected
  • Ready-to-start projects identified
  • Blockers highlighted with owners
  • Category-level progress accurate

Automation Scripts

convert-tasklist-to-checkboxes.py

Location: /scripts/convert-tasklist-to-checkboxes.py

Purpose: Convert legacy TASKLIST.md files to the TASKLIST-WITH-CHECKBOXES.md format with YAML frontmatter

Usage:

# Dry run (preview changes)
python3 scripts/convert-tasklist-to-checkboxes.py --dry-run

# Convert all found files
python3 scripts/convert-tasklist-to-checkboxes.py

# Convert specific project
python3 scripts/convert-tasklist-to-checkboxes.py --project-path ./submodules/cloud/coditect-cloud-backend

generate-progress-report.py

Location: /scripts/generate-progress-report.py (to be created)

Purpose: Generate automated progress reports

Usage:

# Weekly team report
python3 scripts/generate-progress-report.py --type weekly --audience team

# Executive monthly report
python3 scripts/generate-progress-report.py --type monthly --audience executive --include-financials

validate-yaml-metadata.py

Location: /scripts/validate-yaml-metadata.py (to be created)

Purpose: Validate YAML frontmatter across all projects

Usage:

# Validate all TASKLISTs
python3 scripts/validate-yaml-metadata.py --scan-all

# Validate specific file
python3 scripts/validate-yaml-metadata.py --file ./TASKLIST-WITH-CHECKBOXES.md

Best Practices

YAML Frontmatter

  1. Always include updated_date and update on every change
  2. Keep statistics (stats block) in sync with actual checkboxes
  3. Use semantic versioning for version field
  4. Categorize features into suggested/planned/completed

Task Management

  1. Use consistent checkbox syntax: [ ], [~], [x]
  2. Always include priority (P0-P3) and effort estimate (Xh)
  3. Tag tasks with phase and category for filtering
  4. Link tasks to epics in YAML frontmatter

Progress Tracking

  1. Update checkboxes same-day when work completes
  2. Recalculate statistics weekly (or use automation)
  3. Review blockers in daily standups
  4. Track velocity over 4+ week periods for accuracy

Reporting

  1. Match report detail to audience (executive vs. team)
  2. Always include next steps and action items
  3. Highlight decisions needed prominently
  4. Use visual formatting (tables, emojis) for scannability

Cross-Project Coordination

  1. Document dependencies in YAML frontmatter
  2. Check for circular dependencies before committing
  3. Coordinate milestone dates across dependent projects
  4. Generate portfolio view weekly during active development


**Status:** ✅ Production Ready
**Version:** 1.0
**Last Updated:** 2025-11-27
**Maintained By:** CODITECT Project Management Team