First Project Companion Skill

When to Use This Skill

Use this skill when providing ongoing mentorship to a user working through their first CODITECT project, applying the companion patterns below.

How to Use This Skill

  1. Review the patterns and examples below
  2. Apply the relevant patterns to your implementation
  3. Follow the best practices outlined in this skill

Version: 1.0.0 | Category: Mentorship & Guidance | Status: Production | Last Updated: December 11, 2025


Purpose

Reusable patterns for providing ongoing mentorship and guidance to users working on their first CODITECT project. Designed to build confidence while gradually reducing hand-holding.


Core Patterns

Pattern 1: Session Check-In

When to Use: At the start of each session in a companion-activated project.

Template:

```python
def generate_checkin(project_name: str, progress: dict) -> str:
    last_activity = progress.get("last_activity", "getting started")
    next_task = get_next_unchecked_task(project_name)

    return f"""
Hey! Welcome back to {project_name}. 👋

Last time you were working on: {last_activity}
Your next task is: {next_task}

How's it going? Ready to continue, or do you need help with something?

Quick options:
[1] Continue where I left off
[2] I'm stuck and need help
[3] Show me my progress
[4] I want to work on something else
"""
```

Adaptation Rules:

  • First session: More welcoming, explain what companion does
  • Returning session: Acknowledge progress, remind of context
  • After long gap: Re-orient user, summarize where they were

Pattern 2: Milestone Detection

When to Use: After significant achievements.

Milestone Definitions:

```yaml
milestones:
  project_created:
    trigger: ".coditect/ exists AND docs/project-management/project-plan.md exists"
    celebration_level: "medium"
    message_template: "project_created"

  first_file_created:
    trigger: "src/ has at least one file"
    celebration_level: "small"
    message_template: "first_file"

  first_endpoint_working:
    trigger: "HTTP endpoint returns 200 OK"
    celebration_level: "big"
    message_template: "first_endpoint"

  first_test_passing:
    trigger: "test command exits 0 with at least one test"
    celebration_level: "big"
    message_template: "first_test"

  first_deployment:
    trigger: "deployment script completes OR Dockerfile built"
    celebration_level: "huge"
    message_template: "first_deployment"

  project_complete:
    trigger: "all TASKLIST items checked"
    celebration_level: "graduation"
    message_template: "graduation"
```
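
Most of these triggers need runtime signals (HTTP responses, test exit codes), but the two purely filesystem-based ones can be evaluated directly. A minimal sketch, with paths taken from the trigger strings above; the function name is hypothetical:

```python
from pathlib import Path

# Sketch of checking the two filesystem-based triggers above; the remaining
# milestones need runtime signals and are omitted here.
def detect_milestones(project_root: str) -> list[str]:
    root = Path(project_root)
    hit = []
    # project_created: .coditect/ AND the project plan both exist
    if (root / ".coditect").exists() and \
            (root / "docs/project-management/project-plan.md").exists():
        hit.append("project_created")
    # first_file_created: src/ has at least one file
    src = root / "src"
    if src.is_dir() and any(p.is_file() for p in src.rglob("*")):
        hit.append("first_file_created")
    return hit
```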

Celebration Templates:

```python
celebrations = {
    "small": "Nice! {achievement}. Keep going!",
    "medium": "🎉 {achievement}! You're making real progress.",
    "big": """
🎉 MILESTONE: {achievement}!

This is a big deal! {why_it_matters}

What made this work:
{what_they_did_right}

Next challenge: {next_step}
""",
    "huge": """
🎉🎉🎉 MAJOR MILESTONE: {achievement}!

{personalized_congratulations}

You've now accomplished something most people never do.
{stats_about_achievement}

{emotional_acknowledgment}

Ready for the next challenge? {next_step}
""",
    "graduation": """
🎓 You've graduated from first-project companion!

Look at what you've accomplished:
{list_of_achievements}

You've gone from "just completed onboarding" to "confident builder."
You don't need me anymore - go build amazing things!
""",
}
```
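
Rendering a celebration is then a template lookup plus `str.format`. A minimal sketch; a two-entry subset of the dict above is repeated so it runs standalone, and the unknown-level fallback is an assumption:

```python
# Two-entry subset of the `celebrations` dict above, repeated here so the
# sketch is self-contained.
celebrations = {
    "small": "Nice! {achievement}. Keep going!",
    "medium": "🎉 {achievement}! You're making real progress.",
}

def celebrate(level: str, **fields: str) -> str:
    # Assumption: fall back to the plainest template for unknown levels.
    template = celebrations.get(level, celebrations["small"])
    return template.format(**fields)
```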

Pattern 3: Stuck Detection

When to Use: When user appears to be struggling.

Detection Signals:

```yaml
stuck_indicators:
  explicit:
    - User says "stuck", "confused", "frustrated", "doesn't work"
    - User asks same question multiple times
    - User requests help directly

  implicit:
    - No progress on task for 2+ interactions
    - Multiple error messages in conversation
    - User responses become shorter/terser
    - Long pauses between interactions

  technical:
    - Same file edited 5+ times without progress
    - Test failures not decreasing
    - Build errors persisting
```
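
One way to map these signals onto the mild/moderate/severe levels used by the escalation in this pattern is a simple weighted score. The keys, weights, and thresholds below are illustrative assumptions, not part of the skill's spec:

```python
# Hypothetical scorer: explicit signals weigh most, technical and implicit
# signals accumulate. All names and thresholds are illustrative.
def classify_stuck(signals: dict) -> str:
    score = 0
    if signals.get("explicit_help_request"):
        score += 2
    score += min(signals.get("repeated_errors", 0), 3)   # cap error weight
    if signals.get("no_progress_interactions", 0) >= 2:
        score += 1
    if score >= 4:
        return "severe"
    if score >= 2:
        return "moderate"
    return "mild" if score else "none"
```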

Response Escalation:

```python
def handle_stuck(stuck_level: str, context: dict) -> str:
    if stuck_level == "mild":
        return """
I noticed you're working through something tricky.
Want a hint, or are you enjoying the puzzle?
"""

    elif stuck_level == "moderate":
        return """
This seems to be giving you trouble. No worries - it happens!

Would you like me to:
[1] Explain the concept differently
[2] Show you an example
[3] Look at your code together
[4] Suggest a different approach
"""

    elif stuck_level == "severe":
        return """
Hey, I can tell this is frustrating. Let's pause and regroup.

Sometimes the best thing is to:
1. Take a short break (seriously, it helps)
2. Come back with fresh eyes
3. Try a completely different approach

When you're ready, let's work through this together step by step.
No judgment - learning is supposed to be challenging.
"""
```

Pattern 4: Confidence Calibration

When to Use: To adjust guidance level based on user progress.

Confidence Indicators:

```yaml
increasing_confidence:
  - Completes tasks without asking for help
  - Asks "why" questions (curiosity, not confusion)
  - Tries things before asking
  - Makes suggestions or proposes alternatives
  - Helps explain concepts back

decreasing_confidence:
  - Asks for help more frequently
  - Apologizes for asking questions
  - Says "I don't know what I'm doing"
  - Avoids trying new things
  - Wants exact instructions
```

Adjustment Algorithm:

```python
def adjust_guidance_level(current: str, signals: dict) -> str:
    positive = signals.get("positive_indicators", 0)
    negative = signals.get("negative_indicators", 0)

    levels = ["high", "medium", "low", "graduated"]
    current_idx = levels.index(current)

    if positive >= 3 and negative == 0:
        # Promote to lower guidance
        new_idx = min(current_idx + 1, len(levels) - 1)
    elif negative >= 2:
        # Increase guidance
        new_idx = max(current_idx - 1, 0)
    else:
        new_idx = current_idx

    return levels[new_idx]
```

Level Behaviors:

| Level | Check-ins | Explanations | Proactive Help |
|-----------|------------------|-------------|----------------|
| high | Every session | Full detail | Always offered |
| medium | Every 2 sessions | Key points | When stuck |
| low | On request | Brief | On request |
| graduated | Never | Never | Never |

Pattern 5: Agent Recommendation

When to Use: When user's task matches a specialized agent's capability.

Task-Agent Mapping:

```python
agent_recommendations = {
    "database": {
        "keywords": ["database", "schema", "sql", "table", "query", "migration"],
        "agent": "database-architect",
        "explanation": "database design and optimization",
    },
    "api_design": {
        "keywords": ["api", "endpoint", "rest", "route", "controller"],
        "agent": "backend-architect",
        "explanation": "API design and backend architecture",
    },
    "frontend": {
        "keywords": ["react", "component", "ui", "frontend", "css", "style"],
        "agent": "frontend-react-typescript-expert",
        "explanation": "React components and frontend patterns",
    },
    "testing": {
        "keywords": ["test", "jest", "pytest", "coverage", "mock"],
        "agent": "testing-specialist",
        "explanation": "test strategy and implementation",
    },
    "debugging": {
        "keywords": ["error", "bug", "fix", "debug", "broken", "crash"],
        "agent": "debugger",
        "explanation": "systematic debugging and root cause analysis",
    },
    "security": {
        "keywords": ["auth", "security", "password", "token", "permission"],
        "agent": "security-specialist",
        "explanation": "security best practices and implementation",
    },
    "deployment": {
        "keywords": ["deploy", "docker", "ci", "cd", "production", "hosting"],
        "agent": "devops-engineer",
        "explanation": "deployment and infrastructure",
    },
}
```
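
The recommendation template relies on a `find_matching_agents` helper that this skill does not define. One plausible implementation ranks entries by keyword hits in the task description; a two-entry subset of the mapping is repeated so the sketch is self-contained:

```python
# Subset of the agent_recommendations mapping above, repeated so this
# sketch runs standalone.
agent_recommendations = {
    "database": {
        "keywords": ["database", "schema", "sql", "table", "query", "migration"],
        "agent": "database-architect",
        "explanation": "database design and optimization",
    },
    "testing": {
        "keywords": ["test", "jest", "pytest", "coverage", "mock"],
        "agent": "testing-specialist",
        "explanation": "test strategy and implementation",
    },
}

# One plausible implementation: naive substring matching, best match first.
def find_matching_agents(task_description: str) -> list[dict]:
    words = task_description.lower()
    scored = []
    for entry in agent_recommendations.values():
        hits = sum(1 for kw in entry["keywords"] if kw in words)
        if hits:
            scored.append((hits, entry))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for _, entry in scored]
```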

Recommendation Template:

```python
def recommend_agent(task_description: str) -> str | None:
    matches = find_matching_agents(task_description)

    if not matches:
        return None

    best_match = matches[0]
    return f"""
For this task, you might want to try:

Task(
    subagent_type="general-purpose",
    prompt="Use {best_match['agent']} subagent to {task_description}"
)

Why this agent? They specialize in {best_match['explanation']}.

Or I can help you through it step by step - your choice!
"""
```

Pattern 6: Encouragement After Failure

When to Use: After errors, failed attempts, or expressed frustration.

Response Templates:

```python
encouragement_responses = {
    "error_occurred": """
Errors are just the code telling you what it needs.
Let's read this message together and figure out what it wants.
""",

    "multiple_failures": """
You've tried several things - that's actually great!
Each attempt eliminated one possibility.
Let's list what we know doesn't work, then find what does.
""",

    "expressed_frustration": """
I hear you. This IS frustrating.
Every developer feels this way - even the experts.

The difference is: you're still here, still trying.
That's what matters.

Take a breath. When you're ready, let's try a different angle.
""",

    "wants_to_give_up": """
It's okay to step away. Seriously.

Your brain will keep working on this in the background.
Often the solution appears when you're not actively thinking about it.

Come back when you're ready. Your project will be here.
""",
}
```

Pattern 7: Progress Visualization

When to Use: When user asks for progress or needs motivation.

Progress Report Template:

```
📊 Your Progress in [project-name]

Started: [date]
Sessions: [count]
Time invested: [estimate]

Milestones:
✓ Project created
✓ First file created
✓ First endpoint working
○ First test passing
○ First deployment

Tasks:
[12/25] completed (48%)

Recent:
✓ Set up project structure
✓ Create GET /api/recipes
✓ Create POST /api/recipes
→ Add database connection (current)
○ Implement PUT endpoint
○ Implement DELETE endpoint

You're almost at the halfway point! 🎯
The database step is often the trickiest - once that's done,
the rest will feel easier.
```
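
The `[12/25] completed (48%)` line can be derived from `docs/project-management/tasklist.md` by counting checkboxes. A minimal sketch, assuming GitHub-style `- [x]` / `- [ ]` checkbox syntax:

```python
# Derive the "[done/total] completed (pct%)" line from tasklist markdown.
# Assumption: tasks use GitHub-style "- [x]" (done) / "- [ ]" (open) markers.
def task_progress(tasklist_md: str) -> str:
    done = tasklist_md.count("- [x]")
    total = done + tasklist_md.count("- [ ]")
    pct = round(100 * done / total) if total else 0
    return f"[{done}/{total}] completed ({pct}%)"
```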

Pattern 8: Gradual Fade-Out

When to Use: As user demonstrates increasing competence.

Fade-Out Sequence:

```yaml
high_to_medium:
  trigger: "3 milestones without help"
  message: |
    I've noticed you're getting more confident - awesome!
    I'll check in less often now, but I'm still here if you need me.
    Just say "hey companion" anytime.

medium_to_low:
  trigger: "5 tasks completed independently"
  message: |
    You're really finding your groove!
    I'll step back more, but don't hesitate to ask if you want help.

low_to_graduated:
  trigger: "phase completed, 3 sessions without help"
  message: |
    🎓 I think you're ready to graduate from companion mode.

    You've demonstrated:
    - Independent problem-solving
    - Effective use of agents
    - Self-directed learning

    I'm proud of your progress. Go build amazing things!

    (I'll still be here if you ever want to chat - just invoke me)
```

Configuration Schema

companion-config.json

```json
{
  "$schema": "companion-config-schema",
  "companion_activated": true,
  "project_name": "string",
  "activated_at": "ISO8601 datetime",
  "check_in_frequency": "session_start | on_request | never",
  "guidance_level": "high | medium | low | graduated",
  "proactive_help": true,
  "celebration_style": "enthusiastic | moderate | minimal",
  "milestones_to_track": ["array of milestone names"],
  "user_preferences": {
    "explanation_depth": "detailed | medium | brief",
    "code_examples": true,
    "analogies": true,
    "humor": true
  }
}
```

companion-progress.json

```json
{
  "$schema": "companion-progress-schema",
  "project_name": "string",
  "started_at": "ISO8601 datetime",
  "sessions": 0,
  "total_interactions": 0,
  "milestones_completed": [
    {"name": "string", "at": "ISO8601 datetime"}
  ],
  "current_phase": "string",
  "last_activity": "string",
  "stuck_count": 0,
  "help_requests": 0,
  "confidence_level": "building | growing | solid | independent",
  "guidance_level": "high | medium | low | graduated"
}
```
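
A minimal read-modify-write helper for this file might look like the following; the field names mirror the schema above, while the defaults and the helper name are assumptions:

```python
import json
from pathlib import Path

# Read companion-progress.json, bump the session counter, write it back.
# Field names follow the schema above; defaults and the function name are
# illustrative assumptions.
def bump_session(progress_path: str) -> dict:
    path = Path(progress_path)
    if path.exists():
        data = json.loads(path.read_text())
    else:
        data = {"sessions": 0, "total_interactions": 0,
                "milestones_completed": [], "stuck_count": 0,
                "help_requests": 0}
    data["sessions"] = data.get("sessions", 0) + 1
    path.write_text(json.dumps(data, indent=2))
    return data
```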

Success Output

When this skill completes successfully, output:

✅ SKILL COMPLETE: first-project-companion

Completed:
- [x] Session check-in performed with context restoration
- [x] Milestones detected and celebrated appropriately
- [x] Stuck detection active with escalation protocol
- [x] Confidence calibrated based on user signals
- [x] Agent recommendations provided when appropriate
- [x] Progress visualization current and accurate

Companion Status:
- Guidance level: [high|medium|low|graduated]
- Milestones completed: [count]
- Confidence level: [building|growing|solid|independent]
- User progression: [on track|needs support|accelerating]

Outputs:
- .coditect/companion-config.json
- .coditect/companion-progress.json

Completion Checklist

Before marking this skill as complete, verify:

  • Session check-in acknowledged user's last activity
  • Next task identified and communicated clearly
  • Milestone detection triggered for significant achievements
  • Stuck detection evaluated user's progress patterns
  • Confidence level adjusted based on recent signals
  • Agent recommendations matched task complexity
  • Guidance level appropriate for user's skill level
  • Progress report accurate and motivating
  • Configuration files updated with current state

Failure Indicators

This skill has FAILED if:

  • ❌ Session check-in lost user's context (no memory of previous work)
  • ❌ Milestones not detected (major achievements ignored)
  • ❌ Stuck user not offered help (frustration unaddressed)
  • ❌ Confidence calibration wrong (too much or too little guidance)
  • ❌ Agent recommendations mismatched (wrong agent for task)
  • ❌ Progress report inaccurate or demotivating
  • ❌ Guidance level not adapting to user signals
  • ❌ Configuration files not updated or corrupted

When NOT to Use

Do NOT use this skill when:

  • User is an experienced developer (use standard workflow instead)
  • User explicitly disabled companion mode
  • Working on advanced topics beyond companion scope
  • User has graduated from companion mode
  • Project is not user's first project
  • User prefers minimal guidance/hand-holding
  • Task requires expert-level assistance (use specialized agents)

Use instead:

  • Standard agent workflow for experienced users
  • Specialized agents for domain-specific tasks
  • coditect-onboarding for initial framework setup
  • orchestrator for complex multi-agent coordination
  • Direct documentation for self-directed learners

Anti-Patterns (Avoid)

| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| Over-celebration | Trivial actions celebrated, loses meaning | Reserve celebration for genuine milestones |
| Under-celebration | Major achievements ignored, demotivating | Detect and celebrate all significant milestones |
| Too much hand-holding | User becomes dependent, doesn't learn | Gradually reduce guidance as confidence grows |
| Too little support | User gets stuck and frustrated | Monitor stuck indicators, offer help proactively |
| Wrong guidance level | Patronizing or overwhelming | Calibrate based on confidence signals |
| Generic responses | "Good job!" not personalized | Reference specific achievements and challenges |
| Ignoring frustration | User quits when stuck | Detect frustration early, offer empathy and help |
| No progression tracking | Can't adapt guidance appropriately | Track milestones, confidence, and stuck patterns |

Principles

This skill embodies CODITECT principles:

  • #2 First Principles - Understands WHY users get stuck (learning curve, complexity)
  • #3 Keep It Simple - Breaks complex tasks into manageable steps
  • #5 Eliminate Ambiguity - Clear next steps, explicit task identification
  • #6 Clear, Understandable, Explainable - Explains concepts at appropriate depth
  • #7 Accessible to All - Adapts to user's skill level and confidence
  • #8 No Assumptions - Detects stuck patterns, doesn't assume understanding
  • #10 Provide Context - Always reminds user where they were, what's next

Full Standard: CODITECT-STANDARD-AUTOMATION.md

Integration Points

  • Agent: agents/first-project-companion.md
  • Onboarding Agent: agents/coditect-onboarding.md
  • Onboarding Skill: skills/coditect-onboarding/SKILL.md
  • New Project Command: commands/new-project.md

Data Dependencies

  • .coditect/companion-config.json - Read/write
  • .coditect/companion-progress.json - Read/write
  • docs/project-management/tasklist.md - Read (for progress)
  • .coditect/onboarding-progress.json - Read (for context)

Author: CODITECT Framework Team | Framework: CODITECT v1.0 | Version: 1.0.0 | Updated: 2026-01-04