Agent-Skills Implementation Patterns
Document Type: Technical Implementation Guide
Audience: Engineering team implementing agent-skill separation
Date: December 20, 2025
Version: 1.0.0
Status: Reference Implementation Patterns
Table of Contents
- Agent Skills Framework JSON Schema
- Progressive Disclosure Implementation
- A2A Agent Card Generation
- Skill Composition Engine
- Migration Patterns
- Code Examples
Agent Skills Framework JSON Schema
Level 1: Skill Card (Discovery - 20 tokens)
Purpose: Ultra-lightweight skill discovery for agent capability browsing
{
  "$schema": "https://agent-skills.anthropic.com/v1/card.schema.json",
  "skill_id": "coditect:api-design:v1.2.0",
  "name": "API Design",
  "description": "RESTful and GraphQL API design patterns",
  "category": "development",
  "tags": ["api", "rest", "graphql", "openapi"]
}
Token Budget: ~20 tokens (a handful of short fields at a few tokens each)
Use Case: Agent scans 244 skills to find relevant capabilities
- Without progressive disclosure: 244 × 4500 tokens = 1,098,000 tokens
- With card-level: 244 × 20 tokens = 4,880 tokens
- Savings: 99.6% reduction
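The discovery-stage arithmetic above can be re-derived in a few lines:

```python
# Re-derive the discovery-stage numbers quoted above.
NUM_SKILLS = 244
CARD_TOKENS = 20      # Level 1 card budget
FULL_TOKENS = 4500    # Level 3 full-spec budget

full_scan = NUM_SKILLS * FULL_TOKENS   # cost without progressive disclosure
card_scan = NUM_SKILLS * CARD_TOKENS   # cost at card level
savings = 1 - card_scan / full_scan

print(f"full={full_scan:,} card={card_scan:,} savings={savings:.1%}")
# → full=1,098,000 card=4,880 savings=99.6%
```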
Level 2: Skill Summary (Selection - 150 tokens)
Purpose: Detailed capability overview for skill selection decisions
{
  "$schema": "https://agent-skills.anthropic.com/v1/summary.schema.json",
  "skill_id": "coditect:api-design:v1.2.0",
  "name": "API Design",
  "description": "Comprehensive RESTful and GraphQL API design patterns with OpenAPI specifications",
  "version": "1.2.0",
  "category": "development",
  "tags": ["api", "rest", "graphql", "openapi", "swagger"],
  "capabilities": [
    "RESTful API resource design and URL structure",
    "GraphQL schema design and resolver patterns",
    "OpenAPI 3.0 specification generation",
    "API versioning strategies (URL, header, content negotiation)",
    "Request/response payload design",
    "Error handling and status code patterns",
    "Authentication integration (JWT, OAuth2, API keys)",
    "Rate limiting and pagination patterns"
  ],
  "tools": ["Read", "Write", "Edit", "Bash"],
  "dependencies": ["authentication-implementation", "database-design"],
  "metadata": {
    "token_budget": 4500,
    "complexity": "medium",
    "maturity": "production",
    "author": "coditect-core",
    "last_updated": "2025-12-20"
  },
  "examples": [
    {
      "id": "rest-crud-api",
      "description": "Complete CRUD API with OpenAPI spec",
      "language": "python"
    },
    {
      "id": "graphql-schema",
      "description": "GraphQL schema with nested resolvers",
      "language": "typescript"
    }
  ],
  "quick_start": "Use for designing production-ready APIs with proper resource modeling, error handling, and OpenAPI documentation. Supports both RESTful and GraphQL paradigms."
}
Token Budget: ~150 tokens (comprehensive but concise)
Use Case: Agent has shortlisted 5 candidate skills and reads their summaries to choose the best fit
- 5 × 150 tokens = 750 tokens (instead of 5 × 4500 = 22,500 tokens)
- Savings: 96.7% reduction
Level 3: Full Specification (Execution - 4500 tokens)
Purpose: Complete implementation patterns and code examples
{
  "$schema": "https://agent-skills.anthropic.com/v1/full.schema.json",
  "skill_id": "coditect:api-design:v1.2.0",
  "name": "API Design",
  "description": "Comprehensive RESTful and GraphQL API design patterns",
  "version": "1.2.0",
  "full_specification": {
    "overview": "...",
    "when_to_use": "...",
    "core_capabilities": "...",
    "implementation_patterns": {
      "rest_api_design": {
        "resource_modeling": "...",
        "url_structure": "...",
        "http_methods": "...",
        "status_codes": "...",
        "code_examples": [
          {
            "language": "python",
            "framework": "fastapi",
            "code": "..."
          }
        ]
      },
      "graphql_design": {
        "schema_definition": "...",
        "resolver_patterns": "...",
        "code_examples": [...]
      }
    },
    "integration_points": "...",
    "best_practices": "...",
    "common_issues": "..."
  },
  "code_examples": [...],
  "templates": [...],
  "references": [...]
}
Token Budget: 4500 tokens (full implementation guide)
Use Case: Agent has committed to a skill and loads its full spec for execution
Progressive Disclosure Implementation
Python Skill Loader
# coditect_core/skills/loader.py
import json
from pathlib import Path
from typing import Any, Dict, List, Literal, Optional

SkillLevel = Literal["card", "summary", "full"]


class SkillLoader:
    """Progressive disclosure skill loader with 3-level caching."""

    def __init__(self, skills_dir: Path):
        self.skills_dir = skills_dir
        self._card_cache: Dict[str, dict] = {}
        self._summary_cache: Dict[str, dict] = {}
        self._full_cache: Dict[str, dict] = {}

    def load_skill(
        self,
        skill_id: str,
        level: SkillLevel = "card"
    ) -> Dict[str, Any]:
        """
        Load skill at specified disclosure level.

        Args:
            skill_id: Skill identifier (e.g., "api-design") or fully
                qualified id (e.g., "coditect:api-design:v1.2.0")
            level: Disclosure level ("card", "summary", "full")

        Returns:
            Skill definition at requested level

        Token Budgets:
            - card: ~20 tokens
            - summary: ~150 tokens
            - full: ~4500 tokens
        """
        # Normalize "namespace:skill:version" ids to the bare directory name,
        # so cards (which carry fully qualified ids) can be passed back in.
        if ":" in skill_id:
            skill_id = skill_id.split(":")[1]
        if level == "card":
            return self._load_card(skill_id)
        elif level == "summary":
            return self._load_summary(skill_id)
        else:
            return self._load_full(skill_id)

    def _load_card(self, skill_id: str) -> dict:
        """Load Level 1: Card (~20 tokens)."""
        if skill_id in self._card_cache:
            return self._card_cache[skill_id]
        card_path = self.skills_dir / skill_id / "card.json"
        with open(card_path) as f:
            card = json.load(f)
        self._card_cache[skill_id] = card
        return card

    def _load_summary(self, skill_id: str) -> dict:
        """Load Level 2: Summary (~150 tokens)."""
        if skill_id in self._summary_cache:
            return self._summary_cache[skill_id]
        summary_path = self.skills_dir / skill_id / "summary.json"
        with open(summary_path) as f:
            summary = json.load(f)
        self._summary_cache[skill_id] = summary
        return summary

    def _load_full(self, skill_id: str) -> dict:
        """Load Level 3: Full specification (~4500 tokens)."""
        if skill_id in self._full_cache:
            return self._full_cache[skill_id]
        full_path = self.skills_dir / skill_id / "SKILL.json"
        with open(full_path) as f:
            full_spec = json.load(f)
        self._full_cache[skill_id] = full_spec
        return full_spec

    def discover_skills(
        self,
        query: Optional[str] = None,
        category: Optional[str] = None,
        tags: Optional[List[str]] = None
    ) -> List[dict]:
        """
        Discover skills using card-level search.

        Token-efficient skill discovery:
        - Loads only cards (~20 tokens each)
        - Filters by query/category/tags
        - Returns matching skills for further inspection
        """
        all_cards = []
        # Scan skills directory for card.json files
        for skill_dir in self.skills_dir.iterdir():
            if skill_dir.is_dir():
                try:
                    card = self._load_card(skill_dir.name)
                    all_cards.append(card)
                except FileNotFoundError:
                    continue

        # Filter by criteria
        results = all_cards
        if category:
            results = [c for c in results if c.get("category") == category]
        if tags:
            results = [
                c for c in results
                if any(tag in c.get("tags", []) for tag in tags)
            ]
        if query:
            query_lower = query.lower()
            results = [
                c for c in results
                if query_lower in c.get("name", "").lower()
                or query_lower in c.get("description", "").lower()
            ]
        return results


# Usage Example
loader = SkillLoader(Path("skills/"))

# Step 1: Discovery (20 tokens × 244 skills = 4,880 tokens)
api_skills = loader.discover_skills(category="development", tags=["api"])
print(f"Found {len(api_skills)} API-related skills")

# Step 2: Selection (150 tokens × 5 candidates = 750 tokens)
for skill in api_skills[:5]:
    summary = loader.load_skill(skill["skill_id"], level="summary")
    print(f"{summary['name']}: {summary['quick_start']}")

# Step 3: Execution (4500 tokens × 1 selected = 4,500 tokens)
chosen_skill = loader.load_skill("api-design", level="full")

# Total: 4,880 + 750 + 4,500 = 10,130 tokens
# vs. loading all 244 full skills: 1,098,000 tokens
# Savings: 99.1%
A2A Agent Card Generation
Agent Card Schema (A2A Protocol v1.0)
{
  "$schema": "https://a2a-protocol.org/v1/agent-card.schema.json",
  "agent_id": "coditect:codi-backend-engineer:v2.0.0",
  "name": "Backend Engineering Specialist",
  "description": "Expert backend developer specializing in API design, database architecture, and authentication systems",
  "version": "2.0.0",
  "persona": {
    "role": "Backend Engineer",
    "expertise_level": "senior",
    "domains": ["web-services", "databases", "security", "testing"]
  },
  "capabilities": [
    {
      "skill_id": "coditect:api-design:v1.2.0",
      "skill_name": "API Design",
      "proficiency": "expert",
      "description": "RESTful and GraphQL API design"
    },
    {
      "skill_id": "coditect:database-design:v1.1.0",
      "skill_name": "Database Design",
      "proficiency": "expert",
      "description": "Schema design and optimization"
    },
    {
      "skill_id": "coditect:authentication-implementation:v1.0.5",
      "skill_name": "Authentication Implementation",
      "proficiency": "expert",
      "description": "JWT, OAuth2, session management"
    },
    {
      "skill_id": "coditect:testing-automation:v1.3.0",
      "skill_name": "Testing Automation",
      "proficiency": "advanced",
      "description": "API testing and integration tests"
    },
    {
      "skill_id": "coditect:code-review:v1.0.0",
      "skill_name": "Code Review",
      "proficiency": "advanced",
      "description": "Security and best practices review"
    }
  ],
  "protocols": {
    "a2a": "1.0",
    "mcp": "1.0",
    "agent_skills": "1.0"
  },
  "tools": ["Read", "Write", "Edit", "Bash", "Grep", "Glob"],
  "availability": {
    "status": "available",
    "max_concurrent_tasks": 3,
    "average_response_time_ms": 2000
  },
  "communication": {
    "languages": ["english"],
    "preferred_format": "markdown",
    "accepts_tasks_from": ["orchestrator", "frontend-engineer", "devops-engineer"]
  },
  "metadata": {
    "created": "2025-01-15",
    "last_updated": "2025-12-20",
    "author": "coditect-core",
    "license": "MIT",
    "documentation_uri": "https://coditect.ai/agents/backend-engineer"
  }
}
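As a sanity check during card generation, a minimal structural validator can be sketched in a few lines. The required keys below are taken from the example card above, not from a published A2A schema, so treat this as an illustration rather than the official validator:

```python
# Minimal structural check for the agent-card shape shown above.
# NOTE: required keys are inferred from the example card, not a formal schema.
REQUIRED_KEYS = {"agent_id", "name", "description", "version",
                 "persona", "capabilities", "protocols"}

def check_agent_card(card: dict) -> list[str]:
    """Return a list of human-readable problems (empty list = card looks OK)."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - card.keys())]
    for i, cap in enumerate(card.get("capabilities", [])):
        if "skill_id" not in cap:
            problems.append(f"capabilities[{i}] lacks skill_id")
    return problems
```

A generator pipeline could run this on every emitted card and refuse to write files that report problems.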
Python Agent Card Generator
# coditect_core/agents/card_generator.py
import json
from pathlib import Path
from typing import Any, Dict, List

import yaml


class AgentCardGenerator:
    """Generate A2A protocol-compliant agent cards from agent definitions."""

    def __init__(self, agents_dir: Path, skills_dir: Path):
        self.agents_dir = agents_dir
        self.skills_dir = skills_dir

    def generate_agent_card(self, agent_id: str) -> Dict[str, Any]:
        """
        Generate A2A agent card from CODITECT agent definition.

        Args:
            agent_id: Agent identifier (e.g., "codi-backend-engineer")

        Returns:
            A2A-compliant agent card JSON
        """
        # Load existing agent definition
        agent_path = self.agents_dir / f"{agent_id}.md"
        agent_def = self._parse_agent_md(agent_path)

        # Extract skills from agent definition
        skills = self._extract_skills(agent_def)

        # Generate agent card
        card = {
            "$schema": "https://a2a-protocol.org/v1/agent-card.schema.json",
            "agent_id": f"coditect:{agent_id}:v2.0.0",
            "name": agent_def.get("name", agent_id),
            "description": agent_def.get("description", ""),
            "version": "2.0.0",
            "persona": {
                "role": agent_def.get("role", "Specialist"),
                "expertise_level": agent_def.get("expertise_level", "senior"),
                "domains": agent_def.get("domains", [])
            },
            "capabilities": skills,
            "protocols": {
                "a2a": "1.0",
                "mcp": "1.0",
                "agent_skills": "1.0"
            },
            "tools": agent_def.get("tools", []),
            "availability": {
                "status": "available",
                "max_concurrent_tasks": 3,
                "average_response_time_ms": 2000
            },
            "communication": {
                "languages": ["english"],
                "preferred_format": "markdown",
                "accepts_tasks_from": agent_def.get("collaborates_with", [])
            },
            "metadata": {
                "created": agent_def.get("created", "2025-01-15"),
                "last_updated": "2025-12-20",
                "author": "coditect-core",
                "license": "MIT"
            }
        }
        return card

    def _parse_agent_md(self, path: Path) -> dict:
        """Parse agent markdown file with YAML frontmatter."""
        with open(path) as f:
            content = f.read()
        # Extract YAML frontmatter (the ----delimited block at the top)
        if content.startswith("---"):
            parts = content.split("---", 2)
            if len(parts) >= 3:
                return yaml.safe_load(parts[1]) or {}
        return {}

    def _extract_skills(self, agent_def: dict) -> List[dict]:
        """
        Extract skill capabilities from agent definition.

        Strategy:
        1. Check if agent has an explicit "skills" field
        2. Infer skills from agent responsibilities/capabilities
        3. Map to existing skill IDs from the skills registry
        """
        explicit_skills = agent_def.get("skills", [])
        if explicit_skills:
            return [
                {
                    "skill_id": f"coditect:{skill}:v1.0.0",
                    "skill_name": skill.replace("-", " ").title(),
                    "proficiency": "expert"
                }
                for skill in explicit_skills
            ]
        # Fallback: infer from agent description/capabilities
        # (This would be more sophisticated in production)
        return []

    def generate_all_agent_cards(self) -> Dict[str, dict]:
        """Generate agent cards for all agents in the repository."""
        cards = {}
        for agent_file in self.agents_dir.glob("*.md"):
            if agent_file.stem in ["README", "INDEX"]:
                continue
            agent_id = agent_file.stem
            try:
                card = self.generate_agent_card(agent_id)
                cards[agent_id] = card
                # Save card to JSON file
                card_path = self.agents_dir / f"{agent_id}.card.json"
                with open(card_path, "w") as f:
                    json.dump(card, f, indent=2)
                print(f"✓ Generated card for {agent_id}")
            except Exception as e:
                print(f"✗ Failed to generate card for {agent_id}: {e}")
        return cards


# Usage
generator = AgentCardGenerator(
    agents_dir=Path("agents/"),
    skills_dir=Path("skills/")
)

# Generate all agent cards
cards = generator.generate_all_agent_cards()
print(f"Generated {len(cards)} agent cards")
Skill Composition Engine
Dynamic Skill Assembly
# coditect_core/composition/engine.py
from dataclasses import dataclass
from typing import Any, Dict, List, Optional


@dataclass
class SkillRequirement:
    """Skill required for task execution."""
    skill_id: str
    required: bool = True
    alternatives: Optional[List[str]] = None


@dataclass
class ComposedAgent:
    """Dynamically composed agent from skill assembly."""
    agent_id: str
    base_persona: str
    skills: List[dict]
    tools: List[str]
    total_token_budget: int


class SkillCompositionEngine:
    """
    Compose agents dynamically from skill requirements.

    Replaces static agent definitions with on-demand assembly.
    """

    def __init__(self, skill_loader, agent_registry):
        self.skill_loader = skill_loader
        self.agent_registry = agent_registry

    def compose_agent(
        self,
        task_description: str,
        base_persona: Optional[str] = None
    ) -> ComposedAgent:
        """
        Dynamically compose agent from task requirements.

        Workflow:
        1. Analyze task description to identify required skills
        2. Discover candidate skills via progressive disclosure
        3. Resolve skill dependencies
        4. Assemble composed agent with full skill specs

        Args:
            task_description: Natural language task description
            base_persona: Optional base agent persona to start from

        Returns:
            Composed agent ready for task execution
        """
        # Step 1: Analyze task to identify skill requirements
        required_skills = self._analyze_task_requirements(task_description)

        # Step 2: Discover candidate skills (card-level, ~20 tokens each)
        candidate_skills = []
        for req in required_skills:
            matches = self.skill_loader.discover_skills(
                query=req.skill_id,
                tags=self._extract_tags(task_description)
            )
            candidate_skills.extend(matches)

        # Step 3: Select best skills (summary-level, ~150 tokens each)
        selected_skills = self._select_skills(candidate_skills, task_description)

        # Step 4: Resolve dependencies
        all_skills = self._resolve_dependencies(selected_skills)

        # Step 5: Load full specifications (~4500 tokens each)
        full_skills = [
            self.skill_loader.load_skill(skill["skill_id"], level="full")
            for skill in all_skills
        ]

        # Step 6: Assemble composed agent
        composed = ComposedAgent(
            agent_id=f"composed_{hash(task_description) % 10000}",
            base_persona=base_persona or "Generalist Agent",
            skills=full_skills,
            tools=self._aggregate_tools(full_skills),
            # token_budget lives under "metadata" (see the summary schema)
            total_token_budget=sum(
                s.get("metadata", {}).get("token_budget", 0) for s in full_skills
            )
        )
        return composed

    def _analyze_task_requirements(self, task: str) -> List[SkillRequirement]:
        """
        Analyze task description to identify required skills.

        Uses keyword matching, NLP, and pattern recognition.
        """
        requirements = []

        # Keyword-based skill detection
        skill_keywords = {
            "api": ["api-design", "authentication-implementation"],
            "database": ["database-design", "query-optimization"],
            "security": ["security-audit", "authentication-implementation"],
            "test": ["testing-automation", "code-review"],
            "deploy": ["ci-cd-configuration", "docker-deployment"]
        }

        task_lower = task.lower()
        for keyword, skills in skill_keywords.items():
            if keyword in task_lower:
                for skill in skills:
                    requirements.append(SkillRequirement(skill_id=skill))
        return requirements

    def _select_skills(self, candidates: List[dict], task: str) -> List[dict]:
        """
        Select best skills from candidates using summary-level inspection.

        Loads summaries (~150 tokens each) to make the selection decision.
        """
        scored_skills = []
        for candidate in candidates:
            summary = self.skill_loader.load_skill(
                candidate["skill_id"],
                level="summary"
            )
            # Score skill based on:
            # - Capability match with task
            # - Maturity level
            # - Token budget efficiency
            score = self._score_skill(summary, task)
            scored_skills.append((score, summary))

        # Select top-scoring skills
        scored_skills.sort(reverse=True, key=lambda x: x[0])
        return [skill for score, skill in scored_skills[:5]]

    def _resolve_dependencies(self, skills: List[dict]) -> List[dict]:
        """
        Resolve skill dependencies recursively.

        Example: api-design depends on authentication-implementation
        """
        all_skills = set(s["skill_id"] for s in skills)
        queue = list(skills)
        while queue:
            skill = queue.pop(0)
            deps = skill.get("dependencies", [])
            for dep in deps:
                if dep not in all_skills:
                    dep_skill = self.skill_loader.load_skill(dep, level="summary")
                    queue.append(dep_skill)
                    all_skills.add(dep)
        return [
            self.skill_loader.load_skill(sid, level="summary")
            for sid in all_skills
        ]

    def _aggregate_tools(self, skills: List[dict]) -> List[str]:
        """Aggregate all required tools from skills."""
        tools = set()
        for skill in skills:
            tools.update(skill.get("tools", []))
        return list(tools)

    def _extract_tags(self, task: str) -> List[str]:
        """Extract tags from task description."""
        # Simplified tag extraction (production would use NLP)
        return task.lower().split()

    def _score_skill(self, skill: dict, task: str) -> float:
        """Score skill relevance to task."""
        score = 0.0

        # Check capability match
        task_lower = task.lower()
        for capability in skill.get("capabilities", []):
            if any(word in task_lower for word in capability.lower().split()):
                score += 10

        # Bonus for production maturity
        if skill.get("metadata", {}).get("maturity") == "production":
            score += 5

        # Penalty for large token budgets (prefer efficiency)
        token_budget = skill.get("metadata", {}).get("token_budget", 5000)
        score -= (token_budget / 1000) * 0.5
        return score


# Usage Example
engine = SkillCompositionEngine(skill_loader, agent_registry)

# User command: /create-secure-api "User authentication REST API"
task = "Create a secure REST API for user authentication with JWT tokens"

# Compose agent dynamically
agent = engine.compose_agent(task, base_persona="Backend Engineer")

print(f"Composed Agent: {agent.agent_id}")
print(f"Skills: {[s['name'] for s in agent.skills]}")
print(f"Total Token Budget: {agent.total_token_budget}")
print(f"Required Tools: {agent.tools}")

# Output:
# Composed Agent: composed_1234
# Skills: ['API Design', 'Authentication Implementation', 'Security Audit', 'Testing Automation']
# Total Token Budget: 18000
# Required Tools: ['Read', 'Write', 'Edit', 'Bash', 'Grep']
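Once a ComposedAgent is assembled, it still has to be rendered into a context payload for the model. The layout below (a persona header followed by one section per skill) is an assumed convention for illustration, not a CODITECT-specified format:

```python
def render_context(agent) -> str:
    """Flatten a composed agent into a single prompt-style context string.

    `agent` is expected to look like the ComposedAgent dataclass above:
    base_persona (str), tools (list[str]), skills (list[dict]).
    """
    sections = [
        f"# Persona: {agent.base_persona}",
        f"Tools available: {', '.join(sorted(agent.tools))}",
    ]
    for skill in agent.skills:
        # Fall back to the skill_id when the loaded spec has no display name.
        sections.append(f"## Skill: {skill.get('name', skill.get('skill_id', '?'))}")
        sections.append(skill.get("description", ""))
    return "\n\n".join(sections)
```

Keeping rendering separate from composition makes it easy to experiment with different context layouts without touching the engine.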
Migration Patterns
Pattern 1: Extract Skills from Existing Agent
Before (Monolithic Agent):
---
name: codi-backend-engineer
description: Backend development expert
tools: Read, Write, Edit, Bash
---
## Core Responsibilities
### 1. API Design
- Design RESTful APIs with proper resource modeling
- Create OpenAPI specifications
- Implement versioning strategies
[... 2000 lines of API design patterns ...]
### 2. Database Design
- Schema design and normalization
- Index optimization
[... 1500 lines of database patterns ...]
### 3. Authentication
- JWT implementation
- OAuth2 flows
[... 1000 lines of auth patterns ...]
After (Agent + Skills):
Agent Definition (codi-backend-engineer.md):
---
name: codi-backend-engineer
description: Backend development expert
tools: Read, Write, Edit, Bash
skills:
  - api-design
  - database-design
  - authentication-implementation
  - testing-automation
  - code-review
persona:
  role: Backend Engineer
  expertise_level: senior
---
## Role Description
Expert backend developer specializing in scalable web services.
## Approach
- Security-first design principles
- Test-driven development
- Performance optimization focus
Skill Files (skills/api-design/):
skills/api-design/
├── card.json       # Level 1: Discovery (20 tokens)
├── summary.json    # Level 2: Selection (150 tokens)
├── SKILL.json      # Level 3: Full spec (4500 tokens)
└── examples/
    ├── rest-crud.py
    └── graphql-schema.ts
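A short helper can scaffold this three-level layout for a new skill during migration. The file names follow the tree above; the stub contents are illustrative placeholders to be filled in by hand:

```python
# Sketch: scaffold the card/summary/SKILL layout for a new skill.
# Stub contents are placeholders, not a real skill specification.
import json
from pathlib import Path

def scaffold_skill(skills_dir: Path, skill_id: str, name: str, description: str) -> Path:
    """Create the three-level skill directory and return its path."""
    skill_dir = skills_dir / skill_id
    (skill_dir / "examples").mkdir(parents=True, exist_ok=True)

    card = {"skill_id": skill_id, "name": name, "description": description}
    (skill_dir / "card.json").write_text(json.dumps(card, indent=2))
    # Summary and full spec start as supersets of the card.
    (skill_dir / "summary.json").write_text(
        json.dumps({**card, "capabilities": []}, indent=2))
    (skill_dir / "SKILL.json").write_text(
        json.dumps({**card, "full_specification": {}}, indent=2))
    return skill_dir
```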
Pattern 2: Skill Composition Examples
Example 1: Simple Composition
# User: /create-api "Blog post CRUD API"
# Engine composes:
agent = compose_agent(
    task="Blog post CRUD API",
    base_persona="Backend Engineer"
)

# Result:
# - api-design skill (RESTful patterns)
# - database-design skill (blog post schema)
# - testing-automation skill (API tests)
# Total: 3 skills, 13,500 tokens
Example 2: Complex Composition with Dependencies
# User: /deploy-secure-service "Microservice with auth and monitoring"
# Engine composes:
agent = compose_agent(
    task="Microservice with auth and monitoring",
    base_persona="DevOps Engineer"
)

# Result (with dependency resolution):
# - ci-cd-configuration (requested)
# - docker-deployment (requested)
# - authentication-implementation (requested)
# - monitoring-setup (requested)
# - security-audit (dependency of auth)
# - logging (dependency of monitoring)
# Total: 6 skills, 27,000 tokens
Code Examples
Example 1: Skill Discovery CLI
# cli/skill_discovery.py
from pathlib import Path

import click

from coditect_core.skills.loader import SkillLoader


@click.group()
def skill_cli():
    """CODITECT Skill Discovery CLI"""
    pass


@skill_cli.command()
@click.option("--category", help="Filter by category")
@click.option("--tags", help="Comma-separated tags")
@click.option("--query", help="Search query")
def discover(category, tags, query):
    """Discover skills with progressive disclosure."""
    loader = SkillLoader(Path("skills/"))
    tag_list = tags.split(",") if tags else None

    # Level 1: Card-level discovery
    results = loader.discover_skills(
        query=query,
        category=category,
        tags=tag_list
    )
    click.echo(f"Found {len(results)} skills (card-level scan)")
    for card in results:
        click.echo(f"\n{card['name']}")
        click.echo(f"  ID: {card['skill_id']}")
        click.echo(f"  Description: {card['description']}")

        # Ask user if they want the summary
        if click.confirm("  Load summary?"):
            summary = loader.load_skill(card["skill_id"], level="summary")
            click.echo("\n  Capabilities:")
            for cap in summary["capabilities"]:
                click.echo(f"    - {cap}")


@skill_cli.command()
@click.argument("skill_id")
def load(skill_id):
    """Load full skill specification."""
    loader = SkillLoader(Path("skills/"))

    # Progressive loading demonstration
    click.echo("Loading skill...")
    card = loader.load_skill(skill_id, level="card")
    click.echo(f"Card loaded: {card['name']} (~20 tokens)")

    if click.confirm("Load summary?"):
        summary = loader.load_skill(skill_id, level="summary")
        click.echo("Summary loaded (~150 tokens)")
        click.echo(f"Capabilities: {len(summary['capabilities'])}")

    if click.confirm("Load full specification?"):
        full = loader.load_skill(skill_id, level="full")
        click.echo("Full spec loaded (~4500 tokens)")
        click.echo("Ready for execution")


if __name__ == "__main__":
    skill_cli()
Usage:
# Discover API-related skills
python cli/skill_discovery.py discover --category development --tags api
# Load specific skill
python cli/skill_discovery.py load api-design
Summary
Key Takeaways
- Progressive Disclosure is Essential
  - 3-level loading (card/summary/full) reduces tokens by 99.6%
  - Enables efficient skill discovery and selection
- Agent-Skill Separation Enables Composition
  - Agents become lightweight personas
  - Skills become portable, reusable capabilities
  - Dynamic composition replaces static definitions
- Standards Alignment is Critical
  - A2A Agent Cards enable agent discovery
  - Agent Skills Framework provides cross-platform portability
  - MCP integration connects to the tool ecosystem
- Migration is Gradual
  - Extract skills from existing agents incrementally
  - Maintain backward compatibility during the transition
  - Use the composition engine to phase out monolithic agents
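One way to keep backward compatibility during the transition is a load path that prefers the extracted skill artifacts and falls back to the legacy monolithic definition. The paths and return shape below are illustrative assumptions, not an existing CODITECT API:

```python
# Sketch: backward-compatible agent loading during migration.
# File locations ("<agent_id>.card.json" next to "<agent_id>.md") and the
# returned dict shape are assumptions for illustration.
import json
from pathlib import Path

def load_agent_capabilities(agents_dir: Path, skills_dir: Path, agent_id: str) -> dict:
    """Prefer the extracted agent card; fall back to the monolithic .md file."""
    card_path = agents_dir / f"{agent_id}.card.json"
    if card_path.exists():
        return {"source": "skills", "card": json.loads(card_path.read_text())}
    # Legacy path: return the monolithic definition verbatim so existing
    # callers keep working until this agent is migrated.
    legacy_path = agents_dir / f"{agent_id}.md"
    return {"source": "legacy", "definition": legacy_path.read_text()}
```

Callers can branch on `source`, which makes the eventual removal of the legacy path a grep-able change.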
Next Steps
- Pilot Implementation (Weeks 1-2)
  - Select 5 pilot agents for refactoring
  - Extract 10-15 skills
  - Implement the progressive disclosure loader
  - Generate agent cards
- Validation (Week 3)
  - Test the skill composition engine
  - Measure token savings
  - Validate A2A protocol compliance
- Scale (Weeks 4-18)
  - Migrate all 122 agents
  - Create 183+ skills
  - Deploy the skill discovery API
  - Launch the public skill registry
Document Status: Reference Implementation
Next Review: January 15, 2026
Related: MOE-STRATEGIC-ANALYSIS-AGENT-SKILLS-STANDARDS.md