
analyze the requirements for a UI/UX agentic system agent with skills, commands, tools, scripts, and hooks, following the Anthropic Claude Code agent-skill process flow

A UI/UX-facing “Claude-style” agent for skills/commands/tools needs to model the same lifecycle Anthropic uses (discovery → activation → execution → reflection), surfaced as an interaction contract between agents, frontends, and humans-in-the-loop.

Below is a requirements breakdown you can turn into specs, schemas, or prompts for your own platform.


Core lifecycle model

Design the UI/UX agent around an explicit agent lifecycle, mirroring Claude Code’s skills flow.

  • Discovery phase
    • Skill registry with name + short description discoverable by the model at startup without loading full instructions, to keep context lean.
    • UX affordances for surfacing discoverable skills as suggestions, typeahead, and contextual “available actions” based on the current task.
  • Activation & permissions
    • “Ask to use skill” confirmation step before loading the full SKILL.md / capability spec into context, with clear UX prompts for user approval, denial, or scoping (e.g., “only for this file/folder/session”).
    • Policy- and context-aware prompts/UI (e.g., show file access scope, network/tool access, data classification labels).
  • Execution & monitoring
    • Streaming interaction channel that exposes intermediate tool calls, subagent delegation, and hook events to the UI (similar to AG-UI’s structured event model).
    • Visual indicators for: active skill, current step, tool usage, background tasks, and pending human approvals.
  • Reflection & learning
    • Surfaces for viewing what the agent learned from corrections (e.g., “autoskill” patterns that update durable preferences from user feedback).
    • UX to accept/reject suggested skill updates or new skills, with change previews and rollback.
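The four phases above can be sketched as an explicit state machine the UI subscribes to. This is illustrative only; the phase names and transition table are assumptions, not an Anthropic API.

```typescript
// Hypothetical sketch: the four lifecycle phases as a state machine
// the UI can subscribe to. Names are illustrative, not a real API.
type LifecyclePhase = "discovery" | "activation" | "execution" | "reflection";

// Legal transitions: discovery → activation → execution → reflection,
// plus execution → activation (a new skill is requested mid-task) and
// activation → discovery (the user denied the skill).
const TRANSITIONS: Record<LifecyclePhase, LifecyclePhase[]> = {
  discovery: ["activation"],
  activation: ["execution", "discovery"],
  execution: ["reflection", "activation"],
  reflection: ["discovery"], // learned updates feed back into the registry
};

function canTransition(from: LifecyclePhase, to: LifecyclePhase): boolean {
  return TRANSITIONS[from].includes(to);
}
```

A UI can use `canTransition` to decide which panels (suggestion list, approval modal, execution timeline, diff review) are valid to show next.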

Skills, commands, tools, scripts, hooks (Claude-style)

Model the agent capability layer explicitly, aligned with Claude Code’s skill and hook primitives.

  • Skills
    • Each skill has: metadata (name, description, tags), instructions, optional scoped hooks, bundled scripts, and a visibility flag (user-invocable vs. auto-discovered).
    • UX needs:
      • Slash-command palette for manual invocation (/skill-name).
      • Contextual suggestions (“Claude suggests: Frontend design skill”) based on the current buffer/intent.
      • Skill detail panel showing description, required permissions, and recent runs.
  • Commands
    • Thin, user-facing entrypoints mapped to skills/tools (e.g., “Refactor file”, “Generate test suite”), similar to Claude Code command lists.
    • Keyboard-first UX: command palette, configurable shortcuts, toggle to show/hide advanced commands.
    • Command categories for navigation (code, docs, infra, review, data, etc.).
  • Tools & programmatic tool use
    • Tools exposed to the agent via a code-level API for multi-step orchestration (Anthropic’s “Programmatic Tool Calling”).
    • Requirements:
      • Typed tool schemas (input/output), error surfaces, and user-readable labels.
      • Safe defaults: dry-run mode, preview mode, tool access policies rendered in the UI (e.g., “Can call prod API? No”).
      • Ability to show tool chains and dependency graphs for advanced users.
  • Scripts & automation
    • Scripts as reusable, parameterized routines associated with a skill (e.g., run_tests.sh, lint_and_fix.sh).
    • UI for: creating/editing scripts, viewing logs, re-running with adjusted parameters, and bookmarking common sequences.
  • Hooks
    • Lifecycle hooks like PreToolUse, PostToolUse, and Stop, scoped to skills or sessions.
    • UX elements:
      • Hook configuration panel per skill (“Run tests on save”, “Auto-format before commit”).
      • Event timeline showing hook triggers and outcomes.
      • Toggle controls to quickly disable/override hooks for a session.
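A minimal sketch of how these primitives could hang together in one skill manifest. The field names are assumptions for illustration, not the actual SKILL.md format.

```typescript
// Illustrative manifest tying metadata, scripts, and scoped hooks
// together for one skill; not the real Claude Code SKILL.md schema.
interface SkillManifest {
  name: string;                 // shown in the slash-command palette
  description: string;          // loaded at startup for discovery
  userInvocable: boolean;       // false = auto-discovered only
  scripts: string[];            // e.g. ["run_tests.sh", "lint_and_fix.sh"]
  hooks: Partial<Record<"PreToolUse" | "PostToolUse" | "Stop", string[]>>;
}

// Example skill with a pre-tool formatter check and a stop-time notification.
const frontendDesign: SkillManifest = {
  name: "frontend-design",
  description: "Applies project UI conventions when editing components.",
  userInvocable: true,
  scripts: ["lint_and_fix.sh"],
  hooks: { PreToolUse: ["format-check"], Stop: ["notify-user"] },
};
```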

UI/UX interaction contract (AG-UI / Magentic-UI style)

Define an explicit agent–UI protocol that your UI agent speaks, so any frontend can implement it.

  • Event-based protocol
    • Standardized event types: message, tool_call, tool_result, skill_activation_request, permission_request, layout_update, human_input_request, action_guard, memory_update.
    • Frontend subscribes to a stream of these events and renders them as chat bubbles, side panels, modals, or dashboards.
  • Generative UI patterns
    • Support for static, declarative, and open-ended generative UI layouts, per CopilotKit’s “Generative UI” patterns.
    • Requirements:
      • Static: hand-crafted flows for critical tasks with predictable UI; the agent mainly fills fields and explains choices.
      • Declarative: the agent outputs structured UI schemas (AG-UI or similar) that the frontend renders into components.
      • Open-ended: conversational flows with mixed tool usage and minimal structural constraints, but with visible tool traces for debugging.
  • Agent ↔ frontend separation
    • The agent handles reasoning, tools, and workflows; the UI agent translates between agent events and an AG-UI-like protocol for the frontend.
    • Transport-agnostic design (WebSocket, SSE, HTTP) with a stable schema so multiple clients (IDE, web, mobile) can connect.

Human-in-the-loop and safety

Build HITL patterns directly into the UI agent, inspired by Magentic-UI and HITL surveys.

  • Interaction modes
    • Co-planning: the UI surfaces the agent’s task decomposition plan; the user can reorder, edit, or veto steps before execution.
    • Co-tasking/multi-tasking: parallel tasks with clearly separated threads and shared state; users can jump between tasks and redirect agents.
    • Action guards: certain high-risk actions trigger mandatory human approvals (file writes, prod API calls, data exports).
  • HumanInputRequest UX
    • Standard “HumanInputRequest” event type with: reason, suggested options, freeform field, and timeout/deferral semantics.
    • UI patterns: inline in chat, modal dialogs for critical approvals, batched queues for reviewers in team settings.
  • Oversight & auditability
    • Per-session timeline view: model messages, tool calls, skill activations, hooks, human interventions.
    • Exportable logs for compliance, with references to code diffs, data accesses, and decisions.
    • Configurable oversight levels per workspace/project (e.g., “always ask before external network calls”).
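The configurable oversight levels can be sketched as a simple policy check run before each action. The policy shape and risk taxonomy here are assumptions for illustration.

```typescript
// Hedged sketch: per-workspace oversight policy gating actions before
// execution. Field names and the risk ordering are illustrative.
type Risk = "low" | "medium" | "high";

interface OversightPolicy {
  alwaysAsk: string[];    // action kinds that always need human approval
  autoApproveBelow: Risk; // risk at or above this level requires approval
}

function needsApproval(
  policy: OversightPolicy,
  action: { kind: string; risk: Risk }
): boolean {
  if (policy.alwaysAsk.includes(action.kind)) return true;
  const order: Risk[] = ["low", "medium", "high"];
  return order.indexOf(action.risk) >= order.indexOf(policy.autoApproveBelow);
}

// Example: external network calls always ask; everything else asks
// from "medium" risk upward.
const policy: OversightPolicy = {
  alwaysAsk: ["external_network_call"],
  autoApproveBelow: "medium",
};
```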

Architecture & process flow (referenceable in prompts)

An Anthropic-aligned process flow for the UI/UX agent might look like:

  1. Session start
    • Load skill metadata only; present contextual hints in the UI based on project type/filetype.
  2. User intent capture
    • Chat message, command palette invocation, or UI action (e.g., selecting a code range).
  3. Skill discovery & proposal
    • The agent selects candidate skills/tools; the UI agent emits skill_activation_request with rationale and scope.
  4. Permission and scope negotiation
    • The user approves/denies; optional HITL flow for sensitive tools via action guards.
  5. Execution loop with hooks
    • The agent orchestrates tools/scripts via programmatic tool calling; hooks fire around tool events; the UI displays streaming events and partial results.
  6. Human-in-the-loop checkpoints
    • For designated steps, emit human_input_request and pause; the UI surfaces review/approval and injects responses back into the agent plan.
  7. Reflection and skill evolution
    • The agent proposes updates to skills/preferences; the UI shows a diff and lets the user accept or reject.
  8. Completion & logging
    • Final summary (with diffs, tools used, risks considered); logs stored for observability/compliance.
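The eight steps can be sketched as a driver loop in which a denial at the permission step short-circuits to completion. All names are placeholders, not part of any documented flow.

```typescript
// Illustrative driver for the eight-step process flow described above.
type Step =
  | "session_start" | "intent_capture" | "skill_proposal"
  | "permission_negotiation" | "execution" | "hitl_checkpoint"
  | "reflection" | "completion";

const FLOW: Step[] = [
  "session_start", "intent_capture", "skill_proposal",
  "permission_negotiation", "execution", "hitl_checkpoint",
  "reflection", "completion",
];

// Run each phase in order; a denial at permission time skips straight
// to completion so the outcome is still logged.
function runFlow(approve: (s: Step) => boolean): Step[] {
  const executed: Step[] = [];
  for (const step of FLOW) {
    executed.push(step);
    if (step === "permission_negotiation" && !approve(step)) {
      executed.push("completion");
      break;
    }
  }
  return executed;
}
```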

Suggested next-step prompt for a deeper iteration

You can paste and specialize this:

“Design a Claude-Code-inspired UI/UX agent that implements the lifecycle (discovery → activation → execution → reflection) for skills, commands, tools, scripts, and hooks. Use AG-UI-style event schemas and Magentic-UI human-in-the-loop mechanisms (co-planning, action guards, HumanInputRequest). Target a web/IDE hybrid client. Produce:

  1. JSON schemas for all agent↔UI events (skill activation, tool_call, hook events, human_input_request, layout_update),
  2. A sequence diagram for a single complex task (e.g., ‘implement feature X’), showing skills, subagents, and HITL checkpoints,
  3. A TypeScript interface layer that maps Anthropic programmatic tool calling and skill metadata into the event stream. Cite Anthropic skills/hooks docs, AG-UI, CopilotKit Generative UI, and Magentic-UI.”

Design a Claude-Code-inspired UI/UX agent that implements the lifecycle (discovery → activation → execution → reflection) for skills, commands, tools, scripts, and hooks. Use AG-UI-style event schemas and Magentic-UI human-in-the-loop mechanisms (co-planning, action guards, HumanInputRequest). Target a web/IDE hybrid client. Produce:

1. JSON schemas for all agent↔UI events (skill activation, tool_call, hook events, human_input_request, layout_update),
2. A sequence diagram for a single complex task (e.g., ‘implement feature X’), showing skills, subagents, and HITL checkpoints,
3. A TypeScript interface layer that maps Anthropic programmatic tool calling and skill metadata into the event stream. Cite Anthropic skills/hooks docs, AG-UI, CopilotKit Generative UI, and Magentic-UI.
A Claude-Code-inspired UI/UX agent for a web/IDE hybrid should expose a clean event protocol that mirrors the skills lifecycle (discovery → activation → execution → reflection) while embedding AG-UI-style events and Magentic-UI human-in-the-loop mechanisms.

Below is a compact, implementation-ready design focusing on events, a concrete sequence, and a TypeScript interface layer.


1. JSON schemas for agent↔UI events

Event envelope (AG-UI style unified stream).

{
"$id": "AgentUIEvent",
"type": "object",
"required": ["id", "type", "timestamp", "role", "payload"],
"properties": {
"id": { "type": "string", "description": "Event UUID" },
"type": {
"type": "string",
"enum": [
"TEXT_MESSAGE_START",
"TEXT_MESSAGE_CONTENT",
"TEXT_MESSAGE_END",
"SKILL_SUGGESTION",
"SKILL_ACTIVATION_REQUEST",
"SKILL_ACTIVATION_RESULT",
"TOOL_CALL_START",
"TOOL_CALL_RESULT",
"HOOK_EVENT",
"HUMAN_INPUT_REQUEST",
"HUMAN_INPUT_RESPONSE",
"LAYOUT_UPDATE",
"PLAN_UPDATE",
"ACTION_GUARD_TRIGGER",
"STATE_DELTA",
"ERROR"
]
},
"role": {
"type": "string",
"enum": ["agent", "user", "system"]
},
"timestamp": { "type": "string", "format": "date-time" },
"correlationId": { "type": "string" },
"payload": { "type": "object" }
}
}

Skill discovery & activation events (Claude skills lifecycle)

Discovery → Activation → Execution, mapping the Claude skills docs.

{
"$id": "SkillSuggestionPayload",
"type": "object",
"required": ["skillId", "name", "description", "confidence"],
"properties": {
"skillId": { "type": "string" },
"name": { "type": "string" },
"description": { "type": "string" },
"confidence": { "type": "number", "minimum": 0, "maximum": 1 },
"reason": { "type": "string" },
"tags": { "type": "array", "items": { "type": "string" } }
}
}
{
"$id": "SkillActivationRequestPayload",
"type": "object",
"required": ["skillId", "name", "description", "scope", "requestedBy"],
"properties": {
"skillId": { "type": "string" },
"name": { "type": "string" },
"description": { "type": "string" },
"requestedBy": { "type": "string", "enum": ["agent", "user"] },
"scope": {
"type": "object",
"properties": {
"files": { "type": "array", "items": { "type": "string" } },
"directories": { "type": "array", "items": { "type": "string" } },
"networkAccess": { "type": "boolean" },
"tools": { "type": "array", "items": { "type": "string" } },
"duration": { "type": "string", "description": "e.g. 'session', 'task', 'persistent'" }
}
},
"riskLevel": {
"type": "string",
"enum": ["low", "medium", "high"]
},
"reason": { "type": "string" }
}
}
{
"$id": "SkillActivationResultPayload",
"type": "object",
"required": ["skillId", "status"],
"properties": {
"skillId": { "type": "string" },
"status": { "type": "string", "enum": ["approved", "denied", "scoped"] },
"approvedScope": { "$ref": "SkillActivationRequestPayload#/properties/scope" },
"denialReason": { "type": "string" }
}
}

Tool call events (Anthropic programmatic tool use aligned with AG-UI TOOL_CALL_*)

Anthropic advanced tool use: structured tool inputs/outputs.

{
"$id": "ToolCallStartPayload",
"type": "object",
"required": ["toolCallId", "toolName", "arguments"],
"properties": {
"toolCallId": { "type": "string" },
"toolName": { "type": "string" },
"skillId": { "type": "string" },
"subagentId": { "type": "string" },
"arguments": { "type": "object" },
"actionGuard": {
"type": "object",
"properties": {
"required": { "type": "boolean" },
"reason": { "type": "string" },
"riskLevel": { "type": "string", "enum": ["low", "medium", "high"] }
}
}
}
}
{
"$id": "ToolCallResultPayload",
"type": "object",
"required": ["toolCallId", "status"],
"properties": {
"toolCallId": { "type": "string" },
"status": { "type": "string", "enum": ["success", "error", "cancelled"] },
"output": { "type": "object" },
"logs": { "type": "array", "items": { "type": "string" } },
"error": {
"type": "object",
"properties": {
"message": { "type": "string" },
"code": { "type": "string" }
}
}
}
}

Hook events (Claude hooks: PreToolUse, PostToolUse, Stop)

Reflects the Claude hooks reference and lifecycle.

{
"$id": "HookEventPayload",
"type": "object",
"required": ["hookId", "hookType", "phase", "skillId"],
"properties": {
"hookId": { "type": "string" },
"hookType": {
"type": "string",
"enum": ["PreToolUse", "PostToolUse", "Stop", "Custom"]
},
"phase": {
"type": "string",
"enum": ["before", "after", "final"]
},
"skillId": { "type": "string" },
"subagentId": { "type": "string" },
"toolCallId": { "type": "string" },
"scriptName": { "type": "string" },
"status": {
"type": "string",
"enum": ["pending", "running", "success", "error"]
},
"logs": { "type": "array", "items": { "type": "string" } },
"error": {
"type": "object",
"properties": {
"message": { "type": "string" },
"code": { "type": "string" }
}
}
}
}
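The hook schema above can be exercised by a small wrapper that fires hooks around a tool call and surfaces HOOK_EVENTs on the stream. A sketch only: `emit` and the hook callbacks are assumed helpers, not a documented SDK API.

```typescript
// Sketch: wrap a tool call with PreToolUse / PostToolUse hooks and emit
// HOOK_EVENTs matching the payload shape above. Illustrative helpers only.
type HookType = "PreToolUse" | "PostToolUse";
type HookEvent = {
  type: "HOOK_EVENT";
  payload: { hookType: HookType; phase: string; toolCallId: string };
};

function withHooks<T>(
  toolCallId: string,
  hooks: Partial<Record<HookType, () => void>>,
  emit: (event: HookEvent) => void,
  tool: () => T
): T {
  if (hooks.PreToolUse) {
    emit({ type: "HOOK_EVENT", payload: { hookType: "PreToolUse", phase: "before", toolCallId } });
    hooks.PreToolUse(); // e.g. run formatter/linter before the edit
  }
  const result = tool();
  if (hooks.PostToolUse) {
    emit({ type: "HOOK_EVENT", payload: { hookType: "PostToolUse", phase: "after", toolCallId } });
    hooks.PostToolUse(); // e.g. run lightweight tests after the edit
  }
  return result;
}
```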

HumanInputRequest & action guards (Magentic-UI mechanisms)

Co-planning, action approval/action guards, multi-tasking.

{
"$id": "HumanInputRequestPayload",
"type": "object",
"required": ["requestId", "reason", "schema", "mode"],
"properties": {
"requestId": { "type": "string" },
"reason": { "type": "string" },
"mode": {
"type": "string",
"enum": ["co_planning", "action_guard", "answer_verification", "disambiguation"]
},
"taskId": { "type": "string" },
"planSnapshot": { "type": "object" },
"schema": {
"type": "object",
"description": "JSON Schema or AG-UI compatible form definition"
},
"suggestedOptions": {
"type": "array",
"items": { "type": "string" }
},
"timeoutSeconds": { "type": "number" },
"allowPartial": { "type": "boolean" }
}
}
{
"$id": "HumanInputResponsePayload",
"type": "object",
"required": ["requestId", "status"],
"properties": {
"requestId": { "type": "string" },
"status": { "type": "string", "enum": ["approved", "rejected", "modified"] },
"input": { "type": "object" },
"comments": { "type": "string" }
}
}
{
"$id": "ActionGuardTriggerPayload",
"type": "object",
"required": ["guardId", "reason", "riskLevel"],
"properties": {
"guardId": { "type": "string" },
"reason": { "type": "string" },
"riskLevel": { "type": "string", "enum": ["medium", "high"] },
"relatedToolCallId": { "type": "string" },
"proposedAction": { "type": "object" }
}
}

Layout & plan updates (AG-UI generative UI + Magentic-UI co-planning)

Generative UI & plan editing for co-planning/co-tasking.

{
"$id": "LayoutUpdatePayload",
"type": "object",
"required": ["layoutId", "components"],
"properties": {
"layoutId": { "type": "string" },
"mode": {
"type": "string",
"enum": ["static", "declarative", "generative"]
},
"components": {
"type": "array",
"items": {
"type": "object",
"required": ["id", "componentType"],
"properties": {
"id": { "type": "string" },
"componentType": { "type": "string" },
"props": { "type": "object" },
"bindings": { "type": "object" }
}
}
}
}
}
{
"$id": "PlanUpdatePayload",
"type": "object",
"required": ["planId", "steps"],
"properties": {
"planId": { "type": "string" },
"mode": {
"type": "string",
"enum": ["initial", "revised", "user_edited"]
},
"steps": {
"type": "array",
"items": {
"type": "object",
"required": ["id", "description", "status"],
"properties": {
"id": { "type": "string" },
"description": { "type": "string" },
"status": {
"type": "string",
"enum": ["pending", "in_progress", "completed", "blocked"]
},
"skillId": { "type": "string" },
"subagentId": { "type": "string" },
"requiresApproval": { "type": "boolean" }
}
}
}
}
}
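A client can fold PLAN_UPDATE payloads into local UI state with a small reducer keyed by step id. The payload shape follows the schema above; the reducer itself is an assumption about client behavior.

```typescript
// Sketch: apply a PlanUpdatePayload to client-side state, keeping the
// latest status per step id so revised plans overwrite earlier ones.
interface PlanStep { id: string; description: string; status: string; }
interface PlanUpdate { planId: string; mode: string; steps: PlanStep[]; }

function applyPlanUpdate(
  state: Map<string, PlanStep>,
  update: PlanUpdate
): Map<string, PlanStep> {
  const next = new Map(state); // immutable update for easy re-render
  for (const step of update.steps) next.set(step.id, step);
  return next;
}

const s0 = new Map<string, PlanStep>();
const s1 = applyPlanUpdate(s0, {
  planId: "p1",
  mode: "initial",
  steps: [{ id: "a", description: "modify API", status: "pending" }],
});
const s2 = applyPlanUpdate(s1, {
  planId: "p1",
  mode: "revised",
  steps: [{ id: "a", description: "modify API", status: "in_progress" }],
});
```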

2. Sequence diagram for “implement feature X”

Textual sequence diagram showing lifecycle + HITL; aligns with Claude skills phases and Magentic-UI mechanisms.

Actors

  • User
  • IDE/Web Client
  • UI Agent (this component)
  • Orchestrator / Main Agent
  • Subagent(s) (e.g., “FrontendAgent”, “TestAgent”)
  • Tools (VCS, TestRunner, Linter, API client, etc.)

Flow

  1. Discovery
    1.1 User selects code and runs “Implement feature X” via the command palette in the IDE/web client.
    1.2 The IDE sends TEXT_MESSAGE_START/CONTENT/END events with user intent to the UI Agent → Orchestrator.
    1.3 The Orchestrator inspects skills metadata (name + description only) and emits SKILL_SUGGESTION for feature_implementation_skill and api_consult_skill.
  2. Skill activation (with HITL)
    2.1 The UI Agent forwards SKILL_ACTIVATION_REQUEST to the client, requesting use of feature_implementation_skill with scope (repo path, network disabled, test tools enabled).
    2.2 The client presents a modal; the user approves with a narrowed scope (no network, only local tests) → sends SKILL_ACTIVATION_RESULT (status=approved, scoped).
    2.3 The Orchestrator loads the full SKILL.md into context and may spawn subagents for frontend/backend impacts.
  3. Co-planning (Magentic-UI)
    3.1 The Orchestrator generates an execution plan: analyze requirements → modify API → update UI → write tests → run tests → create diff.
    3.2 The UI Agent emits PLAN_UPDATE (mode=initial) + LAYOUT_UPDATE for a plan panel and diff viewer.
    3.3 The client shows an editable checklist; the user reorders steps, adds “update docs”, and toggles “requiresApproval” for “run migrations”.
    3.4 The client sends PLAN_UPDATE (mode=user_edited) back to the Orchestrator (co-planning completed).
  4. Execution with tools, hooks, and subagents
    4.1 The Orchestrator dispatches “update UI” to the FrontendAgent subagent; the UI Agent gets TEXT_MESSAGE_* and TOOL_CALL_START for the apply_patch tool.
    4.2 A PreToolUse hook fires for apply_patch (e.g., run formatter/linter) → HOOK_EVENT with phase=before.
    4.3 TOOL_CALL_RESULT returns patch success plus logs; a PostToolUse hook runs tests if lightweight.
    4.4 In parallel, the TestAgent subagent runs the TestRunner tool; tool logs stream via TOOL_CALL_RESULT and TEXT_MESSAGE_CONTENT events.
  5. Action guards & HumanInputRequest
    5.1 The plan has the step “run DB migration” flagged requiresApproval=true; the Orchestrator is about to call the db_migrate tool.
    5.2 The UI Agent emits ACTION_GUARD_TRIGGER with a reason and high risk level.
    5.3 Simultaneously, it sends HUMAN_INPUT_REQUEST (mode=action_guard) with a schema requiring checkbox confirmation and environment selection (e.g., staging vs. prod).
    5.4 The user selects “staging only” and approves → the client returns HUMAN_INPUT_RESPONSE (approved, input=staging) → the Orchestrator calls db_migrate against staging only.
  6. Reflection & wrap-up
    6.1 Once all steps complete, the Orchestrator emits PLAN_UPDATE with all steps completed and a final TEXT_MESSAGE_END summary.
    6.2 Stop hooks fire to clean up temp state and notify the user (e.g., a notification hook) → HOOK_EVENT (phase=final).
    6.3 The UI Agent presents a diff view + “learn from corrections” toggle; the user tweaks the output and selects “remember style for this repo”.
    6.4 The Orchestrator updates skill preferences (e.g., repo-specific guidelines) and logs a STATE_DELTA event for audit/long-term memory.

3. TypeScript interface layer for Anthropic tools & skills → event stream

This layer sits between an Anthropic-style orchestrator (skills, programmatic tools) and AG-UI-compatible event consumers.

Core TypeScript types

// Shared envelope
export type AgentEventType =
| "TEXT_MESSAGE_START"
| "TEXT_MESSAGE_CONTENT"
| "TEXT_MESSAGE_END"
| "SKILL_SUGGESTION"
| "SKILL_ACTIVATION_REQUEST"
| "SKILL_ACTIVATION_RESULT"
| "TOOL_CALL_START"
| "TOOL_CALL_RESULT"
| "HOOK_EVENT"
| "HUMAN_INPUT_REQUEST"
| "HUMAN_INPUT_RESPONSE"
| "LAYOUT_UPDATE"
| "PLAN_UPDATE"
| "ACTION_GUARD_TRIGGER"
| "STATE_DELTA"
| "ERROR";

export interface AgentUIEvent<TPayload = any> {
id: string;
type: AgentEventType;
role: "agent" | "user" | "system";
timestamp: string;
correlationId?: string;
payload: TPayload;
}

Skill metadata (Claude skills + hooks).

export interface SkillMetadata {
id: string;
name: string;
description: string;
tags?: string[];
userInvocable: boolean;
hooks?: {
preToolUse?: string[]; // hook IDs or script names
postToolUse?: string[];
stop?: string[];
};
}

Tool schema aligned with Anthropic programmatic tool calling (JSON-schema-like).

export interface ToolSchema {
name: string;
description: string;
inputSchema: Record<string, any>;
outputSchema?: Record<string, any>;
skillId?: string;
riskLevel?: "low" | "medium" | "high";
}

Mapping Anthropic tool calls → events

Suppose Anthropic SDK gives you tool calls like:

export interface AnthropicToolCall {
id: string;
name: string;
arguments: Record<string, any>;
}

Adapter:

export function toToolCallStartEvent(
call: AnthropicToolCall,
opts: { skillId?: string; subagentId?: string; riskLevel?: "low" | "medium" | "high" }
): AgentUIEvent {
return {
id: crypto.randomUUID(),
type: "TOOL_CALL_START",
role: "agent",
timestamp: new Date().toISOString(),
correlationId: call.id,
payload: {
toolCallId: call.id,
toolName: call.name,
skillId: opts.skillId,
subagentId: opts.subagentId,
arguments: call.arguments,
actionGuard: opts.riskLevel && opts.riskLevel !== "low"
? {
required: true,
reason: "High-risk tool call requires human approval",
riskLevel: opts.riskLevel
}
: { required: false }
}
};
}

Tool result mapping:

export interface AnthropicToolResult {
id: string;
status: "success" | "error";
output?: any;
error?: { message: string; code?: string };
logs?: string[];
}

export function toToolCallResultEvent(
result: AnthropicToolResult
): AgentUIEvent {
return {
id: crypto.randomUUID(),
type: "TOOL_CALL_RESULT",
role: "agent",
timestamp: new Date().toISOString(),
correlationId: result.id,
payload: {
toolCallId: result.id,
status: result.status,
output: result.output,
logs: result.logs ?? [],
error: result.error
}
};
}

Mapping the Claude skills lifecycle → events

At startup, only name/description are loaded (discovery).

export function suggestSkills(
skills: SkillMetadata[],
context: { intent: string }
): AgentUIEvent[] {
// Placeholder scoring; in practice based on embeddings/rules.
return skills.map((skill) => ({
id: crypto.randomUUID(),
type: "SKILL_SUGGESTION",
role: "agent",
timestamp: new Date().toISOString(),
payload: {
skillId: skill.id,
name: skill.name,
description: skill.description,
confidence: 0.75,
reason: `Matches intent: ${context.intent}`,
tags: skill.tags ?? []
}
}));
}

When the agent decides to activate a skill, request user confirmation (Claude-style activation).

export interface SkillActivationScope {
files?: string[];
directories?: string[];
networkAccess?: boolean;
tools?: string[];
duration?: "session" | "task" | "persistent";
}

export function toSkillActivationRequestEvent(
skill: SkillMetadata,
scope: SkillActivationScope,
reason: string
): AgentUIEvent {
return {
id: crypto.randomUUID(),
type: "SKILL_ACTIVATION_REQUEST",
role: "agent",
timestamp: new Date().toISOString(),
payload: {
skillId: skill.id,
name: skill.name,
description: skill.description,
requestedBy: "agent",
scope,
riskLevel: "medium",
reason
}
};
}

Client’s decision → event back in:

export interface SkillActivationDecision {
skillId: string;
approved: boolean;
approvedScope?: SkillActivationScope;
denialReason?: string;
}

export function fromSkillActivationDecision(
decision: SkillActivationDecision
): AgentUIEvent {
return {
id: crypto.randomUUID(),
type: "SKILL_ACTIVATION_RESULT",
role: "user",
timestamp: new Date().toISOString(),
payload: {
skillId: decision.skillId,
status: decision.approved ? "approved" : "denied",
approvedScope: decision.approvedScope,
denialReason: decision.denialReason
}
};
}

Human-in-the-loop (Magentic-UI co-planning & action guards)

Emit HumanInputRequest for plan editing and guards.

export function emitCoPlanningRequest(
taskId: string,
planSnapshot: any
): AgentUIEvent {
return {
id: crypto.randomUUID(),
type: "HUMAN_INPUT_REQUEST",
role: "agent",
timestamp: new Date().toISOString(),
payload: {
requestId: crypto.randomUUID(),
reason: "Review and edit execution plan (co-planning)",
mode: "co_planning",
taskId,
planSnapshot,
schema: {
type: "object",
properties: {
steps: { type: "array" },
requiresApprovalIds: { type: "array", items: { type: "string" } }
}
},
timeoutSeconds: 0,
allowPartial: true
}
};
}

Guarded action:

export function emitActionGuard(
relatedToolCallId: string,
reason: string,
riskLevel: "medium" | "high"
): AgentUIEvent {
return {
id: crypto.randomUUID(),
type: "ACTION_GUARD_TRIGGER",
role: "agent",
timestamp: new Date().toISOString(),
payload: {
guardId: crypto.randomUUID(),
reason,
riskLevel,
relatedToolCallId,
proposedAction: {}
}
};
}

UI response:

export interface HumanInputResponse {
requestId: string;
status: "approved" | "rejected" | "modified";
input?: any;
comments?: string;
}

export function toHumanInputResponseEvent(
resp: HumanInputResponse
): AgentUIEvent {
return {
id: crypto.randomUUID(),
type: "HUMAN_INPUT_RESPONSE",
role: "user",
timestamp: new Date().toISOString(),
payload: resp
};
}

This gives you:

  • A Claude-accurate lifecycle mapped into AG-UI-style events (streaming, tool events, layout updates).
  • Magentic-UI-inspired HITL flows via co-planning, action guards, and HumanInputRequest semantics.
  • A concrete TypeScript layer to bridge Anthropic skills and programmatic tool calls into a single event bus suitable for a web/IDE hybrid client.

Define AG-UI event schema examples for Skill discovery and activation

Skill discovery and activation in AG-UI are modeled as plain JSON events on the same stream as text, tools, and state updates. Below are focused examples you can adopt or adapt.


Event envelope (AG-UI style)

{
"type": "SKILL_SUGGESTION", // custom event type
"threadId": "thread-123",
"runId": "run-456",
"timestamp": "2026-01-19T19:32:00.000Z",
"payload": { /* see below */ }
}

AG-UI conventions to keep: uppercase type, camelCase fields, threadId / runId for correlation.
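A minimal guard that checks these envelope conventions before rendering might look like this. Illustrative only; AG-UI’s own SDK types are not assumed here.

```typescript
// Sketch: type guard enforcing the envelope conventions above
// (uppercase type, threadId/runId present).
interface AgUiEnvelope {
  type: string;
  threadId: string;
  runId: string;
  timestamp?: string;
  payload?: unknown;
}

function isValidEnvelope(e: unknown): e is AgUiEnvelope {
  if (typeof e !== "object" || e === null) return false;
  const ev = e as Record<string, unknown>;
  return (
    typeof ev.type === "string" &&
    ev.type === ev.type.toUpperCase() && // event types are uppercase
    typeof ev.threadId === "string" &&
    typeof ev.runId === "string"
  );
}
```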


Skill discovery events

1) Agent suggests candidate skills (discovery)

{
"type": "SKILL_SUGGESTION",
"threadId": "thread-123",
"runId": "run-456",
"timestamp": "2026-01-19T19:32:00.000Z",
"payload": {
"skillId": "feature_implementation_skill",
"name": "Implement Feature",
"description": "Plans and implements multi-file code changes for a requested feature.",
"confidence": 0.86,
"reason": "User asked to 'add a bulk edit feature' in this repository.",
"tags": ["code", "refactor", "multi-file"]
}
}

This mirrors AG-UI’s pattern of structured, typed events where payloads carry domain-specific data.

2) Snapshot of discoverable skills (for UI palette)

{
"type": "SKILLS_SNAPSHOT",
"threadId": "thread-123",
"runId": "run-456",
"timestamp": "2026-01-19T19:32:05.000Z",
"payload": {
"skills": [
{
"skillId": "feature_implementation_skill",
"name": "Implement Feature",
"description": "Plans and implements multi-file code changes.",
"userInvocable": true,
"tags": ["code", "planning"]
},
{
"skillId": "test_generation_skill",
"name": "Generate Tests",
"description": "Creates unit and integration tests for selected code.",
"userInvocable": true,
"tags": ["tests", "quality"]
}
]
}
}

This uses AG-UI’s snapshot/delta pattern applied to a “skills” substate.


Skill activation events

3) Agent requests activation (needs user approval)

{
"type": "SKILL_ACTIVATION_REQUEST",
"threadId": "thread-123",
"runId": "run-789",
"timestamp": "2026-01-19T19:33:00.000Z",
"payload": {
"skillId": "feature_implementation_skill",
"name": "Implement Feature",
"description": "Plans and implements multi-file code changes.",
"requestedBy": "agent",
"scope": {
"files": ["src/components/**/*"],
"directories": ["src/hooks"],
"networkAccess": false,
"tools": ["git_diff", "apply_patch", "run_tests"],
"duration": "task"
},
"riskLevel": "medium",
"reason": "Best-suited skill for implementing the requested feature in the selected project."
}
}

This fits AG-UI’s “stateful structured event” style and is compatible with Magentic-style HITL approvals.

4) User responds to activation request

{
"type": "SKILL_ACTIVATION_RESULT",
"threadId": "thread-123",
"runId": "run-789",
"timestamp": "2026-01-19T19:33:20.000Z",
"payload": {
"skillId": "feature_implementation_skill",
"status": "approved",
"approvedScope": {
"files": ["src/components/**/*"],
"directories": [],
"networkAccess": false,
"tools": ["git_diff", "apply_patch"],
"duration": "task"
},
"denialReason": null
}
}

This reuses the same threadId/runId and acts like a small “state delta” confirming the skill configuration.

5) Optional: activation state delta (for UI state sync)

{
"type": "STATE_DELTA",
"threadId": "thread-123",
"runId": "run-789",
"timestamp": "2026-01-19T19:33:21.000Z",
"payload": {
"skills": {
"active": [
{
"skillId": "feature_implementation_skill",
"name": "Implement Feature",
"scope": {
"files": ["src/components/**/*"],
"networkAccess": false,
"tools": ["git_diff", "apply_patch"],
"duration": "task"
}
}
]
}
}
}

This follows AG-UI’s snapshot/delta state management guidance while keeping skill activation visible to all clients.


Explain required AG-UI fields for skill discovery events

For skill discovery events in an AG-UI-based system, the required fields are the standard AG-UI event envelope plus a minimal, well-typed payload describing the skill suggestion.

Below is a concise breakdown.


Base AG-UI event fields

Every skill discovery event should include the core AG-UI event identifiers so frontends can route and correlate it.

  • type
    • Required. Uppercase event identifier like SKILL_SUGGESTION or SKILLS_SNAPSHOT.
    • Must be atomic and unambiguous (an AG-UI requirement for event types).
  • threadId
    • Required. Conversation/session identifier so the UI knows which chat/editor context this discovery belongs to.
  • runId
    • Required. Execution identifier for the current agent run, used to group related events (discovery → activation → execution).
  • timestamp
    • Recommended/usually present. ISO string indicating when the event was emitted, enabling ordering, logs, and replay.
  • payload
    • Required. JSON object containing the domain-specific data for skill discovery (see next section).

These mirror AG-UI’s base event structure and the RunStarted / text / tool events (same identifiers, different type).


Required payload fields for SKILL_SUGGESTION

For a single skill discovery event (one candidate skill), the payload should minimally contain:4948

  • skillId
    • Required. Stable, unique identifier for the skill, used later for activation, logging, and state deltas.
    • Should not change across runs so the UI can remember preferences per skill.
  • name
    • Required. Short, human-readable label (e.g., “Implement Feature”) used in menus, chips, and tooltips.55
  • description
    • Required. One or two-sentence explanation of what the skill does, helping both agent and user understand when to apply it.55
  • confidence
    • Required for ranking. Float in [0, 1] indicating how strongly the agent believes this skill fits the current intent; UIs use this for ordering or thresholding.

Recommended (but optional) fields that complete a practical discovery payload:

  • reason
    • Short natural-language explanation of why the agent suggested this skill (e.g., “Matches request to ‘add bulk edit’ in this file”).
    • Supports transparency and better UX for selecting among multiple H.P.003-SKILLS.5657
  • tags
    • Light categorization (e.g., ["code", "tests", "refactor"]) for filtering and grouping in palettes or sidebars.55

This matches AG-UI’s guidance: custom events keep the base envelope and use a clearly documented value/payload schema for domain-specific data.4948
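The required and optional payload fields above can be captured in a small type, with a helper for the confidence-based ordering and thresholding mentioned. The 0.3 cutoff is an illustrative default, not a protocol requirement:

```typescript
// Hypothetical SkillSuggestion shape based on the payload fields above.
interface SkillSuggestion {
  skillId: string;
  name: string;
  description: string;
  confidence: number; // in [0, 1]
  reason?: string;
  tags?: string[];
}

// Order candidate skills for display, dropping low-confidence noise.
// The 0.3 cutoff is an assumed default, not part of any spec.
function rankSuggestions(
  suggestions: SkillSuggestion[],
  minConfidence = 0.3,
): SkillSuggestion[] {
  return suggestions
    .filter((s) => s.confidence >= minConfidence)
    .sort((a, b) => b.confidence - a.confidence);
}
```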


Required payload fields for SKILLS_SNAPSHOT (list discovery)

For a snapshot event listing multiple discoverable H.P.003-SKILLS (e.g., to populate a command palette), the payload should contain:4948

  • skills
    • Required. Array of objects, each with the same minimal fields as above:
      • skillId (required)
      • name (required)
      • description (required)
    • Optional per skill: userInvocable, tags, defaultEnabled, etc.

This aligns with AG-UI’s “state snapshot / delta” pattern, where STATE_DELTA or domain-specific snapshots update shared state on the client, and the UI can locally maintain a skills registry tied to the agent run.505448
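A client can maintain the registry described here by replacing it wholesale on each snapshot (snapshot semantics). A sketch under the assumption that the payload carries a skills array with the fields listed above:

```typescript
// Sketch of a client-side registry rebuilt from a SKILLS_SNAPSHOT payload.
// Shapes follow this document's examples, not an official schema.
interface SkillEntry {
  skillId: string;
  name: string;
  description: string;
  userInvocable?: boolean;
  tags?: string[];
}

type SkillRegistry = Map<string, SkillEntry>;

// Replace the registry wholesale on each snapshot, keyed by skillId so
// later activation events can look skills up by their stable identifier.
function applySkillsSnapshot(
  payload: { skills: SkillEntry[] },
): SkillRegistry {
  return new Map(payload.skills.map((s) => [s.skillId, s]));
}
```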


Show AG-UI events for a Skill activation confirmation flow

A skill activation confirmation flow in AG-UI is just a small, custom event protocol built on top of AG-UI’s standard patterns: agent proposes → UI renders confirmation UI → user decision → state delta.636465

Below is a concrete, minimal flow you can drop into your design.


1) Agent proposes a skill (request activation)

{
  "type": "SKILL_ACTIVATION_REQUEST",
  "threadId": "thread-123",
  "runId": "run-789",
  "timestamp": "2026-01-19T19:33:00.000Z",
  "payload": {
    "skillId": "feature_implementation_skill",
    "name": "Implement Feature",
    "description": "Plans and implements multi-file code changes.",
    "requestedBy": "agent",
    "scope": {
      "files": ["src/components/**/*"],
      "directories": ["src/hooks"],
      "networkAccess": false,
      "tools": ["git_diff", "apply_patch", "run_tests"],
      "duration": "task"
    },
    "riskLevel": "medium",
    "reason": "Best-suited skill for the requested feature in this repository."
  }
}
  • Pattern: custom event type with structured payload, following AG-UI conventions (threadId, runId, ISO timestamp).6663
  • UI: shows modal/card with description, scope, risk, and Approve/Deny/Modify options.67

2) UI emits the user’s decision back to the agent

{
  "type": "SKILL_ACTIVATION_RESULT",
  "threadId": "thread-123",
  "runId": "run-789",
  "timestamp": "2026-01-19T19:33:20.000Z",
  "payload": {
    "skillId": "feature_implementation_skill",
    "status": "approved",
    "approvedScope": {
      "files": ["src/components/**/*"],
      "directories": [],
      "networkAccess": false,
      "tools": ["git_diff", "apply_patch"],
      "duration": "task"
    },
    "denialReason": null
  }
}
  • Pattern: a simple response event on the same threadId/runId, analogous to a state-confirmation callback.6563
  • If the user denies: status: "denied" and denialReason set accordingly.
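Steps 1 and 2 can be wired together in the UI layer: build the result event from the user’s decision, echoing the request’s identifiers. A hedged TypeScript sketch using the shapes from the examples above:

```typescript
// Sketch: construct SKILL_ACTIVATION_RESULT from a UI decision, echoing
// the threadId/runId of the originating request. Shapes follow this
// document's examples; an official schema may differ.
interface Scope {
  files: string[];
  directories: string[];
  networkAccess: boolean;
  tools: string[];
  duration: string;
}

interface ActivationRequest {
  threadId: string;
  runId: string;
  payload: { skillId: string; scope: Scope };
}

function buildActivationResult(
  req: ActivationRequest,
  decision: { approved: boolean; scope?: Scope; denialReason?: string },
) {
  return {
    type: "SKILL_ACTIVATION_RESULT",
    threadId: req.threadId, // same thread as the request
    runId: req.runId,       // same run, so events group together
    timestamp: new Date().toISOString(),
    payload: {
      skillId: req.payload.skillId,
      status: decision.approved ? "approved" : "denied",
      // the user may have narrowed the requested scope before approving
      approvedScope: decision.approved
        ? (decision.scope ?? req.payload.scope)
        : null,
      denialReason: decision.approved
        ? null
        : (decision.denialReason ?? "unspecified"),
    },
  };
}
```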

3) Agent publishes a state delta to mark the skill as active

{
  "type": "STATE_DELTA",
  "threadId": "thread-123",
  "runId": "run-789",
  "timestamp": "2026-01-19T19:33:21.000Z",
  "delta": [
    {
      "op": "add",
      "path": "/skills/active/-",
      "value": {
        "skillId": "feature_implementation_skill",
        "name": "Implement Feature",
        "scope": {
          "files": ["src/components/**/*"],
          "networkAccess": false,
          "tools": ["git_diff", "apply_patch"],
          "duration": "task"
        }
      }
    }
  ]
}
  • Pattern: AG-UI’s JSON Patch–based STATE_DELTA to synchronize active H.P.003-SKILLS in the shared state model.6465
  • UI: updates “Active H.P.003-SKILLS” list, badges, and command palette context automatically.
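The STATE_DELTA uses JSON Patch semantics. A minimal sketch of applying an "add" op client-side; a production client would use a full RFC 6902 implementation (e.g., fast-json-patch) that also handles remove/replace and path escaping:

```typescript
// Minimal JSON Patch "add" application for the state delta above.
// Omits ~0/~1 path unescaping and error handling; a real client should
// use a complete RFC 6902 library such as fast-json-patch.
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

function applyAdd(state: Json, path: string, value: Json): void {
  const parts = path.split("/").slice(1); // drop the leading empty segment
  let node: any = state;
  for (const key of parts.slice(0, -1)) node = node[key]; // walk to parent
  const last = parts[parts.length - 1];
  if (Array.isArray(node) && last === "-") node.push(value); // "-" = append
  else node[last] = value;
}
```

Applying the example delta to a state of `{ "skills": { "active": [] } }` appends the skill object to the active list, which is exactly what drives the “Active Skills” UI update.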

This three-event pattern (request → result → state delta) fits cleanly into AG-UI’s event model and gives you a reusable confirmation flow for any Claude-style skill activation.


LIST the SKILLS a UI / UX agent must have be very comprehensive and detailed do a web search to help determine the requirements

A UI/UX agent in an agentic system needs a broad skill surface: interaction design, safety/HITL, stateful AG-UI communication, and Claude-style design intelligence for multi-platform apps. Below is a comprehensive H.P.003-SKILLS list grouped into capability domains.80818283


1. UX research & user modeling H.P.003-SKILLS

  • Persona synthesis skill
    • Ingests brief product context and analytics to generate concise personas, goals, and constraints.
    • Maintains a living “user model” that conditions subsequent design decisions and recommendations.8482
  • Task & journey mapping skill
    • Converts free-text feature ideas into user journeys, flows, and edge cases.
    • Outputs structured flows (steps, actors, channels) that can be rendered as AG-UI layouts or diagrams.8284
  • Problem framing & UX brief skill
    • Takes vague H.P.007-PROMPTS (“make this dashboard better”) and returns a UX brief: problem statement, success metrics, constraints, risks.8586
    • Used before any visual/UI generation to reduce thrash.

2. Information architecture & navigation H.P.003-SKILLS

  • IA & sitemap design skill
    • Proposes navigation structures (global nav, local nav, content hierarchy) based on product scope and personas.84
    • Creates structured IA artifacts (trees, menus) that can be turned into components.
  • Flow restructuring & simplification skill
    • Identifies unnecessary steps, branching, and confusing flows.
    • Suggests alternatives (wizards vs inline steps, progressive disclosure, shortcuts).84
  • Multi-device navigation adaptation skill
    • Adapts IA to mobile/desktop/web vs IDE sidebars/panels, respecting platform patterns.
    • For IDE-embedded UIs, designs non-intrusive overlays, panels, and command palette entries.8783

3. Layout, interaction patterns, and generative UI H.P.003-SKILLS

  • ASCII / low-fidelity layout skill
    • Produces ASCII-first screen layouts and component structures for rapid iteration (fields, buttons, list views, states) before pixels.88
    • Supports progressive fidelity: layout → component metadata → design tokens → high-fidelity outputs.
  • Component system & pattern library skill
    • Designs component taxonomies: buttons, forms, tables, cards, toasts, modals, etc., with states and variations.8189
    • Provides usage guidelines and interaction rules per component.
  • Generative UI schema skill (AG-UI aware)
    • Outputs AG-UI-compatible layout/event schemas from natural language requirements (e.g., cards, forms, tool panels).9083
    • Chooses static vs declarative vs generative UI patterns based on task duration and complexity.
  • Microinteraction & feedback skill
    • Designs loading states, progress indicators, inline validation, undo/redo affordances, and subtle animations.
    • Focuses on error recovery and clarity for long-running agent tasks (e.g., “nightly report bot”).8387

4. Visual design & design system H.P.003-SKILLS

  • Design tokens & theming skill
    • Generates color palettes, typography scales, spacing, radius, shadows, and motion tokens consistent with brand guidelines.8981
    • Produces platform-specific token exports (CSS variables, Tailwind config, MUI theme objects).
  • Component skinning & theming skill
    • Applies design tokens to existing component structures while preserving semantics and accessibility.
    • Supports multiple themes (light/dark/high-contrast) with clear switching rules.89
  • Cross-platform UI adaptation skill
    • Translates core designs into platform idioms: web app, mobile, IDE plugin, admin console, etc.8783
    • Ensures visual coherence while respecting host environment conventions.
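The token-export idea above can be sketched as a small flattener from a nested token tree to CSS custom properties. The token names are illustrative, not a prescribed taxonomy:

```typescript
// Sketch: flatten a nested design-token object into CSS custom properties,
// one of the platform exports mentioned above. Token names are examples.
type TokenTree = { [key: string]: string | TokenTree };

function toCssVariables(tokens: TokenTree, prefix = "-"): string[] {
  const out: string[] = [];
  for (const [key, value] of Object.entries(tokens)) {
    const name = `${prefix}-${key}`; // builds --color-bg-surface, etc.
    if (typeof value === "string") out.push(`${name}: ${value};`);
    else out.push(...toCssVariables(value, name)); // recurse into groups
  }
  return out;
}

// toCssVariables({ color: { bg: { surface: "#fff" } } })
// → ["--color-bg-surface: #fff;"]
```

The same tree could feed a Tailwind config or an MUI theme object; only the serializer changes, which is what makes a token layer a good single source of truth.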

5. Content design, UX writing, and onboarding H.P.003-SKILLS

  • UX writing & microcopy skill
    • Generates concise labels, helper texts, error messages, and empty states aligned with tone of voice guidelines.84
    • Adapts language for different audiences (end-users vs operators vs engineers).
  • Onboarding & education skill
    • Designs first-run experiences, tooltips, walkthroughs, and “explain what this agent is doing” hints.
    • Integrates with AG-UI to show contextual explanations during tool calls and plan updates.8387
  • Notification, alerts, and escalation skill
    • Plans notification channels, frequency, and escalation policies for agent actions (e.g., approvals needed, failures).9184
    • Ensures users aren’t overwhelmed but stay in control.

6. Accessibility & inclusive design H.P.003-SKILLS

  • Accessibility audit skill
    • Reviews UI specs or code and flags issues: color contrast, focus order, keyboard traps, ARIA misuse, motion sensitivity.84
    • Suggests concrete fixes and improved patterns.
  • Accessible interaction pattern skill
    • Proposes keyboard-first flows, screen-reader-friendly structures, and alternatives to drag-and-drop or complex gestures.
    • Adapts agent H.P.006-WORKFLOWS to be accessible in non-visual or limited input environments.8783
  • Inclusive content & localization skill
    • Detects non-inclusive language, culture-specific assumptions, and localization pitfalls.
    • Generates localized content variants and layout adaptations for RTL or long text.84

7. Human-in-the-loop and safety UX H.P.003-SKILLS (Magentic-UI / oversight)

  • Co-planning UI skill
    • Designs plan visualization and editing surfaces: step lists, dependencies, timelines.9282
    • Ensures users can reorder, add/remove, or annotate steps before execution.
  • Co-tasking / multi-tasking UI skill
    • Designs interfaces for multiple concurrent agent tasks, including task switching and prioritization.9282
    • Shows clear ownership: what the agent is doing vs what the human should do now.
  • Action guard & approval flow skill
    • Specifies when to require approval (high-risk actions) and the UI for confirming or rejecting actions.8291
    • Designs rich context views so humans can make informed decisions (diffs, logs, risk summaries).
  • Answer verification & review skill
    • Creates review surfaces to check outputs, mark them as correct/incorrect, and provide structured feedback.8284
    • Integrates with metrics and long-term learning (e.g., feeding DSPy/optimizer loops, though not required for AG-UI).

8. AG-UI protocol & state management H.P.003-SKILLS

  • Event modeling & schema design skill
    • Defines and maintains event schemas for text, tools, layout, state deltas, and HITL interactions.939483
    • Ensures events remain backward compatible and inspectable.
  • Stateful UI synchronization skill
    • Chooses what belongs in shared state vs transient UI state, using AG-UI’s snapshot/delta conventions.9495
    • Optimizes for multi-client sync (e.g., web + IDE + mobile at once).
  • Tool visualization & traceability skill
    • Designs UI for tools: parameters, progress, outcomes, errors, tool chains.9690
    • Ensures every agent action is inspectable (“why did it do that?”).

9. Agent behavior explanation & trust-building H.P.003-SKILLS

  • Intent & rationale explanation skill
    • Translates agent reasoning and plan into concise, human-readable explanations and “why now?” justifications.8284
    • Critical for trust in autonomous flows.
  • Risk & uncertainty communication skill
    • Surfaces confidence levels, potential failure modes, and safe fallback actions.9182
    • Designs non-alarming but honest risk warnings.
  • Change/diff explanation skill
    • For UI or code changes, generates visual and textual diffs that are easy to review.9083
    • Helps users approve or reject actions efficiently.

10. Implementation and handoff H.P.003-SKILLS (code, design tools)

  • Design-to-code handoff skill
    • Outputs implementation-ready code (React, Tailwind, design system components) from UX specs.8881
    • Respects existing design systems and component libraries.
  • Design tool integration skill
    • Produces specs or plugins for Figma/Sketch/other tools based on .uxm/schema definitions.8688
    • Keeps code and design artifacts aligned.
  • Agent-in-IDE UX skill
    • Designs IDE-native experiences: inline hints, side panels, “explain this design” H.P.002-COMMANDS, and safe file modifications.8087
    • Balances unobtrusive assistance with visibility into agent actions.

11. Meta-H.P.003-SKILLS: orchestration, evaluation, and evolution

  • UX evaluation & heuristic review skill
    • Applies known UX heuristics and platform guidelines to existing designs and produces prioritized issue lists.84
    • Suggests A/B test ideas or metrics.
  • Telemetry & feedback loop design skill
    • Defines what UX-related signals to log (friction points, drop-offs, corrections).9187
    • Connects UI feedback with agent behavior changes.
  • Skill composition & pipeline skill
    • Orchestrates multiple UX H.P.003-SKILLS for complex tasks (e.g., “design feature X” → research → IA → layout → visual design → copy → accessibility check).9780
    • Maintains clear boundaries and handoffs between H.P.003-SKILLS.
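The composition pattern above can be sketched as a sequential pipeline in which each skill consumes the previous skill’s artifact. The stage names are illustrative, not a fixed taxonomy:

```typescript
// Illustrative sketch of skill composition as a sequential pipeline.
// Each stage transforms the previous stage's artifact; real skills would
// pass structured artifacts, not strings.
type Stage = (input: string) => string;

function composeSkills(stages: Record<string, Stage>): Stage {
  return (input) =>
    Object.values(stages).reduce((artifact, stage) => stage(artifact), input);
}

// Hypothetical "design feature X" pipeline from the example above.
const designFeature = composeSkills({
  research: (brief) => `${brief} -> personas`,
  ia: (a) => `${a} -> sitemap`,
  layout: (a) => `${a} -> wireframes`,
  accessibility: (a) => `${a} -> a11y-checked`,
});
```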

This list gives you a comprehensive skill surface for a Claude-style UI/UX agent embedded in AG-UI/Magentic-UI H.P.006-WORKFLOWS, spanning discovery, design, interaction, safety, and continuous improvement.8090838284


List technical H.P.003-SKILLS required for a UI/UX designer with examples

A UI/UX designer’s technical skill set spans research, design, prototyping, handoff, and evaluation. Below is a concise, implementation-focused list with concrete examples.


1. Wireframing & prototyping

  • Low- and high-fidelity wireframing
    • Sketching core screens and flows before visual polish.
    • Tools: Figma, Sketch, Adobe XD, Balsamiq.99100101102
  • Interactive prototyping
    • Clickable user flows for usability tests and stakeholder reviews.
    • Tools: Figma prototypes, InVision, Axure, ProtoPie.103100101104

Example: Taking a rough feature idea and producing low-fi user flows in Figma, then a high-fi clickable prototype for user testing.10199


2. Visual & UI design

  • Layout, spacing, and visual hierarchy
    • Applying grid systems, whitespace, and hierarchy to guide attention.10210599
  • Color, typography, and iconography
    • Choosing accessible color palettes and typography scales; designing or selecting icon sets.10410599102
  • Design systems and reusable components
    • Building and maintaining component libraries and design tokens.102104

Example: Creating a reusable card component system in Figma with variants (states, densities) tied to a design token set.104102


3. UX research & information architecture

  • UX research methods
    • Interviews, surveys, usability tests, A/B tests, and log/analytics review.100106103101
  • Information architecture & sitemaps
    • Structuring navigation, content hierarchies, and labeled menus.10399100101
  • User flows & journey mapping
    • Mapping steps users take to complete tasks across screens and channels.105100

Example: Running a remote usability test on an onboarding flow and updating the sitemap and flow diagrams based on observed drop-off points.100101


4. Tools, front-end awareness & collaboration

  • Design tools
    • Figma, Sketch, Adobe XD, Photoshop, Illustrator for UI, assets, and specs.99103102104
  • Basic front-end understanding
    • HTML/CSS and responsive design concepts; understanding constraints of web/mobile frameworks.106103102104
  • Handoff & documentation
    • Exporting design specs, redlines, and interaction notes; using design-dev handoff tools (Figma Inspect, Zeplin, Storybook docs).101102

Example: Designing a responsive dashboard and collaborating with developers by providing Figma specs, CSS-friendly spacing/typography scales, and component states.102104


5. Testing, analytics & accessibility

  • Usability testing & iteration
    • Planning tests, capturing issues, translating findings into design changes.103100101
  • Analytics & experiment literacy
    • Reading funnels, heatmaps, and behavior analytics to prioritize UX work (e.g., Hotjar, GA, product analytics tools).100102
  • Accessibility (a11y) fundamentals
    • WCAG basics, color contrast, keyboard navigation, semantics, and focus management awareness.107105102

Example: Adjusting color contrast and focus states after running an accessibility check and a small usability test with keyboard-only navigation.105102


Examples of technical UI design tools and when to use them

Here are key technical UI design tools, grouped by stage, with when to use each.109110111112


Wireframing & early exploration

  • Balsamiq
    • Use when you want very fast, low-fidelity “sketch-like” wireframes to align on flows and structure without visual polish.112109
    • Ideal for early workshops with PM/engineering where changing layouts frequently is expected.
  • FigJam / Miro (whiteboarding)
    • Use for brainstorming user flows, information architecture, and early UI ideas collaboratively on an infinite canvas.111112
    • Good for mapping journeys, storyboarding, and capturing team input before committing to screens.

Visual design & component systems

  • Figma
    • Use as the primary UI design tool for creating layouts, design systems, and interactive prototypes with real-time collaboration.110109111
    • Best when teams need shared libraries, design tokens, and dev handoff from a single source of truth.
  • Sketch
    • Use mainly on macOS-centric teams for vector UI design, simple prototypes, and integration with a large plugin ecosystem.109110112
    • Suited to teams with existing Sketch libraries and a native-app preference.
  • Adobe Photoshop / Illustrator
    • Use when you need pixel-perfect asset creation, complex image editing, or custom illustrations and icons to import into your UI tool.110112
    • Helpful for marketing-heavy products or visually rich interfaces.

Prototyping & interaction design

  • Figma prototypes / Smart Animate
    • Use for typical product flows with transitions, overlays, and simple micro-interactions directly in your main design files.111110
    • Good default for most product teams; reduces tool-switching.
  • Axure RP
    • Use when you need high-fidelity, logic-heavy prototypes: conditional flows, variables, data-driven interactions, or adaptive views.109110111
    • Useful for complex enterprise apps, form-heavy products, and UX specs that must encode logic.
  • ProtoPie / Framer
    • Use for advanced interaction and motion design when you need near-production fidelity micro-interactions or device-level behaviors.112109
    • Ideal for projects where animation and gesture feel are critical (e.g., mobile apps, rich dashboards).

Testing, validation & collaboration

  • InVision / Marvel / Proto.io / Justinmind
    • Use when you want link-based prototypes for stakeholder review and basic usability testing with comments and flows.113110
    • Good for sharing with non-designers or clients who only need to click through flows and leave feedback.
  • Maze (and similar remote testing platforms)
    • Use to run unmoderated usability tests, task success analyses, and surveys on your prototypes.113
    • Helpful when you need quick, quantitative validation on design iterations.
  • Zeplin / Figma Inspect / Storybook integration
    • Use at handoff time to give engineers specs, assets, and component documentation tightly linked to design files.110113
    • Best for organizations with formal design-to-dev H.P.006-WORKFLOWS and component libraries.

When to choose what (rules of thumb)

  • Early idea / buy-in: Balsamiq or FigJam for rough flows and low-fi wireframes.112109
  • Main product UI and systems: Figma or Sketch as the core design environment.111109110
  • Complex logic-heavy prototypes: Axure RP when you need conditions, variables, and rich interactions.109111
  • Motion-critical experiences: ProtoPie or Framer to explore advanced micro-interactions.112109
  • Validation and handoff: InVision/Marvel for review, Maze for testing, Zeplin/Figma Inspect for developer-ready specs.113110

When to use Figma vs Sketch for UI design

Use Figma when you need cross‑platform, real‑time collaboration and integrated design → prototype → feedback H.P.006-WORKFLOWS; use Sketch when you are a Mac‑centric team optimizing for a highly customizable, plugin‑driven vector design environment.117118119


When Figma is the better choice

  • Distributed or cross‑platform teams
    • Works in the browser and native apps on macOS and Windows, so everyone (designers, PMs, engineers) can access the same files without OS constraints.118119117
    • Strong for remote teams that need to jump into files live.
  • Heavy collaboration and fast iteration
    • Real‑time multi‑cursor editing, comments, and shared libraries make it ideal for product teams iterating quickly with stakeholders and developers.119117118
    • Use it for design systems, multi-squad product design, and frequent design reviews.
  • All‑in‑one workflow (design + prototype + feedback)
    • Robust built‑in prototyping and comments reduce the need for separate tools or plugins.117118119
    • Good default if you want a single environment for UI design, prototypes, and dev handoff.

When Sketch is the better choice

  • Mac‑only design teams
    • Native macOS app with a polished, performant experience for Mac users.118119
    • Use if your entire design staff is on macOS and prefers native tools.
  • Plugin‑heavy, craft‑focused H.P.006-WORKFLOWS
    • Rich plugin ecosystem (e.g., for asset generation, advanced exports, custom H.P.006-WORKFLOWS) and strong symbol/shared style management for pixel‑perfect design.119118
    • Suited to designers who deeply customize their environment and rely on plugins for specific tasks.
  • Individual or small teams focused on detailed UI craft
    • Excellent for high‑fidelity asset creation, brand/marketing visuals, and detailed product UI where real‑time multi‑user editing is less critical.118119

Quick rule of thumb

  • Prefer Figma for: multi‑platform, collaborative product teams, design systems, live reviews, and integrated prototyping.117119118
  • Prefer Sketch for: Mac‑only teams or individual designers wanting a deeply customizable, plugin‑driven tool for precise visual/UI work.119118

Can I switch design libraries from Sketch to Figma easily

You can import Sketch files into Figma, but “switching” a full design library is not a one‑click, lossless operation; expect to rebuild key parts of your system (styles, components) to do it well.127128129130


What transfers well

  • Artboards, layers, vectors, images
    • Figma can open .sketch files and recreate pages, artboards, and most shapes and layers so your screens look visually similar.128129131132
  • Symbols → components
    • Sketch symbols are converted to Figma components, often grouped on a “Symbols” page for you to reorganize.131130

Implication: For mockups and one‑off screens, importing and cleaning up may be “good enough.”


What does not migrate cleanly

  • Styles (colors, text, effects)
    • Sketch styles do not map 1:1; Figma explicitly notes that styles are not retained and must be recreated as Figma styles/variables.130128
  • Complex design systems
    • Component structure, naming, responsive behaviors (Auto Layout vs pinning), and variants usually need rethinking and rebuilding.133134127

Implication: For a serious library/design system, treat Figma as a fresh system and use Sketch only as visual/reference input.134127133


Recommended approach

  • Import for reference, rebuild for quality
    • Import core Sketch files into Figma, keep them as “Imported – [File]” references, then rebuild foundations (colors, type, icons) and components natively in Figma.127133134
  • Prioritize foundations and high‑leverage components
    • Start with design tokens (color, typography), then common components (buttons, inputs, cards) and only recreate screens as needed.135131133

Rule of thumb:

  • Small projects / few screens → import .sketch files and fix issues by hand.
  • Large, library‑driven products → import as reference and plan a structured rebuild in Figma for a cleaner, future‑proof system.128133130127

Best practices to rebuild a Sketch design system in Figma

Rebuilding a Sketch design system in Figma works best if you treat it as a re‑architecture, not a direct import. Plan to rebuild foundations (tokens, components, libraries) and use Sketch only as a reference layer.142143144145


1. Make a migration plan first

  • Inventory and prioritize
    • List your current Sketch assets: colors, text styles, grids, icons, core components (buttons, inputs, cards, modals), and product‑specific patterns.144145
    • Decide what must migrate 1:1 (brand tokens, core components) vs what can be deprecated or redesigned.
  • Separate “old” from “new”
    • Keep the Sketch system as-is for reference and start a new Figma system file instead of trying to evolve imported .sketch files.145146
    • Use migration checklists: backup Sketch files, import references, rebuild tokens/components, update products incrementally.144145

2. Rebuild foundations as Figma-native tokens & styles

  • Colors and typography as tokens/variables
    • Recreate your color and type scales as Figma styles and variables (semantic + component-specific tokens) instead of direct copies of Sketch styles.147148149
    • Map brand colors and text roles (e.g., text/primary, bg/surface) to Figma variable collections for easier updates.149147
  • Spacing, radius, elevation
    • Define spacing, radii, and elevation scales as tokens or documented primitives before touching components.148150
    • This ensures components can be refactored later without breaking consistency.

Goal: get a clean, Figma-native token layer that may differ structurally from Sketch but preserves visual identity.147148


3. Design components for Auto Layout & variants (not direct symbol copies)

  • Re-model core components using Auto Layout
    • Build buttons, inputs, lists, and cards around Auto Layout (padding, gap, alignment, “packed/space between”) instead of fixed groups mimicking Sketch symbols.151148
    • Assume complexity will grow; design base components so states can be added without rebuilds.150151
  • Use variants instead of many similar symbols
    • Replace symbol permutations (e.g., button + icon left/right, size, state) with Figma variants and component properties.151148150
    • Plan variant naming for scale (size, state, iconPosition) to avoid combinatorial explosion.149151
  • Minimize detaching instances
    • Avoid detaching instances to keep inheritance; use properties and nested components for flexibility.146151
    • Only detach when truly necessary for one-off designs.

Rule: design each component as if it will become more complex later; Auto Layout + variants are your primary tools.148150151


4. Structure libraries and files for scale

  • One core system file + product-level component files
    • Maintain a global “Core” library for tokens and foundational components, and separate product/local files for specialized components.143146150
    • Promote patterns from product files into Core only when reused across products.146148
  • Use Figma shared libraries deliberately
    • Publish tokens, icons, primitives, and core components as a shared library; keep heavy page layouts in separate files that consume the library.150144148
    • Document which teams use which libraries to avoid dependency tangles.
  • Organize components logically
    • Group related components in pages (e.g., Inputs, Navigation, Feedback) and ensure clear naming (button/primary, field/text/base).151150
    • This improves discoverability and reduces duplicate variants.

5. Migrate screens incrementally and refactor in place

  • Import for reference, then swap to Figma components
    • Import representative Sketch screens into a “Legacy” or “Reference” Figma file; visually match them but replace imported components with your new system components over time.152142144
    • Use component-swap H.P.006-WORKFLOWS (instance swapping) instead of redrawing everything where possible.153154
  • Phase-by-phase migration
    • Start with most-used flows or products, swap in the new Figma components, and retire old Sketch assets gradually.145144
    • Track coverage (e.g., percentage of screens using new system) as a migration KPI.
  • Validate with design system analytics and consumers
    • Use Figma’s design system best practices (analytics, usage views) to see which components are used and where to focus refactoring.155148150
    • Collect feedback from developers and designers; refine naming, props, and documentation accordingly.

6. Document usage and governance early

  • Usage guidelines & examples
    • For each major component, document intended usage, do/don’t examples, and configuration patterns in Figma or a docs tool (zeroheight, internal wiki).148150
    • Ensure product teams know how to use variants and avoid “design system drift.”
  • Change management & versioning
    • Define how breaking component changes are communicated, batched, and rolled out.144145
    • Maintain a clear changelog for the design system so teams can align releases.

If you want, the next step can be a concrete Figma file structure + naming convention tailored to your stack (e.g., mapping tokens/components to your React design system and Tailwind config).


IS THERE AN OPEN SOURCE FIGMA like toolset that runs in a browser with screenshot and file management capability?

Yes. Penpot is the main open‑source, Figma‑like design tool that runs in the browser and supports multi‑file design work and asset management.158159160


Penpot (primary candidate)

  • What it is
    • Web-based, open-source UI/UX and prototyping tool with a Figma-style canvas, layers, components, and design system features.159160
    • Can be used as a hosted SaaS or fully self-hosted (Docker/Kubernetes) for tighter control and integration.160159
  • Why it’s Figma-like
    • Vector editing, components, design tokens, grids, and layout tools aimed at UI/product design.161159160
    • Collaboration, comments, and shared libraries oriented to design–dev H.P.006-WORKFLOWS, with output aligned to web standards (SVG, CSS, HTML, JSON).159160
  • File & asset management
    • Organizes work in projects/files with pages and components, plus export in multiple formats (SVG, PNG, PDF, etc.).160159
    • Open formats make it straightforward to integrate with external screenshot/asset pipelines or custom storage, and it can ingest/export SVG and other design assets for reuse.159160

Other open or free browser-based options (less “Figma‑complete”)

  • Uizard – free, browser-based tool focused on fast idea-to-UI with AI, but not open source and more aimed at rapid prototyping than full design systems.162
  • Alva – open-source design tool that works with live React components, but more specialized for design systems than general Figma replacement and less mature as a full UI editor.161

Screenshot and workflow considerations

  • None of these are “screenshot managers” in the sense of a dedicated capture tool, but:
    • Penpot’s browser UI works well with standard OS/browser screenshot tools, and its asset exports can act as “canonical” snapshots in a repo or asset pipeline.160159
    • For full screenshot and file lifecycle management, you’d typically pair Penpot with your own storage (Git, object storage, DAM) or a separate screenshot system while using Penpot as the design/editor layer.
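One stdlib-only way to treat exports as "canonical" snapshots in such a pairing (the directory layout and function name here are assumptions, not any tool's convention) is a content-addressed store: each exported asset lands under its content hash, so re-exporting an unchanged screen deduplicates to the same path, and a Git- or DAM-tracked manifest can record the name-to-hash history:

```python
import hashlib
from pathlib import Path


def store_snapshot(store: Path, name: str, data: bytes) -> Path:
    """Write an exported asset into a content-addressed store.

    The file lands at <store>/<hash-prefix>/<name>, so re-exporting an
    unchanged asset maps to the same path (cheap dedup), while a separate
    manifest (e.g. committed to Git) records name -> hash over time.
    """
    digest = hashlib.sha256(data).hexdigest()
    target = store / digest[:16] / name
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target
```

Storing the same bytes twice returns the identical path, which is the property that makes "did this screen actually change?" a filesystem comparison rather than a pixel diff.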

If you describe your workflow (self‑hosted vs SaaS, need for programmatic asset export, multi‑tenant constraints), a tailored stack diagram (e.g., Penpot + S3/GCS + Git + CI) can be sketched next. 163164165166167


Compare Penpot vs Figma feature-by-feature

Penpot and Figma overlap heavily on core UI design/prototyping, but diverge on ecosystem, extensibility, and control. Figma is still ahead on polish, AI, and integrations; Penpot leads on openness, self‑hosting, and standards‑based output.168169170171172173174175


Core design & layout

  • Vector & layout
    • Figma: Advanced vector tools, frames, constraints, Auto Layout with robust responsive behavior.170173
    • Penpot: Advanced vector editor with components, constraints, and CSS Flex/Grid‑style layout.170171175
  • Components & libraries
    • Figma: Mature components, variants, component properties, nested instances, strong library tooling.170173
    • Penpot: Components, variants, design tokens; still closing the gap but covers most common system use cases.171169
  • Design systems
    • Figma: Rich design‑system H.P.006-WORKFLOWS (variables, tokens, Dev Mode, analytics).176172173
    • Penpot: Components + variants + tokens with open formats (CSS/SVG/HTML), good for code‑aligned systems.171177

Implication: For large, complex design systems with heavy variant use and Dev Mode H.P.006-WORKFLOWS, Figma is more mature; for standards‑oriented systems where CSS/SVG fidelity matters, Penpot is attractive.169171177


Prototyping & interactions

  • Prototyping depth
    • Figma: Advanced prototyping with interactive components, smart animations, conditions, and now AI‑assisted “Figma Make” for turning UIs into prototypes/code.170172
    • Penpot: “Robust but not as advanced”: links, flows, transitions, interactive prototyping; fewer advanced animation/logic features.170171178
  • Presentation & reviews
    • Figma: Strong presentation/Observation Mode, multi-flow previews, easy sharing.179173
    • Penpot: Real-time multiplayer, comments, shareable prototypes; presentation/review flows less polished.171179

Implication: If your workflow leans heavily on complex protos, interactive components, and polished design reviews, Figma is stronger; Penpot is fine for mainstream product flows and user tests.170179171


Collaboration, performance & ecosystem

  • Collaboration model
    • Figma: Best‑in‑class real‑time editing, comments, Observation Mode, integrated FigJam; deep tooling for large teams.169179173
    • Penpot: Real‑time multiplayer, comments, shared links; collaboration is solid but less feature‑rich (no full Observation Mode equivalent).171179
  • Integrations
    • Figma: Broad integrations (Slack, Jira, Trello, Git-based tools, dev H.P.006-WORKFLOWS) plus a third‑party plugin ecosystem and FigJam.179172173
    • Penpot: Fewer integrations; the focus is on open formats and dev‑friendly exports (CSS/SVG/HTML) rather than a large marketplace.179177
  • Performance & hosting
    • Figma: Cloud SaaS (plus enterprise options) with strong infrastructure, but you depend on Figma’s uptime, pricing, and policies; large files can feel heavy.169179180
    • Penpot: Web-based and self-hostable, with strong performance on large, SVG-heavy, design-system-heavy projects; you control reliability yourself.169171177178

Implication: For enterprise SaaS with broad toolchains and non‑regulated workloads, Figma’s ecosystem wins; for teams needing self‑hosting, data sovereignty, or close dev alignment, Penpot is compelling.171181177169


Openness, pricing & governance

  • Licensing & cost
    • Figma: Proprietary, per‑seat pricing; free tier with limitations, paid tiers for org features.170173
    • Penpot: Open source and free; can be self‑hosted or used via the hosted offering, so cost scales with your infrastructure, not a per‑seat license.168175177
  • Data ownership
    • Figma: Data lives in Figma’s cloud (with enterprise agreements for bigger orgs); migration and lock‑in are ongoing concerns for some teams.169181
    • Penpot: Open standards (CSS/SVG/HTML) and full data ownership with self‑hosting; no vendor lock‑in by design.168175177
  • Governance & extensibility
    • Figma: Roadmap controlled by the vendor; extensibility via plugins and APIs.169172174
    • Penpot: Community-driven development; extensibility via the open codebase and APIs, easier to embed in custom H.P.006-WORKFLOWS.182177178

High-level takeaway:

  • Choose Figma when you prioritize polished collaboration, advanced prototyping, ecosystem integrations, and AI‑accelerated H.P.006-WORKFLOWS.
  • Choose Penpot when you need open source, self‑hosting, strong performance on standards-based design systems, and tighter design–code alignment with full control over data and infra.177175168169170171 183184

Footnotes

  1. https://code.claude.com/docs/en/skills

  2. https://www.anthropic.com/news/enabling-claude-code-to-work-more-autonomously

  3. https://docs.ag-ui.com

  4. https://www.youtube.com/watch?v=3EHnp-SH4O8

  5. https://github.com/hesreallyhim/awesome-claude-code

  6. https://www.aitmpl.com/skills

  7. https://www.anthropic.com/engineering/advanced-tool-use

  8. https://www.copilotkit.ai/blog/how-microsoft-agent-framework-ag-ui-enable-agentic-ux-and-generative-ui

  9. https://www.copilotkit.ai/generative-ui

  10. https://arxiv.org/html/2507.22358v1

  11. https://arxiv.org/abs/2507.22358

  12. https://dev.to/camelai/agents-with-human-in-the-loop-everything-you-need-to-know-3fo5

  13. https://code.claude.com/docs/en/hooks

  14. https://www.anthropic.com/engineering/claude-code-best-practices

  15. https://www.youtube.com/watch?v=Avh6G3Y9lJU

  16. https://github.com/ag-ui-protocol/ag-ui/

  17. https://www.datacamp.com/tutorial/ag-ui

  18. https://dev.to/copilotkit/introducing-ag-ui-the-protocol-where-agents-meet-users-10gp

  19. https://www.youngleaders.tech/p/claude-skills-commands-subagents-plugins

  20. https://www.linkedin.com/posts/lewisowain_claude-code-simplified-activity-7388911764161122304-7j7U

  21. https://code.claude.com/docs/en/skills

  22. https://www.anthropic.com/engineering/advanced-tool-use

  23. https://code.claude.com/docs/en/hooks

  24. https://www.eesel.ai/blog/hooks-reference-claude-code

  25. https://www.youtube.com/watch?v=J5B9UGTuNoM

  26. https://www.youtube.com/watch?v=kfHA1GnzZpA

  27. https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/magentic-ui-report.pdf

  28. https://arxiv.org/abs/2507.22358

  29. https://docs.ag-ui.com/introduction

  30. https://www.reddit.com/r/ClaudeAI/comments/1peql33/finally_claude_started_using_my_agents_and_skills/

  31. https://www.youtube.com/watch?v=kFpLzCVLA20

  32. https://docs.ag-ui.com/concepts/events

  33. https://docs.ag-ui.com/concepts/agents

  34. https://docs.ag-ui.com

  35. https://learn.microsoft.com/en-us/agent-framework/integrations/ag-ui/getting-started

  36. https://www.microsoft.com/en-us/research/wp-content/uploads/2025/07/magentic-ui-report.pdf

  37. https://blog.logrocket.com/build-real-ai-with-ag-ui/

  38. https://xaltiusacademy.com/ag-ui-design-for-human-agent-interfaces/

  39. https://ai.pydantic.dev/ui/ag-ui/

  40. https://ai.pydantic.dev/examples/ag-ui/

  41. https://dev.to/copilotkit/build-a-fullstack-stock-portfolio-ai-agent-with-pydantic-ai-ag-ui-3e2e

  42. https://www.reddit.com/r/LocalLLaMA/comments/1kl2rol/agui_the_protocol_that_bridges_ai_agents_and_the/

  43. https://docs.agno.com/agent-os/interfaces/ag-ui/introduction

  44. https://dev.to/copilotkit/how-microsoft-agent-framework-ag-ui-enable-agentic-ux-generative-ui-eci

  45. https://github.com/ag-ui-protocol/ag-ui

  46. https://hackernoon.com/the-16-events-you-need-to-master-to-build-ag-ui-apps

  47. https://www.datacamp.com/tutorial/ag-ui

  48. https://docs.ag-ui.com/concepts/events

  49. https://docs.ag-ui.com

  50. https://pkg.go.dev/github.com/ag-ui-protocol/ag-ui/sdks/community/go/pkg/core/events

  51. https://learn.microsoft.com/en-us/agent-framework/integrations/ag-ui/getting-started

  52. https://hackernoon.com/the-16-events-you-need-to-master-to-build-ag-ui-apps

  53. https://hackernoon.com/building-your-first-ag-ui-app-a-smart-echo-agent-implementation

  54. https://www.datacamp.com/tutorial/ag-ui

  55. https://docs.ag-ui.com/concepts/tools

  56. https://dev.to/rakshith2605/building-agent-based-guis-the-future-of-human-computer-interaction-18kp

  57. https://blog.logrocket.com/build-real-ai-with-ag-ui/

  58. https://github.com/ag-ui-protocol/ag-ui

  59. https://learn.microsoft.com/pt-br/agent-framework/integrations/ag-ui/getting-started

  60. https://ai.pydantic.dev/ui/ag-ui/

  61. https://github.com/ag-ui-protocol/ag-ui/blob/main/README.md

  62. https://ai.pydantic.dev/examples/ag-ui/

  63. https://docs.ag-ui.com/concepts/events

  64. https://docs.ag-ui.com/concepts/state

  65. https://learn.microsoft.com/en-us/agent-framework/integrations/ag-ui/state-management

  66. https://learn.microsoft.com/en-us/agent-framework/integrations/ag-ui/getting-started

  67. https://blog.logrocket.com/build-real-ai-with-ag-ui/

  68. https://experienceleaguecommunities.adobe.com/t5/journey-optimizer-questions/offers-ajo-we-can-only-select-event-schema-in-datastream/m-p/717262

  69. https://ai.pydantic.dev/ui/ag-ui/

  70. https://github.com/adobe/xdm/blob/master/docs/reference/adobe/experience/decisioning/decisionevent-all.schema.md

  71. https://mindflow.io/blog/what-is-ag-ui

  72. https://ai.pydantic.dev/examples/ag-ui/

  73. https://docs.uipath.com/activities/other/latest/workflow/forms-advanced-logic-old

  74. https://learn.microsoft.com/en-us/agent-framework/integrations/ag-ui/

  75. https://docs.ag-ui.com/concepts/tools

  76. https://github.com/GreenShadeZhang/ag-ui-sample

  77. https://docs.ag-ui.com/sdk/js/core/events

  78. https://www.copilotkit.ai/blog/master-the-17-ag-ui-event-types-for-building-agents-the-right-way

  79. https://github.com/ag-ui-protocol/ag-ui

  80. https://code.claude.com/docs/en/skills

  81. https://github.com/nextlevelbuilder/ui-ux-pro-max-skill/blob/main/CLAUDE.md

  82. https://arxiv.org/abs/2507.22358

  83. https://github.com/ag-ui-protocol/ag-ui/

  84. https://www.bprigent.com/article/7-ux-patterns-for-human-oversight-in-ambient-ai-agents

  85. https://www.youtube.com/watch?v=nDHXLnwlIaY

  86. https://www.linkedin.com/posts/jasonzhoudesign_how-to-get-claude-code-output-top-tier-ui-activity-7393232579505627136-hxZW

  87. https://learn.microsoft.com/en-us/agent-framework/integrations/ag-ui/

  88. https://www.reddit.com/r/ClaudeAI/comments/1oh01tt/fluxwing_claude_skills_for_asciifirst_ux_design/

  89. https://ui-ux-pro-max-skill.nextlevelbuilder.io

  90. https://dev.to/copilotkit/how-microsoft-agent-framework-ag-ui-enable-agentic-ux-generative-ui-eci

  91. https://aws.amazon.com/blogs/machine-learning/implement-human-in-the-loop-confirmation-with-amazon-bedrock-agents/

  92. https://arxiv.org/html/2507.22358v1

  93. https://docs.ag-ui.com/concepts/events

  94. https://docs.ag-ui.com/concepts/state

  95. https://learn.microsoft.com/en-us/agent-framework/integrations/ag-ui/state-management

  96. https://docs.ag-ui.com/concepts/tools

  97. https://www.youngleaders.tech/p/claude-skills-commands-subagents-plugins

  98. https://www.youtube.com/watch?v=vRQQo2gTLWw

  99. https://designlab.com/blog/important-ux-design-skills

  100. https://www.careerist.com/insights/20-skills-required-for-ui/ux-designers

  101. https://brainstation.io/career-guides/what-skills-do-you-need-to-be-a-ux-designer

  102. https://www.geeksforgeeks.org/blogs/top-skills-for-ui-designers-technical-and-interpersonal/

  103. https://www.coursera.org/articles/essential-skills-for-ux-designers

  104. https://www.netguru.com/blog/ui-designer-skills

  105. https://www.justinmind.com/ux-design/skills-ux-ui-designers

  106. https://www.reddit.com/r/UXDesign/comments/gtn5q1/uxui_design_subskills_list/

  107. https://octet.design/journal/check-ui-ux-designer-skills/

  108. https://www.reddit.com/r/UXDesign/comments/y0b1fu/what_skills_should_ux_designers_have/

  109. https://www.aela.io/en/blog/all/ux-ui-design-tools

  110. https://contentsquare.com/guides/ui-design/tools/

  111. https://www.interaction-design.org/literature/article/ux-design-tools-definitive-guide

  112. https://www.codica.com/blog/design-tools/

  113. https://www.jobsity.com/blog/10-ui-design-tools-you-should-know-about

  114. https://www.figma.com/resource-library/ui-design-principles/

  115. https://www.youtube.com/watch?v=11ixIV656X0

  116. https://www.reddit.com/r/gamedev/comments/rw1y6r/what_ui_design_tools_are_mainly_used_in_the/

  117. https://kinsta.com/blog/figma-vs-sketch/

  118. https://www.wildnetedge.com/blogs/figma-vs-sketch-which-ui-ux-tool-wins-for-designers

  119. https://detachless.com/blog/figma-vs-sketch

  120. https://www.reddit.com/r/FigmaDesign/comments/1935exj/figma_vs_sketch/

  121. https://www.reddit.com/r/FigmaDesign/comments/mjwa2l/figma_vs_sketch_which_one_do_you_suggest/

  122. https://www.sketch.com/vs/figma/

  123. https://www.youtube.com/watch?v=TxGY35ZcFcE

  124. https://www.dash.app/blog/figma-vs-sketch

  125. https://www.toptal.com/designers/figma/figma-vs-adobe-xd-vs-sketch

  126. https://www.youtube.com/watch?v=mA7mi6KRfT8

  127. https://www.reddit.com/r/FigmaDesign/comments/1al1pfy/sketch_library_migration_to_figma/

  128. https://www.figma.com/best-practices/what-to-expect-when-moving-from-sketch-to-figma/

  129. https://help.figma.com/hc/en-us/articles/360040514273-Import-Sketch-files

  130. https://figma-signup.helpjuice.com/import-and-export/import-sketch-files

  131. https://www.digidop.com/blog/how-to-migrate-sketch-designs-to-figma

  132. https://www.youtube.com/watch?v=64Baf8B6sYg

  133. https://www.reddit.com/r/FigmaDesign/comments/rv3728/sketch_figma_transition/

  134. https://www.reddit.com/r/FigmaDesign/comments/ttkf7g/sketch_to_figma/

  135. https://www.reddit.com/r/FigmaDesign/comments/lrmm6s/once_figma_never_sketch_and_now_i_am_looking_for/

  136. https://www.reddit.com/r/FigmaDesign/comments/mkxjuj/importing_individual_artboards_from_sketch/

  137. https://www.flinto.com/learn/import-from-sketch

  138. https://www.youtube.com/watch?v=sT0FbAv2Vcg

  139. https://www.youtube.com/watch?v=RovOvUuDcNA

  140. https://www.reddit.com/r/UXDesign/comments/12zheka/from_sketch_to_figma/

  141. https://www.sketch.com/changelog/import-your-figma-files/

  142. https://www.figma.com/best-practices/what-to-expect-when-moving-from-sketch-to-figma/

  143. https://uxdesign.cc/we-gave-up-sketch-and-rebuilt-our-design-system-in-figma-a1dfc64948e5

  144. https://www.weareconflux.com/en/blog/migration-from-sketch-to-figma-guide-for-ux-ui-designers/

  145. https://www.dmcgroup.eu/en/blog/innovation/migration-of-a-global-design-system-from-sketch-to-figma/

  146. https://uxdesign.cc/dont-make-the-mistakes-i-did-moving-your-design-system-from-sketch-to-figma-11002c43b29f

  147. https://help.figma.com/hc/en-us/articles/18490793776023-Update-1-Tokens-variables-and-styles

  148. https://www.figma.com/blog/design-systems-102-how-to-build-your-design-system/

  149. https://www.zeroheight.com/blog/figma-variables-and-design-tokens-part-one-variable-architecture/

  150. https://www.figma.com/best-practices/components-styles-and-shared-libraries/

  151. https://www.borderux.com/blog/migrating-your-design-system-from-sketch-to-figma-best-practices

  152. https://www.digidop.com/blog/how-to-migrate-sketch-designs-to-figma

  153. https://www.reddit.com/r/FigmaDesign/comments/oasrlv/we_gave_up_sketch_and_rebuilt_our_design_system/

  154. https://forum.figma.com/ask-the-community-7/switching-from-sketch-to-figma-8371

  155. https://www.figma.com/best-practices/

  156. https://www.sketch.com/blog/design-systems-best-practices/

  157. https://www.reddit.com/r/FigmaDesign/comments/1al1pfy/sketch_library_migration_to_figma/

  158. https://www.xda-developers.com/tried-free-open-source-browser-based-alternative-figma/

  159. https://penpot.app

  160. https://github.com/penpot/penpot

  161. https://codelab.pk/27-top-figma-alternatives-2025-open-source/

  162. https://uizard.io/figma-alternative/

  163. https://webstudio.is

  164. https://www.reddit.com/r/UXDesign/comments/11veg61/open_source_alternatives_to_figma/

  165. https://www.banani.co/blog/figma-alternatives-2024

  166. https://www.mockplus.com/blog/post/10-best-offline-alternatives-to-figma

  167. https://www.youtube.com/watch?v=FSmJm6i4WAM

  168. https://penpot.app/penpot-vs-figma

  169. https://detachless.com/blog/figma-vs-penpot-2025-complete-comparison

  170. https://www.banani.co/blog/penpot-vs-figma-review

  171. https://penpot.app/features

  172. https://www.eweek.com/artificial-intelligence/figma-make-ai-tool-designers/

  173. https://en.wikipedia.org/wiki/Figma

  174. https://techcrunch.com/2025/12/10/figma-launches-new-ai-powered-object-removal-and-image-extension/

  175. https://penpot.app

  176. https://www.figma.com/blog/design-systems-102-how-to-build-your-design-system/

  177. https://nestify.io/blog/what-is-a-penpot/

  178. https://blog.elest.io/penpot-free-open-source-design-tool/

  179. https://figtowp.com/penpot-vs-figma/

  180. https://www.linkedin.com/pulse/understanding-penpot-vs-figma-performance-tres-henry-i7npe

  181. https://www.ux-republic.com/en/Penpot-vs.-Figma:-Can-the-open-source-alternative-dethrone-the-leader/

  182. https://github.com/penpot/penpot

  183. https://www.reddit.com/r/web_design/comments/1ijojui/its_2025_how_does_penpot_compare_to_figma/

  184. https://www.designwhine.com/penpot-vs-figma-review/