General-Purpose Agentic System Designs

Abstracted Patterns from Clinical AI Research

This document extracts domain-agnostic architectural patterns from the clinical dialogue survey, providing reusable designs for any autonomous agent system.


The Universal Agentic Framework

Core Principle

Every autonomous agent operates in a Partially Observable Decision Process (akin to a POMDP), in which:

  • The true state of the environment is hidden or uncertain
  • Actions must be taken based on incomplete information
  • Long-term outcomes matter more than immediate responses
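The first bullet can be made concrete with a minimal belief update over hidden states: the agent never sees the true state, only noisy observations, and maintains a probability distribution that it revises with each observation. The states, observations, and likelihoods below are an illustrative toy, not from the survey.

```python
def update_belief(belief, observation, likelihood):
    """Bayes update: P(state | obs) is proportional to P(obs | state) * P(state)."""
    posterior = {s: p * likelihood[s].get(observation, 0.0)
                 for s, p in belief.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# Two hidden states; the observation "alert" is far likelier under "faulty".
likelihood = {
    "healthy": {"alert": 0.1, "quiet": 0.9},
    "faulty":  {"alert": 0.8, "quiet": 0.2},
}
belief = {"healthy": 0.5, "faulty": 0.5}

belief = update_belief(belief, "alert", likelihood)
# After one "alert", the agent should lean toward "faulty".
assert belief["faulty"] > belief["healthy"]
```

Actions chosen against this belief (rather than the unknown true state) are what "acting on incomplete information" means operationally.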

The Fundamental Trade-offs

All agentic systems must navigate two orthogonal axes: cognition versus execution (does the agent advise or act?) and implicit versus explicit (is its behavior emergent or protocol-driven?).

                 COGNITION                       EXECUTION
                 (Advisor)                    (Collaborator)
                     │                               │
  IMPLICIT  ┌────────┴────────┐             ┌────────┴────────┐
 (Creative) │  Latent Space   │             │    Emergent     │
            │    Reasoner     │             │     Planner     │
            │                 │             │                 │
            │    Creative     │             │   Autonomous    │
            │   synthesis     │             │     action      │
            └────────┬────────┘             └────────┬────────┘
                     │                               │
  EXPLICIT  ┌────────┴────────┐             ┌────────┴────────┐
 (Reliable) │    Grounded     │             │   Verifiable    │
            │  Synthesizer    │             │    Executor     │
            │                 │             │                 │
            │ Evidence-based  │             │   Protocol-     │
            │  understanding  │             │  driven action  │
            └─────────────────┘             └─────────────────┘

Pattern 1: Latent Space Reasoner (LSR)

When to Use

  • Creative problem-solving required
  • Novel situations without established protocols
  • Human expert available for validation
  • Speed valued over auditability

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                      LATENT SPACE REASONER                      │
├─────────────────────────────────────────────────────────────────┤
│ INPUT                                                           │
│   └── Context + Query + Constraints                             │
├─────────────────────────────────────────────────────────────────┤
│ REASONING ENGINE                                                │
│   ├── Chain-of-Thought Decomposition                            │
│   │     └── Break complex problem into reasoning steps          │
│   ├── Multi-Perspective Analysis                                │
│   │     └── Generate multiple viewpoints internally             │
│   └── Self-Consistency Iteration                                │
│         └── Multiple paths → voting → consensus                 │
├─────────────────────────────────────────────────────────────────┤
│ MEMORY                                                          │
│   ├── Parametric: Domain expertise from training                │
│   └── Non-Parametric: Conversation context, user preferences    │
├─────────────────────────────────────────────────────────────────┤
│ OUTPUT                                                          │
│   └── Synthesized insight/recommendation with reasoning trace   │
└─────────────────────────────────────────────────────────────────┘

Key Implementation Details

Reasoning Engine:

class LatentSpaceReasoner:
    def decompose(self, query, context):
        # Generate reasoning chain
        steps = self.generate_cot(query, context)

        # Apply multi-perspective analysis
        perspectives = self.generate_perspectives(query)

        # Synthesize across perspectives
        return self.merge_perspectives(steps, perspectives)

    def iterate(self, hypothesis, context, n_paths=5):
        # Self-consistency: sample multiple independent reasoning paths
        paths = [self.reason(hypothesis, context) for _ in range(n_paths)]

        # Aggregate via voting
        return self.majority_vote(paths)
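The majority_vote step above can be made concrete in a few lines. The sketch below votes over the final answers from several sampled paths; the path list is a stub standing in for repeated model calls with temperature above zero.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer and the fraction of paths that agreed."""
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(answers)

# Stub for five independent reasoning paths' final answers.
paths = ["42", "42", "41", "42", "37"]
answer, agreement = majority_vote(paths)
assert answer == "42" and agreement == 0.6
```

The agreement fraction is a useful by-product: it doubles as a cheap confidence signal for the mitigations discussed below.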

Memory Module:

class LSRMemory:
    COMPRESS_THRESHOLD = 50          # turns to keep before summarizing

    def __init__(self, model):
        self.model = model           # summarizer
        self.context_buffer = []     # Non-parametric
        self.summary_cache = {}      # Compressed context

    def update(self, observation, response):
        self.context_buffer.append((observation, response))
        if len(self.context_buffer) > self.COMPRESS_THRESHOLD:
            self.compress()

    def compress(self):
        # Recursive summarization: fold the buffer into one summary entry
        summary = self.model.summarize(self.context_buffer)
        self.context_buffer = [(summary, None)]

Expected Outcomes

Metric        Expected Result   Rationale
Creativity    High              Unconstrained by external sources
Speed         Fast              No retrieval latency
Consistency   Medium            Self-consistency helps but not guaranteed
Auditability  Low               Reasoning is probabilistic

Risk Mitigations

  • Human validation required for consequential decisions
  • Confidence calibration to flag uncertain outputs
  • Regular benchmarking against known-good answers
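The third mitigation, benchmarking against known-good answers, can be sketched as a small regression harness: score the agent against a fixed suite and fail fast on drift. The agent, suite, and threshold below are illustrative stand-ins.

```python
def benchmark(agent, suite, min_accuracy=0.8):
    """Score an agent callable against (query, expected) pairs."""
    correct = sum(1 for query, expected in suite if agent(query) == expected)
    accuracy = correct / len(suite)
    return accuracy, accuracy >= min_accuracy

# A toy agent that answers arithmetic queries.
def toy_agent(query):
    return str(eval(query))

suite = [("1+1", "2"), ("2*3", "6"), ("10-4", "6")]

accuracy, passed = benchmark(toy_agent, suite)
assert accuracy == 1.0 and passed
```

Run such a harness on every model or prompt change; a drop below the threshold is the trigger for the human-validation mitigation above.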

Pattern 2: Grounded Synthesizer (GS)

When to Use

  • Accuracy and verifiability are paramount
  • Working with structured data sources
  • Audit trails required
  • Regulatory compliance context

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                      GROUNDED SYNTHESIZER                       │
├─────────────────────────────────────────────────────────────────┤
│ INPUT                                                           │
│   └── Query requiring factual grounding                         │
├─────────────────────────────────────────────────────────────────┤
│ QUERY DECOMPOSITION                                             │
│   ├── Sub-query generation                                      │
│   └── Source routing (which sources for which sub-queries)      │
├─────────────────────────────────────────────────────────────────┤
│ ACTION EXECUTION                                                │
│   ├── Knowledge-Based: Structured queries to databases/APIs     │
│   ├── Search-Based: Retrieve from unstructured sources          │
│   └── Tool-Based: Deterministic calculations                    │
├─────────────────────────────────────────────────────────────────┤
│ EVIDENCE MANAGEMENT                                             │
│   ├── Firewalled Context (external data only)                   │
│   ├── Conflict Detection                                        │
│   └── Citation Tracking                                         │
├─────────────────────────────────────────────────────────────────┤
│ SYNTHESIS                                                       │
│   └── Generate response constrained to retrieved evidence       │
├─────────────────────────────────────────────────────────────────┤
│ OUTPUT                                                          │
│   └── Grounded response with explicit source citations          │
└─────────────────────────────────────────────────────────────────┘

Key Implementation Details

Action Execution Module:

from datetime import datetime, timezone

class GroundedSynthesizer:
    def execute_actions(self, sub_queries):
        evidence = []

        for query in sub_queries:
            source_type = self.route_query(query)

            if source_type == "knowledge_base":
                result = self.query_kb(query)
            elif source_type == "search":
                result = self.search_with_attribution(query)
            elif source_type == "tool":
                result = self.invoke_tool(query)
            else:
                raise ValueError(f"Unknown source type: {source_type}")

            evidence.append({
                "query": query,
                "result": result,
                "source": source_type,
                "timestamp": datetime.now(timezone.utc)
            })

        return evidence

    def search_with_attribution(self, query):
        # Three-stage process: Forage, Constrain, Attribute

        # 1. Forage: multiple search passes over external sources
        raw_results = self.multi_round_search(query)

        # 2. Constrain: firewall results in an isolated context
        constrained = self.isolate_context(raw_results)

        # 3. Attribute: link every claim back to its source
        attributed = self.extract_with_citations(constrained)

        return attributed
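The route_query call is left abstract above. A minimal illustrative version is a keyword heuristic; a production router would use a trained classifier, and the keywords below are placeholders.

```python
# Illustrative source routing: pick a source type per sub-query.
def route_query(query):
    q = query.lower()
    if any(k in q for k in ("calculate", "convert", "sum")):
        return "tool"            # deterministic computation
    if any(k in q for k in ("policy", "record", "schema")):
        return "knowledge_base"  # structured lookup
    return "search"              # default: unstructured retrieval

assert route_query("Calculate the order total") == "tool"
assert route_query("Fetch the policy record for account 7") == "knowledge_base"
assert route_query("Recent findings on topic X") == "search"
```

Whatever the routing mechanism, the important property is the default: anything unrecognized falls through to attributed search rather than unverified generation.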

Evidence Management Module:

class EvidenceManager:
    def __init__(self, model):
        self.model = model            # generator used for synthesis
        self.evidence_ledger = []     # Append-only
        self.conflict_log = []

    def add_evidence(self, evidence):
        # Check for conflicts with previously gathered evidence
        conflicts = self.detect_conflicts(evidence)
        if conflicts:
            self.conflict_log.append(conflicts)
            evidence = self.resolve_by_hierarchy(evidence, conflicts)

        self.evidence_ledger.append(evidence)

    def synthesize(self, query):
        # Generate response using ONLY ledger contents
        return self.model.generate(
            query=query,
            context=self.evidence_ledger,
            instruction="Cite sources for every claim"
        )
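The conflict-detection and hierarchy-resolution steps can be made concrete with a runnable toy: two pieces of evidence conflict when they answer the same query with different results, and resolution prefers the higher-ranked source. The ranking and example values are illustrative assumptions.

```python
# Toy source-trust hierarchy (assumed, not from the document).
SOURCE_RANK = {"knowledge_base": 2, "tool": 2, "search": 1}

class EvidenceLedger:
    def __init__(self):
        self.entries = []       # append-only
        self.conflicts = []

    def add(self, query, result, source):
        entry = {"query": query, "result": result, "source": source}
        # Conflict: same query, different result, already on the ledger.
        for prior in self.entries:
            if prior["query"] == query and prior["result"] != result:
                self.conflicts.append((prior, entry))
        self.entries.append(entry)

    def resolve(self, query):
        # Prefer the highest-ranked source among entries for this query.
        candidates = [e for e in self.entries if e["query"] == query]
        return max(candidates, key=lambda e: SOURCE_RANK[e["source"]])["result"]

ledger = EvidenceLedger()
ledger.add("rate_limit", "100/s", "search")
ledger.add("rate_limit", "50/s", "knowledge_base")
assert len(ledger.conflicts) == 1
assert ledger.resolve("rate_limit") == "50/s"   # KB outranks search
```

Note the ledger never overwrites: the losing value stays recorded, which is exactly what makes the conflict auditable later.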

Expected Outcomes

Metric        Expected Result   Rationale
Accuracy      High              Grounded in verified sources
Auditability  High              Complete citation trail
Creativity    Low               Constrained to retrieved content
Latency       Medium            Retrieval overhead

Risk Mitigations

  • Gap detection when evidence is insufficient
  • Conflict escalation to human reviewers
  • Source quality scoring

Pattern 3: Emergent Planner (EP)

When to Use

  • Multi-step autonomous action required
  • Novel situations without established protocols
  • Flexibility valued over predictability
  • Outcome-based evaluation possible

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                        EMERGENT PLANNER                         │
├─────────────────────────────────────────────────────────────────┤
│ INPUT                                                           │
│   └── High-level goal + environment state                       │
├─────────────────────────────────────────────────────────────────┤
│ PLANNING ENGINE                                                 │
│   ├── Cognitive Map Construction                                │
│   │     └── Generate possible action sequences                  │
│   ├── Tree-of-Thought Exploration                               │
│   │     └── Evaluate branches, prune implausible                │
│   └── Plan Selection                                            │
│         └── Choose optimal sequence                             │
├─────────────────────────────────────────────────────────────────┤
│ EXECUTION LOOP                                                  │
│   ├── Execute current step                                      │
│   ├── Observe outcome                                           │
│   ├── Update internal state                                     │
│   └── Refine remaining plan                                     │
├─────────────────────────────────────────────────────────────────┤
│ MEMORY                                                          │
│   ├── Parametric: Procedural knowledge, workflow patterns       │
│   └── Non-Parametric: Execution log, state tracking             │
├─────────────────────────────────────────────────────────────────┤
│ REFLECTION                                                      │
│   ├── Self-critique after each step                             │
│   └── Plan adjustment based on feedback                         │
├─────────────────────────────────────────────────────────────────┤
│ OUTPUT                                                          │
│   └── Goal completion + execution trace                         │
└─────────────────────────────────────────────────────────────────┘

Key Implementation Details

Planning Engine:

class EmergentPlanner:
    def create_plan(self, goal, state):
        # Build cognitive map
        action_space = self.generate_actions(goal, state)

        # Tree-of-thought exploration
        tree = self.build_action_tree(action_space)

        # Evaluate branches
        for branch in tree.branches:
            branch.score = self.evaluate_branch(branch, goal)

        # Select optimal path
        return tree.best_path()

    def execute_with_reflection(self, plan, environment):
        execution_log = []

        # Index-based loop so that replanning can replace remaining steps
        i = 0
        while i < len(plan):
            step = plan[i]

            # Execute
            result = environment.execute(step)
            execution_log.append({"step": step, "result": result})

            # Reflect
            critique = self.self_critique(step, result, plan)

            # Adjust if needed (replan keeps completed steps in place)
            if critique.suggests_adjustment:
                plan = self.replan(plan, critique, environment.state)

            i += 1

        return execution_log
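The branch-scoring and best-path selection above can be shown end to end with a toy problem. The scorer here is a stub standing in for model-based branch evaluation; every name is illustrative.

```python
def best_path(branches, score):
    """Score each candidate action sequence and return the best one."""
    scored = [(score(branch), branch) for branch in branches]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[0][1]

# Toy goal: reach a total of 10 in as few steps as possible.
branches = [
    ["+5", "+5"],
    ["+2", "+3", "+5"],
    ["+10"],
]

# Stub scorer: reward reaching the target, penalize plan length.
def score(branch):
    total = sum(int(step) for step in branch)
    return (total == 10) * 100 - len(branch)

assert best_path(branches, score) == ["+10"]
```

Pruning falls out of the same mechanism: branches whose partial score drops below a cutoff are simply never expanded further.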

Reflection Module:

from dataclasses import dataclass

@dataclass
class CritiqueResult:
    suggests_adjustment: bool
    suggested_changes: list

class ReflectionModule:
    def critique(self, action, result, plan):
        # Internal adversarial evaluation
        questions = [
            "Did this action advance the goal?",
            "Were there unexpected consequences?",
            "Is the remaining plan still optimal?",
            "What could go wrong next?"
        ]

        analysis = self.model.analyze(
            action=action,
            result=result,
            remaining_plan=plan,
            questions=questions
        )

        return CritiqueResult(
            suggests_adjustment=analysis.flags_issues,
            suggested_changes=analysis.recommendations
        )

Expected Outcomes

Metric          Expected Result   Rationale
Flexibility     High              Adapts plans dynamically
Autonomy        High              Self-directed execution
Predictability  Low               Emergent behavior
Efficiency      Variable          Depends on plan quality

Risk Mitigations

  • Mandatory reflection checkpoints
  • Rollback capability for each step
  • Goal drift detection (verify still pursuing original goal)
  • Human escalation triggers
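The rollback mitigation is worth sketching: record an inverse operation for every executed step so that a failure can be unwound in reverse order. The log, state, and step names below are illustrative.

```python
class RollbackLog:
    def __init__(self):
        self._undo_stack = []

    def record(self, description, undo_fn):
        """Register the inverse of a step that just succeeded."""
        self._undo_stack.append((description, undo_fn))

    def rollback(self):
        """Unwind all recorded steps, most recent first."""
        unwound = []
        while self._undo_stack:
            description, undo_fn = self._undo_stack.pop()
            undo_fn()
            unwound.append(description)
        return unwound

state = {"items": []}
log = RollbackLog()

state["items"].append("a")
log.record("added a", lambda: state["items"].remove("a"))
state["items"].append("b")
log.record("added b", lambda: state["items"].remove("b"))

assert log.rollback() == ["added b", "added a"]
assert state["items"] == []
```

Steps without a cheap inverse (sent emails, external side effects) are exactly the ones that should trip the human-escalation trigger instead.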

Pattern 4: Verifiable Executor (VE)

When to Use

  • High-stakes actions with clear protocols
  • Auditability and compliance required
  • Predictable, repeatable execution needed
  • Integration with enterprise systems

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                       VERIFIABLE EXECUTOR                       │
├─────────────────────────────────────────────────────────────────┤
│ INPUT                                                           │
│   └── Task request + context                                    │
├─────────────────────────────────────────────────────────────────┤
│ PROTOCOL MAPPING                                                │
│   ├── Intent Classification                                     │
│   │     └── Match request to pre-defined workflow               │
│   ├── State Assessment                                          │
│   │     └── Determine current position in workflow              │
│   └── Step Selection                                            │
│         └── Identify next verified action                       │
├─────────────────────────────────────────────────────────────────┤
│ VALIDATION                                                      │
│   ├── Authority Check                                           │
│   ├── Precondition Verification                                 │
│   └── Business Rule Compliance                                  │
├─────────────────────────────────────────────────────────────────┤
│ ACTION EXECUTION                                                │
│   ├── Deterministic Tool Invocation                             │
│   ├── Outcome Verification                                      │
│   └── State Update                                              │
├─────────────────────────────────────────────────────────────────┤
│ MEMORY                                                          │
│   ├── Parametric: Workflow understanding, NL→action mapping     │
│  └── Non-Parametric: Protocol library, state register, audit log│
├─────────────────────────────────────────────────────────────────┤
│ OUTPUT                                                          │
│   └── Verified action + complete audit trail                    │
└─────────────────────────────────────────────────────────────────┘

Key Implementation Details

Protocol Mapping:

class VerifiableExecutor:
    def __init__(self, workflow_library):
        self.workflows = workflow_library
        self.state_register = StateRegister()
        self.audit_log = AuditLog()

    def map_to_protocol(self, request, context):
        # Classify intent
        intent = self.classify_intent(request)

        # Find matching workflow
        workflow = self.workflows.find(intent)
        if not workflow:
            return self.escalate_to_human("No matching protocol")

        # Determine current state
        state = self.state_register.get(context.session_id)

        # Find next step
        next_step = workflow.get_next_step(state)

        return ProtocolMapping(
            workflow=workflow,
            current_state=state,
            next_step=next_step
        )
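The workflow library and get_next_step lookup above reduce to a small state machine. In this minimal sketch a workflow is an ordered list of steps and the state is the set of completed steps; all names are illustrative.

```python
class Workflow:
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps          # ordered protocol steps

    def get_next_step(self, completed):
        """Return the first step not yet completed, or None when done."""
        for step in self.steps:
            if step not in completed:
                return step
        return None

refund = Workflow("refund", ["verify_identity", "check_eligibility",
                             "issue_refund", "notify_user"])

assert refund.get_next_step(set()) == "verify_identity"
assert refund.get_next_step({"verify_identity"}) == "check_eligibility"
assert refund.get_next_step(set(refund.steps)) is None
```

Because the next action is a pure function of recorded state, replaying the state register reproduces the agent's exact decision sequence, which is what makes this pattern auditable.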

Validated Execution:

class ValidatedExecution:
    def execute(self, step, context):
        # Pre-validation
        validation = self.validate(step, context)
        if not validation.passed:
            return self.handle_validation_failure(validation)

        # Execute with circuit breaker
        try:
            result = self.execute_with_timeout(
                step.action,
                step.parameters,
                timeout=step.max_duration
            )
        except Exception as e:
            self.audit_log.record_failure(step, e)
            return self.rollback_if_needed(step, e)

        # Post-validation
        if not self.verify_outcome(result, step.expected_outcome):
            return self.handle_unexpected_outcome(result, step)

        # Update state and log
        self.state_register.update(step, result)
        self.audit_log.record_success(step, result)

        return result

    def validate(self, step, context):
        checks = [
            self.check_authority(step, context.user),
            self.check_preconditions(step, context.state),
            self.check_business_rules(step, context.rules)
        ]
        return ValidationResult(all(checks), checks)
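The individual checks are left abstract above. A useful refinement, sketched below with illustrative data shapes, is to collect every failure rather than stop at the first, so the audit trail records the complete picture.

```python
def check_authority(user, step):
    return step["required_role"] in user["roles"]

def check_preconditions(state, step):
    return all(state.get(k) == v for k, v in step["preconditions"].items())

def validate(step, user, state):
    """Run all checks; return (passed, list of named failures)."""
    failures = []
    if not check_authority(user, step):
        failures.append("authority")
    if not check_preconditions(state, step):
        failures.append("preconditions")
    return (len(failures) == 0, failures)

step = {"required_role": "agent",
        "preconditions": {"identity_verified": True}}

ok, why = validate(step, {"roles": ["agent"]}, {"identity_verified": True})
assert ok and why == []

ok, why = validate(step, {"roles": ["viewer"]}, {})
assert not ok and why == ["authority", "preconditions"]
```

Returning named failures also gives human escalation something concrete to act on, instead of a bare boolean.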

Expected Outcomes

Metric          Expected Result   Rationale
Reliability     Very High         Protocol-constrained
Auditability    Very High         Complete trace
Flexibility     Low               Limited to defined workflows
Predictability  Very High         Deterministic execution

Risk Mitigations

  • Mandatory validation before every action
  • Timeout and circuit breaker patterns
  • Automatic rollback on failure
  • Human escalation for edge cases

Hybrid System Design

Combining Patterns for Complex Applications

Real-world systems often need capabilities from multiple patterns. Here's how to combine them:

                    HYBRID AGENT ARCHITECTURE

┌─────────────────────────────────────────────────────────────────┐
│                         REQUEST ROUTER                          │
│                                                                 │
│   Analyzes incoming request and routes to appropriate pattern   │
└────────────────────────────┬────────────────────────────────────┘
                             │
         ┌───────────────────┼───────────────────┐
         │                   │                   │
         ▼                   ▼                   ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│  STANDARD PATH  │ │ GROUNDING PATH  │ │  CREATIVE PATH  │
│  (VE Pattern)   │ │  (GS Pattern)   │ │  (LSR Pattern)  │
│                 │ │                 │ │                 │
│ Known workflows │ │ Factual queries │ │ Novel problems  │
│ Compliance req. │ │ Audit needed    │ │ Ideation        │
└────────┬────────┘ └────────┬────────┘ └────────┬────────┘
         │                   │                   │
         └───────────────────┴───────────────────┘
                             │
                             ▼
                   ┌─────────────────┐
                   │   AGGREGATOR    │
                   │                 │
                   │ Combines outputs│
                   │ Resolves        │
                   │   conflicts     │
                   │ Formats response│
                   └─────────────────┘

Router Decision Logic

class HybridRouter:
    def route(self, request, context):
        # Check for standard workflow match
        if self.executor.has_workflow(request):
            if self.meets_protocol_requirements(request, context):
                return "verifiable_executor"

        # Check if grounding needed
        if self.requires_factual_accuracy(request):
            return "grounded_synthesizer"

        # Check if novel/creative
        if self.is_novel_problem(request):
            return "latent_space_reasoner"

        # Check if autonomous action needed
        if self.requires_multi_step_action(request):
            if self.has_defined_process(request):
                return "verifiable_executor"
            else:
                return "emergent_planner"

        # Default to grounded for safety
        return "grounded_synthesizer"

Evolution Strategies

Pattern-Specific Learning

Pattern   What Evolves          Feedback Signal      Update Frequency
LSR       Model parameters      Task performance     Batch (offline)
GS        Query strategies      Retrieval quality    Continuous
EP        Action policies       Outcome success      Per-episode
VE        Workflow efficiency   Completion metrics   Periodic review

Universal Evolution Framework

class AgentEvolution:
    def __init__(self, agent, pattern_type):
        self.agent = agent
        self.pattern = pattern_type
        self.history = ExperienceBuffer()

    def record(self, episode):
        self.history.add(episode)

    def evolve(self):
        if self.pattern == "LSR":
            # Update model with successful examples
            self.agent.fine_tune(
                self.history.get_positive_examples()
            )

        elif self.pattern == "GS":
            # Refine query strategies
            self.agent.optimize_retrieval(
                self.history.get_query_feedback()
            )

        elif self.pattern == "EP":
            # RL-style policy update
            self.agent.update_policy(
                self.history.get_trajectories()
            )

        elif self.pattern == "VE":
            # Workflow refinement
            self.agent.optimize_workflows(
                self.history.get_efficiency_metrics()
            )
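The ExperienceBuffer referenced above can be as simple as a list of episodes carrying a reward signal, from which each pattern pulls the slice it needs. The episode schema and reward threshold here are illustrative assumptions.

```python
class ExperienceBuffer:
    def __init__(self):
        self.episodes = []

    def add(self, episode):
        self.episodes.append(episode)

    def get_positive_examples(self, min_reward=1.0):
        """Episodes good enough to fine-tune on (LSR-style evolution)."""
        return [e for e in self.episodes if e["reward"] >= min_reward]

buf = ExperienceBuffer()
buf.add({"query": "q1", "response": "r1", "reward": 1.0})
buf.add({"query": "q2", "response": "r2", "reward": 0.0})

assert [e["query"] for e in buf.get_positive_examples()] == ["q1"]
```

The other accessors (get_query_feedback, get_trajectories, get_efficiency_metrics) are analogous filters over the same episode list, keyed to whatever signal the pattern's table row names.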

Selection Guide Summary

Quick Reference

Factor                    LSR     GS        EP       VE
Accuracy requirement      Medium  High      Medium   Very High
Creativity requirement    High    Low       High     Low
Autonomy level            Low     Low       High     High
Auditability need         Low     High      Medium   Very High
Novel situation handling  Good    Poor      Best     Poor
Regulatory context        Avoid   Suitable  Caution  Ideal
Resource availability     Low     Medium    Medium   High

Decision Tree

START
  │
  └─ Is auditability/compliance critical?
       ├─ YES → Does a standard process exist?
       │          ├─ YES → VERIFIABLE EXECUTOR
       │          └─ NO  → GROUNDED SYNTHESIZER
       │
       └─ NO  → Does the task require action?
                  ├─ YES → Is the process defined?
                  │          ├─ YES → VERIFIABLE EXECUTOR
                  │          └─ NO  → EMERGENT PLANNER
                  │
                  └─ NO  → Is factual accuracy critical?
                             ├─ YES → GROUNDED SYNTHESIZER
                             └─ NO  → LATENT SPACE REASONER
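The decision tree above can be encoded directly as a function, with one boolean per question in the same order the tree asks them:

```python
def select_pattern(compliance_critical, standard_process,
                   requires_action, process_defined, accuracy_critical):
    """Walk the decision tree and return the pattern name."""
    if compliance_critical:
        return "verifiable_executor" if standard_process else "grounded_synthesizer"
    if requires_action:
        return "verifiable_executor" if process_defined else "emergent_planner"
    return "grounded_synthesizer" if accuracy_critical else "latent_space_reasoner"

assert select_pattern(True, True, False, False, False) == "verifiable_executor"
assert select_pattern(True, False, False, False, False) == "grounded_synthesizer"
assert select_pattern(False, False, True, False, False) == "emergent_planner"
assert select_pattern(False, False, False, False, False) == "latent_space_reasoner"
```

Encoding the tree this way keeps the selection logic testable and makes any later change to the routing policy an explicit, reviewable diff.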

Conclusion

These four patterns—Latent Space Reasoner, Grounded Synthesizer, Emergent Planner, and Verifiable Executor—provide a comprehensive toolkit for building autonomous agents across any domain. The key to successful implementation is:

  1. Understand your trade-offs — Creativity vs. reliability, safety vs. autonomy
  2. Match pattern to context — Not every problem needs the same solution
  3. Implement appropriate safeguards — Each pattern has specific risks
  4. Plan for evolution — Systems should improve over time
  5. Consider hybrid approaches — Real-world systems often need multiple patterns

By applying these patterns thoughtfully, you can build agents that are appropriate for your specific requirements, whether that's maximum creativity, maximum reliability, or something in between.