Zero trust architecture diagram for an agentic AI model
You can model a zero trust architecture for an agentic LLM system as a control-plane-centric diagram where agents never talk directly to sensitive resources; everything is mediated by a policy decision/enforcement fabric inspired by NIST SP 800‑207.
Core components to include
- Identities & credentials
- Zero trust control plane
- Agent execution layer
- LLM gateway (model router) fronting base models and fine‑tunes; all access authenticated and authorized per request.
- Orchestrator / workflow engine (e.g., LangGraph‑style) that plans workflows, spawns agents, and routes each agent action through the control plane for authorization.
- Tooling layer (retrievers, CRUD services, external APIs) behind the tool proxy, subject to least‑privilege and strong scoping.
- Data and resource plane
- Monitoring, safety, and oversight
How to draw the diagram
Layout suggestion (for a single-page architecture view):
- Top row – Identities & ingress
- Middle row – Control and agent layer
- Center: “Zero Trust Control Plane”
- Below that, horizontally: the LLM gateway, orchestrator, and agent sandboxes.
- Show arrows:
- Users → Auth Gateway → Orchestrator / LLM Gateway.
- Orchestrator → PDP for each action (“Can this agent call this tool with these params?”).
- Bottom row – Data & services
- Cross‑cutting overlays
Prompt you can paste into a diagramming agent
You can drop this directly into a diagram‑capable agent (e.g., Mermaid, PlantUML, or a canvas‑building tool) as a starting point:
Design a zero trust architecture diagram for an agentic LLM system, inspired by NIST SP 800‑207, BSI/ANSSI guidance for LLM‑based systems, and recent Zero‑Trust Agent Architecture patterns. Show:
- Human users and external systems authenticating via an IdP/SSO into an Auth Gateway (PEP).
- A central Zero Trust Control Plane containing a PDP (policy engine), policy store, telemetry/risk engine, and audit log.
- An LLM gateway and an agent orchestrator where each agent runs in an isolated sandbox, has a distinct machine identity, and cannot store long‑lived credentials.
- A tool proxy / service mesh enforcing per‑request authorization for all agent tool calls, HTTP calls, and data accesses.
- Microsegmented backend zones: public/context services, business services, and sensitive/regulated data stores, each fronted by a PEP.
- Continuous monitoring links from all PEPs and agent sandboxes back into the telemetry/risk engine and SIEM.
- Human‑in‑the‑loop review as a side component that can gate or override high‑risk policy decisions from the PDP.
References for deeper design
- NIST SP 800‑207, “Zero Trust Architecture”.
- CyberArk and similar explainers on NIST zero trust components and reference models.
- BSI/ANSSI “Design Principles for LLM‑based Systems with Zero Trust”.
- Zero‑Trust Agent Architecture pattern (ZTAA) and related agentic design patterns.
- Recent IAM framework for Agentic AI with layered identity and policy enforcement.
- Articles on zero trust for agentic AI in cloud environments (AWS/Azure; Microsoft and others).
If you want, I can turn this into a concrete Mermaid or PlantUML spec for your repo, tuned for your current stack (GCP / GKE / LangGraph‑like orchestrator).
Component details (identities and control plane):
- Machine identities for each agent, tool, and service (e.g., DIDs/VCs if you want to mirror recent IAM research for agents), per NIST SP 800‑207.
- Central IdP / IAM (OIDC/SAML, SCIM provisioning) issuing short‑lived tokens to agents and tools.
- Policy Decision Point (PDP) implementing dynamic, context‑aware authorization (ABAC/RBAC, risk scores).
- Policy Enforcement Points (PEPs) at:
- API gateway / service mesh (Envoy/Istio sidecars) for all agent→service calls.
- Tool proxy for all agent→tool invocations.
- Data access layer for agent→data store queries.
Follow-ups:
- How to implement ZTAA microsegmentation for AI agents
- What are Agentic Trust Framework maturity levels
- Zero trust design principles from the BSI/ANSSI report
- ZT agentic AI examples in AWS and Azure
- Common pitfalls in deploying zero trust for agentic AI
You can treat that prompt as the spec for a diagram agent and make the follow‑ups separate, tightly scoped prompts. Below is a concrete, copy‑pasteable diagram spec plus focused follow‑up prompts.
Mermaid diagram spec (copy‑paste)
This diagram is consistent with NIST SP 800‑207’s logical components (PDP, PEP, PIP, control plane vs data plane) and aligns with the BSI/ANSSI LLM zero‑trust principles (continuous authZ, least privilege, no implicit trust, sandboxing, monitoring).
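A minimal Mermaid sketch matching the spec above; component names are illustrative and can be renamed to fit your repo:

```mermaid
flowchart TB
    USER[Users / External Systems] --> IDP[Central IdP / IAM]
    USER --> GW[Auth Gateway PEP]
    subgraph ctrl[Zero Trust Control Plane]
        PDP[PDP / Policy Engine]
        PS[Policy Store]
        RISK[Telemetry and Risk Engine]
        AUDIT[Audit Log]
    end
    subgraph agents[Agent Layer]
        ORCH[Orchestrator]
        LLMGW[LLM Gateway]
        SANDBOX[Agent Sandboxes]
        TOOLP[Tool Proxy PEP]
    end
    subgraph backend[Microsegmented Backends]
        PUB[Public / Context Services]
        BIZ[Business Services]
        SENS[Sensitive / Regulated Stores]
    end
    GW -->|authZ request| PDP
    GW --> ORCH
    ORCH --> LLMGW
    ORCH --> SANDBOX
    SANDBOX --> TOOLP
    TOOLP -->|authZ per call| PDP
    PDP --> PS
    TOOLP --> PUB
    TOOLP --> BIZ
    TOOLP --> SENS
    GW -. telemetry .-> RISK
    SANDBOX -. telemetry .-> RISK
    TOOLP -. telemetry .-> RISK
    RISK --> PDP
    RISK --> AUDIT
    HIL[Human-in-the-loop Review] --- PDP
```

Every solid arrow into a backend zone passes through a PEP, and every PEP consults the PDP per request, matching the no-implicit-trust principle.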
Focused follow‑up prompts (ready to paste)
Each of these is single‑intent so you can drive later iterations.
- ZTAA microsegmentation for AI agents
Design a zero trust microsegmentation strategy for an agentic LLM platform running on Kubernetes (GKE). Assume each agent and tool runs in its own pod with a distinct machine identity. Show how to implement identity‑aware microsegmentation consistent with NIST SP 800‑207 and agent‑based microsegmentation best practices (workload‑level agents, per‑process rules, and east‑west traffic control). Include concrete controls at network policy, service mesh, and data‑layer PEPs, and relate them to ZTAA patterns for agentic AI.
- Agentic Trust Framework maturity levels
Define an Agentic Trust Framework maturity model for zero‑trust agentic AI systems. Base it on NIST SP 800‑207 zero trust deployment guidance, recent “agentic trust” and identity frameworks for AI agents, and DoD/enterprise zero trust reference architectures. Describe 3–4 levels from ad‑hoc to fully adaptive, and for each level specify: identity model for agents/tools, PDP/PEP sophistication, microsegmentation granularity, monitoring and HIL practices, and evidence you would collect for audits.
- Zero trust design principles from BSI/ANSSI
Summarize and operationalize the zero trust design principles from the BSI/ANSSI “Design Principles for LLM‑based Systems with Zero Trust” report. Map each principle (authN/authZ, least privilege, no implicit trust, input/output restriction, sandboxing, monitoring, threat intelligence, awareness) to concrete controls in an agentic LLM architecture (gateways, sandboxes, tool proxies, logging, red‑teaming, HIL). Show how these map onto NIST SP 800‑207 components PDP, PEP, and PIP.
- ZT agentic AI examples in AWS and Azure
Provide concrete reference architectures for deploying a zero‑trust agentic LLM system on AWS and on Azure. For each cloud, show: identity fabric (IAM/AAD, workload identities, managed identities), placement of PDP/PEPs (API Gateway / App Gateway / WAF, service mesh, data‑layer firewalls), and patterns used in published “Zero‑Trust Agent Architecture” and “Zero‑Trust Agents” examples. Highlight how agents obtain short‑lived tokens, how tool calls are authorized, and how monitoring feeds SIEM.
- Common pitfalls in deploying zero trust for agentic AI
List common pitfalls and anti‑patterns when applying zero trust to agentic AI systems. Use NIST SP 800‑207, BSI/ANSSI’s LLM zero‑trust blueprint, and recent articles on zero trust for agentic AI. Cover issues like over‑trusting the model, letting agents hold long‑lived secrets, coarse‑grained PEPs, lack of telemetry on tool calls, ignoring data exfiltration via outputs, and misconfigured microsegmentation that breaks functionality.
If you tell me which stack you want to target first (e.g., “GKE + Istio + LangGraph‑like orchestrator + OIDC”), I can refine the Mermaid into a PlantUML C4‑style diagram and add control‑by‑control implementation notes.
GCP and local macOS, with CODITECT multi-agent built on top of the Anthropic agent, skill, hook, command, scripts, and workflow model, plus OpenAI Codex, Kimi 2.5, Gemini, and Claude Code, running in the cloud on Google Cloud Workstations and GKE.
Follow-ups:
- How do DIDs and VCs integrate with NIST SP 800‑207 PEP/PDP
- Examples of SCIM provisioning for AI agents in zero trust
- OIDC vs SAML for machine identities in agentic AI
- Central IdP architectures for LLM systems per BSI/ANSSI
- Micro-segmentation strategies for AI tools and services
Below is a C4‑ish PlantUML for your CODITECT stack on macOS + GCP (GKE + Workstations), then precise follow‑up prompts for the identity questions you listed.
PlantUML C4‑style diagram (CODITECT on GCP)
@startuml
!include <C4/C4_Container>
title CODITECT Agentic LLM Platform - Zero Trust Architecture (GCP + macOS)
Person(user, "Developer / Analyst", "Uses CODITECT from macOS and browser")
Person_Ext(extSys, "External Systems / Webhooks", "Jira, GitHub, CI/CD, etc.")
System_Boundary(coditect, "CODITECT Platform") {
Container_Boundary(idpZone, "Identity Zone") {
Container(idp, "Central IdP / IAM", "OIDC/SAML, SCIM", "Auth for humans, agents, tools; short-lived tokens")
}
Container_Boundary(ctrlPlane, "Zero Trust Control Plane") {
Container(pdp, "Policy Decision Point (PDP)", "OPA / Cedar / custom", "Dynamic ABAC/RBAC, risk-based authZ")
Container(policyStore, "Policy Store", "GitOps / DB", "ZTA and org policies")
Container(telemetry, "Telemetry & Risk Engine", "SIEM / UEBA", "Aggregates logs, signals, anomalies")
Container(audit, "Audit Log", "Immutable Log Store", "Prompts, tool calls, decisions, effects")
}
Container_Boundary(ingress, "Ingress & PEPs") {
Container(authGw, "Auth Gateway / API Gateway", "PEP", "Terminates TLS, enforces authZ, routes to orchestrator")
Container(meshPep, "Service Mesh PEPs", "Istio / Envoy", "Per-request enforcement for services/tools")
Container(dataPep, "Data Access PEP", "Proxy / PG bouncer / data firewall", "Row/column-level & query authZ")
Container(toolProxy, "Tool Proxy", "PEP", "Authorizes agent tool invocations")
Container(llmGw, "LLM Gateway", "PEP", "Fronts Anthropic, OpenAI, Gemini, Kimi models")
}
Container_Boundary(exec, "Agent Execution Layer") {
Container(orch, "CODITECT Orchestrator", "Anthropic agent/skill/hook/workflow model", "Coordinates multi-agent workflows")
Container(agentPool, "Agent Sandboxes (per-agent pods)", "GKE, containerized", "Isolated runtimes with machine identities")
Container(localRunner, "Local Agent Runner", "macOS", "Optional local sandboxes enforcing same ZT policies")
}
Container_Boundary(models, "Model Providers") {
Container(claude, "Claude / Claude Code", "Anthropic API", "")
Container(openai, "OpenAI Models (incl. Code)", "OpenAI API", "")
Container(gemini, "Gemini", "Vertex AI / Google AI Studio", "")
Container(kimi, "Kimi 2.5", "External API", "")
}
Container_Boundary(backend, "Microsegmented Backends") {
Container(pubZone, "Public / Context Services", "HTTP APIs, search", "Docs, web data, public code search")
Container(bizZone, "Business Services", "GCP services, internal APIs", "Core workflows, tickets, FP&A, etc.")
Container(sensZone, "Sensitive / Regulated Stores", "Postgres, GCS, DWH", "PII, PHI, financial records")
}
}
Rel(user, idp, "SSO/MFA via browser / CLI", "OIDC/SAML")
Rel(user, authGw, "API calls / UI requests", "HTTPS")
Rel(extSys, authGw, "Webhooks / callbacks", "HTTPS")
Rel(idp, authGw, "Token introspection / JWKS", "OIDC/SAML")
Rel(authGw, pdp, "AuthZ request (who/what/where)", "mTLS/REST")
Rel(pdp, authGw, "Allow/Deny + session context", "JSON")
Rel(pdp, policyStore, "Policy read/write", "mTLS")
Rel(telemetry, pdp, "Risk signals", "events")
Rel(authGw, orch, "Routed authenticated calls", "HTTPS/mTLS")
Rel(orch, agentPool, "Schedule workflows, spawn agent pods", "K8s API")
Rel(orch, llmGw, "LLM/Code calls (Anthropic, OpenAI, Gemini, Kimi)", "HTTPS/mTLS")
Rel(agentPool, toolProxy, "Tool invocation requests", "mTLS")
Rel(agentPool, meshPep, "HTTP/gRPC to services", "mTLS")
Rel(agentPool, dataPep, "SQL/NoSQL queries", "mTLS")
Rel(toolProxy, pdp, "AuthZ for tool use", "mTLS")
Rel(meshPep, pdp, "AuthZ for service call", "mTLS")
Rel(dataPep, pdp, "AuthZ for data access", "mTLS")
Rel(meshPep, pubZone, "Calls to public/context APIs", "HTTPS")
Rel(meshPep, bizZone, "Calls to business services", "mTLS")
Rel(dataPep, sensZone, "Queries sensitive stores", "mTLS")
Rel(agentPool, telemetry, "Runtime telemetry (actions, errors)", "events")
Rel(toolProxy, telemetry, "Tool usage logs", "events")
Rel(meshPep, telemetry, "Traffic logs", "events")
Rel(dataPep, telemetry, "Query logs", "events")
Rel(telemetry, audit, "Persist normalized events", "append-only")
Rel(llmGw, claude, "API calls", "HTTPS")
Rel(llmGw, openai, "API calls", "HTTPS")
Rel(llmGw, gemini, "API calls", "HTTPS")
Rel(llmGw, kimi, "API calls", "HTTPS")
@enduml
This reflects NIST SP 800‑207’s separation of Policy Engine/Administrator vs PEPs, continuous telemetry, and per‑request tokens, while aligning with BSI/ANSSI’s LLM zero‑trust reference (central orchestrator, IAM, sandboxing, input/output restrictions, monitoring).
Focused follow‑up prompts (identity & micro‑segmentation)
Use these verbatim for deeper dives.
- DIDs/VCs with NIST PEP/PDP
Explain how Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) can be integrated into a NIST SP 800‑207‑compliant PDP/PEP architecture for an agentic LLM system. Use the “Zero‑Trust Identity Framework for Agentic AI” DID/VC model to show how agent and tool identities are represented, how VCs are validated at PEPs, and how their attributes feed into PDP policy decisions (ABAC/RBAC, risk scores). Include an interaction flow from agent bootstrap → VC presentation → PDP decision → PEP enforcement, and relate this to the Policy Engine/Policy Administrator split.
- SCIM provisioning for AI agents
Design a SCIM‑based provisioning model for AI agents and tools in a zero‑trust architecture. Show how SCIM groups and attributes represent agent capabilities and tenant scoping, how de‑provisioning works to revoke access, and how SCIM integrates with the central IdP and PDP/PEPs. Reference modern SCIM zero‑trust practices and proposals for extending SCIM for AI agents (e.g., delegated authority context with OAuth/OIDC).
- OIDC vs SAML for machine identities
Compare OIDC and SAML as protocols for machine identities in an agentic AI system. Focus on non‑interactive flows (client credentials, token exchange, workload identities), token formats (JWT vs SAML assertions), rotation and short‑lived credentials, and how easily each integrates with PDP/PEPs per NIST SP 800‑207. Provide guidance for GCP/GKE workloads and macOS local runners connecting to the same IdP.
- Central IdP per BSI/ANSSI for LLM systems
Describe a central IdP/IAM architecture for LLM systems following BSI/ANSSI “Design Principles for LLM‑based Systems with Zero Trust.” Show how human users, agents, tools, and external systems are represented, how authentication and authorization are enforced across the orchestrator, LLM gateway, and tool/data PEPs, and how multi‑tenant isolation and sensitivity‑tiered access (different LLMs per data sensitivity) are modeled in IdP policy.
- Micro‑segmentation for AI tools/services on GKE + macOS
Propose micro‑segmentation strategies for AI tools and services in a hybrid environment: local macOS runners plus GKE clusters on GCP. Use NIST SP 800‑207, agent‑based microsegmentation guidance, and workload‑identity patterns to define segments (per‑tenant, per‑tool, per‑sensitivity), K8s NetworkPolicies, service mesh authorization (Envoy/Istio), and data‑layer PEPs. Show how agents’ machine identities determine which segment they can talk to, and how this is enforced consistently for both local and cloud workloads.
If you want, next step I can annotate the PlantUML with concrete GCP primitives (Workload Identity, BeyondCorp, Cloud Armor, Cloud SQL Auth Proxy, etc.) wired into this same ZT diagram.
Follow-ups:
- How to deploy CODITECT multi-agent on GCP with zero trust
- CODITECT setup guide for local macOS
- Integrating Anthropic agents with OpenAI Codex in CODITECT
- Zero trust machine identities for CODITECT agents on GCP
- Example CODITECT workflow using Gemini and Claude Code
Also: annotate the PlantUML with concrete GCP primitives (Workload Identity, BeyondCorp, Cloud Armor, Cloud SQL Auth Proxy, etc.) wired into this same ZT diagram, and cover micro‑segmentation for the hybrid macOS + GKE environment. I use OpenTofu, not Terraform, for infrastructure as code.
You can structure this as: (1) GCP‑side ZT deployment, (2) local macOS setup, (3) cross‑cloud model integration, (4) machine identities, (5) micro‑segmentation. Then evolve your OpenTofu modules around those seams.
Annotated PlantUML with GCP primitives
Key mappings: BeyondCorp / IAP as external PEP, Cloud Armor as edge PEP, Workload Identity / Federation for machine identities, Istio for in‑cluster PEPs, Cloud SQL Auth Proxy as data PEP.
@startuml
!include <C4/C4_Container>
title CODITECT ZT on GCP (GKE + macOS) - Annotated with GCP Primitives
Person(user, "Developer / Analyst", "macOS CLI + Browser")
Person_Ext(extSys, "External Systems", "GitHub, Jira, CI/CD, etc.")
System_Boundary(coditect, "CODITECT Platform on GCP") {
Container_Boundary(idpZone, "Identity Zone") {
Container(idp, "Central IdP / IAM", "Cloud Identity / External IdP", "OIDC/SAML, SCIM; issues short-lived tokens")
}
Container_Boundary(access, "Edge & BeyondCorp") {
Container(cloudArmor, "Cloud Armor", "WAF / Edge PEP", "L7 filtering, rate limiting, geo/IP rules")
Container(iap, "BeyondCorp Enterprise / IAP", "ZT Access Proxy PEP", "Identity-aware access to CODITECT UI/APIs")
}
Container_Boundary(ctrlPlane, "Zero Trust Control Plane") {
Container(pdp, "PDP / Policy Engine", "OPA / custom + Access Context Manager", "Evaluates identity, attributes, context")
Container(policyStore, "Policy Store", "GCS / Git / DB", "ZTA, org, and tenant policies")
Container(telemetry, "Telemetry & Risk Engine", "Cloud Logging + SIEM", "Aggregates logs and risk signals")
Container(audit, "Audit Log", "Cloud Logging / BigQuery", "Immutable event/audit store")
}
Container_Boundary(gkeIngress, "GKE Ingress & Service Mesh") {
Container(authGw, "Istio IngressGateway", "mTLS PEP", "Terminates mTLS, validates JWTs from IdP/IAP")
Container(meshPep, "Istio Sidecars", "Envoy PEPs", "Service-to-service authZ, micro-segmentation")
Container(dataPep, "Cloud SQL Auth Proxy / Data Proxy", "Data PEP", "Per-connection identity to DBs")
Container(toolProxy, "Tool Proxy Service", "GKE Service + Istio", "Authorizes tool invocations")
Container(llmGw, "LLM Gateway", "GKE Service", "Routes to Anthropic, OpenAI, Gemini, Kimi")
}
Container_Boundary(exec, "Agent Execution Layer") {
Container(orch, "CODITECT Orchestrator", "GKE Deployment", "Anthropic agent/skill/hook/workflow model")
Container(agentPool, "Agent Pods", "GKE Pods w/ Workload Identity", "Isolated per-agent pods with SA identities")
Container(localRunner, "Local Runner", "macOS + Workload Identity Federation", "Runs same agent runtime locally")
}
Container_Boundary(models, "Model Providers") {
Container(claude, "Claude / Claude Code", "Anthropic API", "")
Container(openai, "OpenAI / Codex-like", "OpenAI API", "")
Container(gemini, "Gemini", "Vertex AI / Generative AI", "")
Container(kimi, "Kimi 2.5", "External API", "")
}
Container_Boundary(backend, "Microsegmented Backends") {
Container(pubZone, "Public / Context Services", "HTTP APIs, search", "Docs, web search, OSS code search")
Container(bizZone, "Business Services", "GKE / Cloud Run / Cloud Functions", "Core business / workflow services")
Container(sensZone, "Sensitive Stores", "Cloud SQL / Spanner / GCS", "PII/PHI/financial data")
}
}
' User flows
Rel(user, idp, "SSO / device-aware auth", "OIDC/SAML + BeyondCorp policies")
Rel(user, iap, "Access CODITECT UI/API", "HTTPS through Cloud Armor")
Rel(cloudArmor, iap, "Protected HTTP(S)", "WAF rules")
Rel(iap, authGw, "Identity-aware proxied traffic", "mTLS; JWT w/ user/device claims")
' PDP integration
Rel(authGw, pdp, "Check policy for request", "mTLS/REST; JWT, attributes")
Rel(pdp, authGw, "Allow/Deny + constraints", "JSON")
Rel(pdp, policyStore, "Read policies", "mTLS")
Rel(telemetry, pdp, "Risk signals", "events")
Rel(authGw, orch, "Routed authenticated calls", "mTLS inside mesh")
' Agents & Workload Identity
Rel(orch, agentPool, "Spawn agent pods", "K8s API; each with KSA→GSA Workload Identity")
Rel(agentPool, llmGw, "LLM/tool requests", "mTLS; JWT from Workload Identity")
Rel(agentPool, toolProxy, "Tool invocations", "mTLS")
Rel(agentPool, meshPep, "Service calls", "mTLS; Istio AuthorizationPolicy")
Rel(agentPool, dataPep, "DB access", "mTLS; Auth Proxy + IAM")
' PEPs consulting PDP
Rel(toolProxy, pdp, "AuthZ for tool call", "mTLS")
Rel(meshPep, pdp, "AuthZ for service call", "mTLS")
Rel(dataPep, pdp, "AuthZ for data access", "mTLS")
' Backends
Rel(meshPep, pubZone, "Public/context API calls", "HTTPS")
Rel(meshPep, bizZone, "Business services", "mTLS in mesh")
Rel(dataPep, sensZone, "DB/storage connections", "mTLS + IAM")
' Telemetry
Rel(agentPool, telemetry, "Runtime logs/metrics", "Cloud Logging / OTEL")
Rel(toolProxy, telemetry, "Tool audit logs", "Cloud Logging")
Rel(meshPep, telemetry, "Service traffic logs", "Cloud Logging")
Rel(dataPep, telemetry, "Query/access logs", "Cloud Logging")
Rel(telemetry, audit, "Persisted events", "append-only")
' LLM providers
Rel(llmGw, claude, "Model/Code calls", "HTTPS")
Rel(llmGw, openai, "Model/Code calls", "HTTPS")
Rel(llmGw, gemini, "Vertex AI / AI Studio", "HTTPS")
Rel(llmGw, kimi, "API calls", "HTTPS")
' Local Runner using Workload Identity Federation
Container_Ext(gsaProxy, "Workload Identity Federation Endpoint", "sts.googleapis.com", "Exchanges external OIDC tokens for short-lived GSA credentials")
Rel(localRunner, idp, "Authenticate local developer/host", "OIDC")
Rel(localRunner, gsaProxy, "Exchange OIDC for GSA", "Workload Identity Federation")
@enduml
This uses Workload Identity and Federation to give each agent pod and local runner a distinct principal, following GCP best practices for workload pools and unique subject mappings.
1. How to deploy CODITECT multi‑agent on GCP with zero trust
High‑level sequence aligned with NIST SP 800‑207: define identities, place PEPs, implement PDP, turn on telemetry.
- Identity and access
- Network and edge
- GKE and service mesh
- PDP/PEP wiring
- Implement PDP as:
- OPA/Cedar side service, or
- centralized policy service consulted by IngressGateway, tool proxy, data proxy.
- PEPs:
- Cloud Armor + IAP (edge).
- Istio IngressGateway + sidecars.
- Tool Proxy.
- Cloud SQL Auth Proxy / custom data gateway.
- Telemetry and audit
You can wrap all of this in OpenTofu modules that create: GKE clusters, ASM/Istio, WI pools/providers, IAP/Armor configs, and bootstrap namespaces and policies.
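As a sketch, a hypothetical root configuration could wire those module seams together like this (module paths and the gke-cluster inputs are illustrative; the gcp-identity variables match the skeleton shown later in this thread):

```hcl
# Root module wiring (illustrative): identity first, then the cluster that
# consumes the Workload Identity pool it creates.
module "gcp_identity" {
  source     = "./modules/gcp-identity"
  project_id = var.project_id

  external_idp_issuer_uri = "https://idp.example.com" # your central IdP
  external_idp_audience   = "coditect-local-runner"

  service_accounts = {
    coding_agent = { account_id = "coditect-coding-agent", display_name = "CODITECT Coding Agent" }
    data_agent   = { account_id = "coditect-data-agent", display_name = "CODITECT Data Agent" }
  }
  bindings = {
    coding_agent = { roles = ["roles/artifactregistry.reader"] }
    data_agent   = { roles = ["roles/cloudsql.client"] }
  }
}

module "gke_cluster" {
  source     = "./modules/gke-cluster"
  project_id = var.project_id

  # Pool output from the identity module, so pods get GSA-backed identities.
  workload_identity_pool = module.gcp_identity.workload_identity_pool_name
}
```

Keeping identity as the first module makes the dependency direction explicit: everything downstream (namespaces, control plane, exec layer) consumes identities rather than creating them.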
2. CODITECT setup guide for local macOS
Goal: local runner behaves like another zero‑trust workload using Workload Identity Federation, not static keys.
- Prereqs
- Workload Identity Federation for local
- Create a Workload Identity Pool and provider for your IdP; map OIDC sub/email to google.subject.
- Bind one GSA per “local agent runner type” with restrictive IAM (only call CODITECT APIs and specific backends).
- Configure CODITECT CLI to:
- Obtain an OIDC token via device code or browser.
- Exchange it using WI Federation for an access token tied to that GSA.
- Local runner behavior
- Run agents in local containers with:
- No long‑lived secrets; use env‑injected short‑lived tokens.
- Outbound traffic restricted via macOS firewall/proxy to CODITECT endpoints and approved tools.
- Mirror the same policy model: all local tool and data calls go through cloud‑side PEPs, not directly to resources.
3. Integrating Anthropic agents with OpenAI Codex, Gemini, Kimi
You already have CODITECT built around Anthropic’s agent/skill/hook/workflow model; treat other models as tools behind the LLM gateway.
- Design
- Implement an LLM gateway with per‑provider adapters:
- Anthropic (Claude, Claude Code).
- OpenAI (GPT‑4/Omni, code‑oriented models).
- Gemini (Vertex AI), Kimi.
- Gateway:
- CODITECT wiring
- In your workflow model:
- Define “skills” that call “llm.call(provider=model, purpose=code‑gen|plan|analysis)”.
- Attach policy metadata: required sensitivity, tools allowed, tenant context.
- For OpenAI/Gemini/Kimi APIs:
- Use short‑lived GCP secrets: either Secret Manager + CMEK with KSA‑scoped access or WI to a broker service that signs provider tokens on demand.
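A minimal OpenTofu sketch of the Secret Manager + CMEK option; resource names and the llm-broker service account are illustrative assumptions, not part of the stack above:

```hcl
# CMEK key for provider API-key secrets (illustrative names).
resource "google_kms_key_ring" "secrets" {
  name     = "coditect-secrets"
  location = "us-central1"
}

resource "google_kms_crypto_key" "secrets" {
  name     = "provider-keys"
  key_ring = google_kms_key_ring.secrets.id
}

# Provider key stored with customer-managed encryption.
resource "google_secret_manager_secret" "openai_key" {
  secret_id = "openai-api-key"
  replication {
    user_managed {
      replicas {
        location = "us-central1"
        customer_managed_encryption {
          kms_key_name = google_kms_crypto_key.secrets.id
        }
      }
    }
  }
}

# Only the token-broker GSA may read the secret; agents never see it.
resource "google_secret_manager_secret_iam_member" "broker_read" {
  secret_id = google_secret_manager_secret.openai_key.id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:llm-broker@${var.project_id}.iam.gserviceaccount.com"
}
```

Scoping secretAccessor to a single broker GSA keeps agents on short‑lived tokens while the raw provider key stays behind one PEP.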
4. Zero trust machine identities for CODITECT agents on GCP
Use GKE Workload Identity and WI Federation to make each agent a first‑class principal.
- KSA/GSA mapping
- For each agent class (e.g., planning‑agent, coding‑agent, data‑agent, ops‑agent) create:
- A dedicated Kubernetes ServiceAccount in its namespace.
- A dedicated Google Service Account with IAM roles restricted to the segment it needs.
- Bind KSA→GSA via iam.workloadIdentityUser.
- Token usage
- Policy
- PDP decisions use attributes from:
- KSA/GSA name.
- Namespace (tenant).
- Agent type (label/claim).
- Sensitivity tier.
- PEPs (mesh, tool proxy, data proxy) enforce allow/deny per decision.
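The KSA→GSA mapping for one agent class can be sketched in OpenTofu like this (namespace, account names, and the kubernetes provider wiring are illustrative):

```hcl
# Google Service Account for the coding-agent class (illustrative names).
resource "google_service_account" "coding_agent" {
  account_id   = "coditect-coding-agent"
  display_name = "CODITECT Coding Agent"
}

# Allow KSA coding-agent in namespace coditect-tenant-a to impersonate the GSA.
resource "google_service_account_iam_member" "coding_agent_wi" {
  service_account_id = google_service_account.coding_agent.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${var.project_id}.svc.id.goog[coditect-tenant-a/coding-agent]"
}

# Annotate the KSA so GKE Workload Identity injects the GSA credentials.
resource "kubernetes_service_account" "coding_agent" {
  metadata {
    name      = "coding-agent"
    namespace = "coditect-tenant-a"
    annotations = {
      "iam.gke.io/gcp-service-account" = google_service_account.coding_agent.email
    }
  }
}
```

One such trio per agent class keeps the blast radius of any single agent bounded by its GSA's IAM roles.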
5. Micro‑segmentation strategies (GKE + macOS, OpenTofu‑friendly)
Combine namespace segmentation, service mesh policy, and data‑layer PEP, and mirror them for local workloads using WI Federation.
- Segmentation dimensions
- Per‑tenant: separate namespaces coditect-tenant-a, coditect-tenant-b.
- Per‑sensitivity: within a tenant, label workloads and services tier=public|internal|sensitive.
- Per‑tool: tools like “git‑tool”, “ticket‑tool”, “billing‑tool” get distinct services and KSAs.
- K8s controls
- NetworkPolicies:
- Default deny all cross‑namespace traffic.
- Allow only:
- Ingress from Istio IngressGateway to orchestrator.
- Orchestrator → agent namespaces.
- Agent → tool proxy/mesh only.
- Namespaces for control, shared tools, each tenant, each highly sensitive service.
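A default-deny plus one allow rule, sketched with the OpenTofu kubernetes provider (namespace and label names are illustrative):

```hcl
# Default deny all ingress and egress for a tenant namespace.
resource "kubernetes_network_policy" "default_deny" {
  metadata {
    name      = "default-deny-all"
    namespace = "coditect-tenant-a"
  }
  spec {
    pod_selector {} # empty selector = all pods in the namespace
    policy_types = ["Ingress", "Egress"]
  }
}

# Allow ingress only from the control/orchestrator namespace.
resource "kubernetes_network_policy" "allow_orchestrator" {
  metadata {
    name      = "allow-orchestrator-ingress"
    namespace = "coditect-tenant-a"
  }
  spec {
    pod_selector {}
    policy_types = ["Ingress"]
    ingress {
      from {
        namespace_selector {
          match_labels = {
            "kubernetes.io/metadata.name" = "coditect-control"
          }
        }
      }
    }
  }
}
```

Starting from default deny and adding narrow allows mirrors the zero-trust posture at L3/L4, below the identity-aware mesh policies.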
- Istio authorization
- Enable:
- Example patterns:
- planning-agent@tenant-a can talk to “context” and “business” services for tenant A, but not tier=sensitive.
- data-agent@tenant-a can talk to tier=sensitive for tenant A only.
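The first pattern can be sketched as an Istio AuthorizationPolicy applied via kubernetes_manifest (names and the v1 API version are illustrative assumptions about your mesh install):

```hcl
# Deny the planning agent's mesh identity access to tier=sensitive workloads.
resource "kubernetes_manifest" "deny_planning_sensitive" {
  manifest = {
    apiVersion = "security.istio.io/v1"
    kind       = "AuthorizationPolicy"
    metadata = {
      name      = "deny-planning-to-sensitive"
      namespace = "coditect-tenant-a"
    }
    spec = {
      selector = {
        matchLabels = { tier = "sensitive" }
      }
      action = "DENY"
      rules = [{
        from = [{
          source = {
            # SPIFFE-style principal derived from the agent's KSA.
            principals = ["cluster.local/ns/coditect-tenant-a/sa/planning-agent"]
          }
        }]
      }]
    }
  }
}
```

Because the principal comes from mTLS workload identity rather than network location, the rule keeps holding even if pods move or IPs change.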
- Data‑layer PEP
- Use Cloud SQL Auth Proxy or custom gateway:
- Each agent class uses a distinct DB role mapped from GSA.
- Enforce row/column policies (e.g., RLS in Postgres) per tenant and sensitivity.
- macOS parity
- Local Runner uses Workload Identity Federation to assume a GSA that is treated as another “workload” in policies.
- All its calls still hit:
- Cloud Armor + IAP at edge.
- Istio gateway, then same mesh/Data PEPs.
- Effectively, local agents are just another segment with their own source.principal in mesh policy.
OpenTofu angle
You can factor this into modules:
- gcp-identity: Workload Identity pools, GSA, bindings, IAP/BeyondCorp access levels.
- gke-cluster: GKE cluster, ASM/Istio install, base mesh policy.
- coditect-namespaces: namespaces, KSAs, NetworkPolicies, Istio AuthorizationPolicies per segment.
- coditect-control-plane: PDP, policy store, telemetry wiring.
- coditect-exec: orchestrator, agent deployments, tool proxy, LLM gateway.
- coditect-local-wi: WI Federation config and minimal instructions for macOS runner.
If you want, next step I can give you concrete OpenTofu module skeletons (providers, resources, and key locals) for gcp-identity and gke-cluster adhering to this zero‑trust layout.
give concrete OpenTofu module skeletons (providers, resources, and key locals) for gcp-identity and gke-cluster adhering to this zero‑trust layout.
Follow-ups:
- Istio service mesh setup for CODITECT agents on GKE
- BeyondCorp Enterprise pricing and licensing details
- Common pitfalls in Workload Identity Federation for multi-agents
- CODITECT hardware requirements for macOS deployment
- Compare Anthropic vs OpenAI agent performance in CODITECT
Below are opinionated but minimal OpenTofu module skeletons for gcp-identity and gke-cluster, tuned for zero‑trust + Workload Identity and easy to extend. They’re Terraform‑compatible; just swap terraform for tofu in your workflow.
Module 1: modules/gcp-identity
Goal: centralize org/tenant identity, Workload Identity Federation for macOS, and GKE Workload Identity bindings.
modules/gcp-identity/variables.tf
variable "project_id" {
type = string
}
variable "location" {
type = string
default = "global"
}
variable "workload_identity_pool_id" {
type = string
default = "coditect-pool"
}
variable "workload_identity_pool_display_name" {
type = string
default = "CODITECT External Workloads"
}
variable "external_idp_issuer_uri" {
description = "OIDC issuer for your central IdP (e.g. Auth0/Okta/Cloud Identity)."
type = string
}
variable "external_idp_audience" {
description = "OIDC audience for CODITECT local runner / CI workloads."
type = string
}
variable "service_accounts" {
description = "Map of logical agent/tool roles to GSA IDs."
type = map(object({
account_id = string
display_name = string
}))
}
variable "bindings" {
description = "IAM bindings per logical role."
type = map(object({
roles = list(string)
}))
}
variable "labels" {
type = map(string)
default = {}
}
modules/gcp-identity/main.tf
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 6.14"
}
}
}
provider "google" {
project = var.project_id
}
locals {
# Use the resource's canonical name rather than hand-building the pool path
# (the API returns the project number form, which IAM principal IDs expect).
wi_pool_name = google_iam_workload_identity_pool.coditect.name
}
# Workload Identity Pool for macOS runners, CI, etc.
resource "google_iam_workload_identity_pool" "coditect" {
workload_identity_pool_id = var.workload_identity_pool_id
display_name = var.workload_identity_pool_display_name
project = var.project_id
}
resource "google_iam_workload_identity_pool_provider" "external_oidc" {
workload_identity_pool_id = google_iam_workload_identity_pool.coditect.workload_identity_pool_id
workload_identity_pool_provider_id = "external-oidc"
display_name = "External IdP for CODITECT workloads"
attribute_condition = "assertion.aud == \"${var.external_idp_audience}\""
oidc {
issuer_uri = var.external_idp_issuer_uri
}
attribute_mapping = {
"google.subject" = "assertion.sub"
"attribute.email" = "assertion.email"
"attribute.tenant" = "assertion.tenant"
"attribute.device_id" = "assertion.device_id"
"attribute.environment" = "assertion.environment"
}
project = var.project_id
}
# Google Service Accounts for CODITECT agents/tools
resource "google_service_account" "coditect" {
for_each = var.service_accounts
account_id = each.value.account_id
display_name = each.value.display_name
}
# IAM roles for each logical role
resource "google_project_iam_member" "coditect_sa_roles" {
for_each = {
for name, v in var.bindings : name => v
}
project = var.project_id
role = element(each.value.roles, 0) # extend to multiple if needed
member = "serviceAccount:${google_service_account.coditect[each.key].email}"
}
# Federation: external identities -> GSA
resource "google_service_account_iam_member" "coditect_wi_external" {
for_each = google_service_account.coditect
service_account_id = each.value.name
role = "roles/iam.workloadIdentityUser"
# Example: restrict to particular external identities (macOS runner, CI, etc.)
member = "principalSet://iam.googleapis.com/${local.wi_pool_name}/attribute.environment/dev"
}
modules/gcp-identity/outputs.tf
output "workload_identity_pool_name" {
  value = local.wi_pool_name
}

output "workload_identity_provider_name" {
  value = google_iam_workload_identity_pool_provider.external_oidc.name
}

output "service_accounts" {
  value = {
    for k, v in google_service_account.coditect : k => v.email
  }
}
This follows GCP Workload Identity Federation guidance (single provider per pool, attribute mapping to avoid subject collisions).
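The module above covers external (OIDC) federation for macOS runners and CI. For in-cluster agents, GKE Workload Identity needs a parallel binding from each Kubernetes ServiceAccount (KSA) to its Google Service Account. A minimal sketch, to be placed in the same module — the namespace and KSA names here are hypothetical examples:

```hcl
# Allow the KSA "planning-agent" in namespace "coditect-tenant-a" to impersonate its GSA.
# "${project}.svc.id.goog[namespace/ksa]" is GKE Workload Identity's member syntax.
resource "google_service_account_iam_member" "planning_agent_gke_wi" {
  service_account_id = google_service_account.coditect["planning-agent"].name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${var.project_id}.svc.id.goog[coditect-tenant-a/planning-agent]"
}
```

The KSA side must be annotated with iam.gke.io/gcp-service-account set to the GSA email for the binding to take effect.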
Module 2: modules/gke-cluster
Goal: private GKE cluster with Workload Identity enabled, as the foundation for Istio/ASM and NetworkPolicies for zero trust.
modules/gke-cluster/variables.tf
variable "project_id" {
  type = string
}

variable "region" {
  type = string
}

variable "network" {
  type = string
}

variable "subnetwork" {
  type = string
}

variable "cluster_name" {
  type = string
}

variable "release_channel" {
  type    = string
  default = "REGULAR"
}

variable "min_nodes" {
  type    = number
  default = 1
}

variable "max_nodes" {
  type    = number
  default = 5
}

variable "labels" {
  type    = map(string)
  default = {}
}

variable "wi_pool_name" {
  description = "Workload Identity pool name from gcp-identity."
  type        = string
}
modules/gke-cluster/main.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google" # same source address works for OpenTofu
      version = "~> 6.14"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}

resource "google_container_cluster" "coditect" {
  name       = var.cluster_name
  location   = var.region
  project    = var.project_id
  network    = var.network
  subnetwork = var.subnetwork

  remove_default_node_pool = true
  initial_node_count       = 1

  release_channel {
    channel = var.release_channel
  }

  # Assumes the subnetwork defines secondary ranges named "pods" and "services".
  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  workload_identity_config {
    workload_pool = "${var.project_id}.svc.id.goog"
  }

  network_policy {
    enabled  = true
    provider = "CALICO"
  }

  logging_config {
    enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS", "APISERVER"]
  }

  monitoring_config {
    enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
  }

  resource_labels = merge(
    {
      "app"        = "coditect"
      "zero_trust" = "enabled"
    },
    var.labels
  )

  # Empty block = no external networks may reach the control plane endpoint;
  # add cidr_blocks entries for any admin networks you need.
  master_authorized_networks_config {}
}

resource "google_container_node_pool" "coditect_default" {
  name     = "${var.cluster_name}-np"
  location = var.region
  project  = var.project_id
  cluster  = google_container_cluster.coditect.name

  # Use initial_node_count alongside autoscaling; a fixed node_count would
  # fight the autoscaler and cause perpetual plan diffs.
  initial_node_count = var.min_nodes

  autoscaling {
    min_node_count = var.min_nodes
    max_node_count = var.max_nodes
  }

  node_config {
    machine_type = "e2-standard-4"

    workload_metadata_config {
      mode = "GKE_METADATA"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    labels = {
      "app"        = "coditect"
      "zero_trust" = "enabled"
    }

    tags = ["coditect-gke-node"]
  }

  management {
    auto_repair  = true
    auto_upgrade = true
  }
}
modules/gke-cluster/outputs.tf
output "cluster_name" {
  value = google_container_cluster.coditect.name
}

output "cluster_endpoint" {
  value = google_container_cluster.coditect.endpoint
}

output "cluster_ca_certificate" {
  value = google_container_cluster.coditect.master_auth[0].cluster_ca_certificate
}

output "workload_pool" {
  value = google_container_cluster.coditect.workload_identity_config[0].workload_pool
}
This cluster is private, has NetworkPolicy enabled, and is ready for Istio/ASM and strict mTLS micro‑segmentation.
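Once Istio/ASM is installed on this cluster, mesh-wide strict mTLS can be managed from the same OpenTofu codebase. A sketch using the kubernetes provider's kubernetes_manifest resource — this assumes the kubernetes provider is configured against the cluster and Istio is already present:

```hcl
# Mesh-wide PeerAuthentication: reject any plaintext pod-to-pod traffic.
# Placing it in istio-system with the name "default" applies it to the whole mesh.
resource "kubernetes_manifest" "mesh_strict_mtls" {
  manifest = {
    apiVersion = "security.istio.io/v1"
    kind       = "PeerAuthentication"
    metadata = {
      name      = "default"
      namespace = "istio-system"
    }
    spec = {
      mtls = { mode = "STRICT" }
    }
  }
}
```

Namespace- or workload-scoped PeerAuthentication and AuthorizationPolicy resources can then layer allow-listed flows on top of this default.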
Example root usage (envs/prod/main.tf)
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google" # same source address works for OpenTofu
      version = "~> 6.14"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}
module "gcp_identity" {
  source = "../../modules/gcp-identity"

  project_id                          = var.project_id
  external_idp_issuer_uri             = var.external_idp_issuer_uri
  external_idp_audience               = var.external_idp_audience
  workload_identity_pool_id           = "coditect-pool"
  workload_identity_pool_display_name = "CODITECT External Workloads"

  service_accounts = {
    "planning-agent" = {
      account_id   = "coditect-planning-agent"
      display_name = "CODITECT Planning Agent"
    }
    "coding-agent" = {
      account_id   = "coditect-coding-agent"
      display_name = "CODITECT Coding Agent"
    }
    "data-agent" = {
      account_id   = "coditect-data-agent"
      display_name = "CODITECT Data Agent"
    }
  }

  bindings = {
    "planning-agent" = {
      roles = ["roles/pubsub.publisher"]
    }
    "coding-agent" = {
      roles = ["roles/artifactregistry.reader"]
    }
    "data-agent" = {
      roles = ["roles/cloudsql.client"]
    }
  }
}

module "gke_cluster" {
  source = "../../modules/gke-cluster"

  project_id   = var.project_id
  region       = var.region
  network      = var.network
  subnetwork   = var.subnetwork
  cluster_name = "coditect-gke-prod"
  wi_pool_name = module.gcp_identity.workload_identity_pool_name
}
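To make the default-deny posture concrete from day one, a baseline NetworkPolicy can be stamped into each tenant namespace from the same root module. A sketch using the kubernetes provider — the namespace names are illustrative and the provider is assumed to be configured against the new cluster:

```hcl
# Default-deny all ingress and egress in each tenant namespace;
# allowed flows (mesh sidecars, tool proxy, DNS) are then whitelisted explicitly.
resource "kubernetes_network_policy" "default_deny" {
  for_each = toset(["coditect-tenant-a", "coditect-tenant-b"])

  metadata {
    name      = "default-deny-all"
    namespace = each.value
  }

  spec {
    pod_selector {}
    policy_types = ["Ingress", "Egress"]
  }
}
```

The empty pod_selector matches every pod in the namespace, so nothing communicates until an explicit allow policy is added.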
Follow‑up prompts (ready to paste)
- Istio service mesh setup for CODITECT agents on GKE
Show an Istio/ASM configuration for a zero‑trust CODITECT deployment on GKE: strict mTLS, namespace‑level and workload‑level AuthorizationPolicies, and RequestAuthentication/JWT policies that use GKE Workload Identity identities and tenant labels to enforce micro‑segmentation between planning‑agent, coding‑agent, data‑agent, tool proxy, and backends.
- BeyondCorp Enterprise pricing/licensing
Summarize BeyondCorp Enterprise pricing and licensing options relevant to securing CODITECT (GKE APIs, Workstations, macOS access) and how they map to Google Cloud SKUs (IAP, Cloud Armor, context‑aware access). Include any per‑user or per‑request costs and considerations for multi‑tenant SaaS.
- Pitfalls in Workload Identity Federation for multi‑agents
List common pitfalls when using Workload Identity Federation for many CODITECT agents and local runners. Use GCP best practices on WI Federation and real‑world experience to cover subject collisions, pool/provider design, mapping external identities to GSA, revocation, and observability.
- CODITECT macOS hardware requirements
Propose realistic hardware requirements and recommended configs for running CODITECT local runners and dev workloads on macOS, including CPU, RAM, disk, and network. Assume Docker‑based sandboxes, concurrent multi‑agent workflows, and IDE + browser use.
- Anthropic vs OpenAI agent performance in CODITECT
Compare Anthropic agents (Claude, Claude Code) vs OpenAI models for CODITECT multi‑agent workflows: planning quality, tool‑use reliability, code‑gen accuracy, latency, cost, and robustness under zero‑trust constraints (short‑prompts, redaction, no long‑term state). Use recent benchmarks or case studies where available.
create a detailed prompt for PaperBanana to generate several diagrams for the CODITECT ZERO TRUST agentic system design
Below is a single, detailed prompt you can paste into PaperBanana. It’s structured to produce multiple diagrams (C4 + security views) for the CODITECT zero‑trust agentic system.
PaperBanana prompt: CODITECT Zero‑Trust Agentic Architecture (multi‑diagram)
Create a set of architecture diagrams for a system called CODITECT, a multi‑tenant, zero‑trust, agentic LLM platform. The goal is to visualize how agents, tools, LLMs, and infrastructure interact under a NIST SP 800‑207 style zero‑trust architecture, deployed on GCP (GKE + Workstations) with local macOS runners.
Generate four diagrams:
Diagram 1 – Context / C4 System diagram
Scope: high‑level view of CODITECT in its environment.
Include:
- External actors:
- “Developer / Analyst (macOS)” – uses web UI, CLI, IDE integrations.
- “External Systems” – GitHub, Jira, CI/CD, ticketing.
- “Model Providers” – Anthropic (Claude, Claude Code), OpenAI, Google Gemini, Kimi.
- CODITECT system (as one box), with:
- CODITECT Web UI / API.
- CODITECT Orchestrator (multi‑agent coordinator).
- CODITECT Agent Pool (multiple agents).
- Tooling layer (Git, ticketing, billing tools).
- Data layer (business DBs, sensitive data stores).
- Zero‑trust edge:
- GCP HTTP(S) Load Balancer.
- Cloud Armor (WAF / edge PEP).
- BeyondCorp Enterprise / IAP (identity‑aware proxy).
Show:
- Developer uses browser/CLI from macOS to reach the CODITECT Web UI/API via IAP, Cloud Armor, and the load balancer.
- External systems integrate via webhooks and APIs through the same edge.
- CODITECT calls out to model providers via an “LLM Gateway”.
- Emphasize “identity‑aware access”, “never trust, always verify”, and “multi‑tenant SaaS”.
Style: C4 System diagram with clear labels, minimal technical detail, focus on relationships.
Diagram 2 – C4 Container diagram (GCP + zero trust control plane)
Scope: containers and key GCP services inside CODITECT.
Include containers:
- Identity & access:
- “Central IdP / IAM” (Cloud Identity or external IdP).
- “SCIM / Identity Provisioning” (for users, agents, tools).
- Zero Trust Control Plane:
- “PDP / Policy Engine” (e.g., OPA/Cedar‑based).
- “Policy Store” (Git / DB).
- “Telemetry & Risk Engine” (Cloud Logging + SIEM/UEBA).
- “Audit Log Store” (Cloud Logging / BigQuery).
- Ingress & service mesh:
- “GKE IngressGateway (Istio)” as internal PEP.
- “Istio Sidecar Proxies” as PEPs for services and tools.
- “Data Access Proxy / Cloud SQL Auth Proxy” as data‑layer PEP.
- “Tool Proxy Service” – central tool invocation entry point.
- “LLM Gateway Service” – routes to Anthropic, OpenAI, Gemini, Kimi.
- Execution layer:
- “CODITECT Orchestrator” – Anthropic agent/skill/hook/workflow model.
- “Agent Pods (per‑agent sandbox)” – GKE pods, each with its own Kubernetes ServiceAccount and Google Service Account via Workload Identity.
- “Local Runner (macOS)” – local containers using Workload Identity Federation.
- Backends:
- “Public / Context Services Zone”.
- “Business Services Zone”.
- “Sensitive / Regulated Stores Zone” (Cloud SQL/Spanner/GCS).
Show flows:
- User → IdP for SSO/MFA → IAP / Cloud Armor → GKE IngressGateway → CODITECT Orchestrator.
- Orchestrator → Agent Pods.
- Agent Pods → Tool Proxy / mesh sidecars → backend zones.
- Agent Pods → LLM Gateway → external model providers.
- All PEPs (edge, mesh, tool proxy, data proxy) calling the PDP for allow/deny decisions and sending logs to Telemetry & Risk Engine and Audit Log Store.
Annotate:
- “Workload Identity for agent pods”.
- “Workload Identity Federation for local runner”.
- “Per‑request authorization, no long‑lived credentials”.
Style: C4 Container diagram with GCP services called out (Cloud Armor, IAP, GKE, Cloud SQL, Workload Identity).
Diagram 3 – Security / Zero‑Trust flow diagram (PDP/PEP focus)
Scope: detailed sequence of a single high‑risk workflow: an agent accessing sensitive data and external tools under zero trust.
Show a sequence or flow diagram with:
- Actors/components:
- Developer on macOS.
- IdP.
- IAP / Cloud Armor.
- GKE IngressGateway.
- CODITECT Orchestrator.
- Agent Pod with machine identity.
- Tool Proxy (PEP).
- Data Access Proxy (PEP).
- PDP / Policy Engine.
- Telemetry & Risk Engine.
- Audit Log.
Flow:
- Developer authenticates with IdP, gets short‑lived OIDC token.
- Developer accesses CODITECT via IAP/Cloud Armor; identity and device posture are checked.
- Request routed through GKE IngressGateway to CODITECT Orchestrator.
- Orchestrator schedules an agent: a dedicated Agent Pod with its own ServiceAccount and Workload Identity.
- Agent Pod needs to:
- Call an external tool via Tool Proxy.
- Query a sensitive database via Data Access Proxy.
- For each call:
- Tool Proxy and Data Access Proxy act as PEPs, sending context (agent identity, tenant, requested resource, sensitivity) to PDP.
- PDP consults Policy Store and Telemetry & Risk Engine, returns allow/deny + constraints.
- PEP enforces decision, logs the action to Audit Log and Telemetry & Risk Engine.
- If risk score is high, PDP requires human‑in‑the‑loop approval before proceeding.
Emphasize:
- PDP/PEP pattern (NIST SP 800‑207).
- Attributes used by PDP: agent type, tenant, sensitivity tier, device posture, recent behavior.
- Continuous monitoring and adaptive policies (risk‑based).
Diagram 4 – Micro‑segmentation & namespaces (GKE + macOS hybrid)
Scope: network and authorization segmentation across tenants, tools, and sensitivity levels.
Use a diagram that shows:
- GKE cluster with multiple namespaces:
- coditect-control (orchestrator, PDP, tool proxy, LLM gateway).
- coditect-tools (shared multi‑tenant tools).
- coditect-tenant-a, coditect-tenant-b, etc. (tenant‑specific agents and services).
- coditect-sensitive (high‑sensitivity services and databases).
- For each namespace:
- Dedicated Kubernetes ServiceAccounts mapped to Google Service Accounts (Workload Identity).
- Istio sidecars with AuthorizationPolicies enforcing:
- Default‑deny between namespaces.
- Allow only explicitly configured flows (e.g., tenant‑A agents → tenant‑A services → sensitive zone via data proxy).
- NetworkPolicies enforcing pod‑level segmentation (no cross‑tenant pod‑to‑pod without going through mesh).
- macOS local runner:
- Shown as a separate “segment” with Workload Identity Federation principal.
- Its traffic passes through Cloud Armor + IAP + GKE IngressGateway and is subject to the same Istio and data‑layer policies.
Annotate:
- Segmentation dimensions: per‑tenant, per‑tool, per‑sensitivity.
- “Agent machine identity determines reachable segments.”
- “All paths traverse PEPs: edge, mesh, tool proxy, data proxy.”
Style: cluster/namespace diagram with clear segment boundaries and arrows only along allowed paths.
General instructions for all diagrams:
- Use consistent naming: “CODITECT Orchestrator”, “Agent Pod”, “Tool Proxy”, “LLM Gateway”, “PDP / Policy Engine”, “PEP”.
- Highlight zero‑trust concepts visually: PEP icons/badges, “deny by default”, “per‑request authZ”.
- Use clear labels for GCP primitives: Cloud Armor, IAP/BeyondCorp Enterprise, GKE, Workload Identity, Workload Identity Federation, Cloud SQL Auth Proxy.
- Ensure multi‑tenant concerns are visible: tenant‑scoped namespaces, identity attributes, and policies.
Produce all four diagrams in one project so they share a unified visual style and terminology.