MoE Provider Configuration Guide

For AI Agents and Users: Configure the MoE Constitutional Court architecture based on your available LLM providers, from single-provider enterprise deployments to full multi-provider diversity.

Date: January 16, 2026 | Status: Production | ADR: ADR-073


Table of Contents

  1. Overview
  2. Provider Modes
  3. Quick Start
  4. Configuration
  5. Model Support
  6. Confidence Adjustment
  7. Best Practices
  8. Troubleshooting
  9. API Reference

Overview

The MoE (Mixture of Experts) Constitutional Court uses multiple LLM providers to ensure diverse, unbiased evaluations. However, not all deployments have access to multiple providers. This guide explains how to configure the system for your environment.

Key Concepts:

| Concept | Description |
|---|---|
| Provider Mode | Single, dual, or multi-provider configuration |
| Auto-Detection | System detects available providers from API keys |
| Confidence Adjustment | Verdicts reflect achieved diversity level |
| Model Tier Diversity | Flagship/balanced/fast tiers for single-provider mode |
| Provider Alternation | Personas distributed across providers in dual mode |

Provider Modes

Single-Provider Mode

When: Only ONE LLM provider is available (e.g., Anthropic-only enterprise agreement)

┌─────────────────────────────────────────────────────────────┐
│                    SINGLE-PROVIDER MODE                     │
│                                                             │
│  Diversity Strategy: Model Tiers                            │
│  Confidence Adjustment: -10%                                │
│                                                             │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐        │
│  │  FLAGSHIP   │   │  BALANCED   │   │    FAST     │        │
│  │   (Opus)    │   │  (Sonnet)   │   │   (Haiku)   │        │
│  │             │   │             │   │             │        │
│  │ • AI Risk   │   │ • Technical │   │ • QA Eval   │        │
│  │ • Privacy   │   │ • Ethics    │   │             │        │
│  │ • Compliance│   │ • Security  │   │             │        │
│  └─────────────┘   └─────────────┘   └─────────────┘        │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Characteristics:

  • Uses model tier diversity (flagship → balanced → fast)
  • Prompt diversity provides additional perspective variation
  • 10% confidence penalty reflects reduced architectural diversity
  • Fully functional but with acknowledged limitations
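As an illustrative sketch of how tier assignment could work in single-provider mode: high-stakes personas get the flagship tier, general reviewers the balanced tier, and lightweight checks the fast tier, mirroring the diagram above. The persona names, tier map, and helper function here are hypothetical, not the actual `core` implementation.

```python
# Hypothetical sketch: mapping judge personas to model tiers when only
# one provider (here Anthropic) is available. Real assignments may differ.

ANTHROPIC_TIERS = {
    "flagship": "claude-opus-4-5",
    "balanced": "claude-sonnet-4-5",
    "fast": "claude-haiku-4-5",
}

# Tier choices follow the diagram above: high-stakes personas use the
# flagship model, general reviewers the balanced model, QA the fast model.
PERSONA_TIER = {
    "ai_risk": "flagship",
    "privacy": "flagship",
    "compliance": "flagship",
    "technical": "balanced",
    "ethics": "balanced",
    "security": "balanced",
    "qa_eval": "fast",
}

def model_for_persona(persona: str) -> str:
    """Resolve a persona to a concrete model via its tier."""
    tier = PERSONA_TIER.get(persona, "balanced")  # default to balanced
    return ANTHROPIC_TIERS[tier]
```

Combined with prompt diversity, this tiering is what stands in for architectural diversity when only one provider is available.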

Dual-Provider Mode

When: TWO LLM providers are available

┌─────────────────────────────────────────────────────────────┐
│                     DUAL-PROVIDER MODE                      │
│                                                             │
│  Diversity Strategy: Provider Alternation                   │
│  Confidence Adjustment: -5%                                 │
│                                                             │
│       PROVIDER A               PROVIDER B                   │
│   ┌─────────────────┐      ┌─────────────────┐              │
│   │ • Technical     │      │ • Compliance    │              │
│   │ • Ethics        │      │ • Privacy       │              │
│   │ • QA Evaluator  │      │ • Data Gov      │              │
│   │ • AI Risk       │      │                 │              │
│   └─────────────────┘      └─────────────────┘              │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Characteristics:

  • Personas alternate between providers
  • Cross-provider verification on disagreements
  • 5% confidence penalty (moderate diversity)
  • Good balance of cost and diversity
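The alternation described above can be sketched as a simple round-robin assignment. The helper function and persona names are illustrative assumptions, not the actual `core` API.

```python
# Hypothetical sketch: round-robin distribution of personas across two
# providers in dual mode.

from itertools import cycle

def alternate_personas(personas, providers=("anthropic", "openai")):
    """Assign each persona to a provider in alternating order."""
    provider_cycle = cycle(providers)
    return {persona: next(provider_cycle) for persona in personas}

# With an odd persona count, one provider ends up with one extra persona,
# giving a near-even split across the two providers.
assignment = alternate_personas(
    ["technical", "compliance", "ethics", "privacy", "qa_eval", "data_gov", "ai_risk"]
)
```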

Multi-Provider Mode

When: THREE or more LLM providers are available (full Constitutional Court)

┌─────────────────────────────────────────────────────────────┐
│                     MULTI-PROVIDER MODE                     │
│                                                             │
│  Diversity Strategy: Full Distribution                      │
│  Confidence Adjustment: 0% (full diversity)                 │
│                                                             │
│ ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐   │
│ │ANTHROPIC │   │  OPENAI  │   │ DEEPSEEK │   │ ALIBABA  │   │
│ │  (30%)   │   │  (25%)   │   │  (20%)   │   │  (15%)   │   │
│ ├──────────┤   ├──────────┤   ├──────────┤   ├──────────┤   │
│ │Tech Arch │   │Compliance│   │Security  │   │Healthcare│   │
│ │Ethics    │   │Privacy   │   │          │   │Finance   │   │
│ │AI Risk   │   │Data Gov  │   │          │   │          │   │
│ │QA Eval   │   │          │   │          │   │          │   │
│ └──────────┘   └──────────┘   └──────────┘   └──────────┘   │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Characteristics:

  • Full ADR-060 Constitutional Court compliance
  • Maximum 40% weight on single provider
  • No confidence penalty
  • Best for high-stakes evaluations
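The 40% weight cap can be checked with a small validation helper. This sketch is illustrative (the function name is hypothetical); the weights follow the diagram above.

```python
# Hypothetical sketch: validating that no single provider exceeds the
# 40% weight cap required for multi-provider mode.

def check_weight_cap(weights: dict, cap: float = 0.40) -> list:
    """Return providers whose weight exceeds the cap (empty if compliant)."""
    return [provider for provider, weight in weights.items() if weight > cap]

weights = {"anthropic": 0.30, "openai": 0.25, "deepseek": 0.20, "alibaba": 0.15}
violations = check_weight_cap(weights)  # empty list means the cap is respected
```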

Quick Start

The system auto-detects your provider mode from environment variables:

# Single provider (Anthropic only)
export ANTHROPIC_API_KEY=sk-ant-...

# Dual provider (Anthropic + OpenAI)
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...

# Multi provider (3+ providers)
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export DEEPSEEK_API_KEY=sk-...
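As a rough sketch of what this auto-detection might do (the real logic lives in `core.detect_provider_mode` and may differ), the system can simply count providers with a non-empty API key and map the count to a mode:

```python
# Hypothetical sketch of provider-mode auto-detection from API keys.

import os

PROVIDER_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def detect_mode(env=os.environ) -> str:
    """Count providers with a non-empty key; map the count to a mode."""
    available = [p for p, var in PROVIDER_KEYS.items() if env.get(var)]
    if len(available) >= 3:
        return "multi"
    if len(available) == 2:
        return "dual"
    return "single"
```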

Verify Detection

from core import detect_provider_mode, get_diversity_report

# Check detected mode
mode = detect_provider_mode()
print(f"Provider mode: {mode}") # single, dual, or multi

# Get detailed report
report = get_diversity_report()
print(f"Providers: {report['available_providers']}")
print(f"Confidence adjustment: {report['confidence_adjustment']}")

Configuration

Environment Variables

| Variable | Purpose | Example |
|---|---|---|
| ANTHROPIC_API_KEY | Anthropic Claude access | sk-ant-api03-... |
| OPENAI_API_KEY | OpenAI GPT/o-series access | sk-proj-... |
| DEEPSEEK_API_KEY | DeepSeek access | sk-... |
| GOOGLE_API_KEY | Google Gemini access | AIza... |
| DASHSCOPE_API_KEY | Alibaba Qwen access | sk-... |
| TOGETHER_API_KEY | Meta Llama access (via Together) | ... |

Override Mode

Force a specific mode (useful for testing):

# Force single-provider mode even with multiple keys
export CODITECT_PROVIDER_MODE=single

# Force multi-provider mode (system will use available providers)
export CODITECT_PROVIDER_MODE=multi

# Auto-detect (default)
export CODITECT_PROVIDER_MODE=auto
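The override precedence can be sketched as: an explicit CODITECT_PROVIDER_MODE wins over auto-detection, while "auto" (or an unset variable) keeps the detected mode. The helper function below is a hypothetical illustration, not the actual `core` API.

```python
# Hypothetical sketch of override precedence for CODITECT_PROVIDER_MODE.

import os

def resolve_mode(auto_detected: str, env=os.environ) -> str:
    """Apply an explicit mode override, falling back to auto-detection."""
    override = env.get("CODITECT_PROVIDER_MODE", "auto").lower()
    if override in ("single", "dual", "multi"):
        return override
    return auto_detected  # "auto" or unset keeps the detected mode
```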

Override Specific Persona Models

Override the model for a specific judge persona:

# Use a specific model for the technical architect persona
export CODITECT_JUDGE_MODEL_TECHNICAL_ARCHITECT=claude-opus-4-5

# Use a specific model for security analyst
export CODITECT_JUDGE_MODEL_SECURITY_ANALYST=gpt-4.1
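Resolving such an override presumably means checking a CODITECT_JUDGE_MODEL_&lt;PERSONA&gt; variable before falling back to the routing default. The helper below is an illustrative sketch; the function name and default-model argument are assumptions.

```python
# Hypothetical sketch: per-persona model override lookup from
# CODITECT_JUDGE_MODEL_<PERSONA> environment variables.

import os

def judge_model(persona: str, default: str, env=os.environ) -> str:
    """Return the override model for a persona, or the routing default."""
    var = f"CODITECT_JUDGE_MODEL_{persona.upper()}"
    return env.get(var) or default
```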

Programmatic Configuration

from core import (
    MoEOrchestrator,
    OrchestratorConfig,
    ProviderMode,
    MultiModelClient,
)

# Force multi-provider mode
config = OrchestratorConfig(
    force_provider_mode=ProviderMode.MULTI,
    apply_confidence_adjustment=True,  # default
    log_provider_info=True,            # default
)

orchestrator = MoEOrchestrator(
    analysts=[...],
    judges=[...],
    config=config,
)

# Check current mode
print(f"Mode: {orchestrator.provider_mode}")
print(f"Providers: {orchestrator.provider_info}")

Model Support

Supported Providers and Models (January 2026)

| Provider | Flagship | Balanced | Fast |
|---|---|---|---|
| Anthropic | claude-opus-4-5 | claude-sonnet-4-5 | claude-haiku-4-5 |
| OpenAI | o3 | gpt-4.1 | gpt-4.1-mini |
| DeepSeek | deepseek-reasoner | deepseek-v3.2 | deepseek-chat |
| Google | gemini-3-pro | gemini-3-flash | gemini-2.0-flash |
| Meta | llama-4-maverick | llama-4-scout | llama-3.3-70b |
| Alibaba | qwen3-72b | qwen2.5-72b | qwen2.5-32b |

Recommended Provider Pairs

| Pair | Best For | Notes |
|---|---|---|
| Anthropic + OpenAI | Enterprise, general | Best coverage |
| Anthropic + DeepSeek | Cost-optimized | Strong coding |
| OpenAI + Google | Enterprise, multimodal | Document analysis |
| DeepSeek + Meta | Open-source, self-host | Air-gapped friendly |

Self-Hostable Providers

For air-gapped or on-premises deployments:

| Provider | Self-Hostable | Notes |
|---|---|---|
| Meta (Llama) | Yes | Via vLLM, TGI, or Ollama |
| DeepSeek | Yes | Open weights available |
| Alibaba (Qwen) | Partial | Some models open |
| Anthropic | No | API only |
| OpenAI | No | API only |
| Google | No | API only |

Confidence Adjustment

The system automatically adjusts confidence scores based on achieved diversity:

| Mode | Adjustment | Final Confidence |
|---|---|---|
| Single | -10% | confidence × 0.90 |
| Dual | -5% | confidence × 0.95 |
| Multi | 0% | confidence × 1.00 |

Example

# Original calculated confidence: 0.85

# Single-provider mode (× 0.90)
adjusted = 0.85 * 0.90 = 0.765  # Reflects limited diversity

# Dual-provider mode (× 0.95)
adjusted = 0.85 * 0.95 = 0.8075  # Moderate diversity

# Multi-provider mode (× 1.00)
adjusted = 0.85 * 1.00 = 0.85  # Full diversity
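A minimal sketch of the adjustment, using the multiplicative factors from the table above. The real helper is `core.adjust_provider_confidence`, whose exact signature may differ.

```python
# Hypothetical sketch of mode-based confidence adjustment.

# Multiplicative factors from the table: -10%, -5%, and 0%.
ADJUSTMENT_FACTOR = {"single": 0.90, "dual": 0.95, "multi": 1.00}

def adjust_confidence(raw: float, mode: str) -> float:
    """Scale a raw confidence score by the mode's diversity factor."""
    return round(raw * ADJUSTMENT_FACTOR[mode], 4)
```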

Disabling Adjustment

config = OrchestratorConfig(
    apply_confidence_adjustment=False  # Raw confidence scores
)

Viewing Adjustment in Results

result = orchestrator.classify(document)

print(f"Raw confidence: {result.raw_confidence}")
print(f"Adjusted confidence: {result.confidence}")
print(f"Provider mode: {result.provider_mode}")
print(f"Adjustment applied: {result.provider_adjustment_applied}")

Best Practices

1. Start with Auto-Detection

Let the system detect your providers automatically:

# Just set your API keys
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...

# System auto-detects dual mode

2. Use Multi-Provider for High-Stakes Decisions

For critical evaluations (compliance, security, high-risk AI):

# Ensure at least 3 providers
export ANTHROPIC_API_KEY=...
export OPENAI_API_KEY=...
export DEEPSEEK_API_KEY=...

3. Monitor Provider Health

from core import check_provider_health

health = check_provider_health()
for provider, status in health.items():
    print(f"{provider}: {'healthy' if status else 'unavailable'}")

4. Refresh Detection After Key Changes

# If you add new API keys at runtime
orchestrator.refresh_provider_detection()
print(f"New mode: {orchestrator.provider_mode}")

5. Log Provider Info for Audits

config = OrchestratorConfig(log_provider_info=True)

# Logs will include:
# INFO: Provider detection: mode=dual, providers=['anthropic', 'openai'], confidence_adjustment=-5%

Troubleshooting

Mode Not Detected Correctly

Symptom: System shows wrong provider mode

Solution:

# Check which keys are set
env | grep -E "(ANTHROPIC|OPENAI|DEEPSEEK|GOOGLE)_API_KEY"

# Verify key format (should not be empty or placeholder)
echo $ANTHROPIC_API_KEY | head -c 10

Confidence Too Low

Symptom: Confidence scores seem artificially low

Cause: Single-provider mode applies -10% adjustment

Solutions:

  1. Add more providers to increase diversity
  2. Check if apply_confidence_adjustment=False is appropriate for your use case
  3. Review raw confidence vs adjusted confidence in results

Model Not Available

Symptom: Error about model not found

Solution:

# Check available models for your providers
from core import get_provider_for_model, check_api_keys

keys = check_api_keys()
print(f"Available providers: {[k for k, v in keys.items() if v]}")

# Verify model mapping
provider = get_provider_for_model("claude-sonnet-4-5")
print(f"Provider for model: {provider}")

Fallback Not Working

Symptom: Single-provider fallback uses wrong model

Solution:

# Override specific persona model
export CODITECT_JUDGE_MODEL_TECHNICAL_ARCHITECT=claude-sonnet-4-5

# Or update judge-model-routing.json with correct fallbacks

Provider Health Check Failing

Symptom: Provider shows as unavailable despite having key

Solution:

import os

# Verify key is actually set (not just exported empty)
key = os.getenv("ANTHROPIC_API_KEY", "")
if not key or key.startswith("sk-ant-placeholder"):
    print("Key is empty or placeholder")

# Check key format
if not key.startswith("sk-ant-"):
    print("Invalid Anthropic key format")

API Reference

Core Functions

from core import (
    # Detection
    detect_provider_mode,        # Get current mode
    get_diversity_report,        # Full diversity analysis
    check_provider_health,       # Provider availability

    # Configuration
    ProviderMode,                # SINGLE, DUAL, MULTI enum
    ProviderDetector,            # Full detector class

    # Convenience
    get_provider_for_model,      # Model → provider mapping
    check_api_keys,              # Which keys are set
    adjust_provider_confidence,  # Manual confidence adjustment
)

OrchestratorConfig Options

| Option | Type | Default | Description |
|---|---|---|---|
| enable_provider_detection | bool | True | Auto-detect providers |
| apply_confidence_adjustment | bool | True | Apply mode-based penalty |
| force_provider_mode | ProviderMode | None | Override detection |
| log_provider_info | bool | True | Log provider details |


Version: 1.0.0 | Updated: January 16, 2026 | Implementation: ADR-073 (79 tests passing)