MoE Provider Configuration Guide
For AI Agents and Users: Configure the MoE Constitutional Court architecture based on your available LLM providers - from single-provider enterprise deployments to full multi-provider diversity.
Date: January 16, 2026 | Status: Production | ADR: ADR-073
Table of Contents
- Overview
- Provider Modes
- Quick Start
- Configuration
- Model Support
- Confidence Adjustment
- Best Practices
- Troubleshooting
Overview
The MoE (Mixture of Experts) Constitutional Court uses multiple LLM providers to ensure diverse, unbiased evaluations. However, not all deployments have access to multiple providers. This guide explains how to configure the system for your environment.
Key Concepts:
| Concept | Description |
|---|---|
| Provider Mode | Single, dual, or multi-provider configuration |
| Auto-Detection | System detects available providers from API keys |
| Confidence Adjustment | Verdicts reflect achieved diversity level |
| Model Tier Diversity | Flagship/balanced/fast tiers for single-provider mode |
| Provider Alternation | Personas distributed across providers in dual mode |
Provider Modes
Single-Provider Mode
When: Only ONE LLM provider is available (e.g., Anthropic-only enterprise agreement)
┌─────────────────────────────────────────────────────────────┐
│ SINGLE-PROVIDER MODE │
│ │
│ Diversity Strategy: Model Tiers │
│ Confidence Adjustment: -10% │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ FLAGSHIP │ │ BALANCED │ │ FAST │ │
│ │ (Opus) │ │ (Sonnet) │ │ (Haiku) │ │
│ │ │ │ │ │ │ │
│ │ • AI Risk │ │ • Technical │ │ • QA Eval │ │
│ │ • Privacy │ │ • Ethics │ │ │ │
│ │ • Compliance│ │ • Security │ │ │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Characteristics:
- Uses model tier diversity (flagship → balanced → fast)
- Prompt diversity provides additional perspective variation
- 10% confidence penalty reflects reduced architectural diversity
- Fully functional but with acknowledged limitations
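The tier routing above can be sketched as a simple persona-to-tier mapping. This is an illustrative assumption, not the system's actual routing table: the persona keys, tier names, and model IDs below are taken from the diagram, while the function name `model_for_persona` is hypothetical.

```python
# Hypothetical sketch of single-provider tier routing (Anthropic-only).
# Model IDs and persona assignments mirror the diagram above.
TIER_MODELS = {
    "flagship": "claude-opus-4-5",
    "balanced": "claude-sonnet-4-5",
    "fast": "claude-haiku-4-5",
}

PERSONA_TIERS = {
    "ai_risk": "flagship",
    "privacy": "flagship",
    "compliance": "flagship",
    "technical": "balanced",
    "ethics": "balanced",
    "security": "balanced",
    "qa_eval": "fast",
}

def model_for_persona(persona: str) -> str:
    """Resolve a persona to its single-provider model via tier diversity."""
    return TIER_MODELS[PERSONA_TIERS[persona]]
```

Tier diversity gives each persona a model with different capability/cost trade-offs, which partially compensates for the lack of cross-provider variation.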
Dual-Provider Mode
When: TWO LLM providers are available
┌─────────────────────────────────────────────────────────────┐
│ DUAL-PROVIDER MODE │
│ │
│ Diversity Strategy: Provider Alternation │
│ Confidence Adjustment: -5% │
│ │
│ PROVIDER A PROVIDER B │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ • Technical │ │ • Compliance │ │
│ │ • Ethics │ │ • Privacy │ │
│ │ • QA Evaluator │ │ • Data Gov │ │
│ │ • AI Risk │ │ │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Characteristics:
- Personas alternate between providers
- Cross-provider verification on disagreements
- 5% confidence penalty (moderate diversity)
- Good balance of cost and diversity
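The alternation strategy can be sketched as a round-robin deal of personas across the two providers. The function name and persona list below are illustrative assumptions; only the round-robin idea comes from the diagram above:

```python
# Hypothetical sketch of dual-provider alternation: personas are dealt
# round-robin across the two available providers.
def alternate_personas(personas: list[str], providers: list[str]) -> dict[str, str]:
    if len(providers) != 2:
        raise ValueError("dual mode expects exactly two providers")
    return {p: providers[i % 2] for i, p in enumerate(personas)}

# Dealing in this order reproduces the split shown in the diagram.
assignment = alternate_personas(
    ["technical", "compliance", "ethics", "privacy", "qa_eval", "data_gov", "ai_risk"],
    ["anthropic", "openai"],
)
```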
Multi-Provider Mode
When: THREE or more LLM providers are available (full Constitutional Court)
┌─────────────────────────────────────────────────────────────┐
│ MULTI-PROVIDER MODE │
│ │
│ Diversity Strategy: Full Distribution │
│ Confidence Adjustment: 0% (full diversity) │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ANTHROPIC │ │ OPENAI │ │ DEEPSEEK │ │ ALIBABA │ │
│ │ (30%) │ │ (25%) │ │ (20%) │ │ (15%) │ │
│ ├──────────┤ ├──────────┤ ├──────────┤ ├──────────┤ │
│ │Tech Arch │ │Compliance│ │Security │ │Healthcare│ │
│ │Ethics │ │Privacy │ │ │ │Finance │ │
│ │AI Risk │ │Data Gov │ │ │ │ │ │
│ │QA Eval │ │ │ │ │ │ │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Characteristics:
- Full ADR-060 Constitutional Court compliance
- Maximum 40% weight on single provider
- No confidence penalty
- Best for high-stakes evaluations
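The 40% single-provider cap can be expressed as a validation check over the provider weight distribution. This is a minimal sketch, assuming weights are normalized fractions summing to 1.0; the function name is hypothetical:

```python
# Hypothetical sketch of the multi-provider weight cap from ADR-060:
# no single provider may carry more than 40% of the evaluation weight.
MAX_PROVIDER_WEIGHT = 0.40

def weights_valid(weights: dict[str, float]) -> bool:
    """Weights must sum to 1.0 (within tolerance) and respect the cap."""
    total = sum(weights.values())
    return abs(total - 1.0) < 1e-9 and max(weights.values()) <= MAX_PROVIDER_WEIGHT
```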
Quick Start
Automatic Detection (Recommended)
The system auto-detects your provider mode from environment variables:
# Single provider (Anthropic only)
export ANTHROPIC_API_KEY=sk-ant-...
# Dual provider (Anthropic + OpenAI)
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
# Multi provider (3+ providers)
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export DEEPSEEK_API_KEY=sk-...
Verify Detection
from core import detect_provider_mode, get_diversity_report
# Check detected mode
mode = detect_provider_mode()
print(f"Provider mode: {mode}") # single, dual, or multi
# Get detailed report
report = get_diversity_report()
print(f"Providers: {report['available_providers']}")
print(f"Confidence adjustment: {report['confidence_adjustment']}")
Configuration
Environment Variables
| Variable | Purpose | Example |
|---|---|---|
| ANTHROPIC_API_KEY | Anthropic Claude access | sk-ant-api03-... |
| OPENAI_API_KEY | OpenAI GPT/o-series access | sk-proj-... |
| DEEPSEEK_API_KEY | DeepSeek access | sk-... |
| GOOGLE_API_KEY | Google Gemini access | AIza... |
| DASHSCOPE_API_KEY | Alibaba Qwen access | sk-... |
| TOGETHER_API_KEY | Meta Llama access (via Together) | ... |
Override Mode
Force a specific mode (useful for testing):
# Force single-provider mode even with multiple keys
export CODITECT_PROVIDER_MODE=single
# Force multi-provider mode (system will use available providers)
export CODITECT_PROVIDER_MODE=multi
# Auto-detect (default)
export CODITECT_PROVIDER_MODE=auto
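The precedence between the override and auto-detection can be sketched as follows. This is an assumption about the detection order (explicit override wins, otherwise mode is derived from how many keys are set); the function name `resolve_mode` is hypothetical:

```python
# Hypothetical sketch of detection precedence: an explicit
# CODITECT_PROVIDER_MODE override wins; otherwise the mode follows
# from the number of provider API keys present.
KEY_VARS = [
    "ANTHROPIC_API_KEY", "OPENAI_API_KEY", "DEEPSEEK_API_KEY",
    "GOOGLE_API_KEY", "DASHSCOPE_API_KEY", "TOGETHER_API_KEY",
]

def resolve_mode(env: dict[str, str]) -> str:
    override = env.get("CODITECT_PROVIDER_MODE", "auto")
    if override in ("single", "dual", "multi"):
        return override
    count = sum(1 for var in KEY_VARS if env.get(var))
    if count >= 3:
        return "multi"
    return "dual" if count == 2 else "single"
```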
Override Specific Persona Models
Override the model for a specific judge persona:
# Use a specific model for the technical architect persona
export CODITECT_JUDGE_MODEL_TECHNICAL_ARCHITECT=claude-opus-4-5
# Use a specific model for security analyst
export CODITECT_JUDGE_MODEL_SECURITY_ANALYST=gpt-4.1
Programmatic Configuration
from core import (
MoEOrchestrator,
OrchestratorConfig,
ProviderMode,
MultiModelClient,
)
# Force multi-provider mode
config = OrchestratorConfig(
force_provider_mode=ProviderMode.MULTI,
apply_confidence_adjustment=True, # default
log_provider_info=True, # default
)
orchestrator = MoEOrchestrator(
analysts=[...],
judges=[...],
config=config,
)
# Check current mode
print(f"Mode: {orchestrator.provider_mode}")
print(f"Providers: {orchestrator.provider_info}")
Model Support
Supported Providers and Models (January 2026)
| Provider | Flagship | Balanced | Fast |
|---|---|---|---|
| Anthropic | claude-opus-4-5 | claude-sonnet-4-5 | claude-haiku-4-5 |
| OpenAI | o3 | gpt-4.1 | gpt-4.1-mini |
| DeepSeek | deepseek-reasoner | deepseek-v3.2 | deepseek-chat |
| Google | gemini-3-pro | gemini-3-flash | gemini-2.0-flash |
| Meta | llama-4-maverick | llama-4-scout | llama-3.3-70b |
| Alibaba | qwen3-72b | qwen2.5-72b | qwen2.5-32b |
Recommended Provider Pairs (Dual Mode)
| Pair | Best For | Notes |
|---|---|---|
| Anthropic + OpenAI | Enterprise, general | Best coverage |
| Anthropic + DeepSeek | Cost-optimized | Strong coding |
| OpenAI + Google | Enterprise, multimodal | Document analysis |
| DeepSeek + Meta | Open-source, self-host | Air-gapped friendly |
Self-Hostable Providers
For air-gapped or on-premises deployments:
| Provider | Self-Hostable | Notes |
|---|---|---|
| Meta (Llama) | Yes | Via vLLM, TGI, or Ollama |
| DeepSeek | Yes | Open weights available |
| Alibaba (Qwen) | Partial | Some models open |
| Anthropic | No | API only |
| OpenAI | No | API only |
| Google | No | API only |
Confidence Adjustment
The system automatically adjusts confidence scores based on achieved diversity:
| Mode | Adjustment | Final Confidence |
|---|---|---|
| Single | -10% | confidence × 0.90 |
| Dual | -5% | confidence × 0.95 |
| Multi | 0% | confidence × 1.00 |
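The table above can be expressed as a mode-to-multiplier lookup. This is a minimal sketch of the adjustment logic, assuming the multiplicative form shown in the Final Confidence column; the function and dictionary names are hypothetical:

```python
# Sketch of mode-based confidence adjustment, mirroring the table above.
MODE_MULTIPLIERS = {"single": 0.90, "dual": 0.95, "multi": 1.00}

def adjust_confidence(raw: float, mode: str) -> float:
    """Scale a raw confidence score by the diversity multiplier for `mode`."""
    return round(raw * MODE_MULTIPLIERS[mode], 4)
```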
Example
# Original calculated confidence: 0.85

# Single-provider mode (× 0.90)
adjusted = 0.85 * 0.90  # 0.765, reflects limited diversity

# Dual-provider mode (× 0.95)
adjusted = 0.85 * 0.95  # 0.8075, moderate diversity

# Multi-provider mode (× 1.00)
adjusted = 0.85 * 1.00  # 0.85, full diversity
Disabling Adjustment
config = OrchestratorConfig(
apply_confidence_adjustment=False # Raw confidence scores
)
Viewing Adjustment in Results
result = orchestrator.classify(document)
print(f"Raw confidence: {result.raw_confidence}")
print(f"Adjusted confidence: {result.confidence}")
print(f"Provider mode: {result.provider_mode}")
print(f"Adjustment applied: {result.provider_adjustment_applied}")
Best Practices
1. Start with Auto-Detection
Let the system detect your providers automatically:
# Just set your API keys
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
# System auto-detects dual mode
2. Use Multi-Provider for High-Stakes Decisions
For critical evaluations (compliance, security, high-risk AI):
# Ensure at least 3 providers
export ANTHROPIC_API_KEY=...
export OPENAI_API_KEY=...
export DEEPSEEK_API_KEY=...
3. Monitor Provider Health
from core import check_provider_health
health = check_provider_health()
for provider, status in health.items():
print(f"{provider}: {'healthy' if status else 'unavailable'}")
4. Refresh Detection After Key Changes
# If you add new API keys at runtime
orchestrator.refresh_provider_detection()
print(f"New mode: {orchestrator.provider_mode}")
5. Log Provider Info for Audits
config = OrchestratorConfig(log_provider_info=True)
# Logs will include:
# INFO: Provider detection: mode=dual, providers=['anthropic', 'openai'], confidence_adjustment=-5%
Troubleshooting
Mode Not Detected Correctly
Symptom: System shows wrong provider mode
Solution:
# Check which keys are set
env | grep -E "(ANTHROPIC|OPENAI|DEEPSEEK|GOOGLE)_API_KEY"
# Verify key format (should not be empty or placeholder)
echo $ANTHROPIC_API_KEY | head -c 10
Confidence Too Low
Symptom: Confidence scores seem artificially low
Cause: Single-provider mode applies -10% adjustment
Solutions:
- Add more providers to increase diversity
- Check if `apply_confidence_adjustment=False` is appropriate for your use case
- Review raw confidence vs adjusted confidence in results
Model Not Available
Symptom: Error about model not found
Solution:
# Check available models for your providers
from core import get_provider_for_model, check_api_keys
keys = check_api_keys()
print(f"Available providers: {[k for k, v in keys.items() if v]}")
# Verify model mapping
provider = get_provider_for_model("claude-sonnet-4-5")
print(f"Provider for model: {provider}")
Fallback Not Working
Symptom: Single-provider fallback uses wrong model
Solution:
# Override specific persona model
export CODITECT_JUDGE_MODEL_TECHNICAL_ARCHITECT=claude-sonnet-4-5
# Or update judge-model-routing.json with correct fallbacks
Provider Health Check Failing
Symptom: Provider shows as unavailable despite having key
Solution:
import os
# Verify key is actually set (not just exported empty)
key = os.getenv("ANTHROPIC_API_KEY", "")
if not key or key.startswith("sk-ant-placeholder"):
print("Key is empty or placeholder")
# Check key format
if not key.startswith("sk-ant-"):
print("Invalid Anthropic key format")
API Reference
Core Functions
from core import (
# Detection
detect_provider_mode, # Get current mode
get_diversity_report, # Full diversity analysis
check_provider_health, # Provider availability
# Configuration
ProviderMode, # SINGLE, DUAL, MULTI enum
ProviderDetector, # Full detector class
# Convenience
get_provider_for_model, # Model → provider mapping
check_api_keys, # Which keys are set
adjust_provider_confidence, # Manual confidence adjustment
)
OrchestratorConfig Options
| Option | Type | Default | Description |
|---|---|---|---|
enable_provider_detection | bool | True | Auto-detect providers |
apply_confidence_adjustment | bool | True | Apply mode-based penalty |
force_provider_mode | ProviderMode | None | Override detection |
log_provider_info | bool | True | Log provider details |
Related Documentation
- ADR-073: MoE Provider Flexibility
- ADR-060: MoE Verification Layer
- Judge Personas Configuration
- Model Routing Configuration
Version: 1.0.0 | Updated: January 16, 2026 | Implementation: ADR-073 (79 tests passing)