CODITECT Standard: Ambiguity Handling Protocols

Standard-ID: STD-AMBIGUITY-001
Version: 1.0.0
Status: APPROVED
Effective-Date: 2025-12-19
Enforcement: MANDATORY
Scope: All CODITECT inputs, processing, and outputs
Owner: AZ1.AI INC
Review-Cycle: Quarterly
Parent-Standard: CODITECT-STANDARD-TRUST-AND-TRANSPARENCY.md
Related-Standards:
- CODITECT-STANDARD-FACTUAL-GROUNDING.md
- CODITECT-STANDARD-LOGICAL-INFERENCE.md
Related-ADRs:
- ADR-011-UNCERTAINTY-QUANTIFICATION-FRAMEWORK
- ADR-013-MOE-JUDGES-FRAMEWORK
Research-Foundation:
- docs/09-research-analysis/ACADEMIC-RESEARCH-REFERENCES-UQ-MOE-2024-2025.md

Governing Principle

Ambiguity must never be hidden. It must be identified, documented, and resolved.

Hidden ambiguity leads to incorrect assumptions and unreliable outputs. When ambiguity cannot be resolved, it must be explicitly documented with the interpretation chosen and the reasoning for that choice.


1. Purpose and Scope

1.1 Purpose

This standard defines protocols for identifying, documenting, and resolving ambiguity in all CODITECT interactions, ensuring that:

  1. Ambiguity is detected at input, processing, and output stages
  2. Categories of ambiguity are correctly identified
  3. Resolution strategies are applied systematically
  4. Unresolvable ambiguity is transparently documented
  5. Assumptions are explicit and auditable

1.2 Scope

This standard applies to:

  • User prompts and requirements
  • Technical specifications and constraints
  • Domain terminology and definitions
  • Quantitative references
  • Temporal references
  • Scope boundaries
  • Output requirements

1.3 Ambiguity vs. Uncertainty

| Concept | Definition | Example | Standard |
|---------|------------|---------|----------|
| Ambiguity | Multiple valid interpretations | "Make it faster" (faster load? faster execution?) | This standard |
| Uncertainty | Unknown confidence in a claim | "This approach may improve performance" | STD-TRUST-001 |
| Inference | Conclusion without direct evidence | "Given X and Y, therefore Z" | STD-LOGICAL-001 |

2. Ambiguity Categories

2.1 Category Matrix

| Category | Definition | Detection Signal | Resolution Strategy |
|----------|------------|------------------|---------------------|
| Definitional | Terms with multiple meanings | Technical jargon, undefined acronyms | Define interpretation used |
| Referential | Unclear what is being referenced | Pronouns, "it", "this" without antecedent | Request clarification |
| Scope | Boundaries not well-defined | "The system", "all users" | State assumed scope |
| Temporal | Time frame not specified | "Recently", "soon", "historical" | State assumed time frame |
| Quantitative | Vague quantities | "Some", "many", "few", "high" | Request specific numbers |
| Conditional | Unclear conditions or triggers | "If applicable", "when needed" | Define conditions |
| Priority | Relative importance unstated | "Important features" | Request explicit ranking |
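
The category matrix lends itself to a simple lookup structure. A minimal Python sketch follows; the structure and key names are illustrative, not mandated by this standard:

```python
# A lookup form of the Section 2.1 category matrix.
# Values are (detection signal, resolution strategy) pairs.
CATEGORY_MATRIX = {
    "Definitional": ("Technical jargon, undefined acronyms", "Define interpretation used"),
    "Referential": ('Pronouns, "it", "this" without antecedent', "Request clarification"),
    "Scope": ('"The system", "all users"', "State assumed scope"),
    "Temporal": ('"Recently", "soon", "historical"', "State assumed time frame"),
    "Quantitative": ('"Some", "many", "few", "high"', "Request specific numbers"),
    "Conditional": ('"If applicable", "when needed"', "Define conditions"),
    "Priority": ('"Important features"', "Request explicit ranking"),
}

def strategy_for(category: str) -> str:
    """Return the resolution strategy for a detected category."""
    _signal, strategy = CATEGORY_MATRIX[category]
    return strategy

print(strategy_for("Temporal"))  # State assumed time frame
```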

2.2 Definitional Ambiguity

Description: A term can be interpreted in multiple valid ways.

Examples:

  • "Performance" (speed? memory? throughput?)
  • "Security" (authentication? authorization? encryption?)
  • "Modern" (2024? 2020+? latest version?)
  • "Scalable" (users? data? requests?)

Detection Keywords:

  • Industry jargon without context
  • Undefined acronyms
  • Relative adjectives (better, faster, cleaner)
  • Subjective quality terms (good, robust, elegant)

Resolution Protocol:

## Ambiguity Detected: Definitional

**Term:** "[term]"
**Possible Interpretations:**
1. [Interpretation A] - [Context where this would apply]
2. [Interpretation B] - [Context where this would apply]
3. [Interpretation C] - [Context where this would apply]

**Selected Interpretation:** [A/B/C]
**Reasoning:** [Why this interpretation was chosen]
**Confidence:** [X%]

**Recommendation:** Confirm with stakeholder if incorrect.

2.3 Referential Ambiguity

Description: Unclear what entity is being referenced.

Examples:

  • "Update it" (the file? the function? the system?)
  • "This approach" (which approach among those discussed?)
  • "They" (which team? which users?)
  • "The API" (which API when multiple exist?)

Detection Keywords:

  • "it", "this", "that", "they"
  • "the [noun]" without specific identification
  • References to prior conversation context

Resolution Protocol:

## Ambiguity Detected: Referential

**Reference:** "[phrase]"
**Possible Referents:**
1. [Entity A] - [Why this could be the referent]
2. [Entity B] - [Why this could be the referent]

**Clarification Request:**
"Could you specify which [category] you're referring to: [A] or [B]?"

**If Cannot Wait for Clarification:**
**Assumed Referent:** [A]
**Reasoning:** [Why this assumption was made]
**Risk if Wrong:** [What goes wrong if assumption is incorrect]

2.4 Scope Ambiguity

Description: The boundaries of what is included/excluded are unclear.

Examples:

  • "The system" (entire platform? specific microservice?)
  • "All users" (registered? active? including admins?)
  • "The codebase" (monorepo? specific service? including tests?)
  • "Security review" (code only? infrastructure? CI/CD?)

Detection Keywords:

  • "all", "every", "complete"
  • "the [system/codebase/application]"
  • Unqualified collective nouns

Resolution Protocol:

## Ambiguity Detected: Scope

**Context:** "[phrase with scope ambiguity]"

**Scope Dimensions:**
| Dimension | Question | Options |
|-----------|----------|---------|
| Breadth | What is included? | [Option A, B, C] |
| Depth | How deep to go? | [Surface, Moderate, Thorough] |
| Exclusions | What is NOT included? | [List] |

**Assumed Scope:**
- Included: [List]
- Excluded: [List]
- Depth: [Level]

**Reasoning:** [Why these assumptions were made]
**Clarification Recommended:** [Yes/No - with question if Yes]

2.5 Temporal Ambiguity

Description: Time frame is unclear or unspecified.

Examples:

  • "Recent changes" (today? this week? this month?)
  • "Legacy code" (pre-2020? pre-migration? anything not new?)
  • "Soon" (hours? days? weeks?)
  • "Historical data" (all time? specific period?)

Detection Keywords:

  • "recent", "old", "new", "legacy"
  • "soon", "later", "eventually"
  • "before", "after" without dates
  • "historical", "current"

Resolution Protocol:

## Ambiguity Detected: Temporal

**Reference:** "[temporal phrase]"

**Possible Time Frames:**
| Interpretation | Start | End | Basis |
|----------------|-------|-----|-------|
| [A] | [Date/Time] | [Date/Time] | [Why this makes sense] |
| [B] | [Date/Time] | [Date/Time] | [Why this makes sense] |

**Assumed Time Frame:** [A/B]
**Reasoning:** [Why this interpretation was chosen]

**If Critical Decision:**
Request explicit dates before proceeding.

2.6 Quantitative Ambiguity

Description: Vague quantities that need precision.

Examples:

  • "Some users reported issues" (2? 20? 200?)
  • "High performance" (latency < 100ms? < 10ms?)
  • "Many requests" (100/s? 10000/s?)
  • "Few errors" (< 1%? < 0.1%?)

Detection Keywords:

  • "some", "many", "few", "several"
  • "high", "low" with metrics
  • "significant", "substantial"
  • "large", "small"
  • Relative comparisons without baseline

Resolution Protocol:

## Ambiguity Detected: Quantitative

**Phrase:** "[vague quantity phrase]"
**Metric Type:** [Count, Percentage, Rate, Threshold]

**Interpretation Options:**
| Label | Value Range | Basis |
|-------|-------------|-------|
| Conservative | [Range] | [Standard or precedent] |
| Moderate | [Range] | [Industry norm] |
| Aggressive | [Range] | [Upper bound] |

**Selected Interpretation:** [Conservative/Moderate/Aggressive]
**Specific Value Used:** [Exact value or range]
**Reasoning:** [Why this was chosen]

**Recommendation:**
"For precision, specify the exact threshold: [specific question]"

2.7 Conditional Ambiguity

Description: Conditions or triggers for actions are unclear.

Examples:

  • "If applicable" (what makes it applicable?)
  • "When needed" (what triggers the need?)
  • "Where appropriate" (what defines appropriate?)
  • "Optionally" (who decides? what criteria?)

Detection Keywords:

  • "if applicable/appropriate/necessary"
  • "when needed/required"
  • "optionally"
  • "depending on"
  • "where relevant"

Resolution Protocol:

## Ambiguity Detected: Conditional

**Condition:** "[conditional phrase]"

**Condition Analysis:**
- **Trigger Event:** [What should cause this condition?]
- **Evaluation Criteria:** [How to determine if condition is met?]
- **Decision Maker:** [Who evaluates? Human/System/Agent?]

**Resolved Condition:**
IF [specific condition] THEN [action]
ELSE [alternative action or skip]

**Examples of Condition Being TRUE:**
- [Scenario 1]
- [Scenario 2]

**Examples of Condition Being FALSE:**
- [Scenario 1]
- [Scenario 2]

2.8 Priority Ambiguity

Description: Relative importance is unstated.

Examples:

  • "Important features" (all equally important?)
  • "Key requirements" (ordered or unordered?)
  • "Nice to have" (zero priority or lowest?)
  • "Critical issues" (relative to what?)

Detection Keywords:

  • "important", "key", "critical"
  • "nice to have", "optional"
  • "must", "should", "could"
  • Lists without explicit ordering

Resolution Protocol:

## Ambiguity Detected: Priority

**Items:** [List of items with unclear priority]

**Priority Classification Request:**

Using MoSCoW or similar:
- **Must Have:** [Items that are required]
- **Should Have:** [Items that are important but not required]
- **Could Have:** [Items that are desirable]
- **Won't Have:** [Items explicitly excluded this time]

**If Cannot Wait for Clarification:**
**Assumed Priority Order:**
1. [Item] - Reason: [Why first]
2. [Item] - Reason: [Why second]
...

**Risk of Wrong Ordering:** [Consequences]

3. Detection Protocol

3.1 Input Analysis Checklist

Before processing any input, scan for:

  • Undefined technical terms
  • Unqualified pronouns or references
  • Scope words without boundaries (all, every, the)
  • Temporal words without dates (recent, soon, legacy)
  • Vague quantities (some, many, few, high, low)
  • Conditional phrases (if applicable, when needed)
  • Priority indicators without ordering (important, key)
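
The checklist above can be automated as a first-pass keyword scan. The sketch below is a hypothetical starting point, assuming a flat keyword-per-category mapping; the word lists are illustrative and deliberately incomplete, and a production scanner would need context-sensitive handling (e.g. "the [noun]" detection):

```python
import re

# Keyword signals drawn from the Section 3.1 checklist.
# Lists are illustrative, not exhaustive -- extend per project vocabulary.
SIGNALS = {
    "scope": {"all", "every", "complete"},
    "temporal": {"recent", "recently", "soon", "legacy", "historical"},
    "quantitative": {"some", "many", "few", "several", "high", "low"},
    "conditional": {"applicable", "needed", "optionally"},
    "priority": {"important", "key", "critical"},
}

def scan_input(text: str) -> dict[str, list[str]]:
    """Return {category: [matched words]} for a raw input string."""
    words = re.findall(r"[a-z]+", text.lower())
    hits: dict[str, list[str]] = {}
    for category, keywords in SIGNALS.items():
        matched = [w for w in words if w in keywords]
        if matched:
            hits[category] = matched
    return hits

print(scan_input("Recently some users reported the system is slow"))
# {'temporal': ['recently'], 'quantitative': ['some']}
```

A keyword hit is a trigger for review, not a verdict: each flagged phrase still needs the per-category resolution protocol from Section 2.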

3.2 Ambiguity Severity Levels

| Severity | Impact | Action |
|----------|--------|--------|
| BLOCKING | Cannot proceed safely | Request clarification before continuing |
| HIGH | Could significantly affect outcome | Document assumption, highlight for review |
| MEDIUM | Affects quality but not correctness | Document assumption, proceed with caveat |
| LOW | Minor impact on output | Note in documentation, proceed |
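
The severity ladder maps directly to a gate check: only BLOCKING halts processing. A minimal sketch (the enum and function names are illustrative, not part of the standard):

```python
from enum import IntEnum

class Severity(IntEnum):
    """Section 3.2 severity levels, ordered by impact."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    BLOCKING = 4

ACTIONS = {
    Severity.BLOCKING: "Request clarification before continuing",
    Severity.HIGH: "Document assumption, highlight for review",
    Severity.MEDIUM: "Document assumption, proceed with caveat",
    Severity.LOW: "Note in documentation, proceed",
}

def can_proceed(severities: list[Severity]) -> bool:
    """Per Section 3.2, only BLOCKING ambiguities halt processing."""
    return Severity.BLOCKING not in severities

print(can_proceed([Severity.LOW, Severity.HIGH]))   # True
print(can_proceed([Severity.BLOCKING, Severity.LOW]))  # False
```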

3.3 Severity Classification

## Ambiguity Assessment

**Input:** "[The ambiguous input]"
**Ambiguity Count:** [X] ambiguities detected

### Ambiguity Inventory

| # | Category | Term/Phrase | Severity | Resolution |
|---|----------|-------------|----------|------------|
| 1 | Definitional | "performance" | HIGH | Define as response time |
| 2 | Scope | "the system" | MEDIUM | Assume auth service |
| 3 | Temporal | "recently" | LOW | Assume last 7 days |

### Overall Assessment

**Blocking Ambiguities:** [None / List]
**Action Required:** [Proceed / Clarify / Escalate]

4. Resolution Strategies

4.1 Resolution Decision Tree

┌─────────────────────────┐
│ Ambiguity Detected      │
└────────────┬────────────┘
             │
             ▼
┌─────────────────────────┐
│ Is clarification        │──Yes──▶ Request Clarification
│ available/practical?    │
└────────────┬────────────┘
             │ No
             ▼
┌─────────────────────────┐
│ Is context sufficient   │──Yes──▶ Resolve from Context
│ to resolve?             │         (Document reasoning)
└────────────┬────────────┘
             │ No
             ▼
┌─────────────────────────┐
│ Is there a domain       │──Yes──▶ Apply Default
│ standard/default?       │         (Document source)
└────────────┬────────────┘
             │ No
             ▼
┌─────────────────────────┐
│ Is conservative         │──Yes──▶ Apply Conservative
│ interpretation safe?    │         (Document risk)
└────────────┬────────────┘
             │ No
             ▼
┌─────────────────────────┐
│ BLOCKING: Escalate      │
│ Cannot proceed safely   │
└─────────────────────────┘
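
The decision tree is a strict top-to-bottom fall-through and can be sketched as a single function. The boolean flag names below are hypothetical, chosen only to mirror the four questions in the tree:

```python
def resolve(ambiguity: dict) -> str:
    """Walk the Section 4.1 decision tree top to bottom.

    `ambiguity` is a hypothetical dict of boolean flags; keys are
    illustrative, not defined by the standard.
    """
    if ambiguity.get("clarification_practical"):
        return "request_clarification"
    if ambiguity.get("context_sufficient"):
        return "resolve_from_context"   # document reasoning
    if ambiguity.get("domain_default_exists"):
        return "apply_default"          # document source
    if ambiguity.get("conservative_safe"):
        return "apply_conservative"     # document risk
    return "escalate"                   # BLOCKING: cannot proceed safely

print(resolve({"context_sufficient": True}))  # resolve_from_context
print(resolve({}))                            # escalate
```

Note the ordering matters: clarification is always preferred when practical, and escalation is the default only after every other strategy has been ruled out.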

4.2 Request Clarification

When to Use:

  • BLOCKING severity ambiguity
  • Multiple equally valid interpretations
  • High-stakes decision dependent on resolution

Format:

## Clarification Required

**Context:** [What we're trying to accomplish]

**Ambiguity:** [The specific unclear element]

**Options:**
1. **[Option A]:** [Description]
- Implications: [What would happen]
2. **[Option B]:** [Description]
- Implications: [What would happen]

**Question:** Which interpretation should we use?

**Default (if no response):** [Option that will be used]
**Reasoning for default:** [Why this is safest]

4.3 Resolve from Context

When to Use:

  • Conversation history provides clarity
  • Document context makes meaning clear
  • Prior examples establish pattern

Format:

## Ambiguity Resolution: Context-Based

**Ambiguous Element:** "[phrase]"
**Selected Interpretation:** [interpretation]

**Context Evidence:**
- [Quote or reference from context supporting this interpretation]
- [Additional supporting context]

**Confidence:** [X%] that this interpretation is correct
**Fallback:** If incorrect, impact is [description]

4.4 Apply Domain Default

When to Use:

  • Industry standard interpretation exists
  • Documented best practice defines term
  • Technical specification provides definition

Format:

## Ambiguity Resolution: Domain Default

**Ambiguous Element:** "[phrase]"
**Selected Interpretation:** [interpretation]

**Source of Default:**
- **Standard/Specification:** [Reference]
- **Definition:** [Exact definition from source]

**Applicability Check:**
- Does this context match where default applies? [Yes/No + reasoning]

4.5 Conservative Interpretation

When to Use:

  • No clarification available
  • No clear default
  • Need to proceed but must minimize risk

Format:

## Ambiguity Resolution: Conservative Approach

**Ambiguous Element:** "[phrase]"
**Selected Interpretation:** [interpretation]

**Why Conservative:**
- This interpretation minimizes: [risk/scope/impact]
- Wrong interpretation would cause: [consequence]
- Conservative choice limits damage if wrong

**Trade-offs:**
- We may be overly cautious if: [condition]
- This may miss opportunity to: [benefit]

**Explicit Assumption:** [What we are assuming]
**Review Trigger:** [When this should be revisited]

5. Documentation Requirements

5.1 Ambiguity Documentation Template

Every resolved ambiguity must be documented:

## Ambiguity Log

### Ambiguity #[N]

**Detected In:** [Input/Processing/Output]
**Category:** [Definitional/Referential/Scope/etc.]
**Severity:** [BLOCKING/HIGH/MEDIUM/LOW]

**Original Text:** "[The ambiguous text]"
**Possible Interpretations:**
1. [Interpretation A]
2. [Interpretation B]

**Resolution:**
- **Method:** [Clarification/Context/Default/Conservative]
- **Selected:** [Interpretation A or B]
- **Reasoning:** [Why this interpretation]
- **Confidence:** [X%]

**Assumptions Made:**
- [Explicit assumption 1]
- [Explicit assumption 2]

**If Wrong:**
- Impact: [What goes wrong]
- Recovery: [How to correct]
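
Tooling that emits this log could model each entry as a record type. A minimal sketch, assuming the field names below (they mirror the template but are not mandated by it):

```python
from dataclasses import dataclass, field

@dataclass
class AmbiguityLogEntry:
    """One Section 5.1 ambiguity log record; field names are illustrative."""
    detected_in: str              # Input / Processing / Output
    category: str                 # Definitional, Referential, Scope, ...
    severity: str                 # BLOCKING / HIGH / MEDIUM / LOW
    original_text: str
    interpretations: list[str]
    resolution_method: str        # Clarification / Context / Default / Conservative
    selected: str
    reasoning: str
    confidence_pct: int
    assumptions: list[str] = field(default_factory=list)

entry = AmbiguityLogEntry(
    detected_in="Input",
    category="Definitional",
    severity="HIGH",
    original_text="performance",
    interpretations=["response time", "throughput"],
    resolution_method="Context",
    selected="response time",
    reasoning="Prior discussion mentioned login latency",
    confidence_pct=85,
)
print(entry.selected)  # response time
```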

5.2 Aggregate Ambiguity Summary

For outputs with multiple ambiguities:

## Ambiguity Summary

**Total Ambiguities:** [X]
**By Severity:**
- BLOCKING: [N] (all resolved before proceeding)
- HIGH: [N]
- MEDIUM: [N]
- LOW: [N]

**Resolution Methods Used:**
- Clarification: [N]
- Context: [N]
- Default: [N]
- Conservative: [N]

**Key Assumptions:**
1. [Most significant assumption]
2. [Second most significant]
3. [Third most significant]

**Review Recommended If:**
- [Condition that would invalidate assumptions]
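
The severity and method counts in the summary are straight aggregations over the log. A sketch, assuming each log entry carries `severity` and `method` keys (a hypothetical shape, not mandated by the standard):

```python
from collections import Counter

def summarize(entries: list[dict]) -> dict:
    """Aggregate counts for the Section 5.2 ambiguity summary."""
    return {
        "total": len(entries),
        "by_severity": dict(Counter(e["severity"] for e in entries)),
        "by_method": dict(Counter(e["method"] for e in entries)),
    }

log = [
    {"severity": "HIGH", "method": "Clarification"},
    {"severity": "MEDIUM", "method": "Default"},
    {"severity": "HIGH", "method": "Context"},
]
print(summarize(log))
# {'total': 3, 'by_severity': {'HIGH': 2, 'MEDIUM': 1},
#  'by_method': {'Clarification': 1, 'Default': 1, 'Context': 1}}
```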

6. Prohibited Patterns

6.1 Never Do This

| Anti-Pattern | Example | Why Wrong |
|--------------|---------|-----------|
| Ignore ambiguity | Proceed without noting | Hidden assumptions |
| Silent assumptions | Choose interpretation without documenting | No audit trail |
| False precision | "Some users" → "15 users" (guess) | Fabricated specificity |
| Overconfident resolution | "Clearly means..." when not clear | Hides uncertainty |
| Skip clarification | Assume when could easily ask | Missed opportunity |

6.2 Required Behaviors

| Situation | Required Behavior |
|-----------|-------------------|
| BLOCKING ambiguity | Must request clarification or escalate |
| Multiple valid interpretations | Must document all, explain selection |
| Assumption made | Must explicitly state the assumption |
| Could affect outcome | Must flag for review |

7. Integration with Other Standards

7.1 Connection to Trust & Transparency

When ambiguity is present in factual claims:

  1. Apply this standard to identify the ambiguity
  2. Apply STD-FACTUAL-001 for evidence requirements
  3. Document both the ambiguity AND the evidence basis

7.2 Connection to Logical Inference

When ambiguity forces inference:

  1. Apply this standard to document the ambiguity
  2. Apply STD-LOGICAL-001 for inference chain
  3. Clearly mark which inferences stem from ambiguity

7.3 Workflow Integration

Input → [Ambiguity Detection] → [Resolution] → Processing → Output
                │                    │
                └───── Document ─────┘

8. Quality Grading

8.1 Compliance Scoring

| Grade | Score | Criteria |
|-------|-------|----------|
| A | 95-100% | All ambiguities identified, properly resolved, fully documented |
| B | 85-94% | Most ambiguities caught, good documentation, minor gaps |
| C | 70-84% | Key ambiguities caught, some undocumented assumptions |
| D | 60-69% | Many ambiguities missed, poor documentation |
| F | <60% | Hidden assumptions, silent interpretations |

8.2 Minimum Requirements

  • Grade B (85%) minimum for production outputs
  • All BLOCKING ambiguities must be resolved
  • All HIGH ambiguities must be documented
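
Together, the grading table and the minimum requirements define a production gate. A minimal sketch (function names and the parameter shape are illustrative):

```python
def grade(score_pct: float) -> str:
    """Map a compliance score to the Section 8.1 letter grade."""
    if score_pct >= 95:
        return "A"
    if score_pct >= 85:
        return "B"
    if score_pct >= 70:
        return "C"
    if score_pct >= 60:
        return "D"
    return "F"

def production_ready(score_pct: float, blocking_open: int,
                     high_undocumented: int) -> bool:
    """Section 8.2 gate: Grade B minimum, no unresolved BLOCKING
    ambiguities, and every HIGH ambiguity documented."""
    return (grade(score_pct) in ("A", "B")
            and blocking_open == 0
            and high_undocumented == 0)

print(grade(92.5))                                        # B
print(production_ready(92.5, blocking_open=1, high_undocumented=0))  # False
```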

9. Examples

9.1 Grade A Handling (Exemplary)

Input: "Optimize the authentication system for better performance"

## Ambiguity Analysis

**Ambiguity Count:** 3

### #1: Definitional - "authentication system"
**Severity:** HIGH
**Interpretations:**
1. Login flow only
2. Full AuthN/AuthZ including token refresh
3. Entire identity platform including SSO

**Resolution:** Context (prior discussion mentioned login latency)
**Selected:** Login flow only
**Confidence:** 85%

### #2: Definitional - "performance"
**Severity:** HIGH
**Interpretations:**
1. Response time (latency)
2. Throughput (requests/second)
3. Resource utilization (CPU/memory)

**Resolution:** Request clarification
**Question Sent:** "Should we optimize for response time, throughput, or resource usage?"
**Response:** "Response time for user-facing login"
**Selected:** Response time for login

### #3: Quantitative - "better"
**Severity:** MEDIUM
**Interpretations:**
1. Any measurable improvement
2. 10-20% improvement
3. Target specific SLA (e.g., p99 < 200ms)

**Resolution:** Applied domain default (industry practice)
**Selected:** 20% improvement or meet SLA, whichever is more aggressive
**Source:** Internal SLA documentation (p99 < 300ms current, target < 200ms)

9.2 Grade F Handling (Unacceptable)

Input: "Optimize the authentication system for better performance"

Response: "I'll optimize the authentication system. Here are the changes..."

Violations:

  • No ambiguity detection
  • No scope clarification
  • No performance metric defined
  • Silent assumptions about what "authentication system" means
  • No documentation of interpretation choices

10. Research Foundation

This standard is grounded in peer-reviewed research:

| Research | Venue | Contribution |
|----------|-------|--------------|
| Uncertainty of Thoughts | ICLR 2024 | Information-seeking under ambiguity |
| G-Eval Prompt Design | EMNLP 2023 | Ambiguity detection in evaluation |
| LLM-Rubric | ACL 2024 | Prompt clarity scoring |
| Constitutional AI | Anthropic 2023 | Clarification request patterns |

Full citations: See docs/09-research-analysis/ACADEMIC-RESEARCH-REFERENCES-UQ-MOE-2024-2025.md


Document Version: 1.0.0
Last Updated: 2025-12-19
Author: CODITECT Standards Team
Enforcement: MANDATORY for all CODITECT processing
Review Date: 2026-03-19