CODITECT STANDARDS
Product Requirements Document Template & Framework
For AI-Driven Autonomous Development in Regulated Industries
Version: 1.1
Classification: Template
Last Updated: [DATE]
Table of Contents
- 1. Document Control
- 2. Intent Understanding Framework
- 3. Disambiguation Framework
- 4. Requirements Structure
- 5. Compliance Framework
- 6. AI Agent Interpretation Guide
  - 6.1 Decision Authority Matrix
  - 6.2 Interpretation Rules
  - 6.3 Planning Verification Protocol ⭐ NEW
  - 6.4 Glossary Precedence
- 7. Glossary
- 8. Appendices
1. Document Control
Purpose of This Section
Document Control establishes traceability, accountability, and version management critical for regulated industry compliance (FDA 21 CFR Part 11, HIPAA, SOC2). For AI agentic systems, this section provides metadata that enables agents to understand document authority, currency, and stakeholder context.
Document Information
| Field | Value |
|---|---|
| Document Title | [PROJECT NAME] - Product Requirements Document |
| Document ID | PRD-[PROJECT-CODE]-[VERSION] |
| Status | Draft / In Review / Approved / Deprecated |
| Classification | Internal / Confidential / Regulated |
| Compliance Scope | FDA 21 CFR Part 11 / HIPAA / SOC2 / None |
| Created Date | [YYYY-MM-DD] |
| Last Updated | [YYYY-MM-DD] |
| Author | [Name, Role] |
| Approver | [Name, Role] |
Version History
| Version | Date | Author | Change Summary |
|---|---|---|---|
| 0.1 | [Date] | [Name] | Initial draft |
| 0.2 | [Date] | [Name] | [Description of changes] |
| 1.0 | [Date] | [Name] | Approved for development |
🤖 WHY THIS MATTERS FOR AI AGENTS
AI agents must reference version history to understand requirement evolution. When agents encounter conflicting information, version timestamps resolve precedence. Classification level informs what information can be shared or processed. Compliance scope triggers specific validation requirements in generated artifacts.
❌ NOT Needed in This Section
- Detailed change logs at line level (use version control systems)
- Meeting minutes or discussion threads (maintain separately)
- Approval signatures for drafts (only required for approved versions)
- Distribution lists (managed externally)
2. Intent Understanding Framework
⚠️ CRITICAL SECTION
This section is the cornerstone of AI agent comprehension. Without clear intent documentation, AI agents cannot distinguish between what the user SAID versus what they MEANT. The Intent Understanding Framework provides the cognitive scaffolding necessary for autonomous systems to make decisions aligned with stakeholder goals.
2.1 The Intent Hierarchy
User requirements exist at multiple levels of abstraction. AI agents must understand all levels to make appropriate decisions during autonomous development.
| Level | Definition | Example | AI Agent Use |
|---|---|---|---|
| Strategic Intent | The ultimate business outcome desired | Reduce medication errors in hospital settings | Guides architectural decisions and trade-off resolution |
| Tactical Intent | The approach chosen to achieve strategic goals | Build an AI-powered drug interaction checker | Informs technology selection and integration patterns |
| Operational Intent | Specific capabilities required | Alert clinicians when dangerous combinations are prescribed | Drives feature implementation and acceptance criteria |
| Functional Intent | Detailed behaviors and interactions | Display red warning banner with severity score | Determines UI/UX implementation specifics |
2.2 Why Intent Understanding is Critical
The Interpretation Gap
Every requirement statement contains implicit assumptions. When a stakeholder says "the system should be fast," they have a mental model of acceptable performance. Without explicit intent documentation, AI agents must guess:
| ❌ WITHOUT Intent Documentation | ✅ WITH Intent Documentation |
|---|---|
| "Fast" could mean: | "Fast" means: |
| • Sub-second response | • P95 latency < 200ms |
| • Faster than current system | • Context: Clinical workflow |
| • Real-time streaming | • Rationale: Match clinician attention span |
| • Just "not slow" | • Constraint: Must beat 3-second current state |
| Agent GUESSES and may be wrong | Agent KNOWS and implements correctly |
The Cost of Misunderstanding
In regulated industries, misinterpreted intent creates compliance risk. An AI agent that implements a "notification" as an email when the intent was a real-time alert could:
- Delay critical clinical information (patient safety impact)
- Fail FDA audit trail requirements (regulatory violation)
- Require costly rework after compliance review (budget impact)
- Erode stakeholder trust in autonomous development (adoption risk)
2.3 Capturing Intent: The Five Ws + How
For each requirement, document:
| Element | Question | Purpose for AI Agents |
|---|---|---|
| WHO | Who benefits? Who performs the action? | Identifies personas, roles, and permission boundaries |
| WHAT | What capability is needed? | Defines the functional requirement |
| WHY | Why is this needed? What problem does it solve? | Enables agents to resolve ambiguity by returning to first principles |
| WHEN | When is this needed? Under what conditions? | Establishes temporal and conditional logic |
| WHERE | Where in the system/workflow does this occur? | Determines integration points and context boundaries |
| HOW | How should it work? What are the constraints? | Provides implementation guidance and quality attributes |
2.4 Intent Documentation Template
┌─────────────────────────────────────────────────────────────────┐
│ INTENT BLOCK │
├─────────────────────────────────────────────────────────────────┤
│ Requirement ID: [REQ-XXX] │
│ Statement: [The system shall...] │
│ │
│ Strategic Context: [How this supports business goals] │
│ User Story: As a [role], I want [capability] │
│ so that [benefit] │
│ │
│ Success Looks Like: [Observable outcome when intent satisfied] │
│ Failure Looks Like: [Observable outcome when NOT satisfied] │
│ │
│ Implicit Assumptions: [What the stakeholder takes for granted] │
│ Boundary Conditions: [Edge cases and limits] │
│ Anti-Requirements: [What this explicitly is NOT] │
└─────────────────────────────────────────────────────────────────┘
2.5 Intent Survival Verification
🚨 CRITICAL FINDING (Empirically Validated)
Research shows that 20-30% of requirements are silently dropped during AI planning phases—not because they're unclear, but because planning modes act as "filters" that normalize inputs. Intent that doesn't survive translation is intent that won't get built.
The Silent Loss Problem

Without Verification Loops:

    Intent Document → PRD → Plan → Build
         100%         80%    56%    45%     ← TYPICAL LOSS PATTERN

With Verification Loops:

    Intent Document → PRD → Plan → Build
         100%         95%    90%    85%     ← CODITECT TARGET
Mandatory Verification Pattern
After each planning phase, AI agents MUST:
- Extract all requirements from source document (PRD)
- Extract all planned items from generated plan
- Diff to produce FALLOUT LIST (PRD items not in plan)
- Score coverage percentage
- Iterate if coverage < 95%
- Escalate if coverage < 80% after 3 iterations
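The diff-and-score steps above can be sketched as follows. This is a minimal illustration, not Coditect's implementation: `FalloutItem` and the ID-based matching are assumptions, and a production agent would also need semantic matching to catch requirements that were reworded rather than dropped.

```python
from dataclasses import dataclass

@dataclass
class FalloutItem:
    """One row of the Fallout List: a PRD requirement absent from the plan."""
    req_id: str
    description: str
    severity: str  # "H" / "M" / "L" (would be derived from the PRD in practice)

def score_coverage(prd_reqs: dict[str, str],
                   planned_ids: set[str]) -> tuple[float, list[FalloutItem]]:
    """Diff PRD requirements against planned items and score coverage."""
    missing = [
        FalloutItem(req_id, desc, "H")
        for req_id, desc in prd_reqs.items()
        if req_id not in planned_ids
    ]
    coverage = (1.0 - len(missing) / len(prd_reqs)) if prd_reqs else 1.0
    return coverage, missing

def verdict(coverage: float) -> str:
    """Map coverage onto the thresholds above. Note: the ESCALATE band
    applies only after the maximum number of planning passes."""
    if coverage >= 0.95:
        return "PASS"
    return "ITERATE" if coverage >= 0.80 else "ESCALATE"
```

A pass at 75% coverage would emit a one-item fallout list and, if it were the final pass, trigger escalation rather than silent acceptance.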
Intent Survival Metrics
| Metric | Target | Action if Below |
|---|---|---|
| PRD Capture Rate | ≥95% | Revise PRD generation prompt |
| Plan Coverage Rate | ≥95% | Run verification loop |
| Build Implementation Rate | ≥90% | Review build execution |
| Total Survival Rate | ≥85% | Full pipeline audit |
Fallout List Template
┌─────────────────────────────────────────────────────────────────┐
│ FALLOUT REPORT │
├─────────────────────────────────────────────────────────────────┤
│ Planning Pass: #[N] │
│ Coverage: [X]% │
│ Status: ✅ PASS | ⚠️ ITERATE | 🚨 ESCALATE │
├─────────────────────────────────────────────────────────────────┤
│ MISSING REQUIREMENTS: │
│ │
│ | Req ID | Description | PRD Section | Severity | │
│ |--------|-------------|-------------|----------| │
│ | [ID] | [Desc] | [Section] | [H/M/L] | │
│ │
│ RECOMMENDED ACTION: [Iterate/Escalate/Accept] │
└─────────────────────────────────────────────────────────────────┘
❌ NOT Needed in Intent Documentation
- Implementation details (belongs in Technical Design Documents)
- Test cases (derived from intent but documented separately)
- Schedule or timeline information (belongs in project plans)
- Resource allocation (separate concern)
3. Disambiguation Framework
Purpose
Ambiguity is inherent in human communication. For AI agents to function autonomously, ambiguity must be systematically identified and resolved. This section provides the framework for transforming ambiguous requirements into actionable specifications.
3.1 Types of Ambiguity
| Ambiguity Type | Definition | Example | Resolution Strategy |
|---|---|---|---|
| Lexical | Words with multiple meanings | "Table" - furniture or data structure? | Define domain-specific glossary |
| Syntactic | Sentence structure allows multiple interpretations | "Users can edit documents with permissions" | Restructure using formal patterns |
| Semantic | Meaning unclear from context | "System should be user-friendly" | Define measurable criteria |
| Referential | Unclear what pronouns or references point to | "It should validate before that" | Explicit noun references |
| Pragmatic | Intent unclear despite clear words | "Consider implementing caching" | Clarify must-have vs nice-to-have |
| Omission | Missing information assumed known | "Standard security practices" | Enumerate all requirements explicitly |
3.2 The Disambiguation Process
Step 1: Ambiguity Detection
AI agents must be trained to recognize ambiguity indicators:
| Red Flag Words | Problematic Patterns | Missing Elements |
|---|---|---|
| appropriate, adequate | Passive voice constructions | No units of measure |
| easy, simple, intuitive | Missing actors/subjects | No frequency specified |
| fast, quick, responsive | Compound requirements | No error handling |
| user-friendly, modern | Negation chains | No boundary conditions |
| flexible, robust | Conditional cascades | No security context |
| etc., and so on, similar | Implied comparisons | No user role specified |
| some, several, many | Open-ended lists | No success criteria |
| usually, typically, often | Undefined acronyms | No compliance mapping |
Step 2: Disambiguation Questions
For each ambiguity detected, generate targeted questions:
┌─────────────────────────────────────────────────────────────────┐
│ DISAMBIGUATION TEMPLATE │
├─────────────────────────────────────────────────────────────────┤
│ Original Statement: │
│ "The system should respond quickly to user actions." │
│ │
│ Ambiguity Identified: │
│ "quickly" - undefined performance metric │
│ │
│ Disambiguation Questions: │
│ 1. What is the maximum acceptable response time in ms? │
│ 2. Does this apply to all actions or specific operations? │
│ 3. Under what load conditions must this be maintained? │
│ 4. What is the current baseline we are improving upon? │
│ 5. Are there regulatory requirements for response time? │
│ │
│ Disambiguated Statement: │
│ "The system SHALL respond to drug interaction checks within │
│ 200ms (P95) under normal load (100 concurrent users) and │
│ within 500ms under peak load (500 concurrent users), │
│ measured from request submission to result display, to meet │
│ clinical workflow requirements per CPOE guidelines." │
└─────────────────────────────────────────────────────────────────┘
Step 3: The Disambiguation Decision Tree
AMBIGUITY DETECTED
         │
         ▼
┌──────────────────┐
│ Is compliance-   │─── YES ──▶ ALWAYS ESCALATE TO HUMAN
│ related?         │
└────────┬─────────┘
         │ NO
         ▼
┌──────────────────┐
│ Score > 0.6?     │─── YES ──▶ ESCALATE TO HUMAN
└────────┬─────────┘
         │ NO
         ▼
┌──────────────────┐
│ Context          │─── NO ───▶ ESCALATE TO HUMAN
│ sufficient?      │
└────────┬─────────┘
         │ YES
         ▼
  RESOLVE + DOCUMENT
3.3 Ambiguity Scoring
AI agents should calculate ambiguity scores to determine escalation need:
| Score Range | Interpretation | Action |
|---|---|---|
| 0.0 - 0.2 | Clear requirement | Implement autonomously |
| 0.2 - 0.4 | Minor ambiguity | Implement with documentation of interpretation |
| 0.4 - 0.6 | Significant ambiguity | Propose interpretation for human review |
| 0.6 - 0.8 | High ambiguity | Request clarification before implementation |
| 0.8 - 1.0 | Critical ambiguity | Block implementation until resolved |
⚠️ AGENT AUTONOMY LEVELS
- Level 1 (Auto-resolve): Clear glossary match or explicit context - agent resolves independently
- Level 2 (Resolve + Flag): Reasonable inference but some uncertainty - agent resolves and flags for review
- Level 3 (Escalate): High ambiguity or compliance impact - agent MUST NOT proceed without human input
In regulated industries, default to Level 3 for any requirement with compliance implications.
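The decision tree, scoring bands, and autonomy levels combine into a single dispatch rule, sketched below under stated simplifications: the 0.4-0.6 "propose for review" band is collapsed into the resolve-and-flag action, and the return values are illustrative labels mapping onto Autonomy Levels 1-3.

```python
def escalation_decision(ambiguity_score: float,
                        compliance_related: bool,
                        context_sufficient: bool) -> str:
    """Apply the disambiguation decision tree and scoring thresholds.

    Returns "auto_resolve" (Level 1), "resolve_and_flag" (Level 2),
    or "escalate" (Level 3).
    """
    # Compliance-related ambiguity ALWAYS goes to a human.
    if compliance_related:
        return "escalate"
    # High ambiguity (score > 0.6) or insufficient context also escalates.
    if ambiguity_score > 0.6 or not context_sufficient:
        return "escalate"
    # Moderate ambiguity: resolve, but flag the interpretation for review.
    if ambiguity_score > 0.2:
        return "resolve_and_flag"
    # Clear requirement: resolve autonomously.
    return "auto_resolve"
```

Note that the compliance check runs first: even a score of 0.1 escalates when the requirement touches a regulated control.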
3.4 Resolution Documentation
Every disambiguation creates an audit trail:
| Field | Purpose | Example |
|---|---|---|
| Ambiguity ID | Unique identifier for tracking | AMB-PRD-001 |
| Original Text | Verbatim requirement as received | "System should be fast" |
| Ambiguity Type | Classification from taxonomy | Semantic - undefined metric |
| Stakeholder Consulted | Who provided clarification | Dr. Smith, Clinical Informatics |
| Date Resolved | Timestamp for audit trail | 2025-01-15 |
| Resolution | Clear, testable statement | "Response time < 200ms P95" |
| Rationale | Why this resolution was chosen | Clinical workflow studies show >300ms disrupts clinician flow |
| Compliance Impact | Regulatory implications | FDA UDI response time guidelines |
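For agents that persist the audit trail programmatically, the table above maps naturally onto a record type. The sketch below is one possible shape, not a mandated schema; field names simply mirror the table.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: audit records must not be mutated
class DisambiguationRecord:
    """One entry in the disambiguation audit trail."""
    ambiguity_id: str        # e.g. "AMB-PRD-001"
    original_text: str       # verbatim requirement as received
    ambiguity_type: str      # classification from the taxonomy
    stakeholder: str         # who provided the clarification
    date_resolved: date      # timestamp for the audit trail
    resolution: str          # clear, testable statement
    rationale: str           # why this resolution was chosen
    compliance_impact: str = "None"  # regulatory implications, if any
```

Freezing the dataclass makes accidental post-hoc edits raise an error, which suits the audit-trail use case.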
❌ NOT Part of Disambiguation
- Arguing with stakeholders about what they meant (document their clarification)
- Making assumptions without documentation (all inferences must be recorded)
- Changing stakeholder intent to match easier implementation (disambiguation clarifies, does not alter intent)
4. Requirements Structure
Purpose
A well-structured requirements framework enables AI agents to understand scope, priority, and dependencies. Structure provides the semantic scaffolding necessary for autonomous decision-making during development.
4.1 Executive Summary
What This Section Is
A concise overview (250-500 words) that captures the essence of the product for any reader. This section should stand alone and provide enough context for stakeholders to understand the product vision without reading the entire document.
Why It Matters for AI Agents
The Executive Summary serves as the "north star" for agent decision-making. When agents face ambiguous choices, they reference this section to align decisions with overall product direction.
Template
[PRODUCT NAME] is a [product category] designed for [target users] in the [industry/domain] sector. The primary problem it solves is [problem statement]. Key differentiators include [2-3 unique value propositions]. Success will be measured by [key metrics]. The product must comply with [regulatory requirements]. Initial release is targeted for [timeline] with [scope description].
❌ NOT Needed in Executive Summary
- Technical architecture details
- Feature-level specifications
- Implementation approaches
- Team structure or resource plans
- Detailed timelines or Gantt charts
4.2 Problem Statement
What This Section Is
A clear articulation of the problem or opportunity that justifies the product. This section answers "Why does this product need to exist?" and "What happens if we do nothing?"
Template
┌─────────────────────────────────────────────────────────────────┐
│ PROBLEM STATEMENT │
├─────────────────────────────────────────────────────────────────┤
│ Current State: [Describe how things work today] │
│ Pain Points: [Specific problems, quantified] │
│ Impact: [Business/user consequences] │
│ Root Cause: [Why the problem exists] │
│ Desired Future State: [What success looks like] │
│ Constraints: [Boundaries that must be respected] │
└─────────────────────────────────────────────────────────────────┘
Why It Matters for AI Agents
The Problem Statement provides the "Why" that grounds all implementation decisions. When agents must choose between approaches, they evaluate options against how well they address the stated problem.
4.3 Functional Requirements
What This Section Is
Detailed descriptions of what the system must do. Functional requirements describe capabilities, features, and behaviors from the user perspective.
Requirement Format
| Element | Description | Example |
|---|---|---|
| ID | Unique identifier with hierarchy | FR-AUTH-001 |
| Title | Brief descriptive name | User Authentication |
| Priority | MoSCoW classification | MUST |
| Description | Clear statement using SHALL/SHOULD/MAY | The system SHALL authenticate users via SAML 2.0 SSO |
| Rationale | Why this requirement exists | Enterprise security policy requires centralized identity |
| Acceptance Criteria | Testable conditions for success | AC1: Valid SAML assertion grants access; AC2: Invalid assertion shows error |
| Dependencies | Related requirements | FR-SEC-001, NFR-PERF-003 |
| Compliance Mapping | Regulatory requirements addressed | HIPAA 164.312(d) |
Priority Definitions (MoSCoW)
| Priority | Definition | Agent Interpretation |
|---|---|---|
| MUST | Critical for release; non-negotiable | Implement first; block release if incomplete |
| SHOULD | Important but workarounds exist | Implement if time permits; can defer with documented trade-off |
| COULD | Desirable enhancement | Implement only after MUST/SHOULD complete |
| WON'T | Explicitly out of scope for this release | Do NOT implement; document as future consideration |
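The agent interpretation column above implies an ordering rule, sketched here as a hypothetical helper: MUST items sort first, WON'T items are excluded from the work queue entirely.

```python
from enum import IntEnum

class Priority(IntEnum):
    """MoSCoW priorities; lower value sorts earlier."""
    MUST = 0
    SHOULD = 1
    COULD = 2
    WONT = 3   # "WON'T" - out of scope for this release

def implementation_order(requirements: list[tuple[str, Priority]]) -> list[str]:
    """Return requirement IDs in implementation order.

    MUST before SHOULD before COULD; WON'T items are dropped
    (documented elsewhere as future considerations, never built).
    """
    in_scope = [(rid, p) for rid, p in requirements if p is not Priority.WONT]
    return [rid for rid, _ in sorted(in_scope, key=lambda rp: rp[1])]
```

Because `sorted` is stable, ties within a priority band keep their PRD order, which preserves any intra-band sequencing the author intended.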
❌ NOT Needed in Functional Requirements
- How to implement the feature (belongs in TDD)
- UI mockups or wireframes (reference design documents)
- Database schemas (technical specification)
- API contracts (interface documents)
- Test scripts (QA documentation)
4.4 Non-Functional Requirements
What This Section Is
Quality attributes that constrain how the system operates. Non-functional requirements (NFRs) define performance, security, scalability, and other "-ility" characteristics.
NFR Categories
| Category | Description | Typical Metrics |
|---|---|---|
| Performance | Speed and responsiveness | Latency (ms), throughput (req/sec), page load time |
| Scalability | Handling growth | Max concurrent users, data volume limits |
| Availability | Uptime and reliability | SLA percentage, MTTR, MTBF |
| Security | Protection requirements | Encryption standards, auth protocols, audit logging |
| Compliance | Regulatory requirements | FDA 21 CFR Part 11, HIPAA, SOC2 controls |
| Maintainability | Ease of change | Code coverage, documentation requirements |
| Usability | User experience | Task completion time, error rate, satisfaction score |
| Accessibility | Inclusive design | WCAG level, assistive tech compatibility |
🚨 CRITICAL FOR REGULATED INDUSTRIES
In FDA-regulated environments, NFRs around audit trails, electronic signatures, and data integrity are not optional. AI agents must treat compliance NFRs as MUST priorities regardless of business priority. Failure to implement compliance requirements correctly can result in regulatory action.
4.5 User Stories and Personas
What This Section Is
Narrative descriptions of user needs and goals. Personas provide archetypal users; user stories describe specific interactions in their voice.
Persona Template
┌─────────────────────────────────────────────────────────────────┐
│ PERSONA TEMPLATE │
├─────────────────────────────────────────────────────────────────┤
│ Name: [Persona name - memorable, archetypal] │
│ Role: [Job title and responsibilities] │
│ Demographics: [Experience level, tech comfort, etc.] │
│ Goals: [What they want to achieve] │
│ Pain Points: [Current frustrations] │
│ Context of Use: [When, where, how they interact] │
│ Success Criteria: [How they measure value] │
└─────────────────────────────────────────────────────────────────┘
User Story Format
As a [persona], I want [capability], so that [benefit].
Why It Matters for AI Agents
Personas enable agents to evaluate decisions from the user perspective. When facing implementation choices, agents can simulate: "Would Dr. Sarah (busy ER physician) be able to use this effectively during a crisis?"
4.6 Constraints and Assumptions
What This Section Is
Documented limitations and underlying assumptions that bound the solution space. These prevent scope creep and surface hidden expectations.
| Type | Definition | Example |
|---|---|---|
| Technical Constraint | Technology limitations imposed | Must integrate with existing Oracle database |
| Business Constraint | Organizational limitations | Budget capped at $500K |
| Regulatory Constraint | Compliance mandates | Must complete FDA 510(k) clearance |
| Timeline Constraint | Schedule requirements | MVP by Q2 to meet HIMSS deadline |
| Assumption | Believed truths not yet validated | Users have reliable internet access |
⚠️ ASSUMPTION RISK
Every assumption carries risk. AI agents should flag assumptions that, if wrong, would invalidate requirements. High-risk assumptions need validation plans.
4.7 Dependencies and Integrations
What This Section Is
External systems, services, and data sources that the product relies upon or must interface with.
| Dependency Type | Description | Risk Consideration |
|---|---|---|
| Upstream | Systems that feed data to this product | Availability, data quality, schema changes |
| Downstream | Systems that consume this product's data | Contract compatibility, failure propagation |
| Peer | Systems that operate alongside | Resource contention, coordination |
| Infrastructure | Platform dependencies | Version compatibility, deprecation |
| Vendor | Third-party services | SLA, continuity, security |
5. Compliance Framework
🚨 REGULATORY CONTEXT
Coditect operates in regulated industries including healthcare, fintech, and life sciences. Every requirement must be evaluated against applicable regulatory frameworks. AI agents MUST NOT make autonomous decisions that could compromise compliance.
5.1 FDA 21 CFR Part 11
What This Covers
FDA regulations governing electronic records and electronic signatures. Applicable when building software for clinical trials, medical devices, or pharmaceutical manufacturing.
Key Requirements for AI Agents
| Requirement | Description | Agent Implementation Guidance |
|---|---|---|
| Audit Trail | System must maintain secure, computer-generated, time-stamped audit trails | Every data modification must log: who, what, when, why |
| Electronic Signatures | Signatures must be linked to records and non-repudiable | Implement cryptographic binding of signature to content |
| Access Controls | System access limited to authorized individuals | Role-based access with principle of least privilege |
| Record Integrity | Records must be accurate, complete, and protected | Implement checksums, prevent unauthorized modification |
| Validation | Systems must be validated for intended use | Maintain validation documentation, change control |
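The who/what/when/why audit-trail guidance above can be sketched as a log-entry builder. This is illustrative only: hash-chaining entries to detect tampering is a common integrity technique, not something Part 11 literally prescribes, and the field names here are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(who: str, what: str, why: str,
                previous_hash: str = "") -> dict:
    """Build one secure, time-stamped audit-trail entry."""
    entry = {
        "who": who,                                       # authenticated user ID
        "what": what,                                     # modification performed
        "when": datetime.now(timezone.utc).isoformat(),   # UTC timestamp
        "why": why,                                       # reason for change
        "prev": previous_hash,                            # link to prior entry
    }
    # Hash the canonical JSON form so any later edit is detectable.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry
```

Chaining each entry to its predecessor's hash means an auditor can verify that no intermediate record was altered or deleted.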
5.2 HIPAA Security Rule
What This Covers
Regulations governing Protected Health Information (PHI) in healthcare settings. Applicable when building software that creates, receives, maintains, or transmits PHI.
Safeguard Categories
| Safeguard | Requirements | Agent Implementation Guidance |
|---|---|---|
| Administrative | Policies, procedures, risk analysis | Document security policies in requirements |
| Physical | Facility access, workstation security | Infrastructure requirements for deployment |
| Technical | Access control, encryption, integrity | Specific NFRs for authentication, encryption |
5.3 SOC 2 Controls
What This Covers
Trust service criteria for service organizations. Applicable when building SaaS products that handle customer data.
Trust Service Criteria
| Criteria | Focus Area | Typical Requirements |
|---|---|---|
| Security | Protection against unauthorized access | Firewalls, encryption, access controls |
| Availability | System uptime and accessibility | SLAs, redundancy, disaster recovery |
| Processing Integrity | Complete, valid, accurate processing | Input validation, error handling, reconciliation |
| Confidentiality | Protection of confidential info | Data classification, encryption at rest |
| Privacy | Collection and use of personal info | Consent, data minimization, retention limits |
5.4 Compliance Mapping Template
Every requirement with compliance implications must include:
┌─────────────────────────────────────────────────────────────────┐
│ COMPLIANCE MAPPING TEMPLATE │
├─────────────────────────────────────────────────────────────────┤
│ Requirement ID: [ID] │
│ Applicable Regulations: [FDA Part 11 | HIPAA | SOC2 | Other] │
│ Specific Controls: [Control references, e.g., HIPAA │
│ 164.312(a)(1)] │
│ Evidence Required: [What must be documented for audit] │
│ Validation Approach: [How compliance will be verified] │
│ Risk if Non-Compliant: [Regulatory consequence] │
└─────────────────────────────────────────────────────────────────┘
6. AI Agent Interpretation Guide
Purpose
This section provides explicit guidance for AI agentic systems interpreting this PRD. It establishes decision boundaries, escalation triggers, and interpretation rules that enable autonomous operation while maintaining safety and quality.
6.1 Decision Authority Matrix
| Decision Type | Agent Authority | Escalation Trigger | Example |
|---|---|---|---|
| Implementation Detail | FULL - Agent decides autonomously | None unless conflicts with requirements | Choice of algorithm for sorting |
| Minor Requirement Interpretation | CONDITIONAL - Agent decides with documentation | Ambiguity score > 0.3 | Exact error message wording |
| Feature Trade-off | LIMITED - Agent proposes, human approves | Always escalate | Reducing scope to meet deadline |
| Compliance Decision | NONE - Human decision required | Always escalate | How to implement audit trail |
| Architecture Change | NONE - Human decision required | Always escalate | Changing database technology |
6.2 Interpretation Rules
Rule 1: Explicit Over Implicit
When requirements conflict between explicit statements and implicit assumptions, explicit statements always win. Document the conflict and resolution.
Rule 2: Safety Over Features
When a feature implementation could compromise security, privacy, or compliance, choose the safer implementation even if it reduces functionality. Escalate for human review.
Rule 3: User Intent Over Literal Text
When literal interpretation of a requirement produces an outcome that clearly contradicts user intent (as documented), flag the discrepancy and propose resolution aligned with intent.
Rule 4: Compliance is Non-Negotiable
Never implement shortcuts that could compromise regulatory compliance. When in doubt, escalate. Regulatory consequences can include fines, market removal, and criminal liability.
Rule 5: Document Everything
Every interpretation, assumption, and decision must be documented. Future agents and humans will need to understand why decisions were made.
6.3 Planning Verification Protocol
⚠️ EMPIRICALLY VALIDATED REQUIREMENT
AI planning modes drop 20-30% of requirements silently. This is not a bug—it's how planning filters work. Coditect agents MUST implement verification loops.
Three-Pass Verification Pattern
PASS 1: Initial Planning
├── Generate plan from PRD
├── Extract planned items
├── Compare against PRD requirements
├── Generate Fallout List #1
└── Coverage typically: 70-80%
PASS 2: Recovery Planning
├── Append Fallout List #1 to context
├── Regenerate plan sections
├── Compare against PRD requirements
├── Generate Fallout List #2
└── Coverage typically: 85-92%
PASS 3: Final Verification
├── Append Fallout List #2 to context
├── Final plan refinement
├── Compare against PRD requirements
├── Generate Final Fallout Report
└── Coverage target: ≥95%
Agent Implementation Requirement
```python
async def verified_planning(prd: PRDDocument, max_passes: int = 3) -> VerifiedPlan:
    """
    Planning with mandatory verification loops.

    CRITICAL: Do NOT skip verification. Silent losses are
    unacceptable in regulated industries.
    """
    coverage = 0.0
    fallout_lists = []

    for pass_num in range(1, max_passes + 1):
        # Generate or refine plan
        plan = await generate_plan(prd, fallout_lists)

        # MANDATORY: Compare against source
        coverage, fallout = await verify_against_prd(plan, prd)
        fallout_lists.append(fallout)

        # Log for audit trail
        log_verification_pass(pass_num, coverage, fallout)

        if coverage >= 0.95:
            break

    if coverage < 0.80:
        raise EscalationRequired(
            f"Coverage {coverage:.1%} below threshold after {max_passes} passes"
        )

    return VerifiedPlan(plan, coverage, fallout_lists)
```
6.4 Glossary Precedence
When interpreting terms, consult in order:
- Project-specific glossary (highest precedence)
- Domain-specific standards (e.g., HL7 FHIR for healthcare)
- Industry-standard definitions (e.g., ISO, IEEE)
- Common technical usage (lowest precedence)
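The precedence chain above amounts to a first-match lookup, sketched below. The function and glossary shapes are assumptions for illustration; the essential property is that glossaries are consulted highest-precedence first and the first definition found wins.

```python
from typing import Optional

def resolve_term(term: str, glossaries: list[dict[str, str]]) -> Optional[str]:
    """Resolve a term through the precedence chain.

    Pass glossaries highest-precedence first: project-specific,
    then domain standard, then industry standard, then common usage.
    Returns None if no glossary defines the term (the agent should
    then treat the term as ambiguous rather than guess).
    """
    key = term.lower()
    for glossary in glossaries:
        definition = glossary.get(key)
        if definition is not None:
            return definition
    return None
```

With a project glossary defining "table" as a data structure, the common-usage sense ("furniture") is never reached, which is exactly the lexical-ambiguity resolution Section 3.1 calls for.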
⚠️ MULTI-AGENT TOKEN MULTIPLICATION
When operating in multi-agent orchestration (15x token multiplier), budget the tokens spent on requirements interpretation carefully. Complex disambiguation consumes significant tokens; for high-ambiguity requirements, request human clarification rather than exhaustive autonomous analysis.
7. Glossary
Purpose
The glossary eliminates lexical ambiguity by defining terms as used in this document. AI agents should consult this glossary before inferring term meanings.
Domain Terms
| Term | Definition | Context |
|---|---|---|
| Audit Trail | Secure, timestamped log of system events and data changes | FDA Part 11 compliance |
| Electronic Signature | Computer-generated means of authentication | Regulatory; not digital signature |
| PHI | Protected Health Information - individually identifiable health data | HIPAA context |
| SAML | Security Assertion Markup Language - SSO protocol | Authentication |
| Stakeholder | Person with interest in or influence over the product | Anyone affected by decisions |
[Add project-specific terms as needed]
Requirement Language (RFC 2119)
| Term | Meaning |
|---|---|
| SHALL | Absolute requirement |
| SHALL NOT | Absolute prohibition |
| SHOULD | Recommended but not mandatory |
| SHOULD NOT | Not recommended but not prohibited |
| MAY | Optional |
| MUST | Equivalent to SHALL |
8. Appendices
Appendix A: Referenced Documents
[List related documents: Technical Design Document, Architecture Decision Records, etc.]
Appendix B: Change Request Process
[Document process for requesting changes to this PRD]
Appendix C: Stakeholder Contacts
[Contact information for requirement clarification]
Appendix D: Compliance Checklists
[Regulatory compliance verification checklists]
Document Metadata
```yaml
document:
  type: PRD_TEMPLATE
  version: 1.1  # Updated with empirical findings
  standard: CODITECT_STANDARDS
  compliance_scope:
    - FDA_21_CFR_PART_11
    - HIPAA
    - SOC2

  ai_agent_guidance:
    disambiguation_threshold: 0.6
    escalation_required:
      - compliance_decisions
      - architecture_changes
      - feature_tradeoffs
      - coverage_below_80_percent  # NEW
    autonomous_allowed:
      - implementation_details
      - minor_interpretations_with_documentation

  # NEW: Intent Survival Requirements
  intent_verification:
    enabled: true
    min_coverage: 0.95
    max_planning_passes: 3
    fallout_list_required: true
    escalate_if_below: 0.80

  token_awareness:
    single_agent_multiplier: 4
    multi_agent_multiplier: 15
    optimization_priority: verification_loops  # Changed from disambiguation_caching

  # NEW: Empirical Validation Reference
  validated_by:
    experiment: "9 Apps Experiment"
    key_finding: "Planning modes drop 20-30% of requirements silently"
    mitigation: "3-pass verification recovers 90%+ of losses"
```
CODITECT STANDARDS | PRD Template v1.1
For AI-Driven Autonomous Development in Regulated Industries
Updated with empirical validation from intent survival research