
Research Prompts for Coditect Product Suite Development

Deep Research Framework Derived from Bottleneck Economy Analysis
Version: 1.0 | February 2026


How to Use This Document

Each prompt category targets a specific bottleneck or opportunity identified in the analysis. Use these prompts with:

  • Claude (with web search enabled)
  • Perplexity for real-time market research
  • Internal deep research workflows
  • Customer discovery conversations

Within each category, prompts serve one of five functions:

  1. Market Validation - Confirming bottleneck existence and scale
  2. Competitive Intelligence - Understanding alternative approaches
  3. Product Development - Feature and capability design
  4. Go-to-Market - Positioning and messaging
  5. Technical Architecture - Implementation considerations

Category 1: Integration Bottleneck Research

1.1 Market Validation Prompts

PROMPT: Integration Gap Quantification

Research and compile data on the "integration gap" in enterprise AI adoption:

1. What percentage of enterprise AI projects fail to deliver expected ROI?
Find multiple sources with specific figures.

2. What are the top 5 reasons cited for AI project failures in regulated
industries specifically (healthcare, financial services)?

3. How long does the average enterprise take to move from AI pilot to
production deployment? What factors extend this timeline?

4. What is the estimated market size for "AI integration services" or
"AI implementation consulting"? What growth rate is projected?

Output Format:
- Key statistics with source citations
- Synthesized analysis of integration barrier patterns
- Specific examples from healthcare and fintech
- Implications for autonomous development platforms

PROMPT: Tacit Knowledge Problem Validation

Research the challenge of capturing tacit/institutional knowledge in organizations:

1. What research exists on "tacit knowledge" loss when experienced employees
leave? Quantify the impact where possible.

2. How do organizations currently attempt to capture institutional knowledge?
What tools/methods exist? How effective are they?

3. In regulated industries, what are the consequences of tacit knowledge loss
for compliance? Find specific case studies or regulatory actions.

4. What solutions have emerged for "knowledge management" in AI-assisted
development? How do they perform?

Focus Areas:
- Healthcare IT legacy system knowledge
- Financial services compliance knowledge
- Software development tribal knowledge
- Regulatory interpretation expertise

Output: Landscape analysis with gaps that Coditect could address

1.2 Competitive Intelligence Prompts

PROMPT: AI Coding Tool Team-Level Adoption Analysis

Analyze adoption patterns for AI coding assistants at the team/organizational level:

1. What is the current state of Cursor, GitHub Copilot, and Codeium adoption
at the team level (not individual)? Find enterprise deployment data.

2. What barriers do organizations report when scaling individual AI coding
tools to teams? Specific pain points?

3. What "enterprise" features have Cursor/Copilot added? How do enterprises
rate these features?

4. What is the typical "shadow IT" pattern for AI coding tools? How do
enterprises attempt to govern AI-assisted development?

5. In regulated industries (healthcare, finance), what additional requirements
exist for AI coding tool adoption?

Analysis Framework:
- Individual vs. team adoption metrics
- Compliance/governance gaps
- Context persistence limitations
- Integration depth comparison

Output: Competitive positioning matrix for Coditect

PROMPT: Compliance-Native Development Tools Landscape

Map the competitive landscape for compliance-native development tooling:

1. What tools exist that combine AI-assisted development with built-in
compliance for FDA 21 CFR Part 11, HIPAA, SOC2?

2. How do regulated industries currently handle compliance for AI-generated
code? What manual processes exist?

3. What partnerships exist between AI coding tool companies and compliance
vendors (e.g., Copilot + compliance overlay)?

4. What regulatory guidance has been issued on AI in software development
for medical devices or financial systems?

Categories to Map:
- Pure-play compliance tools
- AI coding tools with compliance features
- GRC platforms with development integration
- Manual processes being automated

Output: Competitive whitespace analysis for Coditect

Category 2: Trust Bottleneck Research

2.1 Trust Infrastructure Prompts

PROMPT: Trust Deficit in AI-Generated Content/Code

Research the emerging "trust deficit" for AI-generated outputs:

1. What research exists on trust in AI-generated code specifically?
How do developers verify AI suggestions?

2. What incidents have occurred due to unverified AI-generated code?
Security vulnerabilities, bugs, compliance failures?

3. How are enterprises attempting to build verification infrastructure
for AI outputs? What approaches are emerging?

4. What is the market for "AI verification" or "AI assurance" services?
Who are the players?

5. In regulated industries, what attestation or certification is required
for AI-assisted development? How is this evolving?

Research Focus:
- Code security verification
- Compliance attestation
- Audit trail requirements
- Third-party certification

Output: Trust infrastructure opportunity analysis

PROMPT: Audit Trail Requirements Across Industries

Research audit trail and traceability requirements for software development:

1. What are the specific audit trail requirements for:
- FDA 21 CFR Part 11 (medical devices)
- HIPAA (healthcare IT)
- SOC2 (SaaS/cloud)
- PCI-DSS (payments)
- FedRAMP (government)

2. How do current AI coding tools address (or fail to address) these requirements?

3. What is the cost of audit trail compliance for enterprises?
Time spent, tools used, manual effort?

4. What emerging regulations are being proposed for AI-assisted development
traceability?

Output Format:
- Requirements matrix by regulatory framework
- Gap analysis vs. current AI coding tools
- Cost of compliance estimates
- Feature requirements for Coditect
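
To make the feature-requirements line above concrete, here is a minimal sketch of one plausible building block: a hash-chained, append-only audit event. The field set and chaining scheme are assumptions for discussion, not a validated Part 11 design.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_event(log: list[dict], actor: str, action: str, artifact: str) -> dict:
    """Append a tamper-evident event: each entry hashes its predecessor,
    so any retroactive edit breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # authenticated user or agent identity
        "action": action,      # e.g. "generated", "approved", "deployed"
        "artifact": artifact,  # reference to the code/record affected
        "prev_hash": prev_hash,
    }
    event["entry_hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    log.append(event)
    return event

log: list[dict] = []
append_audit_event(log, "agent:codegen-1", "generated", "src/billing.py#diff-42")
append_audit_event(log, "user:jsmith", "approved", "src/billing.py#diff-42")
```

The research should test whether a chained log like this, plus access controls and electronic signatures, actually satisfies each framework's wording.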

2.2 Trust-Building Feature Research

PROMPT: Verification and Certification Mechanisms

Research mechanisms for verifying and certifying AI-generated software:

1. What "formal verification" approaches exist for AI-generated code?
How practical are they?

2. What certification bodies exist for software quality? Could they certify
AI development processes?

3. What "code signing" or attestation mechanisms are used for supply chain
security? How do they apply to AI-generated code?

4. What insurance products exist for AI-related software failures?
What requirements do insurers have?

5. What "trust marks" or certifications do enterprises seek for their
software development processes?

Application to Coditect:
- Built-in verification mechanisms
- Third-party certification integration
- Insurance/liability considerations
- Trust mark eligibility

Output: Trust-building feature roadmap
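
As a concrete reference point for the code-signing question above, the sketch below signs the digest of a generated artifact with an Ed25519 key, the primitive many supply-chain tools build on. Treating an AI-generated diff as a signable artifact is our assumption; real deployments would use org-managed keys and a transparency log.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical attestation flow: hash the generated code, sign the digest,
# verify before the artifact is allowed to merge.
generated_code = b"def charge(amount): ..."
digest = hashlib.sha256(generated_code).digest()

signing_key = Ed25519PrivateKey.generate()  # in practice: an org-held key
signature = signing_key.sign(digest)

# Verification raises InvalidSignature if the code or signature was altered.
signing_key.public_key().verify(signature, digest)
print("attestation verified")
```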

Category 3: Coordination Bottleneck Research

3.1 Human-AI Coordination Research

PROMPT: Human-AI Workflow Design Patterns

Research best practices for human-AI collaboration in software development:

1. What research exists on optimal "human in the loop" patterns for
AI-assisted development? When should humans intervene?

2. How do high-performing teams structure review of AI-generated code?
What workflows have emerged?

3. What cognitive load research exists on developers supervising AI?
How much AI output can a human effectively review?

4. What decision support tools exist for human-AI coordination?
How do they present AI recommendations?

5. In safety-critical systems, what human oversight requirements exist
for automated/AI processes?

Focus Areas:
- Checkpoint design (when, how, what information)
- Decision support UI patterns
- Cognitive load management
- Escalation protocols

Output: Human-AI coordination design principles for Coditect
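
One way to ground the checkpoint-design questions: a minimal data shape for a checkpoint request that pairs the decision with the context a reviewer needs. The field names and risk tiers are illustrative assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # auto-approve eligible
    MEDIUM = "medium"  # any team member may approve
    HIGH = "high"      # named owner must approve

@dataclass
class CheckpointRequest:
    decision: str                  # what the agent wants to do
    rationale: str                 # why the agent proposes it
    diff_summary: str              # what changes if approved
    risk: RiskTier
    blast_radius: list[str] = field(default_factory=list)  # affected systems
    escalation_contact: str = ""   # fallback if the approver is unavailable

req = CheckpointRequest(
    decision="Apply schema migration to patient_records",
    rationale="Column rename required by new intake workflow",
    diff_summary="1 migration file, 2 ORM models touched",
    risk=RiskTier.HIGH,
    blast_radius=["billing-service", "intake-api"],
    escalation_contact="lead@example.com",
)
```
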
PROMPT: Team Coordination with Autonomous Systems

Research multi-person coordination when autonomous AI is involved:

1. How do development teams coordinate when using AI assistants?
What conflicts/redundancies emerge?

2. What research exists on "multi-agent + multi-human" coordination?
What frameworks apply?

3. How do teams handle disagreements between AI suggestions and human
judgment? What escalation patterns exist?

4. What tools exist for team awareness of AI activity?
(Who's using AI for what, what's been generated)

5. How do distributed teams coordinate AI-assisted development
across time zones?

Application to Coditect:
- Multi-agent coordination protocols
- Team awareness features
- Conflict resolution mechanisms
- Asynchronous coordination support

Output: Team coordination feature specifications

Category 4: Product Development Research

4.1 Organizational Context Engine

PROMPT: Organizational Context Capture Methods

Research methods for capturing and encoding organizational context:

1. How do knowledge management systems capture "organizational context"?
What works, what fails?

2. What approaches exist for encoding "coding standards" and "architectural
patterns" for AI consumption?

3. How do organizations capture "relationship context" - who to contact,
how decisions get made?

4. What graph database or knowledge graph approaches are used for
organizational knowledge?

5. How can passive observation capture organizational patterns without
explicit documentation?

Technical Considerations:
- Data structures for context representation
- Update mechanisms (how context evolves)
- Privacy considerations
- Query/retrieval patterns

Output: Organizational Context Engine technical specification
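
As a strawman for the data-structure question above, a minimal typed graph of organizational context: nodes for people, systems, and conventions; edges for relationships such as ownership and approval. The taxonomy is an assumption to react to, not a proposal.

```python
from dataclasses import dataclass, field

@dataclass
class ContextNode:
    node_id: str
    kind: str          # "person" | "system" | "convention" | "decision"
    attributes: dict = field(default_factory=dict)

@dataclass
class ContextEdge:
    source: str
    target: str
    relation: str      # "owns", "approves", "depends_on", "supersedes"

@dataclass
class ContextGraph:
    nodes: dict[str, ContextNode] = field(default_factory=dict)
    edges: list[ContextEdge] = field(default_factory=list)

    def add(self, node: ContextNode) -> None:
        self.nodes[node.node_id] = node

    def neighbors(self, node_id: str, relation: str) -> list[ContextNode]:
        """Query pattern: 'who approves changes to this system?'"""
        return [self.nodes[e.target] for e in self.edges
                if e.source == node_id and e.relation == relation]

graph = ContextGraph()
graph.add(ContextNode("sys:billing", "system"))
graph.add(ContextNode("person:jsmith", "person", {"role": "compliance lead"}))
graph.edges.append(ContextEdge("sys:billing", "person:jsmith", "approves"))
```
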
PROMPT: Tacit Knowledge Extraction Techniques

Research techniques for extracting tacit/implicit knowledge from experts:

1. What interview/elicitation techniques work best for tacit knowledge?
(Cognitive task analysis, protocol analysis, etc.)

2. What tools exist for capturing expert knowledge during work?
(Screen recording, decision logging, etc.)

3. How can AI assist in knowledge extraction?
(Conversation mining, pattern recognition, etc.)

4. What validation methods ensure extracted knowledge is accurate?

5. How is tacit knowledge represented for machine consumption?

Application to Coditect:
- "20-year employee" knowledge capture
- Domain expert elicitation
- Pattern learning from behavior
- Knowledge validation protocols

Output: Tacit knowledge capture methodology for Coditect
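
To anchor the machine-consumption question, a sketch of a single extracted-knowledge record with provenance and a validation lifecycle; the status values and fields are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class ValidationStatus(Enum):
    UNVERIFIED = "unverified"  # extracted, not yet reviewed
    CONFIRMED = "confirmed"    # validated by the source expert or a peer
    CONTESTED = "contested"    # conflicting expert input, needs resolution

@dataclass
class KnowledgeRecord:
    claim: str                 # the rule or pattern in plain language
    source_expert: str         # who it was elicited from
    elicitation_method: str    # e.g. "interview", "decision-log mining"
    evidence: list[str] = field(default_factory=list)  # pointers to examples
    status: ValidationStatus = ValidationStatus.UNVERIFIED

record = KnowledgeRecord(
    claim="Never deploy claims-processing changes on the last business day of a quarter",
    source_expert="ops lead, 20-year tenure",
    elicitation_method="interview",
    evidence=["incident-2019-114"],
)
```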

4.2 Compliance Feature Development

PROMPT: FDA 21 CFR Part 11 Technical Requirements

Deep research on FDA electronic records/signatures requirements:

1. What are the exact technical requirements for 21 CFR Part 11 compliance
in software development tools?

2. How do existing tools (version control, issue tracking, etc.) achieve
Part 11 compliance?

3. What gaps exist in current AI development tools for Part 11 compliance?

4. What validation documentation (IQ/OQ/PQ) is required for development
tools used in medical device software?

5. What recent FDA guidance addresses AI in medical device development?

Technical Requirements:
- Electronic signatures
- Audit trails
- Access controls
- Data integrity
- Validation protocols

Output: FDA 21 CFR Part 11 compliance specification for Coditect

PROMPT: Multi-Framework Compliance Architecture

Research architectures that support multiple compliance frameworks:

1. What "compliance as code" approaches exist? How do they handle
multiple frameworks?

2. What common controls exist across FDA, HIPAA, SOC2, PCI-DSS?
Can they be unified?

3. How do GRC platforms handle control mapping across frameworks?

4. What APIs/integrations exist for compliance evidence collection?

5. What automation is possible for compliance documentation generation?

Architecture Considerations:
- Control abstraction layer
- Framework-specific extensions
- Evidence collection automation
- Documentation generation
- Audit support

Output: Multi-framework compliance architecture for Coditect
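
A minimal sketch of the control abstraction layer named above: one canonical control that carries framework-specific citations, so evidence is collected once and reported per framework. The citations shown are illustrative and must be verified against the framework texts.

```python
from dataclasses import dataclass, field

@dataclass
class CanonicalControl:
    control_id: str
    description: str
    # Framework-specific citations: illustrative values only; verify each
    # against the actual framework text before relying on it.
    framework_refs: dict[str, str] = field(default_factory=dict)
    evidence_sources: list[str] = field(default_factory=list)

AUDIT_TRAIL = CanonicalControl(
    control_id="CTRL-AUDIT-001",
    description="Changes to records are attributable, timestamped, and immutable",
    framework_refs={
        "FDA 21 CFR Part 11": "11.10(e)",
        "HIPAA Security Rule": "164.312(b)",
        "SOC 2": "CC7.x (monitoring criteria)",
    },
    evidence_sources=["audit-log-export", "vcs-history"],
)

def report_for(control: CanonicalControl, framework: str) -> str:
    """Render one control in a given framework's vocabulary."""
    ref = control.framework_refs.get(framework, "unmapped")
    return f"{control.control_id} -> {framework} {ref}: {control.description}"

print(report_for(AUDIT_TRAIL, "HIPAA Security Rule"))
```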

Category 5: Go-to-Market Research

5.1 Market Positioning Research

PROMPT: Enterprise AI Tool Buying Process

Research how enterprises purchase AI development tools:

1. Who are the decision makers for AI coding tool purchases?
(Engineering, IT, Security, Compliance?)

2. What evaluation criteria do enterprises use? What's the typical
POC/pilot process?

3. What procurement obstacles exist for AI tools? (Security review,
compliance approval, legal)

4. How long is the typical enterprise sales cycle for AI development tools?

5. What existing relationships/vendors influence AI tool selection?

Buyer Journey Mapping:
- Awareness triggers
- Evaluation criteria
- Decision influencers
- Procurement requirements
- Success metrics

Output: Enterprise go-to-market strategy for Coditect

PROMPT: Regulated Industry AI Adoption Patterns

Research AI adoption patterns specifically in healthcare and fintech:

1. What is the current AI adoption rate in healthcare IT development?
What tools are used?

2. What is the current AI adoption rate in fintech development?
What tools are used?

3. What compliance concerns slow AI adoption in these industries?

4. Who are the early adopters in these industries? What do they have
in common?

5. What messaging resonates with compliance-conscious buyers?

Industry-Specific Analysis:
- Healthcare IT buyers
- Fintech buyers
- Compliance officer perspectives
- Risk manager perspectives

Output: Regulated industry positioning strategy

5.2 Competitive Messaging Research

PROMPT: "Workflow vs. Agent" Differentiation Validation

Research the workflow vs. agent distinction in market perception:

1. How do enterprises understand the difference between AI "copilots"
and "autonomous agents"?

2. What concerns do enterprises have about autonomous AI in development?
What reassurances do they need?

3. What value do enterprises place on "autonomy" vs. "assistance" in
AI development tools?

4. How do regulated industries specifically view autonomous development?

5. What messaging successfully differentiates "agent" from "workflow"
products?

Messaging Testing:
- "Autonomous" vs. "assisted" framing
- "Compliance-native" value proposition
- "Team-level" vs. "individual" productivity
- Trust and verification messaging

Output: Competitive messaging framework for Coditect

Category 6: Technical Architecture Research

6.1 Foundation Technology Research

PROMPT: FoundationDB for AI Development State Management

Research FoundationDB applications in AI and development tool contexts:

1. What applications use FoundationDB for complex state management?
Lessons learned?

2. How does FoundationDB compare to alternatives for multi-agent state
coordination?

3. What patterns exist for storing development artifacts, decisions,
and context in FoundationDB?

4. How does FoundationDB handle the scale requirements of enterprise
development tool usage?

5. What backup, disaster recovery, and compliance features does
FoundationDB support?

Architecture Considerations:
- State schema design
- Transaction patterns
- Query performance
- Operational requirements
- Compliance features

Output: FoundationDB architecture validation for Coditect
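
To ground the transaction-pattern question, a minimal sketch using FoundationDB's Python bindings, assuming a reachable cluster; the tuple key layout under ('coditect', 'session') is our invention, not an established schema.

```python
import fdb

fdb.api_version(710)
db = fdb.open()  # assumes a default cluster file pointing at a live cluster

# Assumed key layout: ('coditect', 'session', <session_id>, 'decision', <seq>)
sessions = fdb.Subspace(('coditect', 'session'))

@fdb.transactional
def record_decision(tr, session_id, seq, payload):
    # The decorated function runs as one ACID transaction; the bindings
    # retry it on conflict, so the body must stay idempotent.
    tr[sessions.pack((session_id, 'decision', seq))] = payload

@fdb.transactional
def decisions_for(tr, session_id):
    # Range-read every decision for a session, in sequence order.
    return [(sessions.unpack(k), v)
            for k, v in tr[sessions.range((session_id, 'decision'))]]

record_decision(db, 'sess-42', 1, b'{"action": "refactor", "approved": true}')
```

The point of the pattern, atomic and ordered agent-state updates, is what the research should pressure-test at enterprise scale.
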
PROMPT: Multi-Agent Orchestration State-of-the-Art

Research current state-of-the-art in multi-agent AI orchestration:

1. What orchestration frameworks exist for multi-agent AI systems?
(LangGraph, CrewAI, AutoGen, etc.)

2. What patterns work best for coordination between AI agents?
What fails?

3. How do production multi-agent systems handle error cascades
and failure recovery?

4. What observability approaches exist for multi-agent systems?

5. How do multi-agent systems manage token economics at scale?

Technical Deep-Dive:
- Orchestration patterns
- State management
- Error handling
- Observability
- Cost optimization

Output: Multi-agent orchestration best practices for Coditect
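
A framework-agnostic sketch of one coordination concern this research should probe: containing error cascades with bounded retries and a human escalation path. Task names and retry policy are placeholders.

```python
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

@dataclass
class Task:
    name: str
    run: Callable[[], str]  # agent invocation; returns a result or raises
    max_retries: int = 2

def orchestrate(tasks: list[Task]) -> dict[str, str]:
    """Run tasks in order; retry transient failures, then halt the pipeline
    and escalate, so one failing agent cannot cascade downstream."""
    results: dict[str, str] = {}
    for task in tasks:
        for attempt in range(task.max_retries + 1):
            try:
                results[task.name] = task.run()
                break
            except Exception as exc:
                log.warning("%s failed (attempt %d): %s", task.name, attempt + 1, exc)
        else:
            # Retries exhausted: stop downstream tasks, hand off to a human.
            log.error("%s exhausted retries; escalating to human checkpoint", task.name)
            results[task.name] = "ESCALATED"
            break
    return results
```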

Category 7: Customer Discovery Prompts

7.1 Problem Validation Interviews

PROMPT: Integration Bottleneck Discovery Interview Guide

Structure for customer discovery interviews validating integration bottleneck:

Opening (5 min):
- "Tell me about your current experience with AI coding tools."
- "What's working well? What's frustrating?"

Integration Deep-Dive (15 min):
- "How do you get AI tools to understand your codebase context?"
- "What organizational knowledge do AI tools struggle to incorporate?"
- "Walk me through a recent situation where AI didn't understand
your specific context."

Team Scaling (10 min):
- "How has AI coding tool adoption worked at the team level?"
- "What happens when multiple developers use AI on the same project?"
- "What coordination challenges have emerged?"

Compliance Context (10 min):
- "What compliance requirements affect how you can use AI tools?"
- "How do you document/audit AI-assisted development?"
- "What's your process for verifying AI-generated code?"

Value Quantification (5 min):
- "If AI tools perfectly understood your organizational context,
how much more productive would you be?"
- "What would you pay for compliance-native AI development?"

Output: Interview insights template

7.2 Feature Validation

PROMPT: Checkpoint Feature Validation Interview Guide

Structure for validating human checkpoint feature design:

Current State (5 min):
- "How do you currently review AI-generated code?"
- "What decision points require human judgment?"

Checkpoint Design (15 min):
- "If an AI system asked for your approval at key points,
what information would you need to decide?"
- "What decisions should always require human approval?"
- "How much context do you need to confidently approve AI work?"

Workflow Integration (10 min):
- "How would checkpoints fit into your current workflow?"
- "What would make checkpoints feel helpful vs. disruptive?"
- "How quickly do you need to respond to checkpoint requests?"

Team Dynamics (10 min):
- "Who should approve different types of decisions?"
- "How should checkpoint decisions be documented?"
- "What happens when the checkpoint approver is unavailable?"

Output: Checkpoint feature specification refinement

Research Execution Framework

Priority Ranking

| Research Category | Business Impact | Urgency | Effort |
| --- | --- | --- | --- |
| Integration Gap Quantification | High | High | Medium |
| Compliance Landscape | High | High | Medium |
| Trust Infrastructure | High | Medium | High |
| Tacit Knowledge Methods | High | Medium | High |
| Enterprise Buying Process | Medium | High | Low |
| Multi-Agent Architecture | Medium | Medium | Medium |
| Customer Discovery | High | High | Low |

Suggested Sequence
  1. Weeks 1-2: Market validation (Integration Gap, Compliance Landscape)
  2. Weeks 2-3: Competitive intelligence (AI Coding Tools, Trust Infrastructure)
  3. Weeks 3-4: Product development (Context Engine, Compliance Features)
  4. Ongoing: Customer discovery interviews
  5. Month 2: Technical architecture validation

Output Templates

Research Summary Template

```markdown
# Research: [Topic]
## Date: [Date]
## Researcher: [Name]

### Key Findings
1. [Finding 1]
2. [Finding 2]
3. [Finding 3]

### Data Points
- [Statistic with source]
- [Statistic with source]

### Implications for Coditect
- [Implication 1]
- [Implication 2]

### Recommended Actions
- [Action 1]
- [Action 2]

### Open Questions
- [Question 1]
- [Question 2]

### Sources
- [Source 1]
- [Source 2]
```

Document Purpose: Enable systematic deep research to inform Coditect product development based on Bottleneck Economy insights