
Claude Unlimited Memory: External Note System Guide

Executive Summary

This technique enables Claude to process virtually unlimited data volumes by externalizing memory through a structured file-based note system. The approach overcomes the fundamental context window limitation inherent in all LLMs by creating a persistent checkpoint/resume mechanism that survives memory compaction.


The Problem: Context Window Constraints

Core Limitation

Every LLM operates within a fixed context window—a sliding buffer where new data pushes out old data. This manifests as:

  • Hard file limits: 10-15 file upload caps per conversation
  • Deceptive processing: the model claims to have processed concatenated files but actually reads only ~25% of the content
  • Context degradation: Quality degrades as conversation length increases
  • Hallucination triggers: Memory gaps cause fabricated continuations

Business Impact

  • Batch processing jobs fail silently
  • Analysis of large document sets produces incomplete insights
  • Multi-step workflows lose coherence mid-execution
  • Manual intervention required for large-scale tasks

The Solution: Externalized Memory Architecture

Core Concept

The AI writes structured notes to files on the local filesystem. When memory compaction occurs, it reads these notes to reconstruct context and resume work seamlessly.

Architecture Components

```
┌─────────────────────────────────────────────────────┐
│                     DATA FOLDER                     │
│                                                     │
│  ┌───────────────────────────────────────────────┐  │
│  │ Source Files (50+ transcripts, emails, etc.)  │  │
│  └───────────────────────────────────────────────┘  │
│                                                     │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  │
│  │ context.md  │  │  todos.md   │  │ insights.md │  │
│  │   (Goal)    │  │ (Checklist) │  │  (Output)   │  │
│  └─────────────┘  └─────────────┘  └─────────────┘  │
└─────────────────────────────────────────────────────┘
```

The Three Core Files

| File | Purpose | Update Frequency |
|------|---------|------------------|
| `context.md` | Stores the original goal and analysis parameters | Created once, read on resume |
| `todos.md` | Tracks progress with checkboxes for each item | Updated after each item processed |
| `insights.md` | Accumulates findings and extracted data | Appended iteratively |
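As a concrete sketch (the item names and notes here are hypothetical), a `todos.md` mid-run might look like:

```markdown
- [x] transcript_001.txt (3 pain-point phrases extracted)
- [x] transcript_002.txt (2 questions, 1 concern)
- [ ] transcript_003.txt
- [ ] transcript_004.txt
```

After a memory compaction, the unchecked items tell the AI exactly where to resume.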

Processing Cycle

```
┌──────────────────┐
│   START PROMPT   │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│  Create 3 Files  │
│ (context/todos/  │
│    insights)     │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐       ┌──────────────────┐
│  Process Next    │◄──────│ Read context.md  │
│  Data Item       │       │ Read todos.md    │
└────────┬─────────┘       └────────▲─────────┘
         │                          │
         ▼                          │
┌──────────────────┐                │
│ Update todos.md  │                │
│ (check off item) │                │
└────────┬─────────┘                │
         │                          │
         ▼                          │
┌──────────────────┐                │
│   Append to      │                │
│   insights.md    │                │
└────────┬─────────┘                │
         │                          │
         ▼                          │
┌──────────────────┐      YES       │
│     Memory       │────────────────┘
│   Compacted?     │
└────────┬─────────┘
         │ NO
         ▼
┌──────────────────┐
│   More items?    │───► YES ───► [Loop back to Process]
└────────┬─────────┘
         │ NO
         ▼
┌──────────────────┐
│     COMPLETE     │
└──────────────────┘
```
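The cycle can be sketched as a plain Python loop. This is an illustrative analogy of what the prompt asks the AI to do, not Claude Code internals; the file parsing assumes the `- [ ]` / `- [x]` checkbox convention described under Best Practices:

```python
def next_unprocessed(todos_path):
    """Return the first unchecked item in todos.md, or None when all are done."""
    with open(todos_path) as f:
        for line in f:
            if line.startswith("- [ ] "):
                return line[len("- [ ] "):].strip()
    return None

def check_off(todos_path, item):
    """Mark an item complete BEFORE moving on, so a resumed session skips it."""
    with open(todos_path) as f:
        text = f.read()
    with open(todos_path, "w") as f:
        f.write(text.replace(f"- [ ] {item}", f"- [x] {item}", 1))

def append_insight(insights_path, item, finding):
    """Append-only output: nothing already written is lost to compaction."""
    with open(insights_path, "a") as f:
        f.write(f"## {item}\n{finding}\n\n")

def run(todos_path, insights_path, process):
    """Drive the cycle: read state, process the next item, checkpoint, repeat."""
    while True:
        item = next_unprocessed(todos_path)
        if item is None:
            break  # COMPLETE: every item is checked off
        append_insight(insights_path, item, process(item))
        check_off(todos_path, item)
```

Because all state lives in the two files, the loop can be killed and restarted at any point (the analogue of memory compaction) and it resumes at the first unchecked item.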

Setup Instructions

Prerequisites

  • Claude Pro, Team, or Max subscription
  • Claude Desktop application (not web interface)
  • Local folder with source data files

Step-by-Step Configuration

Step 1: Install Claude Desktop

Download from claude.ai for your operating system (macOS, Windows, Linux). The web interface does NOT support file system access.

Step 2: Prepare Your Data Folder

```
~/Desktop/analysis-project/
├── transcript_001.txt
├── transcript_002.txt
├── ... (up to 50+ files)
└── transcript_050.txt
```

Step 3: Launch Claude Code Mode

  1. Open Claude Desktop
  2. Click Code in the left sidebar (not Chat)
  3. This enables filesystem read/write capabilities

Step 4: Select Working Directory

  1. Click the folder selector dropdown
  2. Select "Choose from folder"
  3. Navigate to your data folder
  4. Ensure "Local" is selected (not cloud)

Step 5: Enable Act Mode

  1. Locate the input mode selector (bottom of interface)
  2. Change from "Ask" to "Act"
  3. This enables autonomous execution without confirmation prompts

Step 6: Select Model

  1. Click the three-dot menu (⋮)
  2. Select Opus 4.5 for highest quality
  3. Fall back to Sonnet if usage limits are hit

Prompt Template

Universal Structure

# GOAL
I want you to [PRIMARY OBJECTIVE] all the [DATA TYPE] in this folder
to [DESIRED OUTCOME]. [QUALITY CONSTRAINTS].

# BEFORE YOU START
Create a `context.md` file that contains:
- The goal of this analysis
- [SPECIFIC CONTEXT PARAMETERS]

Create a `todos.md` file to track:
- Which files you've analyzed
- What you've found in each

Create an `insights.md` file that you will:
- Iteratively update after processing each [DATA ITEM]

# AS YOU WORK
- Iteratively update insights.md after processing each [DATA ITEM]
- Check off each [DATA ITEM] in todos.md as you complete it
- Ensure todos.md is updated BEFORE memory compaction
- After ANY memory compaction, read context.md and todos.md before continuing

# EXTRACTION REQUIREMENTS
For each [DATA ITEM], extract:
- [SPECIFIC DATA POINT 1]
- [SPECIFIC DATA POINT 2]
- [SPECIFIC DATA POINT 3]

Work through ALL files until complete.

Example: Customer Language Extraction

# GOAL
I want you to analyze all the meeting transcripts in this folder
to find patterns in how clients describe their problems, what
questions they ask, and what concerns they raise. If it does not
cause frustration, stress, fear, or confusion, it does not count.

# BEFORE YOU START
Create a `context.md` file that contains:
- The goal of this analysis: extracting customer pain points in
their own words for future content creation

Create a `todos.md` file to track:
- Which files you've analyzed
- What you've found

Create an `insights.md` file that you will:
- Iteratively update after processing each transcript

# AS YOU WORK
- Iteratively update insights.md after processing each transcript
- Check off each transcript in todos.md as you complete it
- Ensure todos.md is updated BEFORE memory compaction
- After ANY memory compaction, read context.md and todos.md before continuing

# EXTRACTION REQUIREMENTS
For each transcript, extract:
- Exact phrases used to describe problems or pain points
- Questions asked
- Concerns or hesitations mentioned

Work through ALL files until complete.

Example: FAQ Generation

# GOAL
I want you to analyze all meeting transcripts to identify frequently
asked questions, how they were answered, and what follow-up questions
arose. Focus on confusion points, uncertainty, and gaps in understanding.

# BEFORE YOU START
Create a `context.md` file that contains:
- The goal: generating comprehensive FAQ documentation from real
customer interactions

Create a `todos.md` file to track:
- Which transcripts you've processed
- Question count per transcript

Create an `insights.md` file structured as:
- Questions by category
- Answer quality assessments
- Suggested follow-up content

# AS YOU WORK
- Update insights.md after each transcript
- Check off completed transcripts in todos.md
- Ensure updates complete BEFORE memory compaction
- Read context.md and todos.md after ANY memory reset

# EXTRACTION REQUIREMENTS
For each transcript, extract:
- Direct questions asked (verbatim)
- Context/topic category
- How the question was answered
- Follow-up questions (actual or likely)

Work through ALL files until complete.

Use Case Catalog

1. Customer Voice Mining

Input: Sales call transcripts, support tickets, feedback surveys
Output: Pain point vocabulary, emotional triggers, objection patterns
Business Value: Marketing copy that resonates, sales scripts with proven language

2. FAQ Documentation

Input: Support calls, onboarding sessions, demo recordings
Output: Categorized Q&A pairs, answer quality gaps, content opportunities
Business Value: Self-service deflection, reduced support load

3. Churn Signal Detection

Input: Customer success calls, renewal conversations, exit interviews
Output: Early warning indicators, complaint patterns, satisfaction drivers
Business Value: Proactive retention, reduced churn rate

4. Feature Request Aggregation

Input: Product feedback calls, feature request tickets, user interviews
Output: Prioritized feature list, use case documentation, impact estimates
Business Value: Data-driven roadmap, customer-validated priorities

5. Lead Prioritization

Input: Email inbox exports, inquiry forms, initial consultation notes
Output: Ranked lead list, conversion likelihood scores, follow-up recommendations
Business Value: Sales efficiency, reduced response time for high-value leads

6. Competitive Intelligence

Input: Sales call transcripts mentioning competitors, win/loss analyses
Output: Competitor mention frequency, positioning gaps, battle card content
Business Value: Sales enablement, product differentiation

7. Training Content Generation

Input: Expert call recordings, troubleshooting sessions, best practice discussions
Output: Structured knowledge base, procedure documentation, training modules
Business Value: Knowledge preservation, onboarding acceleration


Best Practices

File Format Optimization

  • Use Markdown (.md) for all note files
  • Markdown enables checkbox tracking: `- [x]` completed / `- [ ]` pending
  • Lower memory overhead than rich text formats

Prompt Engineering Tips

  1. Be explicit about memory compaction: Use the exact phrase "before your memory gets compacted"
  2. Emphasize file reading on resume: "After ANY memory compaction, read context.md and todos.md"
  3. Define quality constraints: "If it does not cause X, it does not count"
  4. Specify completion criteria: "Work through ALL files until complete"

Error Recovery

If processing stalls:

  1. Check todos.md for last completed item
  2. Manually verify insights.md contains expected content
  3. Resume with prompt: "Continue from where you left off. Read context.md and todos.md first."
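Step 1 can be automated with a short script (a sketch; it assumes the checkbox format shown under Best Practices and that items appear in processing order):

```python
def last_completed(todos_path):
    """Return the most recently checked-off item in todos.md, or None."""
    done = None
    with open(todos_path) as f:
        for line in f:
            if line.startswith("- [x] "):
                done = line[len("- [x] "):].strip()
    return done
```

If `last_completed()` disagrees with the final heading in `insights.md`, the last item was likely interrupted mid-write and should be reprocessed.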

Scaling Considerations

  • 50-100 files: Standard Opus processing, ~30-60 minutes
  • 100-500 files: Consider chunking into sub-folders
  • 500+ files: Pre-filter to high-value subset, or use parallel sessions
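For the 100-500 file range, chunking into sub-folders can be scripted rather than done by hand. A minimal sketch (the `batch_` naming is an arbitrary choice, not a requirement of the technique):

```python
import os
import shutil

def chunk_folder(src, batch_size=100):
    """Split a flat folder into numbered sub-folders of at most batch_size
    files, so each batch can run as its own Claude Code session."""
    files = sorted(
        name for name in os.listdir(src)
        if os.path.isfile(os.path.join(src, name))
    )
    for i in range(0, len(files), batch_size):
        batch_dir = os.path.join(src, f"batch_{i // batch_size + 1:03d}")
        os.makedirs(batch_dir, exist_ok=True)
        for name in files[i:i + batch_size]:
            shutil.move(os.path.join(src, name), os.path.join(batch_dir, name))
```

Each sub-folder then gets its own `context.md` / `todos.md` / `insights.md` trio, and the per-batch `insights.md` files can be merged in a final pass.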

Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| Processing stops mid-batch | Memory compaction without proper checkpoint | Re-run with emphasis on "update todos BEFORE compaction" |
| Duplicate insights | Resume without reading todos | Add explicit "check todos for completed items" instruction |
| Quality degradation | Insufficient context preservation | Expand context.md with more detailed parameters |
| Files not found | Working directory mismatch | Verify folder selection in Claude Code UI |
| Model refuses to act | Ask mode instead of Act mode | Switch to Act mode in UI |

Tool Alternatives

While this guide focuses on Claude Code, the same pattern works with:

| Tool | Provider | File System Access |
|------|----------|--------------------|
| Claude Code | Anthropic | Full local access |
| Codex CLI | OpenAI | Full local access |
| Gemini CLI | Google | Full local access |
| Antigravity | Google | Full local access |
| Cursor | Anysphere | Full local access |
| Windsurf | Codeium | Full local access |

The core technique—externalizing memory to files the AI can read/write—is model-agnostic.