Engineering Principles for Non-Engineers
12 System Design Patterns from the Second Brain Architecture
Category 1: Human-Centric Design
Principle 1: Reduce Human's Job to One Reliable Behavior
The Rule: If your system requires three behaviors, you don't have a system—you have a self-improvement program.
Application:
Human does: Capture to Slack (1 behavior)
System does: Classification, filing, surfacing, review generation
Anti-pattern: Requiring users to capture AND categorize AND tag AND review regularly. Each additional behavior reduces adoption probability multiplicatively.
Design Test: Can you describe the user's responsibility in one sentence with one verb?
| System | Answer | Pass/Fail |
|---|---|---|
| Second Brain | "Post thoughts to Slack." | ✅ PASS |
| Traditional Notion | "Capture, categorize, tag, file, and review notes." | ❌ FAIL (5 verbs) |
| Evernote | "Capture and organize notes." | ❌ FAIL (2 verbs) |
| Coditect Target | "Submit requirements." | ✅ PASS |
Principle 2: Default to Safe Behavior When Uncertain
The Rule: A real system must know how to fail gracefully.
Application:
if confidence < 0.6:
    log_to_review_queue()    # Safe: no pollution
    ask_for_clarification()  # Safe: human decides
else:
    file_to_database()       # Only when confident
Anti-pattern: Filing everything regardless of confidence, then requiring cleanup.
Design Test: What happens when the system isn't sure? If the answer involves polluting the database, redesign.
| System | Uncertain Behavior | Pass/Fail |
|---|---|---|
| Second Brain | Logs to inbox_log with "needs_review" status, replies asking for clarification, does NOT file to main databases | ✅ PASS |
| Naive AI filing | Files to "Uncategorized" folder, user must clean up later | ❌ FAIL |
| Gmail filters | Applies label anyway, user discovers misfiles weeks later | ❌ FAIL |
| Coditect Target | Halts execution, generates review ticket, notifies human | ✅ PASS |
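The confidence gate above can be made concrete. This is a minimal runnable sketch, assuming list-backed stand-ins for the review queue and the main database; the function and field names mirror the pseudocode but are otherwise illustrative.

```python
# Minimal sketch of the confidence gate. The 0.6 threshold matches the
# pseudocode above; storage is simulated with plain lists.
CONFIDENCE_THRESHOLD = 0.6

review_queue = []   # stand-in for inbox_log entries with "needs_review" status
database = []       # stand-in for the main databases

def handle_capture(entry: dict) -> str:
    """Route a classified capture: file it only when confidence is high."""
    if entry["confidence"] < CONFIDENCE_THRESHOLD:
        review_queue.append(entry)     # safe: no pollution
        return "queued_for_review"     # human decides later
    database.append(entry)             # only when confident
    return "filed"

# A low-confidence entry never reaches the main database.
print(handle_capture({"title": "misc thought", "confidence": 0.4}))  # queued_for_review
print(handle_capture({"title": "Call Sarah", "confidence": 0.9}))    # filed
```

The key property is that the main database can only ever contain high-confidence entries; uncertainty is absorbed by the queue, not the data.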
Principle 3: Design for Restart, Not Perfection
The Rule: Assume users will fall off. Life happens.
Application:
- No guilt-inducing backlog accumulation
- "Don't catch up, just restart" as explicit policy
- 10-minute brain dump re-engagement path
- System continues functioning during user absence
Anti-pattern: Systems that create cleanup debt, making restart feel overwhelming.
Design Test: If a user disappears for 2 weeks, what's the re-engagement friction? Should be < 15 minutes.
| System | 2-Week Absence Re-engagement | Time | Pass/Fail |
|---|---|---|---|
| Second Brain | Open Slack, brain-dump 5-10 thoughts, automation handles rest. No backlog review required. | ~10 min | ✅ PASS |
| Traditional GTD | Review all inboxes, process backlog, update contexts, do weekly review | 2-4 hours | ❌ FAIL |
| Email-based system | 200+ unread emails requiring triage | 1-3 hours | ❌ FAIL |
| Coditect Target | Resume from last checkpoint, no manual state reconciliation | ~5 min | ✅ PASS |
Category 2: Architectural Separation
Principle 4: Separate Memory from Compute from Interface
The Rule: Each layer has one job. They connect through clear boundaries.
Application:
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ INTERFACE │ → │ COMPUTE │ → │ MEMORY │
│ (Slack) │ │(Zapier+AI) │ │ (Notion) │
└──────────────┘ └──────────────┘ └──────────────┘
  Swappable:         Swappable:          Swappable:
  - Teams            - Make              - Airtable
  - Discord          - n8n               - Coda
  - Custom UI        - Different LLM     - Supabase
Benefits:
- Portability: Change one layer without rebuilding
- Testing: Verify each layer independently
- Vendor flexibility: Avoid lock-in
Anti-pattern: Obsidian for non-engineers (local files require sync plumbing that blurs the layer boundaries).
Design Test: Can you swap one layer without touching the others?
| Layer Swap | Second Brain Impact | Pass/Fail |
|---|---|---|
| Slack → Teams | Change Zapier trigger only. Notion + prompts unchanged. | ✅ PASS |
| Claude → GPT-4 | Change Zapier AI action only. Slack + Notion unchanged. | ✅ PASS |
| Notion → Airtable | Change Zapier destination actions. Slack + prompts unchanged. | ✅ PASS |
| Zapier → Make | Rebuild workflows, but Slack/Notion/prompts unchanged. | ✅ PASS |
Coditect Validation:
| Layer Swap | Impact |
|---|---|
| Theia → VS Code | Interface only, agent layer untouched |
| Claude → GPT-4 | Compute layer prompts, memory layer untouched |
| FoundationDB → PostgreSQL | Memory layer, compute logic unchanged |
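The swap test can be expressed in code. This is a sketch, assuming each layer is a small interface (a `Protocol`); the class names (`SlackInbox`, `KeywordClassifier`, `InMemoryStore`) are illustrative stand-ins, not real integrations.

```python
# Three layers behind narrow interfaces: swapping one implementation
# changes one argument to run_pipeline, not the other two layers.
from typing import Protocol

class Interface(Protocol):
    def receive(self) -> str: ...

class Compute(Protocol):
    def classify(self, text: str) -> dict: ...

class Memory(Protocol):
    def store(self, record: dict) -> None: ...

class SlackInbox:                 # stand-in for the Slack layer
    def receive(self) -> str:
        return "Idea: redesign the landing page"

class KeywordClassifier:          # stand-in for the Zapier+AI compute layer
    def classify(self, text: str) -> dict:
        category = "ideas" if text.lower().startswith("idea") else "admin"
        return {"category": category, "text": text}

class InMemoryStore:              # stand-in for Notion
    def __init__(self) -> None:
        self.records: list[dict] = []
    def store(self, record: dict) -> None:
        self.records.append(record)

def run_pipeline(ui: Interface, ai: Compute, db: Memory) -> dict:
    record = ai.classify(ui.receive())
    db.store(record)
    return record

store = InMemoryStore()
result = run_pipeline(SlackInbox(), KeywordClassifier(), store)
```

Replacing `SlackInbox` with, say, a Teams-backed class touches one constructor call; the compute and memory layers never see the difference.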
Principle 5: Build One Workflow, Then Attach Modules
The Rule: Core loop first. Optional capabilities later.
Core Loop (build this first):
Capture → Classify → File → Daily Digest → Weekly Review
Modules (add after trust established):
- Voice capture integration
- Calendar-based meeting prep
- Email forwarding to inbox
- Birthday/anniversary reminders
- Project deadline alerts
Anti-pattern: Building all features before validating the core loop works.
Design Test: Can you draw the minimum viable loop in 5 boxes? If not, you're over-engineering.
| System | MVP Loop | Box Count | Pass/Fail |
|---|---|---|---|
| Second Brain | Slack → Classify → File → Digest → Review | 5 boxes | ✅ PASS |
| Over-engineered | Slack → Parse → Validate → Classify → Route → File → Tag → Link → Digest → Review → Archive | 11 boxes | ❌ FAIL |
| Coditect MVP | Requirements → Decompose → Implement → Test → Review | 5 boxes | ✅ PASS |
Second Brain MVP:
┌─────────┐ ┌──────────┐ ┌──────┐ ┌────────┐ ┌────────┐
│ Capture │ → │ Classify │ → │ File │ → │ Digest │ → │ Review │
└─────────┘ └──────────┘ └──────┘ └────────┘ └────────┘
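The five-box loop above is just a linear pipeline. A minimal sketch, with each stage as a plain function (the keyword-based classifier is an illustrative stand-in for the AI step):

```python
# The MVP loop as five composable functions. Modules would attach around
# this loop (new capture sources, new digest channels), not inside it.
def capture(raw: str) -> dict:
    return {"text": raw}

def classify(item: dict) -> dict:
    item["category"] = "projects" if "deadline" in item["text"] else "ideas"
    return item

def file_entry(item: dict, db: list) -> dict:
    db.append(item)
    return item

def digest(db: list) -> str:
    return f"{len(db)} item(s) captured today"

def review(db: list) -> str:
    return f"Weekly review: {len(db)} item(s) filed"

db: list = []
file_entry(classify(capture("confirm copy deadline with Sarah")), db)
print(digest(db))  # "1 item(s) captured today"
```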
Category 3: Data & Schema Design
Principle 6: Keep Categories and Fields Painfully Small
The Rule: Richness creates friction. Friction kills adoption.
Application:
| Database | Recommended Fields | Not More Than |
|---|---|---|
| People | 5 fields | 7 fields |
| Projects | 5 fields | 6 fields |
| Ideas | 4 fields | 5 fields |
| Admin | 4 fields | 5 fields |
Categories: 4 total (People, Projects, Ideas, Admin). Resist expansion.
Anti-pattern: Starting with 10 categories because "my work is complex."
Design Test: Can a 5-year-old understand your category names?
| Category | 5-Year-Old Test | Pass/Fail |
|---|---|---|
| "People" | "That's about friends and family!" | ✅ PASS |
| "Projects" | "Things you're working on!" | ✅ PASS |
| "Ideas" | "Thoughts you want to remember!" | ✅ PASS |
| "Admin" | "Stuff you have to do." | ✅ PASS |
| "Strategic Initiatives" | "What?" | ❌ FAIL |
| "Stakeholder Engagement" | "Huh?" | ❌ FAIL |
| "Reference Material" | "Is that a book?" | ❌ FAIL |
Coditect Equivalent:
| Category | 5-Year-Old Test | Pass/Fail |
|---|---|---|
| "Requirements" | "What you want built" | ✅ PASS |
| "Code" | "The program" | ✅ PASS |
| "Tests" | "Checking it works" | ✅ PASS |
| "Docs" | "Instructions" | ✅ PASS |
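The field caps from the table can be enforced mechanically. A sketch, assuming the five `Person` field names shown (they are illustrative; the source only specifies the counts):

```python
# Declare each record type as a dataclass and check its field count
# against the "not more than" cap from the table above.
from dataclasses import dataclass, fields

@dataclass
class Person:        # People: 5 fields recommended, 7 max
    name: str
    relationship: str
    last_contact: str
    next_action: str
    notes: str

FIELD_CAPS = {"Person": 7}

def within_cap(cls) -> bool:
    return len(fields(cls)) <= FIELD_CAPS[cls.__name__]

print(within_cap(Person))  # True: 5 fields, cap is 7
```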
Principle 7: Use Next Action as the Unit of Execution
The Rule: Store actions, not intentions.
Good vs Bad:
| Intention (Bad) | Action (Good) |
|---|---|
| "Work on website" | "Email Sarah to confirm copy deadline by EOD Friday" |
| "Finish proposal" | "Draft executive summary section (30 min block)" |
| "Handle client feedback" | "Schedule 30-min call with Alex to review revision 3" |
Application: Classification prompt must extract concrete, executable next actions from vague inputs.
Design Test: Could you do this action in the next 30 minutes if you had to? If not, decompose further.
| Action | 30-Minute Executable? | Pass/Fail |
|---|---|---|
| "Email Sarah to confirm copy deadline" | Yes, takes 5 minutes | ✅ PASS |
| "Draft executive summary section" | Yes, 30-min focused work | ✅ PASS |
| "Work on website" | No, undefined scope | ❌ FAIL |
| "Improve documentation" | No, where do you start? | ❌ FAIL |
| "Research competitors" | No, could take hours | ❌ FAIL |
| "Spend 25 min reviewing competitor X's pricing page" | Yes, bounded | ✅ PASS |
Coditect Translation:
| Vague Requirement | Executable Action |
|---|---|
| "Add user authentication" | "Implement JWT token validation middleware for /api/* routes" |
| "Fix the bug" | "Add null check in UserService.getProfile() line 47" |
| "Improve performance" | "Add Redis cache to ProductRepository.findByCategory()" |
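The 30-minute test can be approximated with a simple lint. This is a heuristic sketch only; the vague-verb list is an assumption drawn from the bad examples above, and a real classifier would do this with an LLM.

```python
# Flag intentions that start with a vague verb. Crude, but it catches
# every "bad" row in the table above.
VAGUE_VERBS = {"work", "handle", "improve", "finish", "research"}

def looks_executable(action: str) -> bool:
    first_word = action.lower().split()[0]
    return first_word not in VAGUE_VERBS

print(looks_executable("Email Sarah to confirm copy deadline"))  # True
print(looks_executable("Work on website"))                       # False
```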
Principle 8: Prefer Routing Over Organizing
The Rule: Humans hate organizing. AI is good at routing.
Traditional Approach (fails):
Human captures → Human decides folder → Human maintains hierarchy
Routing Approach (works):
Human captures → AI routes to stable bucket → Human never organizes
Application: Fixed 4-bucket taxonomy. AI decides routing. User just captures.
Anti-pattern: Giving users a tagging system and expecting them to use it consistently.
Design Test: How many taxonomy decisions does the user make per capture?
| System | Taxonomy Decisions Per Capture | Pass/Fail |
|---|---|---|
| Second Brain | 0 (AI routes automatically) | ✅ PASS |
| Notion manual | 2-3 (folder + tags + properties) | ❌ FAIL |
| Evernote | 1-2 (notebook + tags) | ❌ FAIL |
| Apple Notes | 1 (folder selection) | ⚠️ BORDERLINE |
| Coditect Target | 0 (system decomposes requirements) | ✅ PASS |
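The routing shape is worth seeing in code. A sketch, assuming a fixed four-bucket taxonomy; the keyword rules are an illustrative stand-in for the LLM call a real system would make.

```python
# Route every capture into one of four stable buckets. The user makes
# zero taxonomy decisions; unmatched text falls through to "ideas".
BUCKETS = ("people", "projects", "ideas", "admin")

def route(text: str) -> str:
    lowered = text.lower()
    if any(w in lowered for w in ("birthday", "call mom", "friend")):
        return "people"
    if any(w in lowered for w in ("deadline", "launch", "ship")):
        return "projects"
    if any(w in lowered for w in ("renew", "invoice", "form")):
        return "admin"
    return "ideas"   # default bucket; AI decides, user never organizes

print(route("Ship the landing page by Monday"))  # projects
```

Because the buckets never change, the router can be swapped or improved without any migration of existing data.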
Category 4: Trust Engineering
Principle 9: Build Trust Mechanisms, Not Just Capabilities
The Rule: Capability = "it does X." Trust = "I believe it does X correctly because Y."
Trust Mechanisms:
| Mechanism | Purpose |
|---|---|
| Inbox Log | Shows what happened (transparency) |
| Confidence scores | Shows certainty level (calibration) |
| Fix button | Shows corrections are easy (control) |
| Confirmation replies | Shows what was filed (verification) |
Capability Without Trust: System files notes automatically.
Capability With Trust: System files notes, logs every action, shows confidence, and makes fixes trivial.
Design Test: If something goes wrong, can the user trace what happened and fix it in < 2 minutes?
| Scenario | Second Brain Response | Time to Fix | Pass/Fail |
|---|---|---|---|
| Wrong category | View Slack confirmation → Reply "fix: should be projects" | 30 seconds | ✅ PASS |
| Missing entry | Check Inbox Log → See confidence was low → Entry in review queue | 1 minute | ✅ PASS |
| Duplicate filing | Check Inbox Log → See both entries → Delete one in Notion | 2 minutes | ✅ PASS |
| Garbled extraction | View original text in Inbox Log → Reply with clarification | 1 minute | ✅ PASS |
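The inbox log that makes these fixes fast is structurally simple. A sketch, assuming a dict-per-entry log; the field names echo the trust mechanisms table (transparency, calibration, control) but are otherwise illustrative.

```python
# Every action gets a log entry with its input, output, and confidence,
# so a wrong filing is traceable and a correction is one call.
inbox_log: list[dict] = []

def log_action(original: str, category: str, confidence: float) -> dict:
    entry = {
        "original": original,        # transparency: what came in
        "category": category,        # what the system did with it
        "confidence": confidence,    # calibration: how sure it was
        "status": "filed" if confidence >= 0.6 else "needs_review",
    }
    inbox_log.append(entry)
    return entry

def fix(entry: dict, correct_category: str) -> dict:
    entry["category"] = correct_category   # control: corrections are trivial
    entry["status"] = "fixed"
    return entry

e = log_action("plan Q3 offsite", "ideas", 0.7)
fix(e, "projects")   # the 30-second "wrong category" path from the table
```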
Coditect Equivalent:
| Scenario | Trace + Fix Path | Target Time |
|---|---|---|
| Wrong code generated | View decision log → See confidence + rationale → Override | < 2 min |
| Test failure | View provenance → Trace to requirement → Adjust spec | < 5 min |
| Compliance gap | View audit trail → Identify missing evidence → Regenerate | < 10 min |
Principle 10: Treat Prompts Like APIs, Not Creative Writing
The Rule: Reliable beats creative in automated systems.
API-Style Prompt:
You are a classification system. Given input text, return JSON only.
Schema:
{
  "category": "people" | "projects" | "ideas" | "admin",
  "title": "string (max 50 chars)",
  "confidence": 0.0-1.0,
  "extracted_fields": { ... }
}
Rules:
- Return ONLY valid JSON
- No explanation, no markdown
- If uncertain, set confidence < 0.6
Input: {user_message}
Anti-pattern: "Please help me categorize this thoughtfully and explain your reasoning..."
Design Test: Is your prompt output parseable by JSON.parse() without any cleanup?
| Prompt Style | Example Output | JSON.parse() Result | Pass/Fail |
|---|---|---|---|
| API-style | {"category":"projects","title":"Website relaunch","confidence":0.87} | ✅ Parses | ✅ PASS |
| Creative | I think this is a project. Here's my analysis... | ❌ Error | ❌ FAIL |
| Markdown-wrapped | ```json\n{"category":"projects"}\n``` | ❌ Error (needs cleanup) | ❌ FAIL |
| Explanatory | {"category":"projects"} // This seems like a project because... | ❌ Error | ❌ FAIL |
Coditect Prompt Validation:
// Test every agent prompt
const output = await agent.execute(input);
try {
  const parsed = JSON.parse(output);
  assert(parsed.confidence !== undefined);
  assert(parsed.files !== undefined);
  console.log("✅ PASS: Output is valid JSON");
} catch (e) {
  console.log("❌ FAIL: Output requires cleanup");
}
Category 5: Output Design
Principle 11: Make Outputs Small, Frequent, and Actionable
The Rule: Non-engineers want a top-3 list that fits on a phone screen.
Constraints:
| Output | Word Limit | Frequency | Format |
|---|---|---|---|
| Daily digest | ~150 words | Every morning | 3 actions + 1 insight |
| Weekly review | ~250 words | Every Sunday | Summary + 3 next-week actions |
| Confirmation | ~30 words | Per capture | Filed to X, confidence Y |
Why Small:
- Reduces cognitive load
- Increases read-through rate
- Improves follow-through
- Builds trust through consistent delivery
Anti-pattern: Weekly 2,000-word analysis that no one reads.
Design Test: Does the output fit on a phone screen without scrolling?
| Output | Character Count | Phone Screen (≈500 chars visible) | Pass/Fail |
|---|---|---|---|
| Daily digest (150 words) | ~900 chars | Slight scroll, acceptable | ✅ PASS |
| Weekly review (250 words) | ~1500 chars | 3 screens, still scannable | ✅ PASS |
| Confirmation (30 words) | ~180 chars | No scroll | ✅ PASS |
| Detailed analysis (2000 words) | ~12000 chars | 24 screens | ❌ FAIL |
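The word budgets from the constraints table can be enforced before anything is sent. A minimal sketch; the sample digest text is invented for illustration.

```python
# Check generated output against the per-type word limits before delivery.
LIMITS = {"daily_digest": 150, "weekly_review": 250, "confirmation": 30}

def within_budget(kind: str, text: str) -> bool:
    return len(text.split()) <= LIMITS[kind]

digest = ("Top 3 today: email Sarah, draft summary, book dentist. "
          "Insight: mornings are your best writing window.")
print(within_budget("daily_digest", digest))  # True
```

A failing check would trigger a re-summarization pass rather than shipping an unreadable wall of text.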
Coditect Application:
| Output | Target | Rationale |
|---|---|---|
| Sprint summary | ≤ 250 words | Fits in Slack message |
| Commit message | ≤ 72 chars subject | Git convention |
| PR description | ≤ 500 words | Reviewable in 2 min |
| Status notification | ≤ 30 words | Glanceable |
Principle 12: Optimize for Maintainability Over Cleverness
The Rule: Moving parts are failure points.
Maintainability Checklist:
- Fewer tools (4 max in the stack)
- Clear logs (every action visible)
- Easy reconnects (5-minute fix for auth issues)
- Minimal custom code (prefer no-code where possible)
- Documented workflows (someone else could fix it)
Anti-pattern: Beautiful, elegant system with 12 integrations that breaks mysteriously.
Design Test: If your Slack token expires, how long to fix? Target: < 5 minutes.
| Failure Scenario | Second Brain Fix Path | Time | Pass/Fail |
|---|---|---|---|
| Slack token expired | Zapier → Reconnect Slack → Test | 3 min | ✅ PASS |
| Notion permissions changed | Zapier → Reconnect Notion → Verify page access | 4 min | ✅ PASS |
| Claude API key rotated | Zapier → Update AI action credentials | 2 min | ✅ PASS |
| Zapier workflow disabled | Zapier → Dashboard → Turn on | 1 min | ✅ PASS |
| Complex custom integration breaks | Debug code → Find issue → Deploy fix | 30+ min | ❌ FAIL |
Coditect Maintainability Targets:
| Component | Recovery Target |
|---|---|
| LLM API key rotation | < 5 min |
| FoundationDB reconnect | < 2 min (automatic) |
| Agent restart after crash | < 1 min (checkpoint restore) |
| Full system cold start | < 10 min |
Principle Summary Matrix
| # | Principle | Category | Key Metric | Second Brain Status |
|---|---|---|---|---|
| 1 | One reliable behavior | Human-Centric | User actions ≤ 1 | ✅ 1 verb: "Post" |
| 2 | Safe defaults when uncertain | Human-Centric | Low-confidence → queue | ✅ < 0.6 → review |
| 3 | Design for restart | Human-Centric | Re-engagement < 15 min | ✅ ~10 min |
| 4 | Separate memory/compute/interface | Architecture | Layers independently swappable | ✅ All 3 swappable |
| 5 | Core loop → modules | Architecture | MVP = 5 components | ✅ 5 boxes |
| 6 | Minimal categories and fields | Data | Categories ≤ 4, fields ≤ 6 | ✅ 4 categories |
| 7 | Next action as execution unit | Data | All actions executable | ✅ 30-min test |
| 8 | Route, don't organize | Data | Zero user taxonomy decisions | ✅ 0 decisions |
| 9 | Trust mechanisms | Trust | Errors traceable in < 2 min | ✅ Fix in 30 sec |
| 10 | Prompts as APIs | Trust | Output = valid JSON | ✅ JSON.parse() works |
| 11 | Small, frequent outputs | Output | Daily ≤ 150 words | ✅ Phone-readable |
| 12 | Maintainability > cleverness | Output | Auth fix < 5 min | ✅ 3 min recovery |
Application to Other Systems
These principles generalize beyond second brains to any human-AI workflow system:
- CRM Automation: Principles 1, 4, 6, 8, 10
- Meeting Notes Processing: Principles 1, 5, 7, 11, 12
- Email Triage: Principles 2, 3, 8, 9, 10
- Task Management: Principles 3, 6, 7, 11, 12
- Knowledge Base Maintenance: Principles 4, 5, 6, 8, 9
- Autonomous Development (Coditect): All 12 principles apply