Glossary of Technical Terms
From Generation Clock Pattern — Patterns of Distributed Systems (Unmesh Joshi)
Generation
A single numeric value representing one leadership epoch in a distributed cluster. Each generation corresponds to exactly one leader's tenure. When a new leader is elected, the generation increments by one. All log entries created during a leader's tenure are tagged with that leader's generation number, enabling distributed nodes to determine temporal ordering and resolve conflicts. A generation is not a timestamp—it is a logical counter that only increases on leadership transitions.
Example from the book: Neptune leads generation 1, Jupiter leads generation 2, Neptune (re-elected) leads generation 3. Entry B1 tagged with generation 1 loses to entry A1 tagged with generation 2 because 2 > 1.
Generation Clock
A monotonically increasing number that increments with each leadership election in a distributed cluster. Used to order events across multiple leader failures and detect stale requests from old leaders. Each log entry is tagged with the generation of the leader that created it, enabling conflict resolution when entries at the same index exist on different nodes.
Synonyms in other systems: Term (Raft), Epoch (ZooKeeper/Zab, Kafka), Ballot Number (Paxos)
Leader
The single node in a cluster designated to handle all update requests. The leader receives client requests, appends them to its log, and replicates entries to followers. Only the leader can accept new writes; followers forward any write requests they receive to the leader.
Follower
A cluster node that replicates log entries from the leader but does not directly accept client write requests. Followers maintain copies of the log, acknowledge replication messages, and participate in leader elections when the current leader fails.
Log Entry
A single record in the Write-Ahead Log containing an update request. Each entry includes:
- Index: Position in the log (e.g., index 1, index 2)
- Generation: The generation clock value of the leader that created it
- Payload: The actual update data (e.g., "move 40 widgets from Boston to Pune")
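The three fields above can be sketched as a small data type. This is an illustrative Python sketch, not code from the book; the field names follow the glossary's own terminology:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LogEntry:
    """One record in the Write-Ahead Log."""
    index: int        # position in the log, starting at 1
    generation: int   # generation clock value of the creating leader
    payload: str      # the actual update data

entry = LogEntry(index=1, generation=1,
                 payload="move 40 widgets from Boston to Pune")
```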
Log Index
The sequential position of an entry within the Write-Ahead Log. Indexes start at 1 and increment for each new entry. Conflicting entries occur when different nodes have different entries at the same index.
Uncommitted Entry
A log entry that has been appended to the leader's local log but has not yet been replicated to a Majority Quorum of nodes. Uncommitted entries are not applied to the data store and may be overwritten during leader recovery. An entry transitions from uncommitted to committed once acknowledged by a majority.
Committed Entry
A log entry that has been successfully replicated to a Majority Quorum of cluster nodes. Once committed, an entry is guaranteed to survive leader failures and will eventually be applied to all nodes' data stores. Committed entries are never overwritten.
Majority Quorum
The minimum number of nodes required to make a cluster decision, defined as more than half of the total nodes. For a 3-node cluster, quorum is 2; for a 5-node cluster, quorum is 3. Quorum ensures that any two majorities overlap by at least one node, preserving consistency across failures.
Formula: quorum = floor(n/2) + 1 where n = total nodes
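The formula translates directly into code; integer division gives the floor for free. A minimal sketch:

```python
def majority_quorum(n: int) -> int:
    """Minimum number of nodes for a cluster decision: more than half.
    quorum = floor(n/2) + 1"""
    return n // 2 + 1

# 3-node cluster -> quorum of 2; 5-node cluster -> quorum of 3
```

Note that a 4-node cluster needs 3 nodes for quorum, which is why odd cluster sizes are the common choice: adding a fourth node raises the quorum without improving fault tolerance.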
Replication
The process of copying log entries from the leader to follower nodes. The leader sends replication messages containing log entries; followers append these entries to their local logs and send acknowledgments. Replication ensures data survives individual node failures.
Replication Message
A message sent from the leader to followers containing log entries to be replicated. Includes the entries themselves plus metadata like the leader's generation clock. Followers respond with acknowledgments upon successful receipt and storage.
Acknowledgment
A response from a follower to the leader confirming successful receipt and storage of replicated log entries. The leader tracks acknowledgments to determine when entries can be committed (i.e., when a Majority Quorum has acknowledged).
Leader Election
The process by which cluster nodes select a new leader when the current leader fails or becomes unreachable. Elections require participation of a Majority Quorum. The newly elected leader increments the generation clock before accepting requests.
Conflict Resolution
The process of determining which log entry to keep when multiple nodes have different entries at the same log index. Resolution rule: higher generation wins. The winning entry is replicated to all nodes, overwriting conflicting entries.
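The "higher generation wins" rule is a one-line comparison. A minimal sketch (illustrative Python, with entries as plain dicts) using the book's Neptune/Jupiter example:

```python
def resolve_conflict(local: dict, incoming: dict) -> dict:
    """Given two entries at the same log index, keep the one
    created under the higher generation."""
    return incoming if incoming["generation"] > local["generation"] else local

# Entry B1 was created under generation 1, entry A1 under generation 2.
b1 = {"id": "B1", "generation": 1}
a1 = {"id": "A1", "generation": 2}
winner = resolve_conflict(b1, a1)  # A1 wins because 2 > 1
```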
Stale Request
A request originating from an old leader that has been superseded by a new leader election. Detected by comparing the request's generation clock against the node's known current generation. Stale requests (lower generation) are rejected.
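The detection described above can be sketched as follows. This is an illustrative Python fragment (the class and method names are not from the book); it also shows the "track highest seen" rule from the Quick Reference:

```python
class GenerationTracker:
    """Tracks the highest generation a node has observed and
    rejects requests from superseded (stale) leaders."""

    def __init__(self) -> None:
        self.current_generation = 0

    def handle_request(self, request_generation: int) -> bool:
        if request_generation < self.current_generation:
            return False  # stale: the sender has been superseded
        # Remember the highest generation seen so far.
        self.current_generation = request_generation
        return True
```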
Node
A single server in the distributed cluster. Each node maintains its own copy of the log, tracks the current generation, and can serve as either leader or follower. Nodes communicate over the network and may fail independently.
Cluster
A group of interconnected nodes working together to provide a replicated, fault-tolerant service. The cluster coordinates through leader election, log replication, and quorum-based commits to maintain consistency despite individual node failures.
Network Partition
A failure condition where some nodes cannot communicate with others, splitting the cluster into isolated groups. Generation Clock helps resolve conflicts when partitions heal: nodes with lower generations defer to those with higher generations.
Zombie Leader
A former leader that was replaced due to network partition or pause but remains unaware it is no longer leader. When a zombie leader attempts to send requests, other nodes reject them because the zombie's generation is lower than the current generation.
Also known as: Split-brain scenario, Stale leader
High-Water Mark
The index of the latest committed log entry, maintained by the leader. The leader propagates the High-Water Mark to followers via heartbeats. Followers use it to know which entries in their local log are safe to apply to their data store.
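A follower's use of the High-Water Mark can be sketched in a few lines. An illustrative Python fragment, with entries as dicts keyed by the glossary's field names:

```python
def safe_to_apply(log: list[dict], high_water_mark: int) -> list[dict]:
    """A follower applies only entries at or below the
    leader-advertised high-water mark."""
    return [entry for entry in log if entry["index"] <= high_water_mark]

log = [{"index": 1, "payload": "a"},
       {"index": 2, "payload": "b"},
       {"index": 3, "payload": "c"}]  # index 3 not yet committed
applicable = safe_to_apply(log, high_water_mark=2)
```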
Heartbeat
A periodic message sent between nodes to indicate liveness. Leaders send heartbeats to followers; absence of heartbeats triggers failure detection and potentially a new leader election. Heartbeats also carry metadata like the High-Water Mark.
Write-Ahead Log (WAL)
A durable, append-only log where all updates are recorded before being applied to the data store. Provides crash recovery: on restart, a node replays the log to restore state. Foundation for replication—the log is what gets replicated between nodes.
Data Store
The actual storage where committed updates are applied and queries are served from. Updates reach the data store only after the corresponding log entry is committed. Separate from the Write-Ahead Log, which holds the record of updates.
Crash Recovery
The process of restoring a node to a consistent state after an unexpected termination. Involves reading the Write-Ahead Log, identifying uncommitted entries, and coordinating with other nodes (via the new leader) to determine which entries should be kept or discarded.
Overwrite (Log Entry)
Replacing an uncommitted log entry with a different entry at the same index. Occurs during conflict resolution when a higher-generation entry supersedes a lower-generation entry. Only uncommitted entries can be overwritten; committed entries are permanent.
Related Patterns
| Pattern | Relationship to Generation Clock |
|---|---|
| Leader and Followers | Generation Clock is a key requirement; tags all leader actions |
| Majority Quorum | Determines commit threshold; ensures overlap for consistency |
| High-Water Mark | Tracks commit progress; propagated via heartbeats |
| Replicated Log | Uses generation to order entries and resolve conflicts |
| Write-Ahead Log | Foundation for durability; what gets replicated |
| HeartBeat | Failure detection; triggers elections that increment generation |
Quick Reference: Generation Clock Rules
- Increment on election: New leader always gets previous_generation + 1
- Tag all entries: Every log entry stamped with creating leader's generation
- Higher wins conflicts: When same index has different entries, highest generation kept
- Reject stale requests: Any request with generation < current known generation is refused
- Track highest seen: Every node remembers the highest generation it has observed
Coditect-Specific Extensions
Task Nomenclature
The standardized format for identifying tasks in Coditect project plans: [TRACK]-[SEQUENCE]-[DESCRIPTION]. This format enables efficient multi-agent resource allocation, parallel track execution, and human-readable task identification.
Format: [TRACK]-[NNN]-[kebab-case-description]
Example: A-001-setup-authentication-module
Track
A letter-based identifier (A, B, C, ...) representing a parallel work stream in a project. Tracks enable multiple agents to work concurrently on different aspects of a project without conflict. Tasks within different tracks have no implicit ordering relationship.
Common tracks:
- A: Core Architecture
- B: Data Layer
- C: API Layer
- D: Frontend
- E: Infrastructure
- F: Testing
- Z: Maintenance
Sequence
A three-digit zero-padded number (001-999) indicating the position of a task within its track. Tasks with lower sequence numbers in the same track should generally be completed before higher-numbered tasks, though explicit dependencies override this default ordering.
Task ID
The complete identifier for a task, combining track, sequence, and description. The full format is [TRACK]-[NNN]-[description], for example B-003-implement-user-dashboard. Subtasks append ::N to create compound IDs like B-003-implement-user-dashboard::2.
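The full Task ID grammar, including the optional ::N subtask suffix, can be captured with a regular expression. A minimal sketch; the group names are illustrative, not part of the Coditect specification:

```python
import re

TASK_ID = re.compile(
    r"^(?P<track>[A-Z])"                        # track letter (A-Z)
    r"-(?P<seq>\d{3})"                          # three-digit sequence
    r"-(?P<desc>[a-z0-9]+(?:-[a-z0-9]+)*)"      # kebab-case description
    r"(?:::(?P<subtask>\d+))?$"                 # optional subtask index
)

def parse_task_id(task_id: str) -> dict:
    match = TASK_ID.match(task_id)
    if match is None:
        raise ValueError(f"not a valid task ID: {task_id!r}")
    return match.groupdict()
```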
Work Product Reference
A unique identifier for the output of a completed task, enabling traceability from results back to the specific agent, session, and generation that produced them.
Format: wp-[TASK_ID]-gen[N]-[HASH]
Example: wp-A-001-setup-auth-gen1-7f3a2b
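Assembling a reference from its parts is straightforward. A minimal sketch of the wp-[TASK_ID]-gen[N]-[HASH] format (the function name is illustrative):

```python
def work_product_ref(task_id: str, generation: int, content_hash: str) -> str:
    """Build a work product reference: wp-[TASK_ID]-gen[N]-[HASH]."""
    return f"wp-{task_id}-gen{generation}-{content_hash}"
```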
Task Claim
An exclusive lock on a task granting an agent the right to work on it. Claims are time-bounded by a lease and tagged with a generation. Only one active claim exists per task at any time. Claims can be acquired, renewed, released, or superseded.
Lease
A time-bounded grant of task ownership. The lease defines how long a claim is valid before it expires. Agents must periodically renew their lease to maintain their claim. If the lease expires without renewal, other agents may supersede the claim with a new generation.
Default duration: 300 seconds (5 minutes)
Session
A logical grouping of agent activity, typically corresponding to a user's active connection. All claims acquired during a session are tagged with the session ID. Results must be submitted by the same session that acquired the claim. Sessions persist state in Session Memory.
Session Memory
Persistent storage of all task-related activity within a session. Session Memory maintains:
- Task Log: Chronological record of all task events (claims, submissions, completions)
- Active Claims: Currently held claims with expiration times
- Completed Tasks: List of task IDs completed in this session
- Track Progress: Completion statistics per track
- Track Assignments: Which agent is working on each track
Session Memory enables continuity across page refreshes, debugging of agent behavior, and compliance audit trails.
Task Log Entry
A single event recorded in Session Memory, capturing:
- Timestamp and event type (CLAIM_ACQUIRED, RESULT_SUBMITTED, etc.)
- Task identification (task_id, track, sequence)
- Generation and agent information
- Outcome (ACCEPTED, REJECTED) and work product reference
Track Progress
Statistics tracking the completion status of tasks within a track:
- Completed: Tasks with accepted results
- In Progress: Tasks with active claims
- Pending: Tasks not yet claimed
- Failed: Tasks that failed after all retries
Track Assignment
The mapping of which agent is responsible for a given track within a session. By default, one agent handles one track, but high-priority tracks may have multiple agents assigned.
Agent
An autonomous AI worker that executes tasks. Each agent has a unique identifier and belongs to a session. Agents claim tasks, perform work, and submit results. Multiple agents can exist within a single session.
Tenant
The top-level isolation boundary in a multi-tenant system. All task coordination is scoped within a tenant. Generations are independent across tenants—Tenant A's task-101 has no relationship to Tenant B's task-101.
Project
A collection of related tasks within a tenant. Tasks are scoped to projects, enabling multiple independent workstreams within a single tenant.
Claim Key
The composite identifier for a claimable unit of work: (tenant_id, project_id, task_id). Generations are tracked per claim key.
Supersede
The act of replacing an expired claim with a new claim at a higher generation. When an agent's lease expires and another agent claims the task, the old claim is superseded. Results submitted under the old generation will be rejected.
Work Lost
A flag indicating that an agent performed work but the result was rejected due to stale generation. This signals to the system (and potentially the user) that effort was expended but not accepted. Common cause: network partition followed by lease expiration.
Renewal Factor
The fraction of lease duration at which renewal should occur. Typically 0.4 (40%), meaning an agent with a 300-second lease renews at 120 seconds. This provides buffer against network delays.
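The renewal deadline is a simple product of the lease duration and the renewal factor. A minimal sketch using the defaults stated above (300-second lease, factor 0.4):

```python
def renewal_deadline(lease_seconds: float, renewal_factor: float = 0.4) -> float:
    """Seconds after acquisition at which the agent should renew its lease.
    Renewing well before expiry buffers against network delays."""
    return lease_seconds * renewal_factor

# With the default 300-second lease, renewal happens at 120 seconds.
```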
Atomic Compare-and-Swap (CAS)
The fundamental database operation underlying generation-based coordination. Claim updates only succeed if the generation matches expected value. This prevents race conditions when multiple agents attempt to claim or modify the same task.
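The guard can be sketched as an in-memory compare-and-swap over claim keys. This is an illustrative Python model only; a real deployment would express the same condition as an atomic database operation (e.g. a conditional UPDATE whose WHERE clause checks the expected generation):

```python
import threading

class ClaimStore:
    """In-memory sketch of generation-guarded claim updates."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._generations: dict[tuple, int] = {}  # claim key -> generation

    def claim(self, claim_key: tuple, expected_generation: int) -> bool:
        """Succeed only if the stored generation matches the expected value;
        on success, advance the generation (supersede any prior claim)."""
        with self._lock:
            current = self._generations.get(claim_key, 0)
            if current != expected_generation:
                return False  # another agent advanced the generation first
            self._generations[claim_key] = current + 1
            return True
```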
Pattern Relationship Diagram
┌─────────────────────────────────────────────────────────────────────────┐
│ CODITECT TASK COORDINATION MODEL │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ TENANT │────────▶│ PROJECT │ │
│ └─────────────┘ └──────┬──────┘ │
│ │ │
│ ┌─────────────┼─────────────┐ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ TRACK A │ │ TRACK B │ │ TRACK C │ Parallel Streams │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ │
│ │ │ │ │
│ ┌────┴────┐ ┌────┴────┐ ┌────┴────┐ │
│ │A-001-..│ │B-001-..│ │C-001-..│ Task Nomenclature │
│ │A-002-..│ │B-002-..│ │C-002-..│ │
│ │A-003-..│ │B-003-..│ │C-003-..│ │
│ └────┬────┘ └────┬────┘ └────┬────┘ │
│ │ │ │ │
│ ┌─────────────┐ │ │ │ │
│ │ SESSION │─┴────────────┴────────────┘ │
│ └──────┬──────┘ │
│ │ │
│ ├──────────────────────────────────────────┐ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ AGENT │ │ SESSION │ │
│ │ │ │ MEMORY │ │
│ └──────┬──────┘ └──────┬──────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ CLAIM │ │ TASK LOG │ │
│ │ (gen=N) │◀─────────────────────────▶│ ENTRIES │ │
│ └──────┬──────┘ └─────────────┘ │
│ │ │
│ ┌─────┴─────┬────────────────┐ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌───────┐ ┌─────────┐ ┌──────────────┐ │
│ │GENERA-│ │ LEASE │ │ RESULT │ │
│ │ TION │ │(timeout)│ │ (wp-ref) │ │
│ │ CLOCK │ └─────────┘ └──────────────┘ │
│ └───────┘ │
│ │
└────────────────────────────────────────────────────────────────────────┘
Task ID Structure
B-003-implement-user-dashboard::2
│ │ │ │
│ │ │ └── Subtask Index (optional)
│ │ │
│ │ └── Description (kebab-case, human-readable)
│ │
│ └──── Sequence (001-999, ordering within track)
│
└────── Track (A-Z, parallel work stream)
Work Product Reference Structure
wp-B-003-implement-user-dashboard-gen2-7f3a2b
│ │ │ │
│ │ │ └── Hash (uniqueness)
│ │ │
│ │ └── Generation that produced it
│ │
│ └── Full task ID
│
└── Work product prefix
Source: Patterns of Distributed Systems by Unmesh Joshi, Addison-Wesley (Martin Fowler Signature Series), 2024
Extended for Coditect multi-agent coordination with Task Nomenclature and Session Memory, January 2026