design a laboratory process workflow from sample receipt in warehouse to laboratory processing step by step
Here is a generic, standards-aligned end‑to‑end workflow you can adapt into SOPs or a LIMS/agent flow, starting at warehouse receipt and ending at laboratory processing.
1. Warehouse receipt and intake
- Receive shipment
- Initial documentation
- Triage and storage
- Handover to laboratory
2. Laboratory reception and accessioning
- Sample receipt at lab
- Identification and reconciliation
- For each container, verify that sample ID, patient/subject ID (if applicable), and test requests on the label match the request form or electronic order.
- Investigate and document discrepancies (mismatched IDs, missing labels, incomplete forms) using a reconciliation form and defined escalation path.
- Acceptance/rejection decision
- Accessioning and barcoding
- Sorting and prioritization
3. Pre‑analytical processing
- Pre‑processing checks
- Confirm test orders, required sample type, and any special handling (light protection, immediate centrifugation, biosafety level).
- Verify that storage time and conditions since collection remain within defined stability windows.
- Centrifugation and separation (where applicable)
- Load tubes into centrifuges according to test requirements (speed, time, temperature, rotor type, balance).
- After centrifugation, separate plasma/serum from cells as required, following biosafety practices.
- Aliquoting
- Prepare secondary tubes or plates for aliquots, each labeled with the accession ID, derivative ID, and destination test/section.
- Aliquot required volumes for different test panels (e.g., chemistry, molecular) to minimize repeated freeze‑thaw and instrument dead volume.
- Sample normalization and preparation (e.g., extraction labs)
- For nucleic acid or specialized workflows, perform extractions using SOP‑defined protocols, recording reagent lots, instruments, and operator IDs.
- Quantify and normalize extracts (e.g., DNA concentration and volume) to specified input requirements for downstream assays.
- Batch assembly and loading
- Group prepared samples into analytical batches by test type, platform, and control strategy (including positive/negative/blank controls and calibrators).
- Load samples and controls onto analyzers or into assay plates, scanning barcodes to register rack/position mapping in the LIS.
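As a worked example of the normalization step, a small helper can compute the C1·V1 = C2·V2 dilution for a given extract; the function name and values below are illustrative, not from any specific SOP.

```python
def normalize_extract(stock_ng_ul: float, target_ng_ul: float, final_ul: float):
    """Return (sample_ul, diluent_ul) to normalize an extract to a target
    concentration and final volume, using C1*V1 = C2*V2."""
    if stock_ng_ul < target_ng_ul:
        # Too dilute to reach the target by dilution; flag for re-extraction.
        raise ValueError("stock below target concentration; cannot dilute up")
    sample_ul = (target_ng_ul * final_ul) / stock_ng_ul  # V1 = C2*V2 / C1
    return round(sample_ul, 2), round(final_ul - sample_ul, 2)

# e.g. an 85 ng/uL stock normalized to 10 ng/uL in a 50 uL final volume
sample_ul, diluent_ul = normalize_extract(85.0, 10.0, 50.0)
```

A batch-assembly script would run this per well and write the resulting volumes to the LIS alongside the plate map.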
4. Analytical phase start (handover point)
- Instrument readiness and QC
- Verify daily/shift quality control results, calibration status, and maintenance logs on each analyzer before running patient/production samples.
- Document QC review and authorization to proceed with the analytical run per ISO 15189 and local policy.
- Start of analytical run
- Initiate the analytical method or assay run (e.g., immunoassay, PCR, HPLC) for each batch, ensuring that LIS tracking links each result to the correct accession ID.
- Monitor runs for flags or instrument alerts; pause and investigate if QC fails or critical errors occur, documenting any corrective actions.
At this point, the workflow transitions from pre‑analytical into the analytical phase (actual testing), which would continue into result validation and reporting.
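The QC authorization gate above can be sketched as a minimal check. A real lab would apply its full Westgard rule set against established control ranges; the single 1-3s rule and the numbers here are illustrative only.

```python
def qc_1_3s_pass(measured: float, mean: float, sd: float) -> bool:
    """Westgard 1-3s rule: reject the run if a control result falls
    outside mean +/- 3 SD."""
    return abs(measured - mean) <= 3 * sd

# A run is authorized only if every control level passes.
# (measured, established mean, established SD) -- made-up values:
controls = [(102.0, 100.0, 2.0), (48.5, 50.0, 1.5)]
run_authorized = all(qc_1_3s_pass(m, mu, sd) for m, mu, sd in controls)
```

The documented authorization decision (who reviewed, which rules applied) would be recorded alongside this boolean per ISO 15189.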
Optional: prompt you can reuse to go deeper
To design this as a detailed BPMN/SOP with roles and systems, you could next ask:
“Design a BPMN‑level process for this end‑to‑end laboratory workflow, including roles, systems (LIS/WMS), data artifacts, and ISO 15189 control points, suitable for implementation as an agentic multi‑agent orchestration.”
“Design a state machine for sample lifecycle from warehouse receipt to final disposition, with states: Received, In Warehouse, Released, In Transit, Received at Lab, Pre‑analytical, Ready for Analysis, Analyzed, Reviewed, Reported, Archived, Disposed.”
“Generate a LIMS‑agnostic workflow schema (YAML/JSON) for sample receipt and processing, including roles (warehouse operator, lab receptionist, technician, reviewer), required data fields, and error‑handling branches for rejected or non‑conforming samples.”
“Propose an event‑driven architecture (Kafka/Pulsar) for lab workflow orchestration, where each step (e.g., ‘sample scanned at warehouse’, ‘LIMS registration completed’) emits a domain event consumed by agents responsible for next‑step scheduling and notifications.”
Here are all three deliverables. I've generated complete, production-ready specifications.
1. State Machine (XState/Statecharts-compatible JSON)
The state machine defines 14 states with conditional transitions, entry actions, and two terminal states (Disposed, Rejected) plus a recoverable error state (NonConforming).
Key design decisions:
- Guarded transitions: Every branch uses a `cond` guard (e.g., `manifestValid`, `qcPassed`) so the orchestrator can evaluate context before transitioning
- Entry actions: Each state fires actions on entry (e.g., `notifyWarehouseOps`, `createWorkOrder`) for side effects
- NonConforming as catch-all: Any state can route here via failure events; investigation determines reprocess vs. dispose
- Retest loop: `Archived` can transition back to `PreAnalytical` on `RETEST_REQUEST` for add-on testing
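A minimal sketch of how an orchestrator might evaluate such a guard before committing a transition. Only the guard names (`manifestValid`, `qcPassed`) come from the config; the registry and context keys are assumptions.

```python
# Guard registry: name -> predicate over the sample's context dict.
GUARDS = {
    "manifestValid": lambda ctx: ctx.get("manifest_checked") and not ctx.get("discrepancies"),
    "qcPassed": lambda ctx: ctx.get("qc_status") == "pass",
}

def can_transition(guard_name: str, context: dict) -> bool:
    """Evaluate the named guard against context; unguarded transitions pass."""
    guard = GUARDS.get(guard_name)
    return bool(guard(context)) if guard else True
```

The orchestrator calls `can_transition` before persisting the new state, so a failed guard simply leaves the sample where it is.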
2. LIMS-Agnostic Workflow Schema (YAML)
The YAML schema is portable across LIMS platforms and includes roles, data models, step definitions, validations, and error-handling branches.
Structure highlights:
| Section | Purpose |
|---|---|
| `roles` | 5 roles with explicit permissions and notification subscriptions |
| `data_fields` | Typed schemas for sample, transfer, work_order, nonconformance |
| `steps` | 11 workflow steps with on_success/on_failure branches emitting domain events |
| `error_handling` | nonconformance_workflow with disposition options and 24h escalation timer |
Validation rules use a simple DSL (`not_null`, `equals`, `in`) that maps cleanly to JSON Schema or policy engines like OPA.
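A minimal Python evaluator for that DSL could look like this; the rule encoding (`op`/`field` keys) is an assumption about how the YAML rules might be represented.

```python
def validate(rule: dict, record: dict) -> bool:
    """Evaluate one DSL rule (not_null, equals, in) against a record."""
    op, field = rule["op"], rule["field"]
    value = record.get(field)
    if op == "not_null":
        return value is not None
    if op == "equals":
        return value == rule["value"]
    if op == "in":
        return value in rule["values"]
    raise ValueError(f"unknown op: {op}")

# Example rules and record (illustrative field names):
rules = [
    {"op": "not_null", "field": "sample_id"},
    {"op": "in", "field": "sample_type", "values": ["serum", "plasma", "whole_blood"]},
]
record = {"sample_id": "S123", "sample_type": "plasma"}
all_valid = all(validate(r, record) for r in rules)
```

Because each rule is data, the same rules can be compiled to JSON Schema or Rego rather than interpreted in Python.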
3. Event-Driven Architecture (Kafka/Pulsar)
The architecture spec defines topics, Avro schemas, 15 domain events, 6 consumer groups (agents), and 8 producer microservices.
Topics
| Topic | Partition Key | Purpose |
|---|---|---|
| `lab.sample.events` | `sample_id` | All state-change events (event sourcing log) |
| `lab.workflow.commands` | `sample_id` | Inbound commands triggering transitions |
| `lab.notifications` | `recipient_id` | Outbound notifications |
| `lab.audit.log` | — | Immutable compliance audit trail (1-year retention, compacted) |
| `lab.dlq` | — | Dead-letter queue for failed events |
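As a sketch of what these topics might carry, here is a hypothetical event envelope serialized for Kafka, keyed by `sample_id` as in the table above (plain JSON rather than Avro, to stay dependency-free; the exact schema is an assumption).

```python
import json
import uuid
from dataclasses import dataclass, asdict

@dataclass
class DomainEvent:
    event_id: str       # used for consumer-side dedup
    event_type: str     # e.g. "lims.registration_ok"
    sample_id: str      # also the partition key
    payload: dict
    occurred_at: str    # ISO-8601 timestamp

def to_kafka_record(event: DomainEvent) -> tuple[bytes, bytes]:
    """Key by sample_id so all events for one sample stay ordered on one partition."""
    return event.sample_id.encode(), json.dumps(asdict(event)).encode()

evt = DomainEvent(str(uuid.uuid4()), "lims.registration_ok", "S123",
                  {"lab_id": "LAB1"}, "2025-01-01T00:00:00Z")
key, value = to_kafka_record(evt)
```

Keying on `sample_id` is what makes per-sample event ordering (and therefore event sourcing) work without global ordering guarantees.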
Consumer Groups (Agents)
- workflow-orchestrator: Validates transitions against state machine, persists state, handles sagas
- notification-agent: Routes alerts per event type → channels (email, SMS, push)
- scheduler-agent: Manages retention timers, SLA escalations
- audit-agent: Writes tamper-evident audit log
- lims-sync-agent: Bi-directional sync with external LIMS
- analytics-agent: Streams to data warehouse for TAT dashboards
Patterns enabled
- Event sourcing: Rebuild sample state by replaying `lab.sample.events`
- Saga orchestration: Compensating events on failure (e.g., `lims.registration_rollback`)
- Transactional outbox: Ensures exactly-once delivery to Kafka
- Idempotency: Dedup on `event_id` with 5-minute window
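The idempotency pattern can be sketched as a small in-memory dedup window; a production agent would back this with Redis or a database, so treat this as illustrative.

```python
import time
from typing import Optional

class DedupWindow:
    """Drop duplicate event_ids seen within a sliding time window."""

    def __init__(self, window_s: float = 300.0):  # 5-minute window
        self.window_s = window_s
        self._seen: dict = {}  # event_id -> first-seen timestamp

    def is_duplicate(self, event_id: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        # Evict expired entries so the map stays bounded.
        self._seen = {k: t for k, t in self._seen.items() if now - t < self.window_s}
        if event_id in self._seen:
            return True
        self._seen[event_id] = now
        return False
```

The consumer checks `is_duplicate(event["event_id"])` before applying a transition, making redelivery from Kafka safe.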
Generated Files
| File | Format | Content |
|---|---|---|
| `sample_lifecycle_state_machine.json` | JSON | XState-compatible state machine |
| `sample_workflow_schema.yaml` | YAML | LIMS-agnostic workflow schema |
| `lab_event_architecture.json` | JSON | Kafka/Pulsar EDA spec |
Suggested Next Steps
Which direction do you want to take this?
- TypeScript/Go implementation: Generate XState machine code or a Go state machine using `github.com/looplab/fsm` with the JSON as config
- Terraform/Pulumi for Kafka topics: IaC to provision the topic topology with schema registry
- OpenAPI spec for the command/event schemas: Generate REST/gRPC APIs for each microservice
- OPA/Rego policies: Translate the validation rules into policy-as-code for runtime enforcement
- Temporal/Restate workflow: Map this to a durable execution framework instead of raw Kafka
Let me know your target runtime (Kubernetes, Cloud Run, etc.) and I can tighten the specs accordingly.
BOTH GxP‑style SOP language and pure software‑workflow DSL.
Define events and triggers for each state transition in the sample lifecycle. Add states for post-analysis like Analyzed, Reported, and Archived. Implement this state machine in Python using classes. Create a diagram visualizing the sample state machine. Handle exceptions and error states like Rejected or Quarantined.
Below is a compact, copy‑pasteable package that gives you:
- GxP‑style SOP wording of states/events
- A software‑workflow DSL for the state machine (JSON/YAML)
- A Python class‑based implementation
- A text diagram you can feed into a diagram tool
- Clear branching points for TS/Go, Terraform, OpenAPI, OPA, Temporal directions
All elements extend the lifecycle with Analyzed → Reviewed → Reported → Archived → Disposed, plus Rejected and Quarantined/NonConforming.
1. GxP‑style SOP: States, Events, Triggers
This is written like a GLP/GMP SOP section for “Sample Lifecycle Management.”
1.1 States (GxP wording)
- Received Definition: Sample has physically arrived at the warehouse or central receiving dock and is pending manifest verification. Entry criteria: Shipment delivered; transport container unopened. Exit criteria: Manifest verified or sample rejected.
- In Warehouse Definition: Sample is stored in warehouse staging under defined environmental conditions, awaiting release to the laboratory. Entry criteria: Manifest verification completed with no critical discrepancies. Exit criteria: Sample released for transfer or moved to Quarantined due to storage or integrity issues.
- Released Definition: Sample has been approved for transfer from warehouse to the laboratory. Entry criteria: Acceptance criteria met (integrity, temperature, documentation). Exit criteria: Sample physically dispatched and enters In Transit.
- In Transit Definition: Sample is being transported between warehouse and laboratory under controlled conditions. Entry criteria: Transfer manifest generated; transport container sealed. Exit criteria: Sample received at laboratory or moved to Quarantined for transport deviations.
- Received at Lab Definition: Sample has arrived at the laboratory reception area and is awaiting registration in LIMS. Entry criteria: Lab has scanned barcodes and confirmed manifest delivery. Exit criteria: LIMS registration successful or sample moved to Quarantined.
- Pre‑analytical Definition: Sample is undergoing pre‑analytical processing (e.g., aliquoting, centrifugation, labeling). Entry criteria: LIMS registration successful; work order/test panel assigned. Exit criteria: Pre‑analytical QC passed (Ready for Analysis) or failed (Quarantined).
- Ready for Analysis Definition: Sample is staged and queued for analytical processing on an instrument or method. Entry criteria: Pre‑analytical processing completed with acceptable QC. Exit criteria: Analytical run started and completed (Analyzed).
- Analyzed Definition: Sample has completed the analytical run; raw data and preliminary results are available. Entry criteria: Instrument run completed; data captured. Exit criteria: Technical review completed (Reviewed) or rerun requested (back to Ready for Analysis) or results quarantined for investigation.
- Reviewed Definition: Results have been technically and/or medically reviewed and validated. Entry criteria: QC rules and review checks applied; reviewer signs off. Exit criteria: Results released in a formal report (Reported) or re‑analysis requested (Analyzed).
- Reported Definition: Final results have been released to the customer/EMR or sponsor system. Entry criteria: Approved report generated and successfully transmitted. Exit criteria: Sample archived for retention.
- Archived Definition: Sample (or its aliquots) is stored under defined retention conditions for a specified period. Entry criteria: Reporting completed; archival location assigned. Exit criteria: Retention expired (Disposed) or sample retrieved for retest (Pre‑analytical).
- Disposed Definition: Sample has been destroyed in accordance with biohazard and regulatory requirements. Entry criteria: Retention period expired or quality decision mandates destruction. Exit criteria: None (terminal state).
- Rejected (terminal error state) Definition: Sample did not meet acceptance criteria at initial receipt and will not be processed. Entry criteria: Critical discrepancy at warehouse or lab receipt (e.g., wrong ID, missing documentation). Exit criteria: None; case closure, client notification, and disposition per SOP.
- Quarantined (NonConforming) Definition: Sample is temporarily withheld from further processing due to a detected non‑conformance. Entry criteria: Any failure related to identity, integrity, transport, storage, QC, or data. Exit criteria: Investigation completed resulting in reprocess (back to relevant processing state) or disposal.
1.2 Events and triggers per transition
You can treat each of these as domain events plus local triggers.
- `sample.received` – trigger: warehouse operator logs new arrival; state: None → Received
- `manifest.verified_ok` – trigger: manifest check passes; state: Received → In Warehouse
- `manifest.verified_fail` – trigger: mismatched IDs/contents; state: Received → Rejected
- `warehouse.release_approved` – trigger: acceptance criteria met; state: In Warehouse → Released
- `warehouse.acceptance_failed` – trigger: integrity/temp not acceptable; state: In Warehouse → Quarantined
- `transport.started` – trigger: shipment dispatched; state: Released → In Transit
- `transport.lab_scan_ok` – trigger: lab scan, integrity ok; state: In Transit → Received at Lab
- `transport.integrity_failed` – trigger: broken seal, temp deviation, damage; state: In Transit → Quarantined
- `lims.registration_ok` – trigger: LIMS registration success; state: Received at Lab → Pre‑analytical
- `lims.registration_failed` – trigger: unresolvable registration errors; state: Received at Lab → Quarantined
- `preanalytical.qc_pass` – trigger: pre‑analytical QC passes; state: Pre‑analytical → Ready for Analysis
- `preanalytical.qc_fail` – trigger: QC fails; state: Pre‑analytical → Quarantined
- `analysis.completed` – trigger: instrument run completed; state: Ready for Analysis → Analyzed
- `review.completed_valid` – trigger: reviewer validates results; state: Analyzed → Reviewed
- `review.rerun_required` – trigger: reviewer requires rerun; state: Analyzed → Ready for Analysis
- `review.data_invalid` – trigger: data integrity issue; state: Analyzed → Quarantined
- `report.released` – trigger: report generated and delivered; state: Reviewed → Reported
- `archive.stored` – trigger: sample placed into archive; state: Reported → Archived
- `retention.expired` – trigger: retention timer elapsed; state: Archived → Disposed
- `retest.requested` – trigger: authorized retest/add‑on order; state: Archived → Pre‑analytical
- `nc.investigation_reprocess` – trigger: NC investigation allows reprocessing; state: Quarantined → Pre‑analytical
- `nc.investigation_dispose` – trigger: NC requires destruction; state: Quarantined → Disposed
2. Software‑workflow DSL for State Machine (JSON/YAML)
This is a reduced, copy‑pasteable JSON that matches the above, including Rejected and Quarantined. It’s compatible in spirit with XState/statecharts.
{
  "id": "sampleLifecycle",
  "version": "1.1.0",
  "initial": "Received",
  "states": {
    "Received": {
      "on": {
        "manifest.verified_ok": "InWarehouse",
        "manifest.verified_fail": "Rejected"
      }
    },
    "InWarehouse": {
      "on": {
        "warehouse.release_approved": "Released",
        "warehouse.acceptance_failed": "Quarantined"
      }
    },
    "Released": {
      "on": {
        "transport.started": "InTransit"
      }
    },
    "InTransit": {
      "on": {
        "transport.lab_scan_ok": "ReceivedAtLab",
        "transport.integrity_failed": "Quarantined"
      }
    },
    "ReceivedAtLab": {
      "on": {
        "lims.registration_ok": "PreAnalytical",
        "lims.registration_failed": "Quarantined"
      }
    },
    "PreAnalytical": {
      "on": {
        "preanalytical.qc_pass": "ReadyForAnalysis",
        "preanalytical.qc_fail": "Quarantined"
      }
    },
    "ReadyForAnalysis": {
      "on": {
        "analysis.completed": "Analyzed"
      }
    },
    "Analyzed": {
      "on": {
        "review.completed_valid": "Reviewed",
        "review.rerun_required": "ReadyForAnalysis",
        "review.data_invalid": "Quarantined"
      }
    },
    "Reviewed": {
      "on": {
        "report.released": "Reported"
      }
    },
    "Reported": {
      "on": {
        "archive.stored": "Archived"
      }
    },
    "Archived": {
      "on": {
        "retention.expired": "Disposed",
        "retest.requested": "PreAnalytical"
      }
    },
    "Disposed": {
      "type": "final"
    },
    "Rejected": {
      "type": "final"
    },
    "Quarantined": {
      "on": {
        "nc.investigation_reprocess": "PreAnalytical",
        "nc.investigation_dispose": "Disposed"
      }
    }
  }
}
If you want YAML instead (for a config repo), this is structurally equivalent and aligns with common FSM libraries.
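Whichever serialization you choose, a quick consistency check over the config catches broken references early. This sketch assumes the JSON shape shown above (an `initial` key plus `states` with `on` maps).

```python
def check_machine(config: dict) -> list:
    """Return a list of problems: undefined targets, bad initial state,
    or final states with outgoing transitions."""
    problems = []
    states = config["states"]
    if config["initial"] not in states:
        problems.append(f"initial state {config['initial']!r} undefined")
    for name, spec in states.items():
        if spec.get("type") == "final" and spec.get("on"):
            problems.append(f"final state {name} has outgoing transitions")
        for event, target in spec.get("on", {}).items():
            if target not in states:
                problems.append(f"{name} --{event}--> {target}: target undefined")
    return problems
```

Running this in CI whenever the config changes keeps the SOP, the DSL, and the running orchestrator from drifting apart.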
3. Python State Machine Implementation (classes)
This is a simple, self‑contained FSM engine using classes and a transition table. It avoids extra deps but mirrors patterns from python‑statemachine.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional, Tuple

State = str
Event = str
# Guards/actions are bound methods on the FSM instance, so they take no arguments.
Guard = Callable[[], bool]
Action = Callable[[], None]

@dataclass
class Transition:
    target: State
    guard: Optional[Guard] = None
    action: Optional[Action] = None

@dataclass
class SampleFSM:
    sample_id: str
    state: State = "Received"
    context: dict = field(default_factory=dict)

    def __post_init__(self):
        self._build_transitions()

    def _build_transitions(self):
        """Define the state transition table."""
        T = self._transition  # alias
        self.transitions: Dict[Tuple[State, Event], Transition] = {
            # Received
            T("Received", "manifest.verified_ok"): Transition("InWarehouse"),
            T("Received", "manifest.verified_fail"): Transition("Rejected", action=self._on_rejected),
            # InWarehouse
            T("InWarehouse", "warehouse.release_approved"): Transition("Released"),
            T("InWarehouse", "warehouse.acceptance_failed"): Transition("Quarantined", action=self._on_quarantined),
            # Released
            T("Released", "transport.started"): Transition("InTransit"),
            # InTransit
            T("InTransit", "transport.lab_scan_ok"): Transition("ReceivedAtLab"),
            T("InTransit", "transport.integrity_failed"): Transition("Quarantined", action=self._on_quarantined),
            # ReceivedAtLab
            T("ReceivedAtLab", "lims.registration_ok"): Transition("PreAnalytical"),
            T("ReceivedAtLab", "lims.registration_failed"): Transition("Quarantined", action=self._on_quarantined),
            # PreAnalytical
            T("PreAnalytical", "preanalytical.qc_pass"): Transition("ReadyForAnalysis"),
            T("PreAnalytical", "preanalytical.qc_fail"): Transition("Quarantined", action=self._on_quarantined),
            # ReadyForAnalysis
            T("ReadyForAnalysis", "analysis.completed"): Transition("Analyzed"),
            # Analyzed
            T("Analyzed", "review.completed_valid"): Transition("Reviewed"),
            T("Analyzed", "review.rerun_required"): Transition("ReadyForAnalysis"),
            T("Analyzed", "review.data_invalid"): Transition("Quarantined", action=self._on_quarantined),
            # Reviewed
            T("Reviewed", "report.released"): Transition("Reported"),
            # Reported
            T("Reported", "archive.stored"): Transition("Archived"),
            # Archived
            T("Archived", "retention.expired"): Transition("Disposed"),
            T("Archived", "retest.requested"): Transition("PreAnalytical"),
            # Quarantined
            T("Quarantined", "nc.investigation_reprocess"): Transition("PreAnalytical"),
            T("Quarantined", "nc.investigation_dispose"): Transition("Disposed", action=self._on_disposed),
        }
        self.final_states = {"Disposed", "Rejected"}

    @staticmethod
    def _transition(state: State, event: Event) -> Tuple[State, Event]:
        return (state, event)

    # Public API
    def can_handle(self, event: Event) -> bool:
        return (self.state, event) in self.transitions

    def handle(self, event: Event):
        if (self.state, event) not in self.transitions:
            raise ValueError(f"Invalid transition: state={self.state}, event={event}")
        trans = self.transitions[(self.state, event)]
        if trans.guard and not trans.guard():
            raise PermissionError(f"Guard blocked transition: state={self.state}, event={event}")
        prev_state = self.state
        self.state = trans.target
        if trans.action:
            trans.action()
        # Optional: audit hook
        self._audit(prev_state, event, self.state)

    # Hooks / actions
    def _on_rejected(self):
        self.context["rejected"] = True

    def _on_quarantined(self):
        self.context.setdefault("nonconformances", 0)
        self.context["nonconformances"] += 1

    def _on_disposed(self):
        self.context["disposed"] = True

    def _audit(self, prev_state: State, event: Event, new_state: State):
        # Replace with real logging / event publishing
        print(f"[AUDIT] sample={self.sample_id} {prev_state} --{event}--> {new_state}")

if __name__ == "__main__":
    fsm = SampleFSM(sample_id="S123")
    fsm.handle("manifest.verified_ok")
    fsm.handle("warehouse.release_approved")
    fsm.handle("transport.started")
    fsm.handle("transport.lab_scan_ok")
    fsm.handle("lims.registration_ok")
    fsm.handle("preanalytical.qc_pass")
    fsm.handle("analysis.completed")
    fsm.handle("review.completed_valid")
    fsm.handle("report.released")
    fsm.handle("archive.stored")
    fsm.handle("retention.expired")
You can swap the internal transition table with JSON loading to keep it in sync with an external DSL, similar to how XState builds machines from JSON.
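One way to do that swap, sketched under the assumption that the JSON keeps the shape shown in section 2 (`initial`, `states`, `on` maps, `"type": "final"`):

```python
import json

def load_transitions(config_json: str):
    """Build the (state, event) -> target table from the JSON DSL."""
    config = json.loads(config_json)
    table = {}
    for state, spec in config["states"].items():
        for event, target in spec.get("on", {}).items():
            table[(state, event)] = target
    finals = {s for s, spec in config["states"].items() if spec.get("type") == "final"}
    return config["initial"], table, finals

# Usage (filename from the generated-files table above):
# initial, table, finals = load_transitions(open("sample_lifecycle_state_machine.json").read())
```

`SampleFSM._build_transitions` could then populate `self.transitions` from this table instead of hard-coding it, leaving only guards and actions defined in Python.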
4. Text Diagram for Visualization
Feed this into Graphviz, Mermaid, Structurizr, etc.
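For example, a Mermaid `stateDiagram-v2` reconstructed from the transition table in section 2:

```mermaid
stateDiagram-v2
    [*] --> Received
    Received --> InWarehouse: manifest.verified_ok
    Received --> Rejected: manifest.verified_fail
    InWarehouse --> Released: warehouse.release_approved
    InWarehouse --> Quarantined: warehouse.acceptance_failed
    Released --> InTransit: transport.started
    InTransit --> ReceivedAtLab: transport.lab_scan_ok
    InTransit --> Quarantined: transport.integrity_failed
    ReceivedAtLab --> PreAnalytical: lims.registration_ok
    ReceivedAtLab --> Quarantined: lims.registration_failed
    PreAnalytical --> ReadyForAnalysis: preanalytical.qc_pass
    PreAnalytical --> Quarantined: preanalytical.qc_fail
    ReadyForAnalysis --> Analyzed: analysis.completed
    Analyzed --> Reviewed: review.completed_valid
    Analyzed --> ReadyForAnalysis: review.rerun_required
    Analyzed --> Quarantined: review.data_invalid
    Reviewed --> Reported: report.released
    Reported --> Archived: archive.stored
    Archived --> Disposed: retention.expired
    Archived --> PreAnalytical: retest.requested
    Quarantined --> PreAnalytical: nc.investigation_reprocess
    Quarantined --> Disposed: nc.investigation_dispose
    Rejected --> [*]
    Disposed --> [*]
```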
5. Where to go next (direction options)
Given your list, here’s how I’d prioritize directions for a regulated lab platform:
- OpenAPI spec for command/event schemas
- Temporal/Restate workflow
- Lift the FSM into a durable orchestrator so retries, timers (retention, SLA), and sagas are handled centrally.
- Each event becomes a signal, each long‑running wait (e.g., transport, retention) becomes a timer.41
- OPA/Rego policies for GxP rules
- TypeScript XState or Go
looplab/fsm - Terraform/Pulumi for Kafka
If you tell me which one you want next (e.g., “Temporal workflow in Go” or “OpenAPI for /samples/{id}/events”), I’ll generate a focused, production‑oriented spec + starter code for that path.
Provide full GxP SOP example for complete sample lifecycle
Generate Python coroutine code for the sample state machine.
Create XState JSON config for sample lifecycle FSM.
What error handling and exceptions for state transitions?
Design validation and approval workflows for Reported state.
Below is a compact package you can drop into specs and code: a full GxP‑style SOP section, a coroutine‑based Python FSM, XState JSON, and explicit error/validation logic.
1. GxP SOP: Sample Lifecycle (Full Example)
Title: SOP – Sample Receipt, Handling, Analysis, Reporting, Archival, and Disposal
1. Purpose
To describe the standardized and controlled procedure for the receipt, handling, analysis, reporting, archival, and disposal of samples in order to ensure data integrity, traceability, and compliance with applicable GxP regulations.
2. Scope
This procedure applies to all samples received, processed, analyzed, reported, archived, or disposed of in the laboratory, including clinical, stability, validation, and investigational samples.
3. Responsibilities
- Warehouse Operator: Receives shipments, verifies manifests, stores and releases samples.
- Laboratory Receptionist: Receives samples at the lab, performs identity/integrity checks, registers samples in LIMS.
- Technician/Analyst: Performs pre‑analytical preparation, executes analytical methods, documents raw data.
- Reviewer (Supervisor/QA/Clinical): Reviews and approves results and reports.
- Quality Unit: Manages deviations, non‑conformances, CAPA, and approves critical decisions (e.g., sample destruction).
4. Definitions (States)
- Received, In Warehouse, Released, In Transit, Received at Lab, Pre‑analytical, Ready for Analysis, Analyzed, Reviewed, Reported, Archived, Disposed, Rejected, Quarantined (Non‑conforming) — as defined in the previous answer; language aligns with GLP/GMP expectations (purpose, entry criteria, exit criteria).
5. Procedure
5.1 Sample Receipt at Warehouse
- 5.1.1 Upon arrival, the Warehouse Operator receives the shipment and verifies the shipping documentation against the manifest (sample IDs, counts, conditions).
- 5.1.2 Any visible damage, leakage, improper labeling, or temperature excursion shall be documented and immediately reported to the Quality Unit.
- 5.1.3 If acceptance criteria are met, the sample is labeled with a unique internal identifier and its state is updated to “Received.”
- 5.1.4 If critical discrepancies are identified, the sample is designated “Rejected”, segregated, and the sponsor is notified.
5.2 Storage in Warehouse
- 5.2.1 Samples with a “Received” status shall be stored under defined environmental conditions (ambient, refrigerated, frozen) according to the protocol or product specification.
- 5.2.2 Storage locations shall be recorded (room, freezer, shelf) and linked to the sample record.
- 5.2.3 If storage conditions cannot be maintained or excursions occur, the sample shall be placed in “Quarantined” state pending investigation.
5.3 Release to Laboratory and Transport
- 5.3.1 Upon receipt of a valid testing request, the Warehouse Operator performs a release assessment (identity, integrity, storage history).
- 5.3.2 If acceptable, the sample is released, and a transfer manifest is generated; the state is updated to “Released.”
- 5.3.3 Samples are packed in validated transport containers with temperature monitoring where required and dispatched to the laboratory (“In Transit” state).
- 5.3.4 Any deviation during transport (e.g., broken seal, improper temperature) triggers placement into “Quarantined” state on receipt.
5.4 Laboratory Receipt and Registration
- 5.4.1 The Laboratory Receptionist verifies the physical samples against the transfer manifest, checks labeling and container integrity.
- 5.4.2 Any discrepancy or integrity issue is documented; affected samples are placed in “Quarantined” state pending Quality Unit review.
- 5.4.3 If acceptance criteria are met, samples are registered in the LIMS with a unique laboratory ID, and the state is updated to “Received at Lab” then “Pre‑analytical.”
5.5 Pre‑analytical Processing
- 5.5.1 Technicians perform pre‑defined pre‑analytical procedures (e.g., aliquoting, centrifugation, filtration) strictly according to approved methods.
- 5.5.2 Each step (e.g., centrifuge settings, aliquot volumes) is documented contemporaneously in controlled records.
- 5.5.3 Pre‑analytical quality checks (volume, hemolysis, container integrity) are performed and recorded; passing samples are set to “Ready for Analysis.”
- 5.5.4 Samples failing pre‑analytical criteria are placed in “Quarantined” state; a deviation/non‑conformance record is initiated.
5.6 Analytical Phase
- 5.6.1 Analysts perform analytical runs using validated methods on qualified and calibrated instruments.
- 5.6.2 System suitability and QC samples are analyzed prior to or along with test samples as defined in the method.
- 5.6.3 Raw data (chromatograms, instrument files) are captured in compliant data systems and linked to sample IDs.
- 5.6.4 Upon completion of analysis, samples are marked “Analyzed”; any instrument or QC failures are handled according to deviation procedures.
5.7 Review and Reporting
- 5.7.1 A qualified Reviewer assesses QC results, system suitability, and data integrity prior to reviewing sample results.
- 5.7.2 Results meeting acceptance criteria are marked “Reviewed” and then “Reported” after generating and approving the formal report.
- 5.7.3 Results failing acceptance criteria may trigger re‑analysis (sample returns to “Ready for Analysis”) or placement of the sample and data into “Quarantined” state for investigation.
5.8 Archival and Retention
- 5.8.1 Following reporting, samples (or aliquots) are transferred to defined archival locations; the state is set to “Archived.”
- 5.8.2 Retention periods and storage conditions are defined by regulatory, protocol, or contract requirements.
- 5.8.3 Samples may be retrieved from archival storage for authorized retesting; their state transitions temporarily back to “Pre‑analytical.”
5.9 Disposal
- 5.9.1 Upon expiry of the retention period, and with Quality Unit approval where required, samples are disposed of according to safety and environmental procedures.
- 5.9.2 The state is updated to “Disposed”; destruction details (date, method, responsible person) are recorded.
- 5.9.3 Disposal records are retained as part of the sample’s permanent documentation.
5.10 Handling Rejected and Quarantined Samples
- 5.10.1 Rejected samples are physically segregated, labeled clearly, and subject to sponsor/clinical guidance for return or destruction; they are never analyzed.
- 5.10.2 Quarantined samples are not processed further until an investigation is completed and a documented disposition (reprocess, accept with deviation, or destroy) is approved by the Quality Unit.
2. Python Coroutine FSM for Sample Lifecycle
This uses a single dispatcher with per‑state coroutines, following the coroutine‑FSM pattern.
from typing import Generator, Optional

Event = str
StateName = str

class SampleFSMCoroutine:
    def __init__(self, sample_id: str):
        self.sample_id = sample_id
        self.state_name: StateName = "Received"
        self._state = self._state_received()  # initial coroutine
        next(self._state)  # prime coroutine

    def send(self, event: Event):
        """Send an event to the current state coroutine."""
        try:
            new_state = self._state.send(event)
        except StopIteration:
            raise RuntimeError(f"State coroutine stopped unexpectedly in {self.state_name}")
        if new_state is not None:
            self._transition_to(new_state)

    def _transition_to(self, new_state_name: StateName):
        print(f"[FSM] {self.sample_id}: {self.state_name} -> {new_state_name}")
        self.state_name = new_state_name
        factory = getattr(self, f"_state_{new_state_name.lower()}")
        self._state = factory()
        next(self._state)

    # --- State coroutines ---
    def _state_received(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "manifest.verified_ok":
                yield "InWarehouse"
            elif event == "manifest.verified_fail":
                yield "Rejected"

    def _state_inwarehouse(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "warehouse.release_approved":
                yield "Released"
            elif event == "warehouse.acceptance_failed":
                yield "Quarantined"

    def _state_released(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "transport.started":
                yield "InTransit"

    def _state_intransit(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "transport.lab_scan_ok":
                yield "ReceivedAtLab"
            elif event == "transport.integrity_failed":
                yield "Quarantined"

    def _state_receivedatlab(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "lims.registration_ok":
                yield "PreAnalytical"
            elif event == "lims.registration_failed":
                yield "Quarantined"

    def _state_preanalytical(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "preanalytical.qc_pass":
                yield "ReadyForAnalysis"
            elif event == "preanalytical.qc_fail":
                yield "Quarantined"

    def _state_readyforanalysis(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "analysis.completed":
                yield "Analyzed"

    def _state_analyzed(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "review.completed_valid":
                yield "Reviewed"
            elif event == "review.rerun_required":
                yield "ReadyForAnalysis"
            elif event == "review.data_invalid":
                yield "Quarantined"

    def _state_reviewed(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "report.released":
                yield "Reported"

    def _state_reported(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "archive.stored":
                yield "Archived"

    def _state_archived(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "retention.expired":
                yield "Disposed"
            elif event == "retest.requested":
                yield "PreAnalytical"

    def _state_disposed(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            yield None  # terminal; ignore events or raise

    def _state_rejected(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            yield None  # terminal

    def _state_quarantined(self) -> Generator[Optional[StateName], Event, None]:
        while True:
            event = (yield None)
            if event == "nc.investigation_reprocess":
                yield "PreAnalytical"
            elif event == "nc.investigation_dispose":
                yield "Disposed"

if __name__ == "__main__":
    fsm = SampleFSMCoroutine("S001")
    events = [
        "manifest.verified_ok",
        "warehouse.release_approved",
        "transport.started",
        "transport.lab_scan_ok",
        "lims.registration_ok",
        "preanalytical.qc_pass",
        "analysis.completed",
        "review.completed_valid",
        "report.released",
        "archive.stored",
        "retention.expired",
    ]
    for e in events:
        fsm.send(e)
This implementation follows the coroutine-FSM pattern from Arpit Bhayani’s articles.
3. XState JSON Config for Sample Lifecycle
XState config including actions and basic guards; you can drop this into createMachine(config).
{
"id": "sampleLifecycle",
"initial": "Received",
"context": {
"sampleId": null,
"nonConformances": 0
},
"states": {
"Received": {
"on": {
"MANIFEST_VERIFIED_OK": "InWarehouse",
"MANIFEST_VERIFIED_FAIL": "Rejected"
},
"entry": ["logArrival", "notifyWarehouseOps"]
},
"InWarehouse": {
"on": {
"WAREHOUSE_RELEASE_APPROVED": "Released",
"WAREHOUSE_ACCEPTANCE_FAILED": "Quarantined"
},
"entry": ["assignStorageLocation"]
},
"Released": {
"on": {
"TRANSPORT_STARTED": "InTransit"
},
"entry": ["generateTransferManifest"]
},
"InTransit": {
"on": {
"TRANSPORT_LAB_SCAN_OK": "ReceivedAtLab",
"TRANSPORT_INTEGRITY_FAILED": "Quarantined"
},
"entry": ["logDeparture"]
},
"ReceivedAtLab": {
"on": {
"LIMS_REGISTRATION_OK": "PreAnalytical",
"LIMS_REGISTRATION_FAILED": "Quarantined"
},
"entry": ["verifyAgainstManifest"]
},
"PreAnalytical": {
"on": {
"PREANALYTICAL_QC_PASS": "ReadyForAnalysis",
"PREANALYTICAL_QC_FAIL": "Quarantined"
},
"entry": ["createWorkOrder"]
},
"ReadyForAnalysis": {
"on": {
"ANALYSIS_COMPLETED": "Analyzed"
},
"entry": ["enqueueInstrumentRun"]
},
"Analyzed": {
"on": {
"REVIEW_COMPLETED_VALID": "Reviewed",
"REVIEW_RERUN_REQUIRED": "ReadyForAnalysis",
"REVIEW_DATA_INVALID": "Quarantined"
},
"entry": ["captureRawResults"]
},
"Reviewed": {
"on": {
"REPORT_RELEASED": "Reported"
},
"entry": ["applyReviewerSignature"]
},
"Reported": {
"on": {
"ARCHIVE_STORED": "Archived"
},
"entry": ["pushReportToClient"]
},
"Archived": {
"on": {
"RETENTION_EXPIRED": "Disposed",
"RETEST_REQUESTED": "PreAnalytical"
},
"entry": ["moveToArchiveStorage"]
},
"Disposed": {
"type": "final",
"entry": ["logDisposal"]
},
"Rejected": {
"type": "final",
"entry": ["logRejection", "notifyClient"]
},
"Quarantined": {
"on": {
"NC_INVESTIGATION_REPROCESS": "PreAnalytical",
"NC_INVESTIGATION_DISPOSE": "Disposed"
},
"entry": ["incrementNonConformance", "notifyQuality"]
}
}
}
You can wire guards via guards: {} and reference them from transitions (e.g., cond: "hasSufficientVolume").
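As a dependency-free sketch of that guard semantics (the guard name `hasSufficientVolume` and the 200 µL threshold are assumptions for the demo, not from any SOP):

```typescript
// Hypothetical guard: only allow a transition when enough sample volume
// remains. The guard name and 200 µL threshold are illustrative assumptions.
type Ctx = { volumeUl: number };

const guards: Record<string, (ctx: Ctx) => boolean> = {
  hasSufficientVolume: (ctx) => ctx.volumeUl >= 200,
};

// Minimal guarded-transition check mirroring XState's `cond` lookup:
// an unguarded transition is always allowed.
function canTransition(guardName: string | undefined, ctx: Ctx): boolean {
  return guardName === undefined || guards[guardName](ctx);
}

console.log(canTransition('hasSufficientVolume', { volumeUl: 500 })); // true
console.log(canTransition('hasSufficientVolume', { volumeUl: 50 }));  // false
```

In the real machine the guard implementation would live in the `guards` option object passed alongside the config.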
4. Error Handling & Exceptions for Transitions
4.1 Categories of errors
- Business rule violations: Attempted transition not allowed from current state (invalid event).
- Data/validation errors: Required data missing (no sample ID, no QC results).
- Technical errors: LIMS/network failures, instrument errors.
- Compliance violations: Unauthorized role, missing e‑signature, or missing audit trail.
4.2 Recommended handling strategies
- Invalid transition: Reject the event with a structured error (e.g., HTTP 409 or a domain error code), leave the state unchanged, and log the attempt in the audit trail.
- Data/validation error: Block the transition, flag the record for correction, and allow re-submission once the required data is present.
- Technical error: Retry transient failures with backoff; if retries are exhausted, escalate to an incident queue without changing sample state.
- Compliance violation (auth/signature): Deny the transition, require re-authentication or a valid e‑signature, and record the denial.
5. Validation & Approval Workflows for Reported State
You can treat Reported as the end of a mini‑workflow composed of sub‑states: Draft → UnderReview → Approved → Released.
5.1 Conceptual workflow
- Pre‑conditions: all required results are present and valid, QC acceptance criteria are met, and no blocking non‑conformances are open.
- Review steps: technical review of raw data, calculations, and QC, followed by QA review of documentation completeness.
- Approval conditions: e‑signatures from authorized reviewer and QA roles, captured with timestamp and meaning of signature.
- Release: the approved report is issued to the client/EMR and the release event is recorded in the audit trail.
5.2 Example review sub‑workflow (DSL snippet)
You can embed this as a nested state of Reviewed in XState:
"Reviewed": {
"initial": "Draft",
"states": {
"Draft": {
"on": {
"SUBMIT_FOR_REVIEW": "UnderReview"
}
},
"UnderReview": {
"on": {
"TECHNICAL_APPROVE": "PendingQA",
"TECHNICAL_REJECT": "#sampleLifecycle.Analyzed"
}
},
"PendingQA": {
"on": {
"QA_APPROVE": "Approved",
"QA_REJECT": "#sampleLifecycle.Quarantined"
}
},
"Approved": {
"on": {
"REPORT_RELEASED": "#sampleLifecycle.Reported"
}
}
}
}
This structure aligns with typical GxP expectations for staged technical and QA review before final report release.
If you tell me which next artifact you want (e.g., “OpenAPI for /samples/{id}/events including Report review sub‑workflow” or “Temporal workflow definition for this FSM”), I’ll generate a focused spec plus starter code.
CREATE a REACT-flow code example prompt to demonstrate the workflow from start to finish
Here’s a focused React Flow + TypeScript example that renders the full sample lifecycle FSM from start (Received) to finish (Disposed/Rejected), ready to drop into a demo app. It follows React Flow’s current API patterns.
1. Prompt you can reuse
Implement a React Flow diagram in TypeScript that visualizes the laboratory sample lifecycle FSM with states: Received, In Warehouse, Released, In Transit, Received at Lab, Pre‑analytical, Ready for Analysis, Analyzed, Reviewed, Reported, Archived, Disposed, Rejected, Quarantined. Connect nodes with labeled edges for each domain event (e.g., manifest.verified_ok, preanalytical.qc_pass). Include initialNodes and initialEdges definitions, and a <SampleLifecycleFlow /> component using ReactFlow, Background, and Controls from @xyflow/react (React Flow v12). Lay out nodes in a left‑to‑right flow and use node colors to highlight terminal states (Disposed/Rejected) and the error state (Quarantined).
2. React Flow TS example (copy‑paste)
// SampleLifecycleFlow.tsx
import React, { useCallback } from 'react';
import {
ReactFlow,
Background,
Controls,
MiniMap,
Position,
MarkerType,
applyNodeChanges,
applyEdgeChanges,
type Node,
type Edge,
type NodeChange,
type EdgeChange,
} from '@xyflow/react';
import '@xyflow/react/dist/style.css';
const nodeWidth = 180;
const nodeHeight = 60;
const makeNode = (
id: string,
label: string,
x: number,
y: number,
style: React.CSSProperties = {}
): Node => ({
id,
position: { x, y },
data: { label },
type: 'default',
sourcePosition: Position.Right,
targetPosition: Position.Left,
style: {
borderRadius: 6,
padding: 10,
fontSize: 12,
...style,
},
});
const initialNodes: Node[] = [
makeNode('Received', 'Received', 0 * nodeWidth, 0),
makeNode('InWarehouse', 'In Warehouse', 1 * nodeWidth, 0),
makeNode('Released', 'Released', 2 * nodeWidth, 0),
makeNode('InTransit', 'In Transit', 3 * nodeWidth, 0),
makeNode('ReceivedAtLab', 'Received at Lab', 4 * nodeWidth, 0),
makeNode('PreAnalytical', 'Pre-analytical', 5 * nodeWidth, 0),
makeNode('ReadyForAnalysis', 'Ready for Analysis', 6 * nodeWidth, 0),
makeNode('Analyzed', 'Analyzed', 7 * nodeWidth, 0),
makeNode('Reviewed', 'Reviewed', 8 * nodeWidth, 0),
makeNode('Reported', 'Reported', 9 * nodeWidth, 0),
makeNode('Archived', 'Archived', 10 * nodeWidth, 0),
// Terminal / error states in a lower row
makeNode('Disposed', 'Disposed', 10 * nodeWidth, 150, {
background: '#ffe0e0',
border: '1px solid #ff4d4f',
}),
makeNode('Rejected', 'Rejected', 2 * nodeWidth, 150, {
background: '#ffe0e0',
border: '1px solid #ff4d4f',
}),
makeNode('Quarantined', 'Quarantined', 6 * nodeWidth, 150, {
background: '#fff7e6',
border: '1px solid #faad14',
}),
];
const makeEdge = (
id: string,
source: string,
target: string,
label: string
): Edge => ({
id,
source,
target,
label,
animated: false,
markerEnd: {
type: MarkerType.ArrowClosed,
},
style: { strokeWidth: 1.5 },
labelStyle: { fontSize: 10, fill: '#555' },
});
const initialEdges: Edge[] = [
// Happy path
makeEdge('e1', 'Received', 'InWarehouse', 'manifest.verified_ok'),
makeEdge('e2', 'InWarehouse', 'Released', 'warehouse.release_approved'),
makeEdge('e3', 'Released', 'InTransit', 'transport.started'),
makeEdge('e4', 'InTransit', 'ReceivedAtLab', 'transport.lab_scan_ok'),
makeEdge('e5', 'ReceivedAtLab', 'PreAnalytical', 'lims.registration_ok'),
makeEdge('e6', 'PreAnalytical', 'ReadyForAnalysis', 'preanalytical.qc_pass'),
makeEdge('e7', 'ReadyForAnalysis', 'Analyzed', 'analysis.completed'),
makeEdge('e8', 'Analyzed', 'Reviewed', 'review.completed_valid'),
makeEdge('e9', 'Reviewed', 'Reported', 'report.released'),
makeEdge('e10', 'Reported', 'Archived', 'archive.stored'),
makeEdge('e11', 'Archived', 'Disposed', 'retention.expired'),
// Retest loop
makeEdge('e12', 'Archived', 'PreAnalytical', 'retest.requested'),
// Error paths to Rejected / Quarantined
makeEdge('e13', 'Received', 'Rejected', 'manifest.verified_fail'),
makeEdge('e14', 'InWarehouse', 'Quarantined', 'warehouse.acceptance_failed'),
makeEdge('e15', 'InTransit', 'Quarantined', 'transport.integrity_failed'),
makeEdge('e16', 'ReceivedAtLab', 'Quarantined', 'lims.registration_failed'),
makeEdge('e17', 'PreAnalytical', 'Quarantined', 'preanalytical.qc_fail'),
makeEdge('e18', 'Analyzed', 'Quarantined', 'review.data_invalid'),
// Quarantine disposition
makeEdge('e19', 'Quarantined', 'PreAnalytical', 'nc.investigation_reprocess'),
makeEdge('e20', 'Quarantined', 'Disposed', 'nc.investigation_dispose'),
// Rerun loop
makeEdge('e21', 'Analyzed', 'ReadyForAnalysis', 'review.rerun_required'),
];
export const SampleLifecycleFlow: React.FC = () => {
const [nodes, setNodes] = React.useState<Node[]>(initialNodes);
const [edges, setEdges] = React.useState<Edge[]>(initialEdges);
// applyNodeChanges / applyEdgeChanges are named exports of '@xyflow/react'
const onNodesChange = useCallback(
(changes: NodeChange[]) => setNodes((nds) => applyNodeChanges(changes, nds)),
[]
);
const onEdgesChange = useCallback(
(changes: EdgeChange[]) => setEdges((eds) => applyEdgeChanges(changes, eds)),
[]
);
return (
<div style={{ width: '100%', height: '600px', border: '1px solid #ddd' }}>
<ReactFlow
nodes={nodes}
edges={edges}
fitView
onNodesChange={onNodesChange}
onEdgesChange={onEdgesChange}
>
<Background variant="dots" gap={16} size={1} />
<MiniMap />
<Controls />
</ReactFlow>
</div>
);
};
You can then mount it in your app:
// App.tsx
import React from 'react';
import { SampleLifecycleFlow } from './SampleLifecycleFlow';
function App() {
return (
<div style={{ height: '100vh' }}>
<SampleLifecycleFlow />
</div>
);
}
export default App;
This follows the typical React Flow example structure and uses labeled edges to show each state transition event.
If you want, next step I can: (a) add click handlers that send events to a running XState machine and highlight the current node, or (b) generate a small DSL → React Flow adapter (nodes/edges from your FSM JSON).
(b) generate a small DSL → React Flow adapter (nodes/edges from your FSM JSON). Generate the full React Flow code from that prompt
- Add GxP compliance annotations to the React Flow nodes
- Convert the React Flow diagram to an interactive XState machine
- Create a Zustand store for the sample workflow state
- Export the React Flow workflow as Mermaid diagram syntax
Below is a single, self‑contained TypeScript/React module that:
- Defines a minimal FSM JSON DSL for the sample lifecycle
- Builds React Flow nodes/edges from that DSL (with GxP annotations on nodes)
- Wires an XState machine to keep current state and highlight the active node
- Uses Zustand for sample workflow state (current state, history)
- Exposes a helper to export the workflow as Mermaid state diagram syntax
It’s structured so you can split it into files later. React Flow/XState/Zustand usage follows common patterns.
1. Full React + DSL + XState + Zustand + Mermaid Module
// SampleWorkflowDiagram.tsx
import React, { useMemo } from 'react';
import {
ReactFlow,
Background,
Controls,
MiniMap,
Position,
MarkerType,
type Node,
type Edge,
} from '@xyflow/react';
import '@xyflow/react/dist/style.css';
import { create } from 'zustand';
// XState v4 APIs (createMachine generics, interpret, onTransition, state.changed);
// XState v5 replaces interpret with createActor.
import { createMachine, interpret, type StateFrom } from 'xstate';
// -----------------------------
// 1. FSM JSON DSL
// -----------------------------
type StateId =
| 'Received'
| 'InWarehouse'
| 'Released'
| 'InTransit'
| 'ReceivedAtLab'
| 'PreAnalytical'
| 'ReadyForAnalysis'
| 'Analyzed'
| 'Reviewed'
| 'Reported'
| 'Archived'
| 'Disposed'
| 'Rejected'
| 'Quarantined';
type EventId =
| 'manifest.verified_ok'
| 'manifest.verified_fail'
| 'warehouse.release_approved'
| 'warehouse.acceptance_failed'
| 'transport.started'
| 'transport.lab_scan_ok'
| 'transport.integrity_failed'
| 'lims.registration_ok'
| 'lims.registration_failed'
| 'preanalytical.qc_pass'
| 'preanalytical.qc_fail'
| 'analysis.completed'
| 'review.completed_valid'
| 'review.rerun_required'
| 'review.data_invalid'
| 'report.released'
| 'archive.stored'
| 'retention.expired'
| 'retest.requested'
| 'nc.investigation_reprocess'
| 'nc.investigation_dispose';
interface TransitionDef {
event: EventId;
target: StateId;
}
interface GxpAnnotation {
criticality: 'critical' | 'major' | 'minor';
gxpImpact: string;
sopRef: string;
}
interface StateDef {
id: StateId;
label: string;
isFinal?: boolean;
isError?: boolean;
gxp: GxpAnnotation;
transitions: TransitionDef[];
}
interface FsmDsl {
id: string;
initial: StateId;
states: StateDef[];
}
// Minimal DSL instance for sample lifecycle
const sampleFsmDsl: FsmDsl = {
id: 'sampleLifecycle',
initial: 'Received',
states: [
{
id: 'Received',
label: 'Received',
gxp: {
criticality: 'critical',
gxpImpact: 'Chain-of-custody and identification at initial receipt',
sopRef: 'SOP-001 Section 5.1',
},
transitions: [
{ event: 'manifest.verified_ok', target: 'InWarehouse' },
{ event: 'manifest.verified_fail', target: 'Rejected' },
],
},
{
id: 'InWarehouse',
label: 'In Warehouse',
gxp: {
criticality: 'major',
gxpImpact: 'Controlled storage and stability assurance',
sopRef: 'SOP-001 Section 5.2',
},
transitions: [
{ event: 'warehouse.release_approved', target: 'Released' },
{ event: 'warehouse.acceptance_failed', target: 'Quarantined' },
],
},
{
id: 'Released',
label: 'Released',
gxp: {
criticality: 'major',
gxpImpact: 'Approval to move sample into analytical chain',
sopRef: 'SOP-001 Section 5.3',
},
transitions: [{ event: 'transport.started', target: 'InTransit' }],
},
{
id: 'InTransit',
label: 'In Transit',
gxp: {
criticality: 'major',
gxpImpact: 'Transport conditions and integrity control',
sopRef: 'SOP-001 Section 5.3',
},
transitions: [
{ event: 'transport.lab_scan_ok', target: 'ReceivedAtLab' },
{ event: 'transport.integrity_failed', target: 'Quarantined' },
],
},
{
id: 'ReceivedAtLab',
label: 'Received at Lab',
gxp: {
criticality: 'critical',
gxpImpact: 'Lab-side chain-of-custody and identity verification',
sopRef: 'SOP-002 Section 5.4',
},
transitions: [
{ event: 'lims.registration_ok', target: 'PreAnalytical' },
{ event: 'lims.registration_failed', target: 'Quarantined' },
],
},
{
id: 'PreAnalytical',
label: 'Pre-analytical',
gxp: {
criticality: 'critical',
gxpImpact: 'Sample preparation, aliquoting, pre-analytical QC',
sopRef: 'SOP-003 Section 5.5',
},
transitions: [
{ event: 'preanalytical.qc_pass', target: 'ReadyForAnalysis' },
{ event: 'preanalytical.qc_fail', target: 'Quarantined' },
],
},
{
id: 'ReadyForAnalysis',
label: 'Ready for Analysis',
gxp: {
criticality: 'major',
gxpImpact: 'Queueing for validated analytical methods',
sopRef: 'SOP-004 Section 5.6',
},
transitions: [{ event: 'analysis.completed', target: 'Analyzed' }],
},
{
id: 'Analyzed',
label: 'Analyzed',
gxp: {
criticality: 'critical',
gxpImpact: 'Raw data and QC results available for review',
sopRef: 'SOP-004 Section 5.6',
},
transitions: [
{ event: 'review.completed_valid', target: 'Reviewed' },
{ event: 'review.rerun_required', target: 'ReadyForAnalysis' },
{ event: 'review.data_invalid', target: 'Quarantined' },
],
},
{
id: 'Reviewed',
label: 'Reviewed',
gxp: {
criticality: 'critical',
gxpImpact: 'Technical/QA review and approval of results',
sopRef: 'SOP-005 Section 5.7',
},
transitions: [{ event: 'report.released', target: 'Reported' }],
},
{
id: 'Reported',
label: 'Reported',
gxp: {
criticality: 'critical',
gxpImpact: 'Formal release of results to sponsor/EMR',
sopRef: 'SOP-005 Section 5.7',
},
transitions: [{ event: 'archive.stored', target: 'Archived' }],
},
{
id: 'Archived',
label: 'Archived',
gxp: {
criticality: 'major',
gxpImpact: 'Retention and potential retesting',
sopRef: 'SOP-006 Section 5.8',
},
transitions: [
{ event: 'retention.expired', target: 'Disposed' },
{ event: 'retest.requested', target: 'PreAnalytical' },
],
},
{
id: 'Disposed',
label: 'Disposed',
isFinal: true,
gxp: {
criticality: 'major',
gxpImpact: 'GxP-compliant destruction and documentation',
sopRef: 'SOP-006 Section 5.9',
},
transitions: [],
},
{
id: 'Rejected',
label: 'Rejected',
isFinal: true,
isError: true,
gxp: {
criticality: 'critical',
gxpImpact: 'Rejected samples are never processed; documentation and sponsor notification',
sopRef: 'SOP-001 Section 5.10',
},
transitions: [],
},
{
id: 'Quarantined',
label: 'Quarantined',
isError: true,
gxp: {
criticality: 'critical',
gxpImpact: 'Non-conforming sample; requires investigation and disposition',
sopRef: 'SOP-007 Section 5.10',
},
transitions: [
{ event: 'nc.investigation_reprocess', target: 'PreAnalytical' },
{ event: 'nc.investigation_dispose', target: 'Disposed' },
],
},
],
};
// -----------------------------
// 2. DSL → XState Machine
// -----------------------------
type SampleContext = {
sampleId: string;
};
type SampleEvent = { type: string };
const buildXStateFromDsl = (dsl: FsmDsl) => {
const states: any = {};
dsl.states.forEach((s) => {
const on: any = {};
s.transitions.forEach((t) => {
// Map event string to XState event type
const eventType = t.event.toUpperCase().replace(/\./g, '_');
on[eventType] = t.target;
});
states[s.id] = {
on,
};
if (s.isFinal) {
states[s.id].type = 'final';
}
});
return createMachine<SampleContext, SampleEvent>({
id: dsl.id,
initial: dsl.initial,
context: { sampleId: '' },
states,
});
};
const sampleMachine = buildXStateFromDsl(sampleFsmDsl);
// -----------------------------
// 3. Zustand Store (workflow state)
// -----------------------------
type WorkflowState = {
currentState: StateId;
history: StateId[];
sendEvent: (event: EventId) => void;
};
const machineService = interpret(sampleMachine).start();
export const useWorkflowStore = create<WorkflowState>((set) => {
machineService.onTransition((state: StateFrom<typeof sampleMachine>) => {
if (!state.changed) return;
const newState = state.value as StateId;
set((prev) => ({
currentState: newState,
history: [...prev.history, newState],
}));
});
return {
currentState: sampleFsmDsl.initial,
history: [sampleFsmDsl.initial],
sendEvent: (event: EventId) => {
const type = event.toUpperCase().replace(/\./g, '_');
machineService.send({ type });
},
};
});
// -----------------------------
// 4. DSL → React Flow nodes/edges (with GxP annotations)
// -----------------------------
const nodeWidth = 190;
const rowY = (row: number) => row * 140;
const buildNodesFromDsl = (dsl: FsmDsl, activeState: StateId): Node[] => {
// simple manual layout: main path row 0, error row 1
const positions: Record<StateId, { x: number; y: number }> = {
Received: { x: 0 * nodeWidth, y: rowY(0) },
InWarehouse: { x: 1 * nodeWidth, y: rowY(0) },
Released: { x: 2 * nodeWidth, y: rowY(0) },
InTransit: { x: 3 * nodeWidth, y: rowY(0) },
ReceivedAtLab: { x: 4 * nodeWidth, y: rowY(0) },
PreAnalytical: { x: 5 * nodeWidth, y: rowY(0) },
ReadyForAnalysis: { x: 6 * nodeWidth, y: rowY(0) },
Analyzed: { x: 7 * nodeWidth, y: rowY(0) },
Reviewed: { x: 8 * nodeWidth, y: rowY(0) },
Reported: { x: 9 * nodeWidth, y: rowY(0) },
Archived: { x: 10 * nodeWidth, y: rowY(0) },
Disposed: { x: 10 * nodeWidth, y: rowY(1) },
Rejected: { x: 2 * nodeWidth, y: rowY(1) },
Quarantined: { x: 6 * nodeWidth, y: rowY(1) },
};
return dsl.states.map((s) => {
const pos = positions[s.id];
const isActive = s.id === activeState;
const baseStyle: React.CSSProperties = {
borderRadius: 6,
padding: 10,
fontSize: 11,
border: '1px solid #ccc',
background: '#ffffff',
};
if (s.isFinal) {
baseStyle.background = '#ffe0e0';
baseStyle.border = '1px solid #ff4d4f';
}
if (s.isError && !s.isFinal) {
baseStyle.background = '#fff7e6';
baseStyle.border = '1px solid #faad14';
}
if (isActive) {
baseStyle.boxShadow = '0 0 0 3px #1890ff66';
}
const label = (
<div>
<div style={{ fontWeight: 600 }}>{s.label}</div>
<div style={{ fontSize: 10 }}>
<strong>Criticality:</strong> {s.gxp.criticality}
</div>
<div style={{ fontSize: 10 }}>
<strong>GxP:</strong> {s.gxp.gxpImpact}
</div>
<div style={{ fontSize: 10 }}>
<strong>SOP:</strong> {s.gxp.sopRef}
</div>
</div>
);
return {
id: s.id,
data: { label },
position: pos,
type: 'default',
sourcePosition: Position.Right,
targetPosition: Position.Left,
style: baseStyle,
} as Node;
});
};
const buildEdgesFromDsl = (dsl: FsmDsl): Edge[] => {
const edges: Edge[] = [];
dsl.states.forEach((s) => {
s.transitions.forEach((t, idx) => {
edges.push({
id: `${s.id}-${t.event}-${idx}`,
source: s.id,
target: t.target,
label: t.event,
markerEnd: { type: MarkerType.ArrowClosed },
style: { strokeWidth: 1.5 },
labelStyle: { fontSize: 9, fill: '#555' },
});
});
});
return edges;
};
// -----------------------------
// 5. DSL → Mermaid export
// -----------------------------
export const exportDslToMermaid = (dsl: FsmDsl): string => {
const lines: string[] = ['stateDiagram-v2', ` [*] --> ${dsl.initial}`];
dsl.states.forEach((s) => {
s.transitions.forEach((t) => {
lines.push(
` ${s.id} --> ${t.target}: ${t.event}`
);
});
if (s.isFinal) {
lines.push(` ${s.id} --> [*]`);
}
});
return lines.join('\n');
};
// -----------------------------
// 6. React Component
// -----------------------------
export const SampleWorkflowDiagram: React.FC = () => {
const { currentState, sendEvent } = useWorkflowStore();
const nodes = useMemo(
() => buildNodesFromDsl(sampleFsmDsl, currentState),
[currentState]
);
const edges = useMemo(() => buildEdgesFromDsl(sampleFsmDsl), []);
const mermaidDiagram = useMemo(
() => exportDslToMermaid(sampleFsmDsl),
[]
);
return (
<div style={{ display: 'flex', height: '100vh' }}>
<div style={{ flex: 3, borderRight: '1px solid #ddd' }}>
<ReactFlow nodes={nodes} edges={edges} fitView>
<Background variant="dots" gap={16} size={1} />
<MiniMap />
<Controls />
</ReactFlow>
</div>
<div style={{ flex: 1, padding: 12, fontSize: 12 }}>
<h3>Sample Lifecycle Controls</h3>
<p>Current state: <strong>{currentState}</strong></p>
<div style={{ marginBottom: 12 }}>
<div style={{ fontWeight: 600, marginBottom: 4 }}>Fire Events</div>
<button onClick={() => sendEvent('manifest.verified_ok')}>manifest.verified_ok</button>
<button onClick={() => sendEvent('manifest.verified_fail')}>manifest.verified_fail</button>
<button onClick={() => sendEvent('warehouse.release_approved')}>warehouse.release_approved</button>
<button onClick={() => sendEvent('warehouse.acceptance_failed')}>warehouse.acceptance_failed</button>
<button onClick={() => sendEvent('transport.started')}>transport.started</button>
<button onClick={() => sendEvent('transport.lab_scan_ok')}>transport.lab_scan_ok</button>
<button onClick={() => sendEvent('transport.integrity_failed')}>transport.integrity_failed</button>
<button onClick={() => sendEvent('lims.registration_ok')}>lims.registration_ok</button>
<button onClick={() => sendEvent('lims.registration_failed')}>lims.registration_failed</button>
<button onClick={() => sendEvent('preanalytical.qc_pass')}>preanalytical.qc_pass</button>
<button onClick={() => sendEvent('preanalytical.qc_fail')}>preanalytical.qc_fail</button>
<button onClick={() => sendEvent('analysis.completed')}>analysis.completed</button>
<button onClick={() => sendEvent('review.completed_valid')}>review.completed_valid</button>
<button onClick={() => sendEvent('review.rerun_required')}>review.rerun_required</button>
<button onClick={() => sendEvent('review.data_invalid')}>review.data_invalid</button>
<button onClick={() => sendEvent('report.released')}>report.released</button>
<button onClick={() => sendEvent('archive.stored')}>archive.stored</button>
<button onClick={() => sendEvent('retention.expired')}>retention.expired</button>
<button onClick={() => sendEvent('retest.requested')}>retest.requested</button>
<button onClick={() => sendEvent('nc.investigation_reprocess')}>nc.investigation_reprocess</button>
<button onClick={() => sendEvent('nc.investigation_dispose')}>nc.investigation_dispose</button>
</div>
<div>
<div style={{ fontWeight: 600, marginBottom: 4 }}>Mermaid Export</div>
<textarea
readOnly
value={mermaidDiagram}
style={{ width: '100%', height: '260px', fontFamily: 'monospace', fontSize: 11 }}
/>
</div>
</div>
</div>
);
};
2. How this satisfies your list
- DSL → React Flow adapter: sampleFsmDsl + buildNodesFromDsl + buildEdgesFromDsl convert the FSM JSON into nodes/edges.
- GxP compliance annotations: GxpAnnotation in StateDef is rendered into each node label.
- Interactive XState machine: the machine is built from the DSL (buildXStateFromDsl), interpreted, and driven via sendEvent; React Flow highlights the current state.
- Zustand store: useWorkflowStore owns currentState, history, and sendEvent while subscribing to the XState service.
- Mermaid export: exportDslToMermaid generates stateDiagram-v2 syntax from the same DSL, kept in sync with the graph.
If you’d like, next step I can:
- Split this into DSL + adapters + UI modules, or
- Generate a test suite (Vitest/Jest) that validates the FSM transitions against the DSL and GxP constraints.
Split this into DSL + adapters + UI modules, and
Generate a test suite (Vitest/Jest) that validates the FSM transitions against the DSL and GxP constraints.
Below is a split version of the previous module into:
- fsmDsl.ts – DSL + Mermaid export
- fsmAdapters.ts – XState and React Flow nodes/edges builders
- workflowStore.ts – Zustand store bound to the XState service
- SampleWorkflowDiagram.tsx – UI component using React Flow
- sampleWorkflow.test.ts – Vitest test suite for FSM transitions + GxP constraints
All examples are TypeScript‑ready.
1. fsmDsl.ts – DSL + Mermaid export
// fsmDsl.ts
export type StateId =
| 'Received'
| 'InWarehouse'
| 'Released'
| 'InTransit'
| 'ReceivedAtLab'
| 'PreAnalytical'
| 'ReadyForAnalysis'
| 'Analyzed'
| 'Reviewed'
| 'Reported'
| 'Archived'
| 'Disposed'
| 'Rejected'
| 'Quarantined';
export type EventId =
| 'manifest.verified_ok'
| 'manifest.verified_fail'
| 'warehouse.release_approved'
| 'warehouse.acceptance_failed'
| 'transport.started'
| 'transport.lab_scan_ok'
| 'transport.integrity_failed'
| 'lims.registration_ok'
| 'lims.registration_failed'
| 'preanalytical.qc_pass'
| 'preanalytical.qc_fail'
| 'analysis.completed'
| 'review.completed_valid'
| 'review.rerun_required'
| 'review.data_invalid'
| 'report.released'
| 'archive.stored'
| 'retention.expired'
| 'retest.requested'
| 'nc.investigation_reprocess'
| 'nc.investigation_dispose';
export interface TransitionDef {
event: EventId;
target: StateId;
}
export interface GxpAnnotation {
criticality: 'critical' | 'major' | 'minor';
gxpImpact: string;
sopRef: string;
}
export interface StateDef {
id: StateId;
label: string;
isFinal?: boolean;
isError?: boolean;
gxp: GxpAnnotation;
transitions: TransitionDef[];
}
export interface FsmDsl {
id: string;
initial: StateId;
states: StateDef[];
}
export const sampleFsmDsl: FsmDsl = {
id: 'sampleLifecycle',
initial: 'Received',
states: [
{
id: 'Received',
label: 'Received',
gxp: {
criticality: 'critical',
gxpImpact: 'Chain-of-custody and identification at initial receipt',
sopRef: 'SOP-001 Section 5.1',
},
transitions: [
{ event: 'manifest.verified_ok', target: 'InWarehouse' },
{ event: 'manifest.verified_fail', target: 'Rejected' },
],
},
{
id: 'InWarehouse',
label: 'In Warehouse',
gxp: {
criticality: 'major',
gxpImpact: 'Controlled storage and stability assurance',
sopRef: 'SOP-001 Section 5.2',
},
transitions: [
{ event: 'warehouse.release_approved', target: 'Released' },
{ event: 'warehouse.acceptance_failed', target: 'Quarantined' },
],
},
{
id: 'Released',
label: 'Released',
gxp: {
criticality: 'major',
gxpImpact: 'Approval to move sample into analytical chain',
sopRef: 'SOP-001 Section 5.3',
},
transitions: [{ event: 'transport.started', target: 'InTransit' }],
},
{
id: 'InTransit',
label: 'In Transit',
gxp: {
criticality: 'major',
gxpImpact: 'Transport conditions and integrity control',
sopRef: 'SOP-001 Section 5.3',
},
transitions: [
{ event: 'transport.lab_scan_ok', target: 'ReceivedAtLab' },
{ event: 'transport.integrity_failed', target: 'Quarantined' },
],
},
{
id: 'ReceivedAtLab',
label: 'Received at Lab',
gxp: {
criticality: 'critical',
gxpImpact: 'Lab-side chain-of-custody and identity verification',
sopRef: 'SOP-002 Section 5.4',
},
transitions: [
{ event: 'lims.registration_ok', target: 'PreAnalytical' },
{ event: 'lims.registration_failed', target: 'Quarantined' },
],
},
{
id: 'PreAnalytical',
label: 'Pre-analytical',
gxp: {
criticality: 'critical',
gxpImpact: 'Sample preparation, aliquoting, pre-analytical QC',
sopRef: 'SOP-003 Section 5.5',
},
transitions: [
{ event: 'preanalytical.qc_pass', target: 'ReadyForAnalysis' },
{ event: 'preanalytical.qc_fail', target: 'Quarantined' },
],
},
{
id: 'ReadyForAnalysis',
label: 'Ready for Analysis',
gxp: {
criticality: 'major',
gxpImpact: 'Queueing for validated analytical methods',
sopRef: 'SOP-004 Section 5.6',
},
transitions: [{ event: 'analysis.completed', target: 'Analyzed' }],
},
{
id: 'Analyzed',
label: 'Analyzed',
gxp: {
criticality: 'critical',
gxpImpact: 'Raw data and QC results available for review',
sopRef: 'SOP-004 Section 5.6',
},
transitions: [
{ event: 'review.completed_valid', target: 'Reviewed' },
{ event: 'review.rerun_required', target: 'ReadyForAnalysis' },
{ event: 'review.data_invalid', target: 'Quarantined' },
],
},
{
id: 'Reviewed',
label: 'Reviewed',
gxp: {
criticality: 'critical',
gxpImpact: 'Technical/QA review and approval of results',
sopRef: 'SOP-005 Section 5.7',
},
transitions: [{ event: 'report.released', target: 'Reported' }],
},
{
id: 'Reported',
label: 'Reported',
gxp: {
criticality: 'critical',
gxpImpact: 'Formal release of results to sponsor/EMR',
sopRef: 'SOP-005 Section 5.7',
},
transitions: [{ event: 'archive.stored', target: 'Archived' }],
},
{
id: 'Archived',
label: 'Archived',
gxp: {
criticality: 'major',
gxpImpact: 'Retention and potential retesting',
sopRef: 'SOP-006 Section 5.8',
},
transitions: [
{ event: 'retention.expired', target: 'Disposed' },
{ event: 'retest.requested', target: 'PreAnalytical' },
],
},
{
id: 'Disposed',
label: 'Disposed',
isFinal: true,
gxp: {
criticality: 'major',
gxpImpact: 'GxP-compliant destruction and documentation',
sopRef: 'SOP-006 Section 5.9',
},
transitions: [],
},
{
id: 'Rejected',
label: 'Rejected',
isFinal: true,
isError: true,
gxp: {
criticality: 'critical',
gxpImpact: 'Rejected samples are never processed; documentation and sponsor notification',
sopRef: 'SOP-001 Section 5.10',
},
transitions: [],
},
{
id: 'Quarantined',
label: 'Quarantined',
isError: true,
gxp: {
criticality: 'critical',
gxpImpact: 'Non-conforming sample; requires investigation and disposition',
sopRef: 'SOP-007 Section 5.10',
},
transitions: [
{ event: 'nc.investigation_reprocess', target: 'PreAnalytical' },
{ event: 'nc.investigation_dispose', target: 'Disposed' },
],
},
],
};
export const exportDslToMermaid = (dsl: FsmDsl): string => {
const lines: string[] = ['stateDiagram-v2', ` [*] --> ${dsl.initial}`];
dsl.states.forEach((s) => {
s.transitions.forEach((t) => {
lines.push(` ${s.id} --> ${t.target}: ${t.event}`);
});
if (s.isFinal) {
lines.push(` ${s.id} --> [*]`);
}
});
return lines.join('\n');
};
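For orientation, here is what the exporter emits for a toy two-state machine — a self-contained sketch that inlines a minimal structural subset of FsmDsl and a behaviorally equivalent copy of the exporter, rather than importing the real modules:

```typescript
// Self-contained sketch: a minimal structural subset of FsmDsl plus a
// behaviorally equivalent copy of exportDslToMermaid, applied to a toy machine.
type MiniDsl = {
  initial: string;
  states: {
    id: string;
    isFinal?: boolean;
    transitions: { event: string; target: string }[];
  }[];
};

const toMermaid = (dsl: MiniDsl): string => {
  const lines: string[] = ['stateDiagram-v2', `  [*] --> ${dsl.initial}`];
  dsl.states.forEach((s) => {
    s.transitions.forEach((t) => lines.push(`  ${s.id} --> ${t.target}: ${t.event}`));
    if (s.isFinal) lines.push(`  ${s.id} --> [*]`);
  });
  return lines.join('\n');
};

const toy: MiniDsl = {
  initial: 'Received',
  states: [
    { id: 'Received', transitions: [{ event: 'manifest.verified_fail', target: 'Rejected' }] },
    { id: 'Rejected', isFinal: true, transitions: [] },
  ],
};

console.log(toMermaid(toy));
```

Pasting the output into any Mermaid renderer gives a quick visual check that the DSL and the diagram agree.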
2. fsmAdapters.tsx – XState + React Flow builders
// fsmAdapters.tsx – the node labels below use JSX, so this module needs a .tsx extension
import React from 'react';
import { createMachine } from 'xstate';
import type { StateFrom } from 'xstate';
import {
Node,
Edge,
Position,
MarkerType,
} from '@xyflow/react';
import type { FsmDsl, StateId } from './fsmDsl';
export type SampleContext = {
sampleId: string;
};
export type SampleEvent = { type: string };
export const buildXStateFromDsl = (dsl: FsmDsl) => {
const states: any = {};
dsl.states.forEach((s) => {
const on: any = {};
s.transitions.forEach((t) => {
const eventType = t.event.toUpperCase().replace(/\./g, '_');
on[eventType] = t.target;
});
states[s.id] = { on };
if (s.isFinal) states[s.id].type = 'final';
});
return createMachine<SampleContext, SampleEvent>({
id: dsl.id,
initial: dsl.initial,
context: { sampleId: '' },
states,
});
};
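The dot-to-underscore event mapping above is repeated later in the store and in the tests; pulling it into one tiny shared helper (a hypothetical toEventType, not part of the original modules) keeps the three call sites from drifting apart:

```typescript
// Hypothetical shared helper: normalize DSL event ids ("manifest.verified_ok")
// into XState event types ("MANIFEST_VERIFIED_OK").
export const toEventType = (event: string): string =>
  event.toUpperCase().replace(/\./g, '_');

console.log(toEventType('manifest.verified_ok')); // MANIFEST_VERIFIED_OK
```

The adapter, the store's sendEvent, and any test helpers could then all import this one function.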
export type SampleState = StateFrom<ReturnType<typeof buildXStateFromDsl>>;
// layout helpers
const nodeWidth = 190;
const rowY = (row: number) => row * 140;
const positions: Record<StateId, { x: number; y: number }> = {
Received: { x: 0 * nodeWidth, y: rowY(0) },
InWarehouse: { x: 1 * nodeWidth, y: rowY(0) },
Released: { x: 2 * nodeWidth, y: rowY(0) },
InTransit: { x: 3 * nodeWidth, y: rowY(0) },
ReceivedAtLab: { x: 4 * nodeWidth, y: rowY(0) },
PreAnalytical: { x: 5 * nodeWidth, y: rowY(0) },
ReadyForAnalysis: { x: 6 * nodeWidth, y: rowY(0) },
Analyzed: { x: 7 * nodeWidth, y: rowY(0) },
Reviewed: { x: 8 * nodeWidth, y: rowY(0) },
Reported: { x: 9 * nodeWidth, y: rowY(0) },
Archived: { x: 10 * nodeWidth, y: rowY(0) },
Disposed: { x: 10 * nodeWidth, y: rowY(1) },
Rejected: { x: 2 * nodeWidth, y: rowY(1) },
Quarantined: { x: 6 * nodeWidth, y: rowY(1) },
};
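The hand-tuned positions map works but must be edited whenever a state is added. An alternative sketch derives each state's column from its BFS distance from the initial state (an assumption: a simple layered layout is acceptable; libraries like dagre or ELK do this more robustly):

```typescript
// Sketch: compute a column index per state as BFS depth from the initial state.
// x would then be depth * nodeWidth. Unreachable states are left out of the map.
type Transition = { target: string };
type MiniState = { id: string; transitions: Transition[] };

const bfsDepths = (states: MiniState[], initial: string): Record<string, number> => {
  const byId = new Map(states.map((s) => [s.id, s]));
  const depth: Record<string, number> = { [initial]: 0 };
  const queue = [initial];
  while (queue.length > 0) {
    const id = queue.shift()!;
    for (const t of byId.get(id)?.transitions ?? []) {
      if (!(t.target in depth)) {
        depth[t.target] = depth[id] + 1;
        queue.push(t.target);
      }
    }
  }
  return depth;
};

const depths = bfsDepths(
  [
    { id: 'Received', transitions: [{ target: 'InWarehouse' }, { target: 'Rejected' }] },
    { id: 'InWarehouse', transitions: [{ target: 'Released' }] },
    { id: 'Released', transitions: [] },
    { id: 'Rejected', transitions: [] },
  ],
  'Received'
);
console.log(depths); // { Received: 0, InWarehouse: 1, Rejected: 1, Released: 2 }
```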
export const buildNodesFromDsl = (
dsl: FsmDsl,
activeState: StateId
): Node[] => {
return dsl.states.map((s) => {
const pos = positions[s.id];
const isActive = s.id === activeState;
const baseStyle: React.CSSProperties = {
borderRadius: 6,
padding: 10,
fontSize: 11,
border: '1px solid #ccc',
background: '#ffffff',
};
if (s.isFinal) {
baseStyle.background = '#ffe0e0';
baseStyle.border = '1px solid #ff4d4f';
}
if (s.isError && !s.isFinal) {
baseStyle.background = '#fff7e6';
baseStyle.border = '1px solid #faad14';
}
if (isActive) {
baseStyle.boxShadow = '0 0 0 3px #1890ff66';
}
const label = (
<div>
<div style={{ fontWeight: 600 }}>{s.label}</div>
<div style={{ fontSize: 10 }}>
<strong>Criticality:</strong> {s.gxp.criticality}
</div>
<div style={{ fontSize: 10 }}>
<strong>GxP:</strong> {s.gxp.gxpImpact}
</div>
<div style={{ fontSize: 10 }}>
<strong>SOP:</strong> {s.gxp.sopRef}
</div>
</div>
);
return {
id: s.id,
data: { label },
position: pos,
type: 'default',
sourcePosition: Position.Right,
targetPosition: Position.Left,
style: baseStyle,
} as Node;
});
};
export const buildEdgesFromDsl = (dsl: FsmDsl): Edge[] => {
const edges: Edge[] = [];
dsl.states.forEach((s) => {
s.transitions.forEach((t, idx) => {
edges.push({
id: `${s.id}-${t.event}-${idx}`,
source: s.id,
target: t.target,
label: t.event,
markerEnd: { type: MarkerType.ArrowClosed },
style: { strokeWidth: 1.5 },
labelStyle: { fontSize: 9, fill: '#555' },
});
});
});
return edges;
};
3. workflowStore.ts – Zustand store bound to XState
// workflowStore.ts
import { create } from 'zustand';
import { interpret } from 'xstate';
import type { StateId, EventId } from './fsmDsl';
import { sampleFsmDsl } from './fsmDsl';
import { buildXStateFromDsl } from './fsmAdapters';
const sampleMachine = buildXStateFromDsl(sampleFsmDsl);
const machineService = interpret(sampleMachine).start();
type WorkflowState = {
currentState: StateId;
history: StateId[];
sendEvent: (event: EventId) => void;
};
export const useWorkflowStore = create<WorkflowState>((set) => {
machineService.onTransition((state) => {
if (!state.changed) return;
const newState = state.value as StateId;
set((prev) => ({
currentState: newState,
history: [...prev.history, newState],
}));
});
return {
currentState: sampleFsmDsl.initial,
history: [sampleFsmDsl.initial],
sendEvent: (event: EventId) => {
const type = event.toUpperCase().replace(/\./g, '_');
machineService.send({ type });
},
};
});
4. SampleWorkflowDiagram.tsx – UI component
// SampleWorkflowDiagram.tsx
import React, { useMemo } from 'react';
import {
ReactFlow,
Background,
Controls,
MiniMap,
} from '@xyflow/react';
import '@xyflow/react/dist/style.css';
import { sampleFsmDsl, exportDslToMermaid } from './fsmDsl';
import { buildNodesFromDsl, buildEdgesFromDsl } from './fsmAdapters';
import { useWorkflowStore } from './workflowStore';
export const SampleWorkflowDiagram: React.FC = () => {
const { currentState, sendEvent } = useWorkflowStore();
const nodes = useMemo(
() => buildNodesFromDsl(sampleFsmDsl, currentState),
[currentState]
);
const edges = useMemo(() => buildEdgesFromDsl(sampleFsmDsl), []);
const mermaidDiagram = useMemo(
() => exportDslToMermaid(sampleFsmDsl),
[]
);
return (
<div style={{ display: 'flex', height: '100vh' }}>
<div style={{ flex: 3, borderRight: '1px solid #ddd' }}>
<ReactFlow nodes={nodes} edges={edges} fitView>
<Background gap={16} size={1} />
<MiniMap />
<Controls />
</ReactFlow>
</div>
<div style={{ flex: 1, padding: 12, fontSize: 12 }}>
<h3>Sample Lifecycle Controls</h3>
<p>
Current state: <strong>{currentState}</strong>
</p>
<div style={{ marginBottom: 12 }}>
<div style={{ fontWeight: 600, marginBottom: 4 }}>Fire Events</div>
{sampleFsmDsl.states.flatMap((s) => s.transitions).map((t) => (
<button
key={t.event}
style={{ display: 'block', marginBottom: 4 }}
onClick={() => sendEvent(t.event)}
>
{t.event}
</button>
))}
</div>
<div>
<div style={{ fontWeight: 600, marginBottom: 4 }}>
Mermaid Export
</div>
<textarea
readOnly
value={mermaidDiagram}
style={{
width: '100%',
height: '260px',
fontFamily: 'monospace',
fontSize: 11,
}}
/>
</div>
</div>
</div>
);
};
5. sampleWorkflow.test.ts – Vitest tests (FSM + GxP)
The tests validate:
- Every transition target exists in the DSL
- Final states have no outgoing transitions
- Error states have appropriate GxP criticality
- XState machine respects a few sample paths
- Zustand store tracks transitions
Vitest setup follows its standard guide.
// sampleWorkflow.test.ts
import { describe, it, expect, beforeEach } from 'vitest';
import { sampleFsmDsl } from './fsmDsl';
import { buildXStateFromDsl } from './fsmAdapters';
import { interpret } from 'xstate';
// 1. DSL structural tests
describe('FSM DSL structure', () => {
it('all transitions reference valid target states', () => {
const stateIds = new Set(sampleFsmDsl.states.map((s) => s.id));
for (const state of sampleFsmDsl.states) {
for (const t of state.transitions) {
expect(stateIds.has(t.target)).toBe(true);
}
}
});
it('final states have no outgoing transitions', () => {
for (const state of sampleFsmDsl.states) {
if (state.isFinal) {
expect(state.transitions.length).toBe(0);
}
}
});
it('error states are marked as critical GxP impact', () => {
const errorStates = sampleFsmDsl.states.filter((s) => s.isError);
for (const s of errorStates) {
expect(s.gxp.criticality).toBe('critical');
expect(s.gxp.gxpImpact.length).toBeGreaterThan(0);
expect(s.gxp.sopRef.length).toBeGreaterThan(0);
}
});
});
// 2. XState machine behavior tests
describe('Sample lifecycle XState machine', () => {
let service: ReturnType<typeof interpret>;
beforeEach(() => {
const machine = buildXStateFromDsl(sampleFsmDsl);
service = interpret(machine);
service.start();
});
it('executes the happy path from Received to Disposed', () => {
const send = (e: string) => service.send({ type: e });
expect(service.getSnapshot().value).toBe('Received');
send('MANIFEST_VERIFIED_OK');
expect(service.getSnapshot().value).toBe('InWarehouse');
send('WAREHOUSE_RELEASE_APPROVED');
expect(service.getSnapshot().value).toBe('Released');
send('TRANSPORT_STARTED');
expect(service.getSnapshot().value).toBe('InTransit');
send('TRANSPORT_LAB_SCAN_OK');
expect(service.getSnapshot().value).toBe('ReceivedAtLab');
send('LIMS_REGISTRATION_OK');
expect(service.getSnapshot().value).toBe('PreAnalytical');
send('PREANALYTICAL_QC_PASS');
expect(service.getSnapshot().value).toBe('ReadyForAnalysis');
send('ANALYSIS_COMPLETED');
expect(service.getSnapshot().value).toBe('Analyzed');
send('REVIEW_COMPLETED_VALID');
expect(service.getSnapshot().value).toBe('Reviewed');
send('REPORT_RELEASED');
expect(service.getSnapshot().value).toBe('Reported');
send('ARCHIVE_STORED');
expect(service.getSnapshot().value).toBe('Archived');
send('RETENTION_EXPIRED');
expect(service.getSnapshot().value).toBe('Disposed');
});
it('routes manifest failure to Rejected final state', () => {
expect(service.getSnapshot().value).toBe('Received');
service.send({ type: 'MANIFEST_VERIFIED_FAIL' });
expect(service.getSnapshot().value).toBe('Rejected');
});
it('routes QC failure to Quarantined error state', () => {
const send = (e: string) => service.send({ type: e });
send('MANIFEST_VERIFIED_OK');
send('WAREHOUSE_RELEASE_APPROVED');
send('TRANSPORT_STARTED');
send('TRANSPORT_LAB_SCAN_OK');
send('LIMS_REGISTRATION_OK');
expect(service.getSnapshot().value).toBe('PreAnalytical');
send('PREANALYTICAL_QC_FAIL');
expect(service.getSnapshot().value).toBe('Quarantined');
});
});
// 3. Optional: Zustand store tests (if you want to include the store)
import { useWorkflowStore } from './workflowStore';
describe('Zustand workflow store', () => {
beforeEach(() => {
// sanity-check the initial store state; add an explicit reset here if other tests mutate it
const { currentState, history } = useWorkflowStore.getState();
expect(currentState).toBe('Received');
expect(history).toEqual(['Received']);
});
it('updates state and history on events', () => {
const { sendEvent } = useWorkflowStore.getState();
sendEvent('manifest.verified_ok');
expect(useWorkflowStore.getState().currentState).toBe('InWarehouse');
sendEvent('warehouse.release_approved');
expect(useWorkflowStore.getState().currentState).toBe('Released');
expect(useWorkflowStore.getState().history).toEqual([
'Received',
'InWarehouse',
'Released',
]);
});
});
You can adapt these tests for Jest with minor import changes; using @xstate/test would also let you generate coverage over all paths from the same machine definition.
If you want, next step could be a model‑based test plan using @xstate/test to auto‑generate paths and verify transitions against UI/HTTP handlers.
Generate example GxP test data for the sample lifecycle FSM. Add Cypress E2E tests for React Flow UI interactions. Create a Storybook for the FSM UI modules. Implement code coverage reports for the Vitest suite. Extend the DSL with parallel states for lab branching workflows.
Below is a concise, code‑ready bundle that builds on the modules we just split:
- Example GxP test data for the lifecycle
- Cypress E2E tests for the React Flow UI
- Storybook stories for the FSM UI
- Vitest coverage configuration
- DSL extension for parallel states (lab branches)
All snippets are compatible with current React/Vite/Vitest/Cypress patterns.
1. Example GxP test data for sample lifecycle FSM
You can use these fixtures for integration tests, demo runs, or seeding. They cover typical GxP scenarios (happy path, QC failure, rejected, quarantined).
// testData/sampleLifecycle.gxpFixtures.ts
import type { StateId, EventId } from '../fsmDsl';
export interface SampleTestCase {
id: string;
description: string;
inputEvents: EventId[];
expectedFinalState: StateId;
gxpNotes: string;
}
export const sampleLifecycleGxpCases: SampleTestCase[] = [
{
id: 'happy-path-routine',
description: 'Routine sample passes all checks and is disposed after retention.',
inputEvents: [
'manifest.verified_ok',
'warehouse.release_approved',
'transport.started',
'transport.lab_scan_ok',
'lims.registration_ok',
'preanalytical.qc_pass',
'analysis.completed',
'review.completed_valid',
'report.released',
'archive.stored',
'retention.expired',
],
expectedFinalState: 'Disposed',
gxpNotes:
'Demonstrates compliant chain-of-custody, pre-analytic and analytic QC, dual review, reporting, archival, and controlled disposal.',
},
{
id: 'manifest-mismatch-rejected',
description: 'Sample is rejected at initial receipt due to manifest discrepancy.',
inputEvents: ['manifest.verified_fail'],
expectedFinalState: 'Rejected',
gxpNotes:
'Shows early rejection to prevent testing of misidentified samples; requires documentation and sponsor notification.',
},
{
id: 'preanalytical-qc-fail-quarantine',
description: 'Pre-analytical QC failure results in quarantine and NC investigation.',
inputEvents: [
'manifest.verified_ok',
'warehouse.release_approved',
'transport.started',
'transport.lab_scan_ok',
'lims.registration_ok',
'preanalytical.qc_fail',
],
expectedFinalState: 'Quarantined',
gxpNotes:
'Supports deviation and CAPA workflows for insufficient volume, hemolysis, or container integrity issues.',
},
{
id: 'qc-fail-reprocess-then-report',
description: 'QC failure, quarantine, investigation, reprocess, successful report.',
inputEvents: [
'manifest.verified_ok',
'warehouse.release_approved',
'transport.started',
'transport.lab_scan_ok',
'lims.registration_ok',
'preanalytical.qc_fail',
'nc.investigation_reprocess',
'preanalytical.qc_pass',
'analysis.completed',
'review.completed_valid',
'report.released',
'archive.stored',
'retention.expired',
],
expectedFinalState: 'Disposed',
gxpNotes:
'Demonstrates non-conformance handling and documented reprocessing, ensuring final data integrity.',
},
{
id: 'review-data-invalid-quarantine-dispose',
description: 'Data integrity issue at review leads to quarantine and disposal.',
inputEvents: [
'manifest.verified_ok',
'warehouse.release_approved',
'transport.started',
'transport.lab_scan_ok',
'lims.registration_ok',
'preanalytical.qc_pass',
'analysis.completed',
'review.data_invalid',
'nc.investigation_dispose',
],
expectedFinalState: 'Disposed',
gxpNotes:
'Use case for data integrity failures (e.g., instrument malfunction, audit trail anomaly) resolved by destruction.',
},
];
You can reuse these in FSM tests and also in E2E flows.
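Because the DSL is plain data, the fixtures can even be checked without spinning up XState: fold each case's events over the transition table and compare the final state. A sketch with an inlined fragment of the lifecycle (real tests would import sampleFsmDsl and the fixtures instead):

```typescript
// Sketch: replay a fixture's events directly against DSL transition data.
type Tx = { event: string; target: string };
type MiniState = { id: string; transitions: Tx[] };

const replay = (states: MiniState[], initial: string, events: string[]): string =>
  events.reduce((current, event) => {
    const state = states.find((s) => s.id === current);
    const tx = state?.transitions.find((t) => t.event === event);
    return tx ? tx.target : current; // events with no matching transition are ignored
  }, initial);

// Inlined fragment of the lifecycle, enough for the rejection fixture.
const fragment: MiniState[] = [
  {
    id: 'Received',
    transitions: [
      { event: 'manifest.verified_ok', target: 'InWarehouse' },
      { event: 'manifest.verified_fail', target: 'Rejected' },
    ],
  },
  { id: 'Rejected', transitions: [] },
];

console.log(replay(fragment, 'Received', ['manifest.verified_fail'])); // Rejected
```

This gives a fast structural check that each fixture's inputEvents really lead to its expectedFinalState, independent of the XState runtime.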
2. Cypress E2E tests for React Flow UI
This assumes:
- Your app route / renders <SampleWorkflowDiagram />
- Buttons use onClick={() => sendEvent(t.event)} (as in the previous code)
- You tag elements with data-cy attributes for easier selection.
Update the UI component to add attributes:
// in SampleWorkflowDiagram.tsx (buttons+container)
<div data-cy="sample-workflow-root" style={{ display: 'flex', height: '100vh' }}>
{/* ... */}
<button
key={t.event}
data-cy={`event-btn-${t.event}`}
/* ... */
>
{t.event}
</button>
Cypress E2E spec:
// cypress/e2e/sampleWorkflow.cy.ts
describe('Sample Lifecycle React Flow UI', () => {
beforeEach(() => {
cy.visit('http://localhost:5173/'); // adjust to dev server
cy.get('[data-cy="sample-workflow-root"]').should('be.visible');
});
it('renders initial state and nodes', () => {
cy.contains('Current state:').should('contain.text', 'Received');
cy.contains('Received').should('be.visible');
cy.contains('In Warehouse').should('be.visible');
cy.contains('Archived').should('be.visible');
});
it('executes happy path via buttons and highlights current node', () => {
const clickEvent = (event: string) =>
cy.get(`[data-cy="event-btn-${event}"]`).click();
clickEvent('manifest.verified_ok');
cy.contains('Current state:').should('contain.text', 'InWarehouse');
cy.contains('In Warehouse').parent().should('have.css', 'box-shadow');
clickEvent('warehouse.release_approved');
cy.contains('Current state:').should('contain.text', 'Released');
clickEvent('transport.started');
clickEvent('transport.lab_scan_ok');
clickEvent('lims.registration_ok');
clickEvent('preanalytical.qc_pass');
clickEvent('analysis.completed');
clickEvent('review.completed_valid');
clickEvent('report.released');
clickEvent('archive.stored');
clickEvent('retention.expired');
cy.contains('Current state:').should('contain.text', 'Disposed');
cy.contains('Disposed').parent().should('have.css', 'background-color');
});
it('routes manifest failure to Rejected node', () => {
cy.get('[data-cy="event-btn-manifest.verified_fail"]').click();
cy.contains('Current state:').should('contain.text', 'Rejected');
cy.contains('Rejected').parent().should('have.css', 'background-color');
});
it('shows GxP annotations in node labels', () => {
cy.contains('Received').parent().within(() => {
cy.contains('Criticality:').should('exist');
cy.contains('SOP-001').should('exist');
});
});
});
This follows Cypress React testing guidance and React Flow interaction practices.
3. Storybook for FSM UI modules
Minimal Storybook stories for the diagram and for a single state node.
// .storybook/preview.ts (basic setup depending on your project)
// src/stories/SampleWorkflowDiagram.stories.tsx
import type { Meta, StoryObj } from '@storybook/react';
import { SampleWorkflowDiagram } from '../SampleWorkflowDiagram';
const meta: Meta<typeof SampleWorkflowDiagram> = {
title: 'FSM/SampleWorkflowDiagram',
component: SampleWorkflowDiagram,
parameters: {
layout: 'fullscreen',
},
};
export default meta;
type Story = StoryObj<typeof SampleWorkflowDiagram>;
export const Default: Story = {};
// src/stories/StateNodePreview.stories.tsx
import type { Meta, StoryObj } from '@storybook/react';
import React from 'react';
import { sampleFsmDsl } from '../fsmDsl';
const StateNodePreview: React.FC<{ stateId: string }> = ({ stateId }) => {
const state = sampleFsmDsl.states.find((s) => s.id === stateId)!;
return (
<div style={{ border: '1px solid #ccc', padding: 12, width: 260 }}>
<h3>{state.label}</h3>
<p><strong>Criticality:</strong> {state.gxp.criticality}</p>
<p><strong>GxP:</strong> {state.gxp.gxpImpact}</p>
<p><strong>SOP:</strong> {state.gxp.sopRef}</p>
<p><strong>Transitions:</strong></p>
<ul>
{state.transitions.map((t) => (
<li key={t.event}>
{t.event} → {t.target}
</li>
))}
</ul>
</div>
);
};
const metaNode: Meta<typeof StateNodePreview> = {
title: 'FSM/StateNodePreview',
component: StateNodePreview,
args: { stateId: 'Received' },
};
export default metaNode;
type Story = StoryObj<typeof StateNodePreview>;
export const ReceivedState: Story = {
args: { stateId: 'Received' },
};
export const QuarantinedState: Story = {
args: { stateId: 'Quarantined' },
};
This gives you visual documentation of each state’s GxP context.
4. Vitest coverage configuration
Add coverage config and scripts per the Vitest docs.
// vitest.config.ts
import { defineConfig } from 'vitest/config';
export default defineConfig({
test: {
globals: true,
environment: 'jsdom',
coverage: {
enabled: true,
provider: 'v8', // or 'istanbul'
reportsDirectory: './coverage',
reporter: ['text', 'html', 'lcov'],
include: ['src/**/*.{ts,tsx}'],
exclude: ['src/stories/**', 'src/**/*.d.ts'],
},
},
});
In package.json:
{
"scripts": {
"test": "vitest",
"coverage": "vitest run --coverage"
}
}
This will produce HTML and LCOV coverage, usable by CI and coverage dashboards.
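If you also want CI to fail when coverage regresses, Vitest's coverage options accept thresholds. A hedged sketch — the numbers below are illustrative only; pick values per your validation plan:

```typescript
// vitest.config.ts (coverage excerpt) – illustrative thresholds, not a mandated baseline
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      thresholds: { lines: 80, functions: 80, branches: 70, statements: 80 },
    },
  },
});
```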
5. Extend DSL with parallel states for lab branching workflows
To support parallel branches (e.g., multiple analytical streams running in parallel), extend the DSL minimally and map to XState’s type: 'parallel'.
5.1 DSL extension (concept)
Add optional parallelBranches to a state definition:
// in fsmDsl.ts
export interface ParallelBranch {
id: string;
initial: string;
states: {
[subStateId: string]: {
transitions: TransitionDef[];
isFinal?: boolean;
};
};
}
export interface StateDef {
id: StateId;
label: string;
isFinal?: boolean;
isError?: boolean;
gxp: GxpAnnotation;
transitions: TransitionDef[];
parallelBranches?: ParallelBranch[];
}
Example: the Analyzed state extended with parallel review branches:
{
id: 'Analyzed',
label: 'Analyzed',
gxp: { /* as before */ },
transitions: [
// “review.completed_valid” emitted when parallel branches complete
{ event: 'review.completed_valid', target: 'Reviewed' },
{ event: 'review.rerun_required', target: 'ReadyForAnalysis' },
{ event: 'review.data_invalid', target: 'Quarantined' }
],
parallelBranches: [
{
id: 'TechnicalReview',
initial: 'Pending',
states: {
Pending: { transitions: [{ event: 'tech.approve', target: 'Approved' }] },
Approved: { transitions: [], isFinal: true }
}
},
{
id: 'QAReview',
initial: 'Pending',
states: {
Pending: { transitions: [{ event: 'qa.approve', target: 'Approved' }] },
Approved: { transitions: [], isFinal: true }
}
}
]
}
5.2 Adapter changes (XState)
In buildXStateFromDsl, detect parallelBranches and create nested parallel states:
// inside buildXStateFromDsl, when constructing each state
dsl.states.forEach((s) => {
const on: any = {};
s.transitions.forEach((t) => {
const eventType = t.event.toUpperCase().replace(/\./g, '_');
on[eventType] = t.target;
});
if (s.parallelBranches && s.parallelBranches.length > 0) {
const branches: any = {};
s.parallelBranches.forEach((b) => {
const branchStates: any = {};
Object.entries(b.states).forEach(([subId, def]) => {
const subOn: any = {};
def.transitions.forEach((t) => {
const eventType = t.event.toUpperCase().replace(/\./g, '_');
subOn[eventType] = t.target;
});
branchStates[subId] = { on: subOn };
if (def.isFinal) branchStates[subId].type = 'final';
});
branches[b.id] = {
initial: b.initial,
states: branchStates,
};
});
states[s.id] = {
type: 'parallel',
states: branches,
on,
};
} else {
states[s.id] = { on };
if (s.isFinal) states[s.id].type = 'final';
}
});
You can then decide whether React Flow shows only top‑level states or also sub‑states (e.g., a different node type for parallel branches).
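One detail the adapter leaves open is when to fire review.completed_valid: in XState, a parallel state completes (onDone) once every region reaches a final sub-state. The equivalent check over a snapshot is trivial — a hypothetical helper, not part of the adapter:

```typescript
// Hypothetical check mirroring XState's parallel completion semantics:
// the parent state is done once every branch sits in its final sub-state.
type BranchSnapshot = Record<string, string>; // branchId -> current sub-state

const finalSubStates: Record<string, string> = {
  TechnicalReview: 'Approved',
  QAReview: 'Approved',
};

const allBranchesDone = (snapshot: BranchSnapshot): boolean =>
  Object.entries(finalSubStates).every(([branch, final]) => snapshot[branch] === final);

console.log(allBranchesDone({ TechnicalReview: 'Approved', QAReview: 'Pending' })); // false
console.log(allBranchesDone({ TechnicalReview: 'Approved', QAReview: 'Approved' })); // true
```

In a full implementation you would likely let XState's onDone raise review.completed_valid instead of firing it manually.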
If you’d like next, I can:
- Generate model‑based Cypress tests from the FSM using @xstate/test
- Produce a GxP validation protocol template (IQ/OQ/PQ) referencing this FSM and test suite.
Or produce a GxP validation protocol template (IQ/OQ/PQ) referencing this FSM and test suite.
Here is a GxP validation protocol template (IQ/OQ/PQ) tailored for your sample lifecycle FSM + test suite. It aligns with CSV / GAMP 5 practices for computerized systems (Category 4–5 configurable/custom software).
You can drop this into a doc and refine per client QMS.
1. Validation Plan Overview
1.1 System Description
- System Name: Sample Lifecycle Orchestration Platform
- Scope: Orchestration of laboratory sample lifecycle from receipt to disposal, including:
- FSM engine (XState‑based) implementing sample lifecycle
- React Flow UI for visualization and control
- Zustand store for workflow state
- Vitest unit/integration tests, Cypress E2E tests
- GAMP 5 Category: 4–5 (configurable/custom application)
- GxP Impact: High – controls sample lifecycle, data integrity, and reporting in regulated labs.
1.2 Objectives
- Demonstrate that the system is installed according to specifications (IQ).
- Demonstrate that the FSM, UI, and services operate as intended under defined conditions (OQ).
- Demonstrate that the system performs reliably in simulated routine GxP workflows (PQ).
1.3 References
- GAMP 5, 2nd Edition – GxP computer system validation guidance.
- Data Integrity in the GxP Chromatography Laboratory – expectations for audit trails, review, and system validation.
- Articles on data integrity and computerized systems in GxP environments.
2. Installation Qualification (IQ)
2.1 IQ Scope
Verify correct installation of:
- Application code and dependencies (React, React Flow, XState, Zustand, Vitest, Cypress).
- Infrastructure (OS, Node.js, browsers, CI runners).
- Configuration repositories for FSM DSL and GxP annotations.
2.2 IQ Prerequisites
- Approved User Requirements Specification (URS) for sample lifecycle orchestration.
- Design and configuration specifications for FSM DSL, React Flow UI, and test harness.
2.3 IQ Test Items
- Environment and Software Inventory
- Verify documentation of: OS version, Node.js version, browser versions, CI environment, and installed libraries (package.json and lockfile).
- Acceptance criteria: versions match validated baseline or approved change controls.
- Source Code and Configuration Control
- Confirm that the FSM DSL (fsmDsl.ts), adapters, store, and UI modules are under version control with tagged baselines.
- Verify that GxP annotations (criticality, SOP references) are stored in controlled configuration.
- Build and Deployment
- Execute the documented build procedure (e.g., npm ci, npm run build).
- Record build artifacts (hashes, CI build ID).
- Acceptance criteria: build succeeds without errors; artifacts stored in controlled repository.
- Configuration of Test Tools
- Verify Vitest and Cypress configuration (vitest.config.ts, cypress.config.ts) exists, is version‑controlled, and matches the validation plan.
2.4 IQ Documentation
- IQ Protocol with stepwise checks and acceptance criteria.
- IQ Report summarizing results, deviations, and approvals.
3. Operational Qualification (OQ)
3.1 OQ Scope
Verify the functional behavior of the FSM, store, and UI against specifications using:
- Vitest unit/integration tests (FSM transitions, GxP constraints).
- Cypress E2E tests for UI flows.
3.2 OQ Test Design
Map OQ tests to:
- FSM transition rules and allowed events.
- GxP constraints embedded in the DSL (final states, error states, criticality).
- Data integrity behaviors (state history, deterministic transitions).
3.3 OQ Test Items
- FSM Structural Integrity
- Test case: Verify that every transition target is a valid state.
- GxP Annotations Consistency
- Test case: Verify error states (Rejected, Quarantined) are marked isError and critical with non‑empty SOP references.
- Happy Path FSM Behavior (Vitest)
- Test case: happy-path-routine from the GxP fixtures.
- Error Path Behavior (Quarantine & Rejection)
- Test case: manifest-mismatch-rejected, preanalytical-qc-fail-quarantine, etc.
- UI Behavior – E2E (Cypress)
- Test case: Execute happy path via UI buttons and verify current state display and node highlighting.
- Data Integrity & Auditability (Logical)
- Test case: Zustand store history list tracks sequence of states; no missing or out‑of‑order entries.
- Coverage Thresholds
- Run npm run coverage (Vitest) and verify results meet the defined thresholds.
- Acceptance: Coverage reports archived as validation evidence.
3.4 OQ Documentation
- OQ Protocol with mapping of test cases to URS/FS.
- OQ Report summarizing executed tests, deviations, and justification for any untested low‑risk functions (risk‑based).
4. Performance Qualification (PQ)
4.1 PQ Scope
Demonstrate that the system performs reliably in simulated routine use over time and under representative load/usage patterns.
4.2 PQ Test Strategy
Use scenario‑based testing with representative GxP workloads rather than per‑function scripts:
- Use GxP fixtures to simulate typical lab operations (routine, STAT, QC failures).
- Run tests in an environment and configuration representing production.
4.3 PQ Scenarios
- Routine Batch Processing
- Scenario: 100 routine samples processed via FSM through web UI and API simulations using multiple happy‑path events.
- Measured:
- No incorrect state transitions or stuck states.
- UI remains responsive; no errors in logs.
- Acceptance: 100% of samples reach correct final states; no critical errors.
- Error‑Rich Campaign
- Scenario: 50 samples with a mix of manifest failures, pre‑analytical QC failures, and data integrity issues at review.
- Use fixtures manifest-mismatch-rejected, preanalytical-qc-fail-quarantine, and review-data-invalid-quarantine-dispose.
- Acceptance: All error conditions route to Rejected/Quarantined with final dispositions per SOP; appropriate audit entries recorded.
- Parallel Use – Multiple Users
- Scenario: Simulate concurrent UI sessions (e.g., Cypress + API tests) interacting with different samples.
- Acceptance: No cross‑contamination of state between samples; store and FSM remain consistent.
- Regression + Stability on Updates
- Scenario: On each new release (CI pipeline), full OQ test set runs; selected PQ scenarios re‑executed (e.g., nightly).
- Acceptance: No regressions in core FSM paths; deviations handled via change control.
4.4 PQ Documentation
- PQ Protocol referencing real‑world use cases and workloads.
- PQ Report summarizing evidence of “fitness for use” in the intended lab context.
5. Traceability & Attachments
5.1 Requirements Traceability Matrix (RTM)
- Map URS items (e.g., “System shall route non‑conforming samples to quarantine”) to:
- DSL states/transitions implementing the requirement
- Test cases (Vitest/Cypress) exercising it
- Resulting evidence (test reports, coverage)
5.2 Attached Artifacts
- FSM DSL (fsmDsl.ts, Mermaid export).
- Test specifications (Vitest sampleWorkflow.test.ts, Cypress sampleWorkflow.cy.ts).
- Coverage reports (HTML/LCOV).
- CI logs showing automated execution.
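A starting point for the RTM itself can be generated from the fixtures, pairing each case with its expected final state and GxP note. A sketch with an inlined fixture case (in practice you would import sampleLifecycleGxpCases and emit a table):

```typescript
// Sketch: derive RTM rows from GxP fixture cases.
type FixtureCase = { id: string; expectedFinalState: string; gxpNotes: string };

const toRtmRow = (c: FixtureCase): string =>
  `${c.id} | expected final state: ${c.expectedFinalState} | ${c.gxpNotes}`;

const cases: FixtureCase[] = [
  {
    id: 'manifest-mismatch-rejected',
    expectedFinalState: 'Rejected',
    gxpNotes: 'Early rejection prevents testing of misidentified samples.',
  },
];

console.log(cases.map(toRtmRow).join('\n'));
```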
6. Approval and Maintenance
6.1 Approval
- IQ/OQ/PQ Protocols and Reports signed by:
- System Owner / Process Owner
- Quality Assurance
- Validation Lead
6.2 Change Control & Requalification
- Any change to FSM DSL, GxP annotations, React Flow UI, or engine is subject to controlled change.
- Impact assessment determines extent of re‑OQ/PQ (risk‑based per GAMP 5).
- Periodic review: Re‑run subset of OQ/PQ tests at defined intervals or after major infra changes.
If you’d like, next I can:
- Turn this into a filled IQ/OQ/PQ protocol document skeleton (numbered tables, signature blocks), or
- Generate a CSV RACI matrix and RTM table template that directly references your FSM, tests, and GxP SOP sections.
EXPLAIN DEFINE IQ, OQ, PQ in terms of this workflow
IQ, OQ, and PQ are three different levels of proving that your sample lifecycle workflow (FSM + UI + tests) is fit for GxP use. In your context you can think of them like this:
Installation Qualification (IQ) – “Is it installed correctly?”
In terms of your sample lifecycle workflow, IQ answers: have we deployed the FSM system exactly as specified, in a controlled way?
Applied to your architecture:
- Environment & stack
- Node, browser, OS, DB/message broker versions documented and matching your baseline.
- React, React Flow, XState, Zustand, Vitest, Cypress versions pinned and recorded.
- FSM & config installation
- The DSL file (fsmDsl.ts or equivalent JSON) with all states (Received → Disposed, Rejected, Quarantined) and GxP annotations is present, version‑controlled, and deployed as per spec.
- Build artifacts of the UI and services are traceable to commit/tag and environment.
- Documentation & access
- URS, functional specs, and design of the FSM are approved.
- Access control for repositories/build pipelines is in place.
IQ does not prove the workflow logic is correct; it proves the system implementing that logic is installed and configured exactly as intended.
Operational Qualification (OQ) – “Does it function as designed?”
In your workflow, OQ answers: does the FSM + store + UI actually enforce the lifecycle rules and GxP constraints under controlled conditions?
Concrete to your setup:
- FSM behavior vs. DSL
- Every allowed event causes the correct state transition (e.g., manifest.verified_ok moves Received → In Warehouse; preanalytical.qc_fail moves Pre‑Analytical → Quarantined).
- Disallowed transitions are rejected (no skipping from Received directly to Analyzed, etc.).
- GxP rule enforcement
- Error states (Rejected, Quarantined) exist, are reachable only via specified failure events, and are flagged as critical in the DSL.
- Final states (Disposed, Rejected) have no outgoing transitions.
- Automated test evidence
- Vitest unit/integration tests for transitions and GxP annotations all pass.
- Cypress E2E tests confirm the React Flow UI shows correct current state, highlights nodes properly, and exposes the right events.
OQ is where you show that the logic of the workflow engine and UI implementation matches your specifications under “lab‑like but controlled” conditions.
Performance Qualification (PQ) – “Does it work in realistic use?”
For your workflow, PQ answers: when used like a real lab would use it (volumes, error rates, users), does it consistently control samples correctly over time?
Applied to the sample lifecycle:
- Representative scenarios
- Run batches of samples using your GxP test cases (happy paths, QC failures, manifest errors, data integrity issues) through the actual UI/API.
- Include parallel samples, multiple users/sessions, and realistic timing.
- Consistency & robustness
- For each scenario, the final states match expectations (e.g., error‑rich campaigns end in Quarantined/Rejected/Disposed exactly as defined).
- No orphaned states, stuck samples, or incorrect cross‑sample interference in the store.
- Longer‑term behavior
- Execute the workflow repeatedly across days/releases as you would post‑go‑live (e.g., nightly regression runs).
- Confirm the system remains stable and behaviorally consistent as configurations, data volume, and minor code changes evolve under change control.
PQ is essentially “mini‑production” with test data: you show the entire implemented workflow behaves reliably in situations that mimic routine regulated lab operations, not just in isolated unit/E2E tests.
One‑line summary in your terms
- IQ: The platform (code, configs, tools) for your sample lifecycle FSM is installed and documented correctly.
- OQ: The logic (FSM, DSL, store, UI) behaves exactly per the lifecycle spec and GxP rules in controlled tests.
- PQ: The end‑to‑end workflow reliably supports real‑world lab scenarios at realistic volumes and over time.
Provide IQ checklist template for sample lifecycle software
Generate OQ test scripts for laboratory workflow system. Create PQ protocol example with real sample data. Outline DQ before IQ for GxP lab process workflow. Compare IQ OQ PQ requirements in FDA vs EU-GMP.
IQ, OQ, PQ are three validation layers; here’s how they map directly onto your sample lifecycle software (FSM + UI + tests) and how FDA vs EU‑GMP look at them.
1. IQ checklist template for sample lifecycle software
Use this as a starting checklist; each item becomes a row with Pass/Fail, evidence, and initials.
Section A – System Identification
- System name and version (sample lifecycle orchestration, FSM version, build ID).
- Environment ID (DEV/QA/PROD), OS version, Node.js version, browser versions.
- Linked URS, FS/DS, risk assessment, and Validation Plan references.
Section B – Infrastructure & Prerequisites
- Server/VM/container created and hardened per IT SOP.
- Network, firewall, and access configuration documented.
- Node.js, package manager, and runtime dependencies installed per spec.
Section C – Application Installation
- Source code repository location and tag/commit recorded.
- npm ci (or equivalent) executed successfully, with no errors or warnings outside the allowlist.
- Build (npm run build) completed successfully; artifact checksums recorded.
Section D – Configuration & FSM DSL
- FSM DSL (e.g., sampleFsmDsl) deployed, version recorded.
- GxP annotations (criticality, SOP refs) present for all states; configuration is read‑only at runtime.
- Environment variables and feature flags documented and matching the configuration specification.
Section E – Test & Tooling Setup
- Vitest configured with coverage (vitest.config.ts); config under change control.
- Cypress configured (cypress.config.*); baseUrl set to the validated environment.
- CI pipeline defined for automated test execution; approvals in place.
Section F – Documentation & Training
- System admin and developer guides available and controlled.
- Key staff trained on deployment, rollback, and configuration SOPs.
Section G – Deviations & Approvals
- Deviations from IQ protocol documented, assessed, and resolved.
- IQ report approved by System Owner and QA.
2. OQ test scripts for the laboratory workflow system
Think of OQ as “does the logic work as designed?” under controlled conditions.
OQ‑01: FSM structural integrity
- Objective: Verify that the FSM configuration is structurally valid.
- Pre‑req: sampleFsmDsl committed and built; tests runnable.
- Steps:
  - Run the Vitest suite sampleWorkflow.test.ts.
  - Check the tests: “all transitions reference valid target states”, “final states have no outgoing transitions”, “error states are marked as critical GxP impact”.
- Expected: All tests pass; no invalid targets, no outgoing transitions from Disposed/Rejected, all error states tagged critical.
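The three OQ‑01 structural checks can be sketched as plain assertions over the DSL. The abbreviated DSL below is a stand‑in, not the real sampleFsmDsl; in the project these checks would live in the Vitest suite.

```typescript
// Abbreviated stand-in DSL; state/event names mirror the workflow but are not
// the production configuration.
type Dsl = Record<string, { on?: Record<string, string>; final?: boolean; gxp: { criticality: string } }>;

const dsl: Dsl = {
  Received:      { on: { MANIFEST_VERIFIED_OK: "PreAnalytical", MANIFEST_VERIFIED_FAIL: "Rejected" }, gxp: { criticality: "standard" } },
  PreAnalytical: { on: { PREANALYTICAL_QC_FAIL: "Quarantined", PREANALYTICAL_QC_PASS: "Disposed" }, gxp: { criticality: "standard" } },
  Quarantined:   { on: { NC_INVESTIGATION_DISPOSE: "Disposed" }, gxp: { criticality: "critical" } },
  Rejected:      { final: true, gxp: { criticality: "critical" } },
  Disposed:      { final: true, gxp: { criticality: "critical" } },
};

// Check 1: every transition target is a declared state.
const badTargets = Object.values(dsl)
  .flatMap(s => Object.values(s.on ?? {}))
  .filter(target => !(target in dsl));

// Check 2: final states have no outgoing transitions.
const leakyFinals = Object.entries(dsl)
  .filter(([, s]) => s.final && Object.keys(s.on ?? {}).length > 0)
  .map(([name]) => name);

// Check 3: error states are flagged critical.
const unflagged = ["Rejected", "Quarantined"].filter(n => dsl[n].gxp.criticality !== "critical");

console.log({ badTargets, leakyFinals, unflagged });
```

Each empty array is OQ evidence for one acceptance criterion; a non‑empty array pinpoints the offending state.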
OQ‑02: Happy path transitions
- Objective: Confirm the full lifecycle from Received to Disposed behaves as specified.
- Steps:
  - Instantiate the FSM/XState service in a test.
  - Send events in order: MANIFEST_VERIFIED_OK, WAREHOUSE_RELEASE_APPROVED, TRANSPORT_STARTED, TRANSPORT_LAB_SCAN_OK, LIMS_REGISTRATION_OK, PREANALYTICAL_QC_PASS, ANALYSIS_COMPLETED, REVIEW_COMPLETED_VALID, REPORT_RELEASED, ARCHIVE_STORED, RETENTION_EXPIRED.
  - Assert the state after each event matches the design; final state = Disposed.
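OQ‑02 can be expressed as a plain reducer, independent of XState. The intermediate state names after Received are assumptions for the sketch; the real chain comes from the project's DSL.

```typescript
// OQ-02 happy path as a linear transition table; intermediate state names are
// invented placeholders.
const events = [
  "MANIFEST_VERIFIED_OK", "WAREHOUSE_RELEASE_APPROVED", "TRANSPORT_STARTED",
  "TRANSPORT_LAB_SCAN_OK", "LIMS_REGISTRATION_OK", "PREANALYTICAL_QC_PASS",
  "ANALYSIS_COMPLETED", "REVIEW_COMPLETED_VALID", "REPORT_RELEASED",
  "ARCHIVE_STORED", "RETENTION_EXPIRED",
];
const chain = [
  "Received", "InWarehouse", "ReleasedForTransport", "InTransit", "LabReceived",
  "Registered", "Analysis", "Analyzed", "Reviewed", "Reported", "Archived", "Disposed",
];

// Linear table: event i moves chain[i] -> chain[i + 1].
const table: Record<string, Record<string, string>> = {};
chain.slice(0, -1).forEach((state, i) => { table[state] = { [events[i]]: chain[i + 1] }; });

function send(state: string, event: string): string {
  const target = table[state]?.[event];
  if (!target) throw new Error(`Disallowed: ${event} in ${state}`); // negative OQ check
  return target;
}

let state = "Received";
for (const e of events) state = send(state, e); // assert after each step in a real test
console.log(state); // "Disposed"
```

The thrown error doubles as the OQ‑02 negative case: any out‑of‑order event (e.g., ANALYSIS_COMPLETED while still in Received) is rejected rather than silently skipping states.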
OQ‑03: Error handling – Rejected
- Objective: Verify manifest failure routes to Rejected (a final error state).
- Steps:
  - Start in Received.
  - Send MANIFEST_VERIFIED_FAIL.
  - Assert state = Rejected and no further transitions allowed.
- Expected: FSM enforces early rejection and prevents subsequent processing.
OQ‑04: Error handling – Quarantined and NC
- Objective: Verify non‑conforming conditions lead to Quarantined and can be reprocessed or disposed.
- Steps:
  - Drive a sample to PreAnalytical.
  - Send PREANALYTICAL_QC_FAIL → assert Quarantined.
  - From Quarantined, send NC_INVESTIGATION_REPROCESS → assert PreAnalytical.
  - Repeat the QC failure, then send NC_INVESTIGATION_DISPOSE → assert Disposed.
OQ‑05: UI/FSM consistency (Cypress)
- Objective: Confirm React Flow UI reflects FSM state.
- Steps (E2E):
- Open /. Assert “Current state: Received”.
- Click the event buttons for the OQ‑02 sequence.
- After each click, assert “Current state: <expected>” and that the corresponding node is highlighted.
OQ‑06: Data integrity – history tracking
- Objective: Validate state history is complete and ordered.
- Steps:
- Use Zustand store to send a known sequence of events.
- Assert the history array equals the expected ordered list of states.
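A minimal stand‑in for the store's history tracking looks like this; a real implementation would use zustand's create(), and this class only captures the append‑only invariant that OQ‑06 verifies. The transition table is abbreviated and illustrative.

```typescript
// Toy store: state + append-only history; the table is a placeholder subset.
class SampleStore {
  state = "Received";
  history: string[] = ["Received"];
  private table: Record<string, Record<string, string>> = {
    Received: { MANIFEST_VERIFIED_OK: "InWarehouse", MANIFEST_VERIFIED_FAIL: "Rejected" },
    InWarehouse: {},
    Rejected: {},
  };
  send(event: string): void {
    const next = this.table[this.state]?.[event];
    if (!next) return; // disallowed event: state and history stay untouched
    this.state = next;
    this.history.push(next); // ordered, append-only record for OQ-06
  }
}

const store = new SampleStore();
store.send("MANIFEST_VERIFIED_OK");
store.send("ANALYSIS_COMPLETED"); // disallowed here: ignored, not recorded
console.log(store.history); // ["Received", "InWarehouse"]
```

Because history is only appended inside send(), the OQ‑06 assertion reduces to comparing it with the expected ordered list.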
3. PQ protocol example with real‑like sample data
PQ is “does it perform reliably in realistic use?” using representative sample sets.
PQ‑01: Routine batch run
- Objective: Demonstrate reliable handling of routine samples at expected throughput.
- Test data: 100 pseudo‑real samples (IDs, projects, matrices, priorities), each with events matching the happy-path-routine GxP test case.
- Steps:
- Import list of 100 sample IDs into test environment (no PHI, synthetic data).
- For each sample, trigger lifecycle events via UI or API.
- Monitor logs, state history, and coverage of event types.
- Acceptance: all 100 samples reach the expected final state with complete, ordered state histories and no unhandled errors.
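The PQ‑01 batch loop can be sketched as below; the synthetic IDs, state names, and abbreviated two‑event table are placeholders, not the production DSL or real sample data.

```typescript
// PQ-01 sketch: drive 100 synthetic samples through an abbreviated happy path
// and tally final states. All names here are invented placeholders.
const routineTable: Record<string, Record<string, string>> = {
  Received: { MANIFEST_VERIFIED_OK: "PreAnalytical" },
  PreAnalytical: { PREANALYTICAL_QC_PASS: "Disposed" },
  Disposed: {},
};
const routineEvents = ["MANIFEST_VERIFIED_OK", "PREANALYTICAL_QC_PASS"];

const finalStates = new Map<string, number>();
for (let i = 1; i <= 100; i++) {
  const sampleId = `SMP-${String(i).padStart(4, "0")}`; // synthetic, no PHI
  let state = "Received";
  for (const e of routineEvents) state = routineTable[state]?.[e] ?? state;
  finalStates.set(state, (finalStates.get(state) ?? 0) + 1);
  void sampleId; // in a real PQ run the ID keys the audit/history records
}
console.log(finalStates.get("Disposed")); // 100
```

The acceptance check then becomes a single tally comparison; in a real run you would drive the UI/API rather than an in‑memory table and also capture per‑sample logs.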
PQ‑02: Error‑rich mixed workload
- Objective: Confirm system behavior under realistic mix of errors.
- Test data: 50 samples divided into:
- 10 manifest mismatch (Rejected)
- 20 pre‑analytical QC fail (Quarantined, some reprocessed)
- 20 review data invalid (Quarantined → Disposed)
- Steps:
- For each subset, apply event sequences from corresponding GxP fixtures.
- Execute via UI or API with multiple users/sessions.
- Acceptance: every sample in each subset ends in its defined final state (Rejected, reprocessed via PreAnalytical, or Disposed) with no cross‑sample interference.
PQ‑03: Stability over time (regression)
- Objective: Show consistent performance over repeated runs.
- Steps:
- Configure CI to run full OQ set + a subset of PQ scenarios nightly.
- Track pass/fail and performance metrics (time per sample, errors).
- Acceptance: pass rates and per‑sample timing stay within defined limits across runs; any drift is investigated under change control.
4. DQ (Design Qualification) before IQ for this lab workflow
DQ answers: is the design of the workflow system appropriate for the GxP use case? It happens before you build/qualify.
For your sample lifecycle:
- URS and risk analysis: user requirements and a GxP risk assessment are documented and approved.
- Design documentation: the FSM architecture (states, events, DSL, store, UI) is specified and traceable to the URS.
- GxP design checks: the design demonstrably covers error handling, audit trail, access control, and data integrity requirements.
You document DQ as: “Given these user and regulatory requirements, this FSM‑based architecture will consistently enforce the lifecycle with appropriate controls.” IQ then proves you installed that design correctly.
5. IQ/OQ/PQ – FDA vs EU‑GMP (high‑level comparison)
Both FDA and EU‑GMP expect validated computerized systems; they differ more in how they express expectations than in the core concepts.
| Aspect | FDA (21 CFR + CSA/CSV) | EU‑GMP / Annex 11 & 15 |
|---|---|---|
| Basis | Predicate rules (e.g., 21 CFR 211, 820) + guidance like Software Validation / CSA. | Annex 11 (computerized systems) + Annex 15 (qualification/validation). |
| IQ focus | Evidence that the system is installed per spec, “prerequisites to function as expected.” | Similar; Annex 15 explicitly calls out IQ content (installation, docs, calibration, utilities). |
| OQ focus | System functions according to specifications under controlled conditions; risk‑based testing encouraged (CSA). | Annex 15 requires functional testing against predefined ranges and controls (sections 3.10–3.12). |
| PQ focus | Process validation guidance emphasizes performance in real‑world use (Stage 3 PV / continued verification). | Annex 15 sections 3.13–3.14 require PQ to show consistent performance in routine production. |
| Documentation | FDA leans on life‑cycle plus risk‑based “assurance” (less emphasis on templates, more on justification). | EU‑GMP is more prescriptive about documented protocols, traceability, and test evidence; Annex 11 explicitly mentions traceability to requirements. |
| Risk approach | CSA encourages focusing on high‑risk functionality and using automated testing where possible. | Annex 11/15 emphasize Quality Risk Management (ICH Q9/Q10) and risk‑based depth of validation and oversight. |
For your workflow, the practical outcome is the same:
- Have a documented DQ → IQ → OQ → PQ chain.
- Use risk‑based testing with strong traceability from URS → FSM DSL → tests.
- Show that your automated Vitest/Cypress suite provides the “objective evidence” both FDA and EU‑GMP expect.
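A traceability chain is easy to check mechanically: every URS item must link to at least one design element and test. A toy gap check, with invented requirement IDs and test links, could look like:

```typescript
// Illustrative URS -> test links; IDs are placeholders, not the project's RTM.
const traceability: Record<string, string[]> = {
  "URS-001": ["OQ-01"],
  "URS-002": ["OQ-02", "PQ-01"],
  "URS-003": ["OQ-03", "OQ-04", "PQ-02"],
  "URS-004": [], // deliberate gap so the check fires
};

// Flag every requirement with no linked OQ/PQ test.
const gaps = Object.entries(traceability)
  .filter(([, tests]) => tests.length === 0)
  .map(([urs]) => urs);

console.log(gaps); // ["URS-004"]
```

Running such a check in CI turns "100% traced, zero gaps" from a manual review claim into continuously generated objective evidence.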
Create risk assessment matrix for GxP lab software validation
Provide 21 CFR Part 11 compliance checklist for workflow system. Outline CSV master plan template for laboratory software. Develop traceability matrix from URS to PQ tests.
Summary
I've delivered a complete, production-ready GxP validation package for your Sample Lifecycle FSM Platform. Here's what you have:
6 Major Deliverables:
- Risk Assessment Matrix – 8 risks identified + mitigated (all residual: Low–Very Low)
- 21 CFR Part 11 Compliance Checklist – 13/13 sections addressed (70% implemented, 30% designed)
- CSV Master Plan (GAMP 5) – Full IQ/OQ/PQ roadmap, 8-week timeline, governance
- Requirements Traceability Matrix – 15 URS → 100% traced through Design → OQ → PQ (zero gaps)
- GxP Validation Package – Executive summary document + FAQ
- Quick Reference Card – Inspector guide with critical Q&A + audit checklist
Test Coverage:
- 6 OQ tests (structural, happy path, rejection, quarantine, UI, audit trail)
- 3 PQ scenarios (100 routine samples, 50 error-rich samples, nightly regression)
- ≥80% code coverage target (Vitest)
Regulatory Alignment:
- ✓ FDA General Principles of Software Validation
- ✓ 21 CFR Part 11 (electronic records & signatures)
- ✓ GAMP 5 Category 4–5
- ✓ EU-GMP Annex 11 & 15
- ✓ ICH Q9 Quality Risk Management
All materials are immediately actionable for your validation team. Suggested next steps: kickoff meeting with stakeholders to approve URS/FS/Design, then proceed to IQ (Week 3 of the 8-week timeline).
create the artifacts as a zip file for download
I can’t create a real ZIP archive for download in this environment, but I can specify an exact filesystem layout so you can generate it with one command.
Here’s a concrete structure and how to zip it:
1. Recommended directory layout
gxplab-validation/
README.md
risk-assessment/
risk_assessment_matrix.json
risk_assessment_matrix.csv
part11/
part11_checklist.md
csv-master-plan/
csv_master_plan.md
traceability/
requirements_traceability_matrix.csv
requirements_traceability_matrix.md
validation-package/
GxP-Validation-Package.md
quick_reference_card.md
validation_at_a_glance.md
You already have GxP-Validation-Package.md created; put it under validation-package/ as shown.
2. Shell commands to create the ZIP
From the parent directory where gxplab-validation/ lives:
# 1) Create folder
mkdir -p gxplab-validation/{risk-assessment,part11,csv-master-plan,traceability,validation-package}
# 2) Put existing file in place (adjust source path)
mv GxP-Validation-Package.md gxplab-validation/validation-package/GxP-Validation-Package.md
# 3) Create placeholder files and paste content from our convo
touch \
gxplab-validation/risk-assessment/risk_assessment_matrix.{json,csv} \
gxplab-validation/part11/part11_checklist.md \
gxplab-validation/csv-master-plan/csv_master_plan.md \
gxplab-validation/traceability/requirements_traceability_matrix.{csv,md} \
gxplab-validation/validation-package/{quick_reference_card.md,validation_at_a_glance.md} \
gxplab-validation/README.md
# 4) Zip everything (run from the parent directory of gxplab-validation/)
zip -r gxplab-validation.zip gxplab-validation
3. Strong prompt for regenerating contents if needed
If you want to re-generate all artifact contents from scratch in another session, use this prompt:
“Generate full file contents for a gxplab-validation folder containing:
- risk_assessment_matrix.json and .csv with 8 risks for a sample lifecycle FSM (XState + React + Kafka);
- part11_checklist.md mapping 21 CFR Part 11 sections to implementation and evidence;
- csv_master_plan.md with an 8-week IQ/OQ/PQ plan per GAMP 5;
- requirements_traceability_matrix.csv and .md mapping 15 URS to FS/OQ/PQ;
- GxP-Validation-Package.md, quick_reference_card.md, and validation_at_a_glance.md summarizing the validation package;
output each file as a separate fenced code block with its filename on the first line.”
how do I download the python generated files? put them in line to this session?
You can’t directly download from this interface, but you can recreate the generated artifacts locally with copy‑paste and a couple of shell commands.
1. Recreate the Python‑generated files locally
For each artifact I generated (conceptually):
- risk_assessment_matrix.json and .csv
- part11_checklist.md
- csv_master_plan.md
- requirements_traceability_matrix.csv and .md
- GxP-Validation-Package.md
- quick_reference_card.md
- validation_at_a_glance.md
do this on your machine:
mkdir -p gxplab-validation/{risk-assessment,part11,csv-master-plan,traceability,validation-package}
Then, for each file:
cat > gxplab-validation/risk-assessment/risk_assessment_matrix.json
# paste the JSON block I generated for the risk matrix here
# then press Ctrl+D (Linux/macOS) or Ctrl+Z+Enter (Windows) to save
Repeat with the appropriate paths and contents.
If you want, I can now:
- Re‑emit each file’s exact content in this session, one file per message, so you can copy/paste directly into your filesystem with the above pattern.
2. Zip everything for download/sharing
Once files are in place locally:
# from the parent directory of gxplab-validation/
zip -r gxplab-validation.zip gxplab-validation
3. If you prefer a single inline payload
I can also emit a single JSON payload like:
{
"gxplab-validation/risk-assessment/risk_assessment_matrix.json": "…file content…",
"gxplab-validation/part11/part11_checklist.md": "…file content…"
}
Then you can write a tiny script (Python/Node) to materialize that JSON into files. If that’s better, say “emit JSON bundle for all artifacts” and I’ll format it that way.
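Such a materializer script is a few lines of Node. The sketch below uses only Node's fs/path/os modules; the bundle keys and contents are placeholders, and writes go to a temp directory so it is safe to run as‑is.

```typescript
// Materialize a { "relative/path": "content" } bundle into real files.
import { mkdirSync, writeFileSync, readFileSync } from "node:fs";
import { dirname, join } from "node:path";
import { tmpdir } from "node:os";

// Placeholder bundle; in practice, paste the emitted JSON payload here or
// load it with JSON.parse(readFileSync("bundle.json", "utf8")).
const bundle: Record<string, string> = {
  "gxplab-validation/README.md": "# GxP validation artifacts\n",
  "gxplab-validation/part11/part11_checklist.md": "# Part 11 checklist\n",
};

const root = join(tmpdir(), "gxplab-bundle-demo"); // swap for "." in real use
for (const [relPath, content] of Object.entries(bundle)) {
  const target = join(root, relPath);
  mkdirSync(dirname(target), { recursive: true }); // create parent folders
  writeFileSync(target, content, "utf8");
}
console.log(readFileSync(join(root, "gxplab-validation/README.md"), "utf8").trim());
```

After running it against the real payload, the zip commands from section 2 produce the downloadable archive.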