Electronic Record Controls Specification

Document ID: CODITECT-BIO-ERC-001
Version: 1.0.0
Effective Date: 2026-02-16
Classification: Internal - Restricted
Owner: Chief Information Security Officer (CISO)


Document Control

Approval History

| Role | Name | Signature | Date |
|---|---|---|---|
| Chief Information Security Officer | [Pending] | [Digital Signature] | YYYY-MM-DD |
| VP Quality Assurance | [Pending] | [Digital Signature] | YYYY-MM-DD |
| VP Engineering | [Pending] | [Digital Signature] | YYYY-MM-DD |
| Regulatory Affairs Director | [Pending] | [Digital Signature] | YYYY-MM-DD |

Revision History

| Version | Date | Author | Changes | Approval Status |
|---|---|---|---|---|
| 1.0.0 | 2026-02-16 | CISO Office | Initial release | Draft |

Distribution List

  • Executive Leadership Team
  • Information Security Team
  • Quality Assurance Team
  • Engineering Leadership
  • Compliance and Regulatory Affairs
  • Internal Audit
  • External Auditors (as needed)

Review Schedule

| Review Type | Frequency | Next Review Date | Responsible Party |
|---|---|---|---|
| Annual Review | 12 months | 2027-02-16 | CISO |
| Regulatory Update Review | As needed | N/A | Regulatory Affairs |
| Post-Incident Review | As needed | N/A | Security Incident Response Team |
| Retention Policy Review | Quarterly | 2026-05-16 | Compliance Officer |

1. Purpose and Scope

1.1 Purpose

This Electronic Record Controls Specification establishes the requirements and procedures for managing electronic records in the CODITECT Biosciences Quality Management System (BIO-QMS) Platform to ensure:

  1. Record Integrity - Electronic records remain trustworthy, unaltered, and tamper-evident throughout their lifecycle
  2. Record Retrieval - All records are retrievable in human-readable format throughout the retention period
  3. Record Retention - Records are retained per regulatory requirements and organizational policies
  4. Access Control - Role-based access with time-limited sessions ensures only authorized personnel access records
  5. Regulatory Compliance - Full conformance with FDA 21 CFR Part 11, HIPAA, SOC 2, and ALCOA+ principles

1.2 Scope

This specification applies to:

In Scope:

  • All GxP electronic records (work orders, approvals, validations, test results)
  • Electronic signatures and signature manifestations
  • Audit trail records for all system operations
  • Protected health information (PHI) records
  • Configuration and change control records
  • Training and qualification records
  • Vendor and supplier records
  • Quality event records (deviations, CAPAs, change controls)

Out of Scope:

  • Source code version control (managed by separate Software Development Policy)
  • Infrastructure logs unrelated to GxP activities (managed by separate Infrastructure Policy)
  • Marketing and sales records (non-regulated)

1.3 Audience

  • Primary: Quality Assurance, Compliance Officers, System Administrators
  • Secondary: Engineering Team, DevOps Engineers, Security Engineers
  • Reference: Executive Leadership, External Auditors, Regulatory Inspectors

1.4 Regulatory Framework

This specification implements requirements from:

| Framework | Authority | Key Requirements |
|---|---|---|
| FDA 21 CFR Part 11 | U.S. Food and Drug Administration | §11.10 (Controls for closed systems); §11.30 (Controls for open systems); §11.50 (Signature manifestations); §11.70 (Signature/record linking) |
| HIPAA Security Rule | U.S. Department of Health and Human Services | §164.312(b) Audit controls; §164.316(b)(2)(i) Retention requirements |
| SOC 2 Type II | AICPA Trust Services Criteria | CC6.1 Logical access controls; CC7.2 System monitoring; CC8.1 Change management |
| ALCOA+ Principles | WHO GMP/GDP Guidelines | Attributable, Legible, Contemporaneous, Original, Accurate, Complete, Consistent, Enduring, Available |

2. Record Integrity Architecture

2.1 Immutable Record Design

Principle: Electronic records MUST be tamper-evident and preserve a complete history of all modifications.

2.1.1 Append-Only Storage Pattern

All regulated records use an append-only storage pattern where:

  1. No DELETE operations - Records are never physically deleted, only marked as deleted with audit trail
  2. No UPDATE operations - Modifications create new record versions with complete change history
  3. Version chain integrity - Each version cryptographically links to previous versions

Database Implementation:

-- Example: Work Order record with version control
CREATE TABLE work_orders (
    id UUID NOT NULL DEFAULT gen_random_uuid(),
    tenant_id UUID NOT NULL REFERENCES tenants(id),
    version INT NOT NULL DEFAULT 1,
    version_hash VARCHAR(64) NOT NULL,   -- SHA-256 of current version
    previous_version_hash VARCHAR(64),   -- Links to prior version

    -- Record content fields
    work_order_number VARCHAR(50) NOT NULL,
    title TEXT NOT NULL,
    description TEXT,
    status VARCHAR(50) NOT NULL,
    regulatory_flag BOOLEAN DEFAULT false,

    -- Audit metadata (ALCOA+)
    created_by UUID NOT NULL REFERENCES persons(id),
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    modified_by UUID NOT NULL REFERENCES persons(id),
    modified_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),

    -- Deletion tracking (soft delete)
    deleted BOOLEAN DEFAULT false,
    deleted_by UUID REFERENCES persons(id),
    deleted_at TIMESTAMP WITH TIME ZONE,
    deletion_reason TEXT,

    -- Integrity constraints: the composite primary key lets each version
    -- live in its own row (a plain PRIMARY KEY on id would reject the
    -- append-only inserts that carry the version history)
    PRIMARY KEY (id, version),
    CHECK (version > 0),
    CHECK (NOT deleted OR (deleted_by IS NOT NULL AND deleted_at IS NOT NULL))
);

-- Trigger to prevent UPDATE/DELETE operations
CREATE OR REPLACE FUNCTION prevent_record_modification()
RETURNS TRIGGER AS $$
BEGIN
    IF (TG_OP = 'DELETE') THEN
        RAISE EXCEPTION 'DELETE not allowed on regulated records - use soft delete';
    END IF;

    IF (TG_OP = 'UPDATE' AND OLD.version = NEW.version) THEN
        RAISE EXCEPTION 'UPDATE not allowed - create new version instead';
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER enforce_record_integrity
BEFORE UPDATE OR DELETE ON work_orders
FOR EACH ROW
EXECUTE FUNCTION prevent_record_modification();

2.1.2 Cryptographic Hash Chain

Each record version includes a SHA-256 hash that binds it to the previous version, creating a tamper-evident chain.

Hash Calculation:

import crypto from 'crypto';

interface RecordVersion {
  id: string;
  version: number;
  content: Record<string, any>;
  previousVersionHash: string | null;
  timestamp: Date;
  userId: string;
}

function calculateRecordHash(record: RecordVersion): string {
  // Canonical JSON serialization: sort keys at every nesting level so the
  // same logical record always produces the same byte sequence. (Passing a
  // replacer array to JSON.stringify is not safe here, because a replacer
  // array filters keys at all levels and would silently drop nested
  // content fields from the hash.)
  const canonicalContent = JSON.stringify(
    sortObjectKeys({
      id: record.id,
      version: record.version,
      content: sortObjectKeys(record.content),
      previousVersionHash: record.previousVersionHash,
      timestamp: record.timestamp.toISOString(),
      userId: record.userId,
    })
  );

  return crypto
    .createHash('sha256')
    .update(canonicalContent, 'utf8')
    .digest('hex');
}

function sortObjectKeys(obj: Record<string, any>): Record<string, any> {
  return Object.keys(obj)
    .sort()
    .reduce((sorted, key) => {
      sorted[key] = typeof obj[key] === 'object' && obj[key] !== null
        ? sortObjectKeys(obj[key])
        : obj[key];
      return sorted;
    }, {} as Record<string, any>);
}

Hash Verification Process:

async function verifyRecordIntegrity(recordId: string): Promise<IntegrityResult> {
  const versions = await prisma.workOrder.findMany({
    where: { id: recordId },
    orderBy: { version: 'asc' },
  });

  const results: VersionIntegrityCheck[] = [];
  let previousHash: string | null = null;

  for (const version of versions) {
    // Verify hash chain link
    if (version.previousVersionHash !== previousHash) {
      results.push({
        version: version.version,
        valid: false,
        error: `Hash chain broken: expected ${previousHash}, got ${version.previousVersionHash}`,
      });
      continue;
    }

    // Recalculate the hash over the same canonical structure used at write
    // time; the content object must mirror exactly what was hashed then
    const calculatedHash = calculateRecordHash({
      id: version.id,
      version: version.version,
      content: {
        workOrderNumber: version.workOrderNumber,
        title: version.title,
        description: version.description,
        status: version.status,
        regulatoryFlag: version.regulatoryFlag,
      },
      previousVersionHash: version.previousVersionHash,
      timestamp: version.modifiedAt,
      userId: version.modifiedBy,
    });
    if (calculatedHash !== version.versionHash) {
      results.push({
        version: version.version,
        valid: false,
        error: `Hash mismatch: calculated ${calculatedHash}, stored ${version.versionHash}`,
      });
    } else {
      results.push({
        version: version.version,
        valid: true,
      });
    }

    previousHash = version.versionHash;
  }

  return {
    recordId,
    totalVersions: versions.length,
    validVersions: results.filter(r => r.valid).length,
    integrityStatus: results.every(r => r.valid) ? 'INTACT' : 'COMPROMISED',
    details: results,
  };
}

interface IntegrityResult {
  recordId: string;
  totalVersions: number;
  validVersions: number;
  integrityStatus: 'INTACT' | 'COMPROMISED';
  details: VersionIntegrityCheck[];
}

interface VersionIntegrityCheck {
  version: number;
  valid: boolean;
  error?: string;
}

2.1.3 Database-Level Constraints

All regulated tables enforce data integrity through database constraints:

-- Integrity constraints for work orders
ALTER TABLE work_orders
    -- Required fields (ALCOA: Attributable, Contemporaneous)
    ADD CONSTRAINT work_orders_created_by_not_null CHECK (created_by IS NOT NULL),
    ADD CONSTRAINT work_orders_created_at_not_null CHECK (created_at IS NOT NULL),
    ADD CONSTRAINT work_orders_modified_by_not_null CHECK (modified_by IS NOT NULL),
    ADD CONSTRAINT work_orders_modified_at_not_null CHECK (modified_at IS NOT NULL),

    -- Referential integrity
    ADD CONSTRAINT work_orders_tenant_fk FOREIGN KEY (tenant_id) REFERENCES tenants(id),
    ADD CONSTRAINT work_orders_created_by_fk FOREIGN KEY (created_by) REFERENCES persons(id),
    ADD CONSTRAINT work_orders_modified_by_fk FOREIGN KEY (modified_by) REFERENCES persons(id),

    -- Business logic constraints
    ADD CONSTRAINT work_orders_status_valid CHECK (status IN (
        'DRAFT', 'PLANNED', 'SCHEDULED', 'IN_PROGRESS',
        'PENDING_REVIEW', 'APPROVED', 'REJECTED', 'COMPLETED', 'CANCELLED'
    )),

    -- Temporal integrity (ALCOA: Contemporaneous)
    ADD CONSTRAINT work_orders_modified_after_created CHECK (modified_at >= created_at),
    ADD CONSTRAINT work_orders_deleted_after_created CHECK (deleted_at IS NULL OR deleted_at >= created_at);

-- Indexes for performance
CREATE INDEX idx_work_orders_tenant_status ON work_orders(tenant_id, status);
CREATE INDEX idx_work_orders_version_chain ON work_orders(id, version);

2.1.4 Application-Level Validation

Field-Level Validation Rules:

import { z } from 'zod';

// Work Order validation schema
const WorkOrderSchema = z.object({
  workOrderNumber: z.string()
    .regex(/^WO-\d{4}-\d{6}$/, 'Must match format WO-YYYY-NNNNNN'),

  title: z.string()
    .min(10, 'Title must be at least 10 characters')
    .max(200, 'Title must not exceed 200 characters'),

  description: z.string()
    .min(20, 'Description must be at least 20 characters')
    .max(5000, 'Description must not exceed 5000 characters')
    .optional(),

  status: z.enum([
    'DRAFT', 'PLANNED', 'SCHEDULED', 'IN_PROGRESS',
    'PENDING_REVIEW', 'APPROVED', 'REJECTED', 'COMPLETED', 'CANCELLED'
  ]),

  regulatoryFlag: z.boolean().default(false),

  scheduledStartDate: z.date().optional(),
  scheduledEndDate: z.date().optional(),

  assigneeId: z.string().uuid('Invalid assignee ID').optional(),
  systemOwnerId: z.string().uuid('Invalid system owner ID'),

}).refine(data => {
  // Business rule: scheduled end must not precede scheduled start
  if (data.scheduledStartDate && data.scheduledEndDate) {
    return data.scheduledEndDate >= data.scheduledStartDate;
  }
  return true;
}, {
  message: 'Scheduled end date must be on or after the start date',
});

// Validate before database write
async function createWorkOrder(input: unknown, userId: string): Promise<WorkOrder> {
  const validated = WorkOrderSchema.parse(input);

  // Generate the id before insert so it can be included in the version hash;
  // a database-assigned id would leave the stored hash unverifiable
  const id = crypto.randomUUID();

  return await prisma.workOrder.create({
    data: {
      id,
      ...validated,
      version: 1,
      versionHash: calculateRecordHash({
        id,
        version: 1,
        content: validated,
        previousVersionHash: null,
        timestamp: new Date(),
        userId,
      }),
      createdBy: userId,
      modifiedBy: userId,
    },
  });
}

2.2 Version Control for Records

Every modification to a regulated record creates a new version while preserving all prior versions.

Versioning Implementation:

async function updateWorkOrder(
  workOrderId: string,
  changes: Partial<WorkOrder>,
  userId: string,
  changeReason: string
): Promise<WorkOrder> {
  // Fetch current version
  const current = await prisma.workOrder.findFirst({
    where: { id: workOrderId },
    orderBy: { version: 'desc' },
  });

  if (!current) {
    throw new Error(`Work order ${workOrderId} not found`);
  }

  // Validate state transition
  if (current.status === 'APPROVED' || current.status === 'COMPLETED') {
    throw new Error(`Cannot modify work order in ${current.status} status`);
  }

  // Build the new version's content, stripping version and audit metadata
  // so stale values from `current` cannot overwrite the fields set below
  const {
    version: _v, versionHash: _vh, previousVersionHash: _pvh,
    createdBy, createdAt, modifiedBy: _mb, modifiedAt: _ma,
    ...currentContent
  } = current;

  const newVersion = current.version + 1;
  const newContent = { ...currentContent, ...changes };
  const newHash = calculateRecordHash({
    id: workOrderId,
    version: newVersion,
    content: newContent,
    previousVersionHash: current.versionHash,
    timestamp: new Date(),
    userId,
  });

  // Insert new version (append-only); explicit fields come after the spread
  // so they take precedence
  const updated = await prisma.workOrder.create({
    data: {
      ...newContent,
      id: workOrderId,
      version: newVersion,
      versionHash: newHash,
      previousVersionHash: current.versionHash,
      createdBy,
      createdAt,
      modifiedBy: userId,
      modifiedAt: new Date(),
    },
  });

  // Create audit trail entry
  await prisma.auditTrail.create({
    data: {
      entityType: 'WORK_ORDER',
      entityId: workOrderId,
      entityVersion: newVersion,
      action: 'UPDATE',
      userId,
      timestamp: new Date(),
      changes: {
        old: current,
        new: newContent,
        reason: changeReason,
      },
    },
  });

  return updated;
}

2.3 Hash Verification Procedures

Automated Daily Verification:

import cron from 'node-cron';

// Run daily at 2 AM UTC
cron.schedule('0 2 * * *', async () => {
  console.log('Starting daily integrity verification');

  const recordTypes = ['work_orders', 'approvals', 'electronic_signatures', 'audit_trails'];
  const results: IntegrityVerificationReport = {
    timestamp: new Date(),
    totalRecords: 0,
    verifiedRecords: 0,
    failedRecords: 0,
    failures: [],
  };

  for (const type of recordTypes) {
    // getRecordsForVerification returns the ids of records due for checking
    // (implementation defined elsewhere)
    const records = await getRecordsForVerification(type);
    results.totalRecords += records.length;

    for (const recordId of records) {
      const verification = await verifyRecordIntegrity(recordId);

      if (verification.integrityStatus === 'INTACT') {
        results.verifiedRecords++;
      } else {
        results.failedRecords++;
        results.failures.push({
          recordType: type,
          recordId,
          details: verification.details,
        });
      }
    }
  }

  // Log results
  await prisma.integrityVerificationLog.create({
    data: results,
  });

  // Alert on failures
  if (results.failedRecords > 0) {
    await sendSecurityAlert({
      severity: 'CRITICAL',
      subject: `Integrity verification failed for ${results.failedRecords} records`,
      body: JSON.stringify(results.failures, null, 2),
    });
  }

  console.log(`Verification complete: ${results.verifiedRecords}/${results.totalRecords} passed`);
});

interface IntegrityVerificationReport {
  timestamp: Date;
  totalRecords: number;
  verifiedRecords: number;
  failedRecords: number;
  failures: IntegrityFailure[];
}

interface IntegrityFailure {
  recordType: string;
  recordId: string;
  details: VersionIntegrityCheck[];
}

Manual Monthly Review:

Compliance officers perform manual integrity reviews:

  1. Sample Selection: Random sample of 50 records per record type
  2. Hash Verification: Manually verify hash chain for selected records
  3. Change History Review: Verify all modifications have corresponding audit trail entries
  4. Deletion Review: Verify all soft-deleted records have deletion reason and approver
  5. Documentation: Document findings in monthly compliance report
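The sample-selection step above can be scripted so the draw is unbiased and repeatable in size. A minimal sketch; `sampleRecordIds` is a hypothetical helper, not part of the platform API:

```typescript
// Hypothetical helper for the monthly review's sample-selection step:
// draw n distinct ids from a record type's id list using a partial
// Fisher-Yates shuffle, which avoids the bias of naive repeated picks
function sampleRecordIds(ids: string[], n: number): string[] {
  const pool = [...ids];
  const take = Math.min(n, pool.length);
  for (let i = 0; i < take; i++) {
    // pick a random index in the not-yet-taken suffix and swap it into place
    const j = i + Math.floor(Math.random() * (pool.length - i));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, take);
}
```

Each sampled id would then go through the automated `verifyRecordIntegrity` check before the reviewer performs the manual hash comparison.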

3. Record Retrieval System

3.1 Human-Readable Format Requirements

Per FDA 21 CFR Part 11 §11.10(b), records MUST be capable of being retrieved in human-readable format.

Supported Export Formats:

| Format | Use Case | Retention | Standard |
|---|---|---|---|
| PDF/A-2b | Long-term archival, regulatory submission | 10+ years | ISO 19005-2 |
| CSV | Data analysis, spreadsheet import | Short-term | RFC 4180 |
| JSON | API integration, programmatic access | Short-term | RFC 8259 |
| XML | System-to-system exchange | Short-term | W3C XML 1.0 |
| HTML | Web viewing, online inspection | Short-term | W3C HTML5 |
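A small dispatch table keeps the format list above in sync with the HTTP layer. The MIME types follow the cited standards; the names and the helper function are illustrative, not taken from the codebase:

```typescript
// Illustrative mapping from the supported export formats to their file
// extension and MIME type (per the standards cited in the table above)
type ExportFormat = 'PDF_A_2B' | 'CSV' | 'JSON' | 'XML' | 'HTML';

const EXPORT_FORMAT_INFO: Record<ExportFormat, { extension: string; mimeType: string }> = {
  PDF_A_2B: { extension: 'pdf', mimeType: 'application/pdf' },
  CSV: { extension: 'csv', mimeType: 'text/csv' },
  JSON: { extension: 'json', mimeType: 'application/json' },
  XML: { extension: 'xml', mimeType: 'application/xml' },
  HTML: { extension: 'html', mimeType: 'text/html' },
};

// Build the download filename for an exported record
function exportFileName(recordNumber: string, format: ExportFormat): string {
  return `${recordNumber}.${EXPORT_FORMAT_INFO[format].extension}`;
}
```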

3.1.1 PDF/A-2b Export

PDF/A Requirements:

  • Conformance Level: PDF/A-2b (basic conformance, ISO 19005-2)
  • Embedded Fonts: All fonts embedded to ensure rendering consistency
  • No External Dependencies: No links to external content
  • Metadata: XMP metadata with creation date, author, title
  • Color Space: Device-independent (sRGB or CMYK)

Implementation:

import PDFDocument from 'pdfkit';

async function exportWorkOrderToPDF(
  workOrderId: string,
  includeAuditTrail: boolean = true
): Promise<Buffer> {
  // Fetch work order with all versions and audit trail
  const workOrder = await prisma.workOrder.findFirst({
    where: { id: workOrderId },
    orderBy: { version: 'desc' },
    include: {
      createdByPerson: true,
      modifiedByPerson: true,
      approvals: {
        include: {
          signature: true,
          approver: true,
        },
      },
      auditTrail: {
        orderBy: { timestamp: 'asc' },
      },
    },
  });

  if (!workOrder) {
    throw new Error(`Work order ${workOrderId} not found`);
  }

  // Create the document. Note: full PDF/A-2b conformance (embedded ICC
  // profile, complete XMP metadata) typically requires a post-processing
  // step beyond what pdfkit emits natively.
  const doc = new PDFDocument({
    pdfVersion: '1.7',
    tagged: true,
    displayTitle: true,
    lang: 'en-US',
  });

  const buffers: Buffer[] = [];
  doc.on('data', buffers.push.bind(buffers));

  // Set document metadata
  doc.info.Title = `Work Order ${workOrder.workOrderNumber}`;
  doc.info.Author = 'CODITECT BIO-QMS';
  doc.info.Subject = `Work Order: ${workOrder.title}`;
  doc.info.Creator = 'CODITECT BIO-QMS Platform v1.0';
  doc.info.CreationDate = new Date();

  // Header
  doc.fontSize(20).text('Work Order Record', { align: 'center' });
  doc.moveDown();

  // Work order details (pdfkit has no `bold` text option; switch fonts)
  doc.fontSize(12).font('Helvetica-Bold')
    .text(`Work Order Number: ${workOrder.workOrderNumber}`)
    .font('Helvetica');
  doc.text(`Title: ${workOrder.title}`);
  doc.text(`Status: ${workOrder.status}`);
  doc.text(`Regulatory Flag: ${workOrder.regulatoryFlag ? 'Yes' : 'No'}`);
  doc.moveDown();

  doc.text(`Created By: ${workOrder.createdByPerson.name}`);
  doc.text(`Created At: ${workOrder.createdAt.toISOString()}`);
  doc.text(`Modified By: ${workOrder.modifiedByPerson.name}`);
  doc.text(`Modified At: ${workOrder.modifiedAt.toISOString()}`);
  doc.moveDown();

  // Description
  doc.fontSize(14).text('Description:', { underline: true });
  doc.fontSize(12).text(workOrder.description || 'N/A');
  doc.moveDown();

  // Approvals (FDA §11.50 signature manifestations)
  if (workOrder.approvals.length > 0) {
    doc.fontSize(14).text('Electronic Signatures:', { underline: true });
    for (const approval of workOrder.approvals) {
      doc.fontSize(12);
      doc.text(`Signed By: ${approval.approver.name}`);
      doc.text(`Role: ${approval.role}`);
      doc.text(`Decision: ${approval.decision}`);
      doc.text(`Signed At: ${approval.signature.signedAt.toISOString()}`);
      doc.text(`Meaning: ${approval.signature.meaning}`);
      if (approval.comment) {
        doc.text(`Comment: ${approval.comment}`);
      }
      doc.moveDown(0.5);
    }
    doc.moveDown();
  }

  // Audit trail (if requested)
  if (includeAuditTrail && workOrder.auditTrail.length > 0) {
    doc.fontSize(14).text('Audit Trail:', { underline: true });
    doc.fontSize(10);

    for (const entry of workOrder.auditTrail) {
      doc.text(`${entry.timestamp.toISOString()} - ${entry.action} by ${entry.userId}`);
      if (entry.changes) {
        doc.text(`  Changes: ${JSON.stringify(entry.changes, null, 2)}`);
      }
      doc.moveDown(0.3);
    }
  }

  // Footer with integrity hash (gray is set via fillColor, not a text option)
  doc.fontSize(8).fillColor('gray');
  doc.text(`Integrity Hash (SHA-256): ${workOrder.versionHash}`, { align: 'center' });
  doc.text(
    `Generated: ${new Date().toISOString()} | Version: ${workOrder.version}`,
    { align: 'center' }
  );

  doc.end();

  return new Promise((resolve) => {
    doc.on('end', () => resolve(Buffer.concat(buffers)));
  });
}

3.2 Full-Text Search

Search Implementation (PostgreSQL):

-- Add full-text search index
ALTER TABLE work_orders
ADD COLUMN search_vector tsvector
GENERATED ALWAYS AS (
    setweight(to_tsvector('english', coalesce(work_order_number, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(description, '')), 'B')
) STORED;

CREATE INDEX idx_work_orders_search ON work_orders USING GIN(search_vector);

-- Search query
SELECT
    id,
    work_order_number,
    title,
    ts_rank(search_vector, query) AS rank
FROM
    work_orders,
    plainto_tsquery('english', 'validation IQ OQ PQ') AS query
WHERE
    search_vector @@ query
    AND tenant_id = :tenant_id
    AND NOT deleted
ORDER BY
    rank DESC
LIMIT 50;

Application-Level Search API:

import { Prisma } from '@prisma/client';

async function searchRecords(
  tenantId: string,
  searchQuery: string,
  filters: RecordSearchFilters,
  pagination: { page: number; pageSize: number }
): Promise<SearchResults> {
  const { page, pageSize } = pagination;
  const offset = (page - 1) * pageSize;

  // Optional filters are interpolated as Prisma.sql fragments; the status
  // filter is shown here, and dateRange/createdBy/recordType follow the
  // same pattern
  const statusFilter = filters.status
    ? Prisma.sql`AND status = ${filters.status}`
    : Prisma.empty;

  // Execute search and count in parallel
  const [records, total] = await Promise.all([
    prisma.$queryRaw`
      SELECT
        id,
        work_order_number,
        title,
        status,
        created_at,
        ts_rank(search_vector, plainto_tsquery('english', ${searchQuery})) AS rank
      FROM work_orders
      WHERE
        tenant_id = ${tenantId}
        AND search_vector @@ plainto_tsquery('english', ${searchQuery})
        AND NOT deleted
        ${statusFilter}
      ORDER BY rank DESC
      LIMIT ${pageSize} OFFSET ${offset}
    `,

    prisma.$queryRaw`
      SELECT COUNT(*) as count
      FROM work_orders
      WHERE
        tenant_id = ${tenantId}
        AND search_vector @@ plainto_tsquery('english', ${searchQuery})
        AND NOT deleted
        ${statusFilter}
    `,
  ]);

  return {
    records,
    pagination: {
      page,
      pageSize,
      total: Number(total[0].count),
      totalPages: Math.ceil(Number(total[0].count) / pageSize),
    },
  };
}

interface RecordSearchFilters {
  recordType?: string;
  status?: string;
  dateRange?: {
    start: Date;
    end: Date;
  };
  createdBy?: string;
}

interface SearchResults {
  records: any[];
  pagination: {
    page: number;
    pageSize: number;
    total: number;
    totalPages: number;
  };
}

3.3 Advanced Filtering

Filter Criteria:

| Filter | Description | Example |
|---|---|---|
| Date Range | Created/modified within date range | 2026-01-01 to 2026-01-31 |
| User | Created/modified by specific user | john.doe@bioqms.com |
| Record Type | Specific record type | work_order, approval, signature |
| Status | Current status | APPROVED, PENDING_REVIEW |
| Regulatory Flag | GxP vs. non-GxP records | true (regulatory only) |
| Tenant | Specific tenant (admin only) | tenant-uuid-123 |
| Deleted | Include soft-deleted records | false (exclude deleted) |

API Endpoint:

GET /api/v1/records/search

Query Parameters:
  q: string                # Search query
  recordType: string       # work_order | approval | signature
  status: string           # Record status
  dateFrom: ISO8601        # Start date
  dateTo: ISO8601          # End date
  createdBy: uuid          # User ID
  regulatoryFlag: boolean  # GxP records only
  includeDeleted: boolean  # Include soft-deleted
  page: integer            # Page number (default: 1)
  pageSize: integer        # Page size (default: 20, max: 100)

Response: 200 OK
{
  "records": [
    {
      "id": "uuid",
      "recordType": "work_order",
      "workOrderNumber": "WO-2026-000123",
      "title": "IQ for Laboratory Information System",
      "status": "APPROVED",
      "createdBy": "uuid",
      "createdAt": "2026-02-15T10:30:00Z",
      "rank": 0.95
    }
  ],
  "pagination": {
    "page": 1,
    "pageSize": 20,
    "total": 145,
    "totalPages": 8
  }
}
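Server-side, the pagination parameters need the documented clamping (page defaults to 1; pageSize defaults to 20 and is capped at 100). A minimal sketch with names chosen here for illustration rather than taken from the codebase:

```typescript
// Clamp raw query-string pagination values to the documented limits:
// page defaults to 1, pageSize defaults to 20 and is capped at 100
function parsePagination(
  query: { page?: string; pageSize?: string }
): { page: number; pageSize: number } {
  const rawPage = Number(query.page ?? '1');
  const rawSize = Number(query.pageSize ?? '20');
  const page = Number.isInteger(rawPage) && rawPage >= 1 ? rawPage : 1;
  const pageSize = Number.isInteger(rawSize)
    ? Math.min(Math.max(rawSize, 1), 100)
    : 20;
  return { page, pageSize };
}
```

The clamped values feed directly into the `LIMIT`/`OFFSET` clauses of the search queries above.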

3.4 Export with Complete Audit Trail

Comprehensive Export Package:

When exporting records for regulatory submission or audit, include:

  1. Primary Record - Current version in PDF/A-2b format
  2. Version History - All prior versions with change tracking
  3. Audit Trail - Complete audit trail with timestamps, users, actions
  4. Electronic Signatures - Signature manifestations per FDA §11.50
  5. Integrity Manifest - SHA-256 hashes for all files

Export Implementation:

import JSZip from 'jszip';

async function exportRecordPackage(
  recordId: string,
  recordType: string
): Promise<Buffer> {
  const zip = new JSZip();

  // 1. Export primary record as PDF
  const pdfBuffer = await exportWorkOrderToPDF(recordId, true);
  zip.file('record.pdf', pdfBuffer);

  // 2. Export version history as JSON
  const versions = await prisma.workOrder.findMany({
    where: { id: recordId },
    orderBy: { version: 'asc' },
  });
  const versionsJson = JSON.stringify(versions, null, 2);
  zip.file('version-history.json', versionsJson);

  // 3. Export audit trail as CSV
  const auditTrail = await prisma.auditTrail.findMany({
    where: { entityId: recordId },
    orderBy: { timestamp: 'asc' },
  });
  const auditCsv = convertToCSV(auditTrail);
  zip.file('audit-trail.csv', auditCsv);

  // 4. Export electronic signatures as JSON
  const signatures = await prisma.electronicSignature.findMany({
    where: {
      approvals: {
        some: {
          workOrderId: recordId,
        },
      },
    },
    include: {
      approvals: true,
    },
  });
  const signaturesJson = JSON.stringify(signatures, null, 2);
  zip.file('electronic-signatures.json', signaturesJson);

  // 5. Generate integrity manifest; hashes are computed over the exact
  // bytes written to the archive so verification matches the file contents
  const manifest = {
    recordId,
    recordType,
    exportTimestamp: new Date().toISOString(),
    totalVersions: versions.length,
    totalAuditEntries: auditTrail.length,
    totalSignatures: signatures.length,
    fileHashes: {
      'record.pdf': calculateFileHash(pdfBuffer),
      'version-history.json': calculateFileHash(versionsJson),
      'audit-trail.csv': calculateFileHash(auditCsv),
      'electronic-signatures.json': calculateFileHash(signaturesJson),
    },
  };
  zip.file('MANIFEST.json', JSON.stringify(manifest, null, 2));

  // 6. Add README with instructions
  zip.file('README.txt', `
CODITECT BIO-QMS Export Package
================================

Record ID: ${recordId}
Record Type: ${recordType}
Export Date: ${new Date().toISOString()}

Contents:
- record.pdf: Primary record in PDF/A-2b format (long-term archival)
- version-history.json: Complete version history
- audit-trail.csv: Complete audit trail
- electronic-signatures.json: Electronic signature records per FDA 21 CFR Part 11
- MANIFEST.json: File integrity hashes (SHA-256)

Integrity Verification:
To verify integrity, recalculate the SHA-256 hash of each file and compare with MANIFEST.json.

Contact: compliance@bioqms.com
`);

  return await zip.generateAsync({ type: 'nodebuffer' });
}

function calculateFileHash(content: Buffer | string): string {
  return crypto
    .createHash('sha256')
    .update(content)
    .digest('hex');
}

function convertToCSV(data: any[]): string {
  if (data.length === 0) return '';

  const headers = Object.keys(data[0]);
  const rows = data.map(row =>
    headers.map(header => {
      const value = row[header];
      if (value === null || value === undefined) return '';
      // Serialize objects, then escape embedded quotes for CSV quoting below
      if (typeof value === 'object') return JSON.stringify(value).replace(/"/g, '""');
      return String(value).replace(/"/g, '""');
    })
  );

  const csvHeaders = headers.join(',');
  const csvRows = rows.map(row => row.map(cell => `"${cell}"`).join(',')).join('\n');

  return `${csvHeaders}\n${csvRows}`;
}

4. Retention Policy Engine

4.1 Retention Requirements by Record Type

| Record Type | Regulatory Basis | Retention Period | Trigger Event |
|---|---|---|---|
| GxP Work Orders | FDA 21 CFR Part 11 | 2 years minimum | After last action/modification |
| Electronic Signatures | FDA 21 CFR Part 11 §11.10(e) | 2 years minimum | After signature event |
| Validation Records (IQ/OQ/PQ) | FDA 21 CFR 211.180(c) | Duration of system use + 1 year | After system retirement |
| Audit Trails | FDA 21 CFR Part 11 §11.10(e) | 2 years minimum | After audit event |
| Protected Health Information (PHI) | HIPAA §164.316(b)(2)(i) | 6 years | From date of creation or last effective date |
| Quality Events (Deviations, CAPAs) | FDA 21 CFR 820.180 | 2 years | After resolution |
| Training Records | FDA 21 CFR 211.25(a) | Duration of employment + 2 years | After employee departure |
| Vendor Records | FDA 21 CFR 820.50 | 2 years | After vendor contract end |
| SOC 2 Evidence | SOC 2 Type II | 7 years | After audit report issuance |
| Change Control Records | FDA 21 CFR 211.100 | Duration of system use + 1 year | After change implementation |
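The fixed-period rows of the table above can be expressed as seed data for the retention_policies table. A sketch only: the 365-day year and the record-type keys are simplifying assumptions, and open-ended entries ("duration of system use + 1 year") are omitted because they depend on a trigger event rather than a fixed count:

```typescript
// Illustrative seed data for the fixed-period retention policies above
// (365-day years assumed; open-ended policies omitted)
interface RetentionSeed {
  recordType: string;
  retentionPeriodDays: number;
  retentionBasis: string;
}

const RETENTION_SEEDS: RetentionSeed[] = [
  { recordType: 'gxp_work_order', retentionPeriodDays: 2 * 365, retentionBasis: 'FDA 21 CFR Part 11' },
  { recordType: 'electronic_signature', retentionPeriodDays: 2 * 365, retentionBasis: 'FDA 21 CFR Part 11 §11.10(e)' },
  { recordType: 'phi_record', retentionPeriodDays: 6 * 365, retentionBasis: 'HIPAA §164.316(b)(2)(i)' },
  { recordType: 'quality_event', retentionPeriodDays: 2 * 365, retentionBasis: 'FDA 21 CFR 820.180' },
  { recordType: 'soc2_evidence', retentionPeriodDays: 7 * 365, retentionBasis: 'SOC 2 Type II' },
];

// Expiry is the trigger date plus the policy's retention period
function retentionExpiry(trigger: Date, days: number): Date {
  const d = new Date(trigger);
  d.setUTCDate(d.getUTCDate() + days);
  return d;
}
```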

4.2 Retention Policy Data Model

-- Retention policy configuration table
CREATE TABLE retention_policies (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id UUID NOT NULL REFERENCES tenants(id),
    record_type VARCHAR(100) NOT NULL,

    -- Retention period
    retention_period_days INT NOT NULL,
    retention_basis VARCHAR(255) NOT NULL,  -- Regulatory citation
    trigger_event VARCHAR(100) NOT NULL,    -- 'CREATION' | 'LAST_MODIFICATION' | 'STATUS_CHANGE' | 'CUSTOM'

    -- Archival policy
    archival_delay_days INT DEFAULT 90,                    -- Move to archive after N days
    archival_storage_class VARCHAR(50) DEFAULT 'GLACIER',  -- S3 storage class

    -- Deletion policy (after retention expires)
    deletion_delay_days INT DEFAULT 30,     -- Grace period before deletion
    require_approval BOOLEAN DEFAULT true,  -- Require dual approval for deletion

    -- Configuration
    active BOOLEAN DEFAULT true,
    created_by UUID NOT NULL REFERENCES persons(id),
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    modified_by UUID NOT NULL REFERENCES persons(id),
    modified_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),

    UNIQUE(tenant_id, record_type)
);

-- Retention hold (legal hold overrides retention policy)
CREATE TABLE retention_holds (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id UUID NOT NULL REFERENCES tenants(id),

    -- Hold scope
    record_type VARCHAR(100),  -- NULL = all record types
    record_id UUID,            -- NULL = all records of type

    -- Hold details
    hold_reason TEXT NOT NULL,
    legal_case_number VARCHAR(100),
    hold_placed_by UUID NOT NULL REFERENCES persons(id),
    hold_placed_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),

    -- Hold release
    hold_released BOOLEAN DEFAULT false,
    hold_released_by UUID REFERENCES persons(id),
    hold_released_at TIMESTAMP WITH TIME ZONE,
    release_reason TEXT,

    CHECK (hold_released = false OR (hold_released_by IS NOT NULL AND hold_released_at IS NOT NULL))
);

-- Record retention tracking
CREATE TABLE record_retention_status (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id UUID NOT NULL REFERENCES tenants(id),
    record_id UUID NOT NULL,
    record_type VARCHAR(100) NOT NULL,

    -- Retention lifecycle
    created_at TIMESTAMP WITH TIME ZONE NOT NULL,
    last_modified_at TIMESTAMP WITH TIME ZONE NOT NULL,
    retention_trigger_date TIMESTAMP WITH TIME ZONE NOT NULL,
    retention_expiry_date TIMESTAMP WITH TIME ZONE NOT NULL,

    -- Archival status
    archival_date TIMESTAMP WITH TIME ZONE,  -- When the record becomes eligible for archival
    archived BOOLEAN DEFAULT false,
    archived_at TIMESTAMP WITH TIME ZONE,
    archival_location VARCHAR(500),  -- S3 URI

    -- Deletion eligibility
    eligible_for_deletion BOOLEAN DEFAULT false,
    eligible_for_deletion_at TIMESTAMP WITH TIME ZONE,
    deletion_scheduled BOOLEAN DEFAULT false,
    deletion_scheduled_at TIMESTAMP WITH TIME ZONE,

    -- Holds
    on_legal_hold BOOLEAN DEFAULT false,
    hold_count INT DEFAULT 0,

    UNIQUE(record_id)
);

4.3 Retention Calculation Engine

interface RetentionPolicy {
id: string;
recordType: string;
retentionPeriodDays: number;
retentionBasis: string;
triggerEvent: 'CREATION' | 'LAST_MODIFICATION' | 'STATUS_CHANGE' | 'CUSTOM';
archivalDelayDays: number;
deletionDelayDays: number;
requireApproval: boolean;
}

async function calculateRetentionDates(
recordId: string,
recordType: string,
tenantId: string
): Promise<RetentionDates> {
// Fetch retention policy
const policy = await prisma.retentionPolicy.findUnique({
where: {
tenantId_recordType: {
tenantId,
recordType,
},
},
});

if (!policy) {
throw new Error(`No retention policy found for record type: ${recordType}`);
}

// Fetch record metadata
const record = await prisma[recordType].findUnique({
where: { id: recordId },
});

if (!record) {
throw new Error(`Record ${recordId} not found`);
}

// Determine trigger date
let triggerDate: Date;
switch (policy.triggerEvent) {
case 'CREATION':
triggerDate = record.createdAt;
break;
case 'LAST_MODIFICATION':
triggerDate = record.modifiedAt;
break;
case 'STATUS_CHANGE':
// For work orders: trigger from COMPLETED or CANCELLED status
if (record.status === 'COMPLETED' || record.status === 'CANCELLED') {
triggerDate = record.modifiedAt;
} else {
// Not yet completed, retention not started
return {
retentionStarted: false,
triggerDate: null,
archivalDate: null,
expiryDate: null,
deletionEligibleDate: null,
};
}
break;
default:
triggerDate = record.createdAt;
}

// Calculate dates
const archivalDate = addDays(triggerDate, policy.archivalDelayDays);
const expiryDate = addDays(triggerDate, policy.retentionPeriodDays);
const deletionEligibleDate = addDays(expiryDate, policy.deletionDelayDays);

return {
retentionStarted: true,
triggerDate,
archivalDate,
expiryDate,
deletionEligibleDate,
policy,
};
}

interface RetentionDates {
retentionStarted: boolean;
triggerDate: Date | null;
archivalDate: Date | null;
expiryDate: Date | null;
deletionEligibleDate: Date | null;
policy?: RetentionPolicy;
}

function addDays(date: Date, days: number): Date {
const result = new Date(date);
result.setDate(result.getDate() + days);
return result;
}
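
The date arithmetic above can be exercised in isolation. A minimal sketch, assuming illustrative policy values (2,555-day retention, 90-day archival delay, 30-day deletion delay) rather than values mandated by this specification:

```typescript
// Illustrative retention-date calculation (policy values are examples only).
function addDays(date: Date, days: number): Date {
  const result = new Date(date);
  result.setDate(result.getDate() + days);
  return result;
}

const MS_PER_DAY = 24 * 60 * 60 * 1000;

// A work order completed 2026-01-15 under a hypothetical 2,555-day policy:
const trigger = new Date(Date.UTC(2026, 0, 15));
const archivalDate = addDays(trigger, 90);            // cold storage after 90 days
const expiryDate = addDays(trigger, 2555);            // retention period ends
const deletionEligibleDate = addDays(expiryDate, 30); // post-expiry deletion delay

console.log(Math.round((archivalDate.getTime() - trigger.getTime()) / MS_PER_DAY));         // 90
console.log(Math.round((expiryDate.getTime() - trigger.getTime()) / MS_PER_DAY));           // 2555
console.log(Math.round((deletionEligibleDate.getTime() - trigger.getTime()) / MS_PER_DAY)); // 2585
```

Note that `setDate` rolls over month and year boundaries automatically, so the helper is safe across leap years.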

4.4 Archival Process

Automated Archival Job (Daily):

import cron from 'node-cron';
import AWS from 'aws-sdk';

const s3 = new AWS.S3();

// Run daily at 3 AM UTC
cron.schedule('0 3 * * *', async () => {
console.log('Starting automated archival process');

// Find records eligible for archival
const eligibleRecords = await prisma.recordRetentionStatus.findMany({
where: {
archived: false,
archivalDate: {
lte: new Date(),
},
onLegalHold: false, // Skip records on legal hold
},
});

console.log(`Found ${eligibleRecords.length} records eligible for archival`);

for (const record of eligibleRecords) {
try {
// Export record package
const packageBuffer = await exportRecordPackage(
record.recordId,
record.recordType
);

// Upload to S3 Glacier
const s3Key = `archived-records/${record.recordType}/${record.recordId}/${new Date().toISOString()}.zip`;
await s3.putObject({
Bucket: 'bioqms-archived-records',
Key: s3Key,
Body: packageBuffer,
StorageClass: 'GLACIER',
Metadata: {
'record-id': record.recordId,
'record-type': record.recordType,
'archival-date': new Date().toISOString(),
'retention-expiry': record.retentionExpiryDate.toISOString(),
},
ServerSideEncryption: 'AES256',
}).promise();

// Update retention status
await prisma.recordRetentionStatus.update({
where: { id: record.id },
data: {
archived: true,
archivedAt: new Date(),
archivalLocation: `s3://bioqms-archived-records/${s3Key}`,
},
});

console.log(`Archived record ${record.recordId} to ${s3Key}`);
} catch (error) {
console.error(`Failed to archive record ${record.recordId}:`, error);
// Log to error tracking system
await logArchivalError(record.recordId, error);
}
}

console.log('Archival process complete');
});
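
The eligibility filter in the query above (archival date reached, not yet archived, no legal hold) reduces to a small predicate. A standalone sketch that mirrors that logic, useful for unit testing without a database:

```typescript
// Mirrors the archival-eligibility conditions used by the daily job (sketch).
interface RetentionRow {
  archived: boolean;
  archivalDate: Date;
  onLegalHold: boolean;
}

function isArchivalEligible(row: RetentionRow, now: Date = new Date()): boolean {
  return !row.archived && !row.onLegalHold && row.archivalDate.getTime() <= now.getTime();
}

const now = new Date('2026-06-01T00:00:00Z');
const base = { archived: false, archivalDate: new Date('2026-05-01T00:00:00Z') };

console.log(isArchivalEligible({ ...base, onLegalHold: false }, now)); // true: past archival date
console.log(isArchivalEligible({ ...base, onLegalHold: true }, now));  // false: legal hold blocks
```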

4.5 Deletion Workflow

Dual-Approval Deletion Process:

async function scheduleDeletion(
recordId: string,
requestedBy: string,
deletionReason: string
): Promise<DeletionRequest> {
// Verify record is eligible for deletion
const retention = await prisma.recordRetentionStatus.findUnique({
where: { recordId },
});

if (!retention) {
throw new Error('Record retention status not found');
}

if (retention.onLegalHold) {
throw new Error('Cannot delete record on legal hold');
}

if (retention.retentionExpiryDate > new Date()) {
throw new Error('Record retention period has not expired');
}

// Create deletion request
const deletionRequest = await prisma.deletionRequest.create({
data: {
recordId,
recordType: retention.recordType,
requestedBy,
requestedAt: new Date(),
deletionReason,
approvalStatus: 'PENDING',
approvalsRequired: 2, // Dual approval
approvalsReceived: 0,
},
});

// Notify approvers (QA and Compliance)
await sendDeletionApprovalRequest(deletionRequest.id);

return deletionRequest;
}

async function approveDeletion(
deletionRequestId: string,
approverUserId: string,
approverRole: 'QA' | 'COMPLIANCE',
comment: string
): Promise<void> {
const request = await prisma.deletionRequest.findUnique({
where: { id: deletionRequestId },
});

if (!request) {
throw new Error('Deletion request not found');
}

if (request.approvalStatus !== 'PENDING') {
throw new Error(`Deletion request already ${request.approvalStatus}`);
}

// Record approval
await prisma.deletionApproval.create({
data: {
deletionRequestId,
approverUserId,
approverRole,
approvedAt: new Date(),
comment,
},
});

// Check if all approvals received
const approvalsReceived = await prisma.deletionApproval.count({
where: { deletionRequestId },
});

if (approvalsReceived >= request.approvalsRequired) {
// Schedule deletion
await prisma.deletionRequest.update({
where: { id: deletionRequestId },
data: {
approvalStatus: 'APPROVED',
approvalsReceived,
scheduledDeletionDate: addDays(new Date(), 7), // 7-day grace period
},
});

await prisma.recordRetentionStatus.update({
where: { recordId: request.recordId },
data: {
deletionScheduled: true,
deletionScheduledAt: new Date(),
},
});

console.log(`Deletion scheduled for record ${request.recordId}`);
}
}

// Automated deletion execution (after grace period)
cron.schedule('0 4 * * *', async () => {
const scheduledDeletions = await prisma.deletionRequest.findMany({
where: {
approvalStatus: 'APPROVED',
scheduledDeletionDate: {
lte: new Date(),
},
executed: false,
},
});

for (const deletion of scheduledDeletions) {
try {
// Soft delete record
await softDeleteRecord(deletion.recordId, deletion.recordType);

// Mark deletion as executed
await prisma.deletionRequest.update({
where: { id: deletion.id },
data: {
executed: true,
executedAt: new Date(),
},
});

// Create audit trail entry
await prisma.auditTrail.create({
data: {
entityType: deletion.recordType,
entityId: deletion.recordId,
action: 'DELETE',
userId: 'SYSTEM',
timestamp: new Date(),
changes: {
deletionRequestId: deletion.id,
reason: deletion.deletionReason,
},
},
});

console.log(`Deleted record ${deletion.recordId}`);
} catch (error) {
console.error(`Failed to delete record ${deletion.recordId}:`, error);
}
}
});

async function softDeleteRecord(recordId: string, recordType: string): Promise<void> {
await prisma[recordType].update({
where: { id: recordId },
data: {
deleted: true,
deletedAt: new Date(),
deletedBy: 'SYSTEM', // Automated deletion
},
});
}
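
The guards enforced across `scheduleDeletion` and `approveDeletion` (no legal hold, retention expired, dual-approval threshold) can be isolated as pure predicates. A sketch of that decision logic:

```typescript
// Sketch of the deletion preconditions and dual-approval threshold above.
function deletionEligible(onLegalHold: boolean, retentionExpiryDate: Date, now: Date): boolean {
  // A record may be deleted only if it carries no hold and its retention has expired.
  return !onLegalHold && retentionExpiryDate.getTime() <= now.getTime();
}

function deletionApproved(approvalsReceived: number, approvalsRequired: number = 2): boolean {
  return approvalsReceived >= approvalsRequired;
}

const now = new Date('2026-06-01T00:00:00Z');
console.log(deletionEligible(false, new Date('2026-01-01T00:00:00Z'), now)); // true: expired, no hold
console.log(deletionEligible(true, new Date('2026-01-01T00:00:00Z'), now));  // false: legal hold
console.log(deletionApproved(1)); // false: dual approval not yet met
console.log(deletionApproved(2)); // true
```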

5. Access Control for Records

5.1 RBAC Integration

Role-Based Access Control Matrix (from 21-rbac-model.md):

RoleView OwnView AllCreateEditApproveExportAudit Trail
ORIGINATOR✅ (DRAFT only)
ASSIGNER✅ (DRAFT only)
ASSIGNEE✅ (execution fields)
SYSTEM_OWNER
QA
VENDOR✅ (assigned only)✅ (assigned only)
ADMIN
AUDITOR

5.2 Session Management

Session Configuration:

interface SessionConfig {
// Session timeout (FDA §11.10(d) - limiting system access)
inactivityTimeout: number; // 15 minutes default
maxSessionDuration: number; // 8 hours maximum

// Concurrent session limits
maxConcurrentSessions: number; // 3 per user

// Session security
requireMFA: boolean; // true for QA, SYSTEM_OWNER roles
ipBindingEnabled: boolean; // true (prevent session hijacking)

// Session audit
logAllAccess: boolean; // true (all session events logged)
}

const sessionConfigs: Record<string, SessionConfig> = {
QA: {
inactivityTimeout: 15 * 60 * 1000, // 15 minutes
maxSessionDuration: 8 * 60 * 60 * 1000, // 8 hours
maxConcurrentSessions: 2,
requireMFA: true,
ipBindingEnabled: true,
logAllAccess: true,
},
SYSTEM_OWNER: {
inactivityTimeout: 15 * 60 * 1000,
maxSessionDuration: 8 * 60 * 60 * 1000,
maxConcurrentSessions: 3,
requireMFA: true,
ipBindingEnabled: true,
logAllAccess: true,
},
ASSIGNEE: {
inactivityTimeout: 30 * 60 * 1000, // 30 minutes
maxSessionDuration: 12 * 60 * 60 * 1000, // 12 hours (longer for field work)
maxConcurrentSessions: 1,
requireMFA: false,
ipBindingEnabled: false,
logAllAccess: true,
},
AUDITOR: {
inactivityTimeout: 15 * 60 * 1000,
maxSessionDuration: 4 * 60 * 60 * 1000, // 4 hours (read-only sessions)
maxConcurrentSessions: 1,
requireMFA: true,
ipBindingEnabled: true,
logAllAccess: true,
},
};

Session Lifecycle:

import jwt from 'jsonwebtoken';
import Redis from 'ioredis';
import crypto from 'crypto'; // for crypto.randomUUID()

const redis = new Redis();

async function createSession(
userId: string,
role: string,
ipAddress: string,
userAgent: string
): Promise<Session> {
const config = sessionConfigs[role] || sessionConfigs.ASSIGNEE;

// Check concurrent session limit (prune IDs whose session keys have already expired,
// otherwise the set grows until the user is permanently locked out)
const sessionIds = await redis.smembers(`user:${userId}:sessions`);
const liveFlags = await Promise.all(sessionIds.map((id) => redis.exists(`session:${id}`)));
const staleIds = sessionIds.filter((_, i) => liveFlags[i] === 0);
if (staleIds.length > 0) {
await redis.srem(`user:${userId}:sessions`, ...staleIds);
}
if (sessionIds.length - staleIds.length >= config.maxConcurrentSessions) {
throw new Error(`Maximum concurrent sessions (${config.maxConcurrentSessions}) exceeded`);
}

// Create session
const sessionId = crypto.randomUUID();
const now = new Date();
const session: Session = {
id: sessionId,
userId,
role,
ipAddress,
userAgent,
createdAt: now,
lastActivityAt: now,
expiresAt: new Date(now.getTime() + config.maxSessionDuration),
inactivityTimeout: config.inactivityTimeout,
};

// Store in Redis
await redis.setex(
`session:${sessionId}`,
config.maxSessionDuration / 1000,
JSON.stringify(session)
);
await redis.sadd(`user:${userId}:sessions`, sessionId);

// Create JWT token
const token = jwt.sign(
{
sessionId,
userId,
role,
ipAddress,
},
process.env.JWT_SECRET!,
{
expiresIn: config.maxSessionDuration / 1000,
}
);

// Log session creation
await prisma.sessionEvent.create({
data: {
sessionId,
userId,
event: 'SESSION_CREATED',
timestamp: now,
ipAddress,
userAgent,
},
});

return { ...session, token };
}

async function validateSession(
sessionId: string,
ipAddress: string
): Promise<Session> {
const sessionData = await redis.get(`session:${sessionId}`);
if (!sessionData) {
throw new Error('Session not found or expired');
}

// Revive Date fields lost during JSON serialization in Redis
const session: Session = JSON.parse(sessionData, (key, value) =>
['createdAt', 'lastActivityAt', 'expiresAt'].includes(key) ? new Date(value) : value
);

// Check IP binding
const config = sessionConfigs[session.role];
if (config.ipBindingEnabled && session.ipAddress !== ipAddress) {
await terminateSession(sessionId, 'IP_MISMATCH');
throw new Error('Session IP mismatch - possible hijacking attempt');
}

// Check inactivity timeout
const inactivityMs = Date.now() - session.lastActivityAt.getTime();
if (inactivityMs > session.inactivityTimeout) {
await terminateSession(sessionId, 'INACTIVITY_TIMEOUT');
throw new Error('Session expired due to inactivity');
}

// Update last activity
session.lastActivityAt = new Date();
await redis.setex(
`session:${sessionId}`,
Math.max(1, Math.floor((session.expiresAt.getTime() - Date.now()) / 1000)), // SETEX requires a positive integer TTL
JSON.stringify(session)
);

return session;
}

async function terminateSession(
sessionId: string,
reason: string
): Promise<void> {
const sessionData = await redis.get(`session:${sessionId}`);
if (!sessionData) return;

const session: Session = JSON.parse(sessionData);

// Remove from Redis
await redis.del(`session:${sessionId}`);
await redis.srem(`user:${session.userId}:sessions`, sessionId);

// Log session termination
await prisma.sessionEvent.create({
data: {
sessionId,
userId: session.userId,
event: 'SESSION_TERMINATED',
timestamp: new Date(),
metadata: { reason },
},
});
}

interface Session {
id: string;
userId: string;
role: string;
ipAddress: string;
userAgent: string;
createdAt: Date;
lastActivityAt: Date;
expiresAt: Date;
inactivityTimeout: number;
token?: string;
}
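
The two time checks performed in `validateSession` (absolute expiry and inactivity timeout) can be exercised without Redis. A sketch, assuming the 15-minute default shown in `sessionConfigs`:

```typescript
// Sketch of the session time checks performed in validateSession.
function sessionTimedOut(lastActivityAt: Date, inactivityTimeoutMs: number, now: Date): boolean {
  return now.getTime() - lastActivityAt.getTime() > inactivityTimeoutMs;
}

function sessionExpired(expiresAt: Date, now: Date): boolean {
  return now.getTime() >= expiresAt.getTime();
}

const FIFTEEN_MIN = 15 * 60 * 1000;
const now = new Date('2026-02-16T12:00:00Z');

console.log(sessionTimedOut(new Date('2026-02-16T11:50:00Z'), FIFTEEN_MIN, now)); // false: 10 min idle
console.log(sessionTimedOut(new Date('2026-02-16T11:40:00Z'), FIFTEEN_MIN, now)); // true: 20 min idle
console.log(sessionExpired(new Date('2026-02-16T18:00:00Z'), now));               // false: still inside max duration
```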

5.3 Field-Level Access Control

Sensitive Field Protection:

// Field-level access control configuration
const fieldAccessRules: Record<string, FieldAccessRule[]> = {
work_orders: [
{
field: 'estimated_cost',
allowedRoles: ['SYSTEM_OWNER', 'QA', 'ADMIN'],
reason: 'Financial data - restricted',
},
{
field: 'vendor_contract_details',
allowedRoles: ['SYSTEM_OWNER', 'ADMIN'],
reason: 'Vendor contracts - confidential',
},
{
field: 'internal_notes',
allowedRoles: ['SYSTEM_OWNER', 'QA', 'ADMIN', 'AUDITOR'],
reason: 'Internal notes - restricted',
},
],
persons: [
{
field: 'salary',
allowedRoles: ['ADMIN'],
reason: 'HR data - highly restricted',
},
{
field: 'ssn',
allowedRoles: [],
reason: 'PII - never exposed via API',
},
],
};

interface FieldAccessRule {
field: string;
allowedRoles: string[];
reason: string;
}

function filterFieldsByRole(
record: any,
recordType: string,
userRole: string
): any {
const rules = fieldAccessRules[recordType] || [];
const filtered = { ...record };

for (const rule of rules) {
if (!rule.allowedRoles.includes(userRole)) {
delete filtered[rule.field]; // Redact field
}
}

return filtered;
}
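
Applying the rules above to a work order, an ASSIGNEE should see `estimated_cost` and `internal_notes` stripped while QA retains `estimated_cost`. A self-contained demonstration (the rules and filter are duplicated here so the snippet runs on its own; the sample work order values are illustrative):

```typescript
// Self-contained demo of field-level redaction (rules copied from above).
interface FieldAccessRule { field: string; allowedRoles: string[]; reason: string; }

const rules: FieldAccessRule[] = [
  { field: 'estimated_cost', allowedRoles: ['SYSTEM_OWNER', 'QA', 'ADMIN'], reason: 'Financial data - restricted' },
  { field: 'internal_notes', allowedRoles: ['SYSTEM_OWNER', 'QA', 'ADMIN', 'AUDITOR'], reason: 'Internal notes - restricted' },
];

function filterFieldsByRole(record: Record<string, unknown>, userRole: string): Record<string, unknown> {
  const filtered = { ...record };
  for (const rule of rules) {
    if (!rule.allowedRoles.includes(userRole)) delete filtered[rule.field]; // Redact field
  }
  return filtered;
}

const workOrder = { id: 'WO-001', title: 'Calibrate balance', estimated_cost: 1200, internal_notes: 'escalated' };

console.log(Object.keys(filterFieldsByRole(workOrder, 'ASSIGNEE'))); // only id and title remain
console.log('estimated_cost' in filterFieldsByRole(workOrder, 'QA')); // true
```

Note the filter returns a shallow copy, so the source record is never mutated by redaction.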

5.4 Temporal Access (Time-Bound Grants)

External Auditor Access:

async function grantTemporalAccess(
auditorUserId: string,
grantedBy: string,
accessScope: AccessScope,
durationHours: number
): Promise<TemporalAccessGrant> {
const expiresAt = new Date(Date.now() + durationHours * 60 * 60 * 1000);

const grant = await prisma.temporalAccessGrant.create({
data: {
auditorUserId,
grantedBy,
grantedAt: new Date(),
expiresAt,
accessScope: JSON.stringify(accessScope),
active: true,
},
});

// Create audit trail entry
await prisma.auditTrail.create({
data: {
entityType: 'TEMPORAL_ACCESS_GRANT',
entityId: grant.id,
action: 'GRANT_CREATED',
userId: grantedBy,
timestamp: new Date(),
changes: {
auditor: auditorUserId,
scope: accessScope,
duration: durationHours,
},
},
});

return grant;
}

interface AccessScope {
recordTypes: string[]; // ['work_orders', 'approvals']
tenantId?: string; // Specific tenant or all
dateRange?: { // Temporal scope
start: Date;
end: Date;
};
}

async function checkTemporalAccess(
auditorUserId: string,
recordType: string,
recordId: string
): Promise<boolean> {
const grants = await prisma.temporalAccessGrant.findMany({
where: {
auditorUserId,
active: true,
expiresAt: {
gte: new Date(),
},
},
});

for (const grant of grants) {
const scope: AccessScope = JSON.parse(grant.accessScope);

// Check record type
if (!scope.recordTypes.includes(recordType)) continue;

// Check tenant scope (if specified)
if (scope.tenantId) {
const record = await prisma[recordType].findUnique({
where: { id: recordId },
});
if (!record || record.tenantId !== scope.tenantId) continue;
}

// Check date range (if specified)
if (scope.dateRange) {
const record = await prisma[recordType].findUnique({
where: { id: recordId },
});
if (!record || record.createdAt < scope.dateRange.start || record.createdAt > scope.dateRange.end) {
continue;
}
}

// Grant allows access
return true;
}

return false;
}
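
The scope checks inside `checkTemporalAccess` (record type, tenant, date range) can be lifted into a pure predicate so they are testable without database lookups. A sketch:

```typescript
// Pure version of the scope checks in checkTemporalAccess (sketch).
interface AccessScope {
  recordTypes: string[];
  tenantId?: string;
  dateRange?: { start: Date; end: Date };
}

interface RecordMeta { recordType: string; tenantId: string; createdAt: Date; }

function scopeAllows(scope: AccessScope, rec: RecordMeta): boolean {
  if (!scope.recordTypes.includes(rec.recordType)) return false;
  if (scope.tenantId && rec.tenantId !== scope.tenantId) return false;
  if (scope.dateRange &&
      (rec.createdAt < scope.dateRange.start || rec.createdAt > scope.dateRange.end)) {
    return false;
  }
  return true;
}

const scope: AccessScope = {
  recordTypes: ['work_orders'],
  tenantId: 'tenant-a',
  dateRange: { start: new Date('2025-01-01'), end: new Date('2025-12-31') },
};

console.log(scopeAllows(scope, { recordType: 'work_orders', tenantId: 'tenant-a', createdAt: new Date('2025-06-01') })); // true
console.log(scopeAllows(scope, { recordType: 'approvals', tenantId: 'tenant-a', createdAt: new Date('2025-06-01') }));   // false: wrong type
console.log(scopeAllows(scope, { recordType: 'work_orders', tenantId: 'tenant-b', createdAt: new Date('2025-06-01') })); // false: wrong tenant
```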

6. Audit Trail Implementation

6.1 Automated Audit Trail

Per FDA 21 CFR Part 11 §11.10(e), the audit trail must be:

  • Computer-generated
  • Time-stamped
  • Independent of operator (cannot be disabled)
  • Secure from modification

Audit Trail Data Model:

CREATE TABLE audit_trail (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES tenants(id),

-- Entity being audited
entity_type VARCHAR(100) NOT NULL, -- 'work_order', 'approval', 'person', etc.
entity_id UUID NOT NULL,
entity_version INT, -- Record version when action occurred

-- Action details
action VARCHAR(50) NOT NULL, -- 'CREATE', 'UPDATE', 'DELETE', 'APPROVE', 'REJECT', etc.
user_id UUID NOT NULL REFERENCES persons(id),
user_name VARCHAR(255) NOT NULL, -- Snapshot of name at time of action
user_role VARCHAR(100),

-- Timing (UTC, microsecond precision)
timestamp TIMESTAMP(6) WITH TIME ZONE NOT NULL DEFAULT NOW(),

-- Change tracking
field_name VARCHAR(255), -- Specific field changed (for UPDATE)
old_value TEXT, -- Previous value (JSON for complex types)
new_value TEXT, -- New value

-- Context
ip_address INET,
session_id UUID,
user_agent TEXT,

-- Integrity (hash chain)
previous_entry_hash VARCHAR(64), -- SHA-256 of previous audit entry
current_entry_hash VARCHAR(64) NOT NULL, -- SHA-256 of this entry

-- Metadata
metadata JSONB, -- Additional context

CHECK (timestamp IS NOT NULL),
CHECK (user_id IS NOT NULL),
CHECK (current_entry_hash IS NOT NULL)
);

-- Indexes for performance
CREATE INDEX idx_audit_trail_entity ON audit_trail(entity_type, entity_id);
CREATE INDEX idx_audit_trail_user ON audit_trail(user_id, timestamp DESC);
CREATE INDEX idx_audit_trail_timestamp ON audit_trail(timestamp DESC);
CREATE INDEX idx_audit_trail_tenant ON audit_trail(tenant_id, timestamp DESC);

-- Trigger to prevent modification of audit trail
CREATE OR REPLACE FUNCTION prevent_audit_modification()
RETURNS TRIGGER AS $$
BEGIN
IF (TG_OP = 'UPDATE' OR TG_OP = 'DELETE') THEN
RAISE EXCEPTION 'Audit trail records cannot be modified or deleted';
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER enforce_audit_immutability
BEFORE UPDATE OR DELETE ON audit_trail
FOR EACH ROW
EXECUTE FUNCTION prevent_audit_modification();

6.2 Audit Trail Hash Chain

Implementation:

async function createAuditEntry(
entry: AuditEntry
): Promise<void> {
// Fetch previous entry hash (for chain integrity)
const previousEntry = await prisma.auditTrail.findFirst({
where: { tenantId: entry.tenantId },
orderBy: { timestamp: 'desc' },
select: { currentEntryHash: true },
});

const previousHash = previousEntry?.currentEntryHash || null;

// Calculate current entry hash
const canonicalEntry = {
tenantId: entry.tenantId,
entityType: entry.entityType,
entityId: entry.entityId,
entityVersion: entry.entityVersion,
action: entry.action,
userId: entry.userId,
timestamp: entry.timestamp.toISOString(),
fieldName: entry.fieldName,
oldValue: entry.oldValue,
newValue: entry.newValue,
previousEntryHash: previousHash,
};

const currentHash = crypto
.createHash('sha256')
.update(JSON.stringify(canonicalEntry, Object.keys(canonicalEntry).sort()))
.digest('hex');

// Insert audit entry
await prisma.auditTrail.create({
data: {
...entry,
previousEntryHash: previousHash,
currentEntryHash: currentHash,
},
});
}

interface AuditEntry {
tenantId: string;
entityType: string;
entityId: string;
entityVersion?: number;
action: string;
userId: string;
userName: string;
userRole?: string;
timestamp: Date;
fieldName?: string;
oldValue?: string;
newValue?: string;
ipAddress?: string;
sessionId?: string;
userAgent?: string;
metadata?: any;
}

6.3 Audit Trail Verification

Integrity Verification:

async function verifyAuditTrailIntegrity(
tenantId: string,
startDate?: Date,
endDate?: Date
): Promise<AuditIntegrityResult> {
const entries = await prisma.auditTrail.findMany({
where: {
tenantId,
timestamp: {
gte: startDate,
lte: endDate,
},
},
orderBy: { timestamp: 'asc' },
});

const results: AuditEntryIntegrityCheck[] = [];
let previousHash: string | null = null;

for (const entry of entries) {
// Verify hash chain link
if (entry.previousEntryHash !== previousHash) {
results.push({
entryId: entry.id,
timestamp: entry.timestamp,
valid: false,
error: `Hash chain broken: expected ${previousHash}, got ${entry.previousEntryHash}`,
});
previousHash = entry.currentEntryHash; // Re-anchor so a single break is not reported for every later entry
continue;
}

// Recalculate hash and compare
const canonicalEntry = {
tenantId: entry.tenantId,
entityType: entry.entityType,
entityId: entry.entityId,
entityVersion: entry.entityVersion,
action: entry.action,
userId: entry.userId,
timestamp: entry.timestamp.toISOString(),
fieldName: entry.fieldName,
oldValue: entry.oldValue,
newValue: entry.newValue,
previousEntryHash: entry.previousEntryHash,
};

const calculatedHash = crypto
.createHash('sha256')
.update(JSON.stringify(canonicalEntry, Object.keys(canonicalEntry).sort()))
.digest('hex');

if (calculatedHash !== entry.currentEntryHash) {
results.push({
entryId: entry.id,
timestamp: entry.timestamp,
valid: false,
error: `Hash mismatch: calculated ${calculatedHash}, stored ${entry.currentEntryHash}`,
});
} else {
results.push({
entryId: entry.id,
timestamp: entry.timestamp,
valid: true,
});
}

previousHash = entry.currentEntryHash;
}

return {
tenantId,
totalEntries: entries.length,
validEntries: results.filter(r => r.valid).length,
integrityStatus: results.every(r => r.valid) ? 'INTACT' : 'COMPROMISED',
details: results.filter(r => !r.valid), // Only include failures
};
}

interface AuditIntegrityResult {
tenantId: string;
totalEntries: number;
validEntries: number;
integrityStatus: 'INTACT' | 'COMPROMISED';
details: AuditEntryIntegrityCheck[];
}

interface AuditEntryIntegrityCheck {
entryId: string;
timestamp: Date;
valid: boolean;
error?: string;
}
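
The chain-and-verify logic above can be demonstrated end to end in memory. This sketch uses simplified payload strings rather than the full `AuditEntry` shape: it builds a three-entry chain, verifies it, then shows that tampering with any entry breaks verification:

```typescript
import { createHash } from 'node:crypto';

// Minimal in-memory hash chain mirroring createAuditEntry / verifyAuditTrailIntegrity.
interface ChainEntry { payload: string; previousHash: string | null; currentHash: string; }

function hashEntry(payload: string, previousHash: string | null): string {
  return createHash('sha256').update(JSON.stringify({ payload, previousHash })).digest('hex');
}

function append(chain: ChainEntry[], payload: string): void {
  const previousHash = chain.length ? chain[chain.length - 1].currentHash : null;
  chain.push({ payload, previousHash, currentHash: hashEntry(payload, previousHash) });
}

function verify(chain: ChainEntry[]): boolean {
  let prev: string | null = null;
  for (const e of chain) {
    // Both the chain link and the entry's own hash must check out.
    if (e.previousHash !== prev || hashEntry(e.payload, e.previousHash) !== e.currentHash) {
      return false;
    }
    prev = e.currentHash;
  }
  return true;
}

const chain: ChainEntry[] = [];
append(chain, 'CREATE work_order WO-001');
append(chain, 'UPDATE status DRAFT -> ACTIVE');
append(chain, 'APPROVE by qa-user');
console.log(verify(chain)); // true

chain[1].payload = 'UPDATE status DRAFT -> CANCELLED'; // tamper with one entry
console.log(verify(chain)); // false
```

Because each entry's hash covers the previous entry's hash, modifying any historical entry invalidates every subsequent link, which is exactly the tamper-evidence property §11.10(e) relies on.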

6.4 Audit Trail Queries

Common Audit Queries:

-- All actions on a specific work order
SELECT
timestamp,
action,
user_name,
field_name,
old_value,
new_value
FROM audit_trail
WHERE
entity_type = 'work_order'
AND entity_id = :work_order_id
ORDER BY timestamp ASC;

-- All actions by a specific user
SELECT
timestamp,
entity_type,
entity_id,
action,
field_name
FROM audit_trail
WHERE
user_id = :user_id
AND timestamp >= :start_date
AND timestamp <= :end_date
ORDER BY timestamp DESC;

-- All approvals in date range
SELECT
timestamp,
entity_id,
user_name,
metadata->>'decision' as decision,
metadata->>'comment' as comment
FROM audit_trail
WHERE
action = 'APPROVE'
AND timestamp >= :start_date
AND timestamp <= :end_date
ORDER BY timestamp DESC;

-- Failed login attempts
SELECT
timestamp,
user_name,
ip_address,
metadata->>'failure_reason' as reason
FROM audit_trail
WHERE
action = 'LOGIN_FAILED'
AND timestamp >= NOW() - INTERVAL '24 hours'
ORDER BY timestamp DESC;

7. Data Integrity Monitoring

7.1 Continuous Integrity Monitoring

Monitoring Dashboard Metrics:

| Metric | Description | Alert Threshold |
|---|---|---|
| Hash Verification Pass Rate | % of records passing daily hash verification | < 99.9% |
| Orphan Records | Records without audit trail entries | > 0 |
| Audit Chain Gaps | Missing hash chain links | > 0 |
| Unauthorized Access Attempts | Failed authorization checks | > 10/hour |
| Bulk Operations | Mass updates/deletes | > 100 records/minute |
| Session Anomalies | Concurrent sessions from different IPs | > 0 |

Implementation:

import Prometheus from 'prom-client';

// Metrics
const hashVerificationGauge = new Prometheus.Gauge({
name: 'bioqms_hash_verification_pass_rate',
help: 'Percentage of records passing hash verification',
});

const orphanRecordsGauge = new Prometheus.Gauge({
name: 'bioqms_orphan_records_count',
help: 'Number of records without audit trail entries',
});

const auditChainGapsGauge = new Prometheus.Gauge({
name: 'bioqms_audit_chain_gaps_count',
help: 'Number of gaps in audit trail hash chain',
});

const unauthorizedAccessCounter = new Prometheus.Counter({
name: 'bioqms_unauthorized_access_attempts_total',
help: 'Total unauthorized access attempts',
labelNames: ['user_id', 'resource_type'],
});

// Monitoring tasks
cron.schedule('*/15 * * * *', async () => {
// Hash verification pass rate (treat an empty verification window as passing)
const verificationResults = await getRecentVerificationResults();
const passRate = verificationResults.total === 0
? 1
: verificationResults.passed / verificationResults.total;
hashVerificationGauge.set(passRate * 100);

if (passRate < 0.999) {
await sendAlert({
severity: 'CRITICAL',
title: 'Hash Verification Pass Rate Below Threshold',
message: `Pass rate: ${(passRate * 100).toFixed(2)}% (threshold: 99.9%)`,
});
}

// Orphan record detection
const orphanCount = await detectOrphanRecords();
orphanRecordsGauge.set(orphanCount);

if (orphanCount > 0) {
await sendAlert({
severity: 'HIGH',
title: 'Orphan Records Detected',
message: `${orphanCount} records found without audit trail entries`,
});
}

// Audit chain gap detection
const gapCount = await detectAuditChainGaps();
auditChainGapsGauge.set(gapCount);

if (gapCount > 0) {
await sendAlert({
severity: 'CRITICAL',
title: 'Audit Trail Hash Chain Compromised',
message: `${gapCount} gaps detected in hash chain`,
});
}
});

async function detectOrphanRecords(): Promise<number> {
const orphans = await prisma.$queryRaw<{ count: number }[]>`
SELECT COUNT(*) as count
FROM work_orders wo
WHERE NOT EXISTS (
SELECT 1
FROM audit_trail at
WHERE at.entity_type = 'work_order'
AND at.entity_id = wo.id
AND at.action = 'CREATE'
)
`;

return Number(orphans[0].count);
}

async function detectAuditChainGaps(): Promise<number> {
// Find entries where previous_entry_hash doesn't match the prior entry's current_entry_hash.
// Chains are maintained per tenant (see createAuditEntry), so compare within each tenant;
// IS DISTINCT FROM is NULL-safe, so a non-NULL previous hash on a chain's first entry is also flagged.
const gaps = await prisma.$queryRaw<{ count: number }[]>`
WITH numbered_entries AS (
SELECT
id,
current_entry_hash,
previous_entry_hash,
LAG(current_entry_hash) OVER (PARTITION BY tenant_id ORDER BY timestamp) as expected_previous_hash
FROM audit_trail
)
SELECT COUNT(*) as count
FROM numbered_entries
WHERE previous_entry_hash IS DISTINCT FROM expected_previous_hash
`;

return Number(gaps[0].count);
}

7.2 Anomaly Detection

Behavioral Anomaly Detection:

interface AnomalyDetectionRule {
name: string;
query: string;
threshold: number;
severity: 'LOW' | 'MEDIUM' | 'HIGH' | 'CRITICAL';
}

const anomalyRules: AnomalyDetectionRule[] = [
{
name: 'Unusual Bulk Update',
query: `
SELECT user_id, COUNT(*) as update_count
FROM audit_trail
WHERE action = 'UPDATE'
AND timestamp >= NOW() - INTERVAL '5 minutes'
GROUP BY user_id
HAVING COUNT(*) > 100
`,
threshold: 100,
severity: 'HIGH',
},
{
name: 'Off-Hours Access',
query: `
SELECT user_id, COUNT(*) as access_count
FROM audit_trail
WHERE timestamp::time NOT BETWEEN '06:00' AND '22:00'
AND timestamp >= NOW() - INTERVAL '1 hour'
GROUP BY user_id
HAVING COUNT(*) > 10
`,
threshold: 10,
severity: 'MEDIUM',
},
{
name: 'Geographic Anomaly',
query: `
SELECT
user_id,
COUNT(DISTINCT ip_address) as ip_count
FROM audit_trail
WHERE timestamp >= NOW() - INTERVAL '1 hour'
GROUP BY user_id
HAVING COUNT(DISTINCT ip_address) > 3
`,
threshold: 3,
severity: 'HIGH',
},
{
name: 'Rapid Status Changes',
query: `
SELECT
entity_id,
COUNT(*) as status_change_count
FROM audit_trail
WHERE field_name = 'status'
AND timestamp >= NOW() - INTERVAL '10 minutes'
GROUP BY entity_id
HAVING COUNT(*) > 5
`,
threshold: 5,
severity: 'MEDIUM',
},
];

// Run anomaly detection every 5 minutes
cron.schedule('*/5 * * * *', async () => {
for (const rule of anomalyRules) {
const results = await prisma.$queryRawUnsafe<any[]>(rule.query); // Static queries defined above - never interpolate user input here

if (results.length > 0) {
await sendAlert({
severity: rule.severity,
title: `Anomaly Detected: ${rule.name}`,
message: `${results.length} anomalies detected. Details: ${JSON.stringify(results)}`,
});

// Log to security event log
await prisma.securityEvent.create({
data: {
eventType: 'ANOMALY_DETECTED',
severity: rule.severity,
ruleName: rule.name,
detectedAt: new Date(),
details: results,
},
});
}
}
});

8. Record Types and Classification

8.1 Data Criticality Levels

| Level | Description | Examples | Controls |
|---|---|---|---|
| Level 1: GxP Critical | Records directly impacting product quality, patient safety, or regulatory compliance | Work orders, IQ/OQ/PQ, electronic signatures, approvals, deviations, CAPAs | • HSM-backed encryption<br>• Mandatory e-signatures<br>• Immutable audit trail<br>• 7-year retention minimum<br>• Dual approval for deletion |
| Level 2: Business Critical | Records essential for business operations but not directly GxP | Configuration, user accounts, roles, training assignments | • KMS encryption<br>• Audit trail required<br>• 3-year retention<br>• Manager approval for deletion |
| Level 3: Operational | Supporting records for day-to-day operations | Logs, metrics, temporary data, notifications | • Standard encryption<br>• Optional audit trail<br>• 90-day retention<br>• Automated deletion |

8.2 Classification Matrix

const recordClassifications: Record<string, RecordClassification> = {
work_orders: {
level: 'LEVEL_1',
category: 'GxP_CRITICAL',
encryptionRequired: true,
encryptionMethod: 'HSM',
signatureRequired: true,
auditTrailRequired: true,
retentionYears: 7,
deletionApprovals: 2,
regulatoryBasis: 'FDA 21 CFR Part 11',
},
electronic_signatures: {
level: 'LEVEL_1',
category: 'GxP_CRITICAL',
encryptionRequired: true,
encryptionMethod: 'HSM',
signatureRequired: false, // Signatures themselves don't need signatures
auditTrailRequired: true,
retentionYears: 7,
deletionApprovals: 2,
regulatoryBasis: 'FDA 21 CFR Part 11 §11.10(e)',
},
approvals: {
level: 'LEVEL_1',
category: 'GxP_CRITICAL',
encryptionRequired: true,
encryptionMethod: 'HSM',
signatureRequired: true,
auditTrailRequired: true,
retentionYears: 7,
deletionApprovals: 2,
regulatoryBasis: 'FDA 21 CFR Part 11',
},
audit_trail: {
level: 'LEVEL_1',
category: 'GxP_CRITICAL',
encryptionRequired: true,
encryptionMethod: 'KMS',
signatureRequired: false,
auditTrailRequired: false, // Audit trail doesn't audit itself
retentionYears: 10,
deletionApprovals: 2,
regulatoryBasis: 'FDA 21 CFR Part 11 §11.10(e)',
},
persons: {
level: 'LEVEL_2',
category: 'BUSINESS_CRITICAL',
encryptionRequired: true,
encryptionMethod: 'KMS',
signatureRequired: false,
auditTrailRequired: true,
retentionYears: 3,
deletionApprovals: 1,
regulatoryBasis: 'SOC 2 CC6.1',
},
training_records: {
level: 'LEVEL_1',
category: 'GxP_CRITICAL',
encryptionRequired: true,
encryptionMethod: 'KMS',
signatureRequired: true,
auditTrailRequired: true,
retentionYears: 7,
deletionApprovals: 2,
regulatoryBasis: 'FDA 21 CFR 211.25(a)',
},
system_logs: {
level: 'LEVEL_3',
category: 'OPERATIONAL',
encryptionRequired: false,
encryptionMethod: null,
signatureRequired: false,
auditTrailRequired: false,
retentionYears: 0.25, // 90 days
deletionApprovals: 0, // Automated deletion
regulatoryBasis: null,
},
};

interface RecordClassification {
level: 'LEVEL_1' | 'LEVEL_2' | 'LEVEL_3';
category: 'GxP_CRITICAL' | 'BUSINESS_CRITICAL' | 'OPERATIONAL';
encryptionRequired: boolean;
encryptionMethod: 'HSM' | 'KMS' | null;
signatureRequired: boolean;
auditTrailRequired: boolean;
retentionYears: number;
deletionApprovals: number;
regulatoryBasis: string | null;
}
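
A record type's classification can drive simple conformance checks at runtime, for example rejecting a deletion that lacks the approvals its criticality level requires. The helper below is an illustrative sketch using two entries from the matrix above, not part of the platform API:

```typescript
// Illustrative check: does a proposed deletion satisfy the classification's approval count?
interface ClassificationSubset { deletionApprovals: number; retentionYears: number; }

const classifications: Record<string, ClassificationSubset> = {
  work_orders: { deletionApprovals: 2, retentionYears: 7 },    // Level 1: GxP Critical
  system_logs: { deletionApprovals: 0, retentionYears: 0.25 }, // Level 3: Operational
};

function deletionSatisfiesPolicy(recordType: string, approvals: number): boolean {
  const c = classifications[recordType];
  if (!c) throw new Error(`No classification for record type: ${recordType}`);
  return approvals >= c.deletionApprovals;
}

console.log(deletionSatisfiesPolicy('work_orders', 1)); // false: Level 1 requires dual approval
console.log(deletionSatisfiesPolicy('work_orders', 2)); // true
console.log(deletionSatisfiesPolicy('system_logs', 0)); // true: automated deletion, no approvals
```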

9. Compliance Mapping Table

9.1 FDA 21 CFR Part 11 Compliance Matrix

| Citation | Requirement | System Control | Implementation Evidence | Test Reference |
|---|---|---|---|---|
| §11.10(a) | Validation of systems to ensure accuracy, reliability, consistent intended performance | • IQ/OQ/PQ documentation<br>• Automated testing suite<br>• Continuous validation | docs/validation/system-validation-plan.md<br>tests/integration/ | Test-Plan-001 |
| §11.10(b) | Ability to generate accurate and complete copies of records | • PDF/A-2b export<br>• CSV/JSON/XML export<br>• Complete audit trail export | exportRecordPackage() function<br>exportWorkOrderToPDF() | Test-Export-001 |
| §11.10(c) | Protection of records to enable accurate and ready retrieval | • Immutable storage (append-only)<br>• SHA-256 hash chain<br>• Database triggers preventing UPDATE/DELETE | prevent_record_modification() trigger<br>calculateRecordHash() | Test-Integrity-001 |
| §11.10(d) | Limiting system access to authorized individuals | • RBAC with 8 roles<br>• Session management (15-min timeout)<br>• MFA for critical roles | checkPermission() function<br>sessionConfigs | Test-RBAC-001 |
| §11.10(e) | Use of secure, computer-generated, time-stamped audit trail | • Automated audit trail<br>• Microsecond-precision timestamps<br>• Hash chain integrity | createAuditEntry() function<br>audit_trail table | Test-Audit-001 |
| §11.10(f) | Use of operational system checks | • Database constraints (CHECK, NOT NULL, FK)<br>• Application-level validation<br>• Hash verification | WorkOrderSchema (Zod)<br>Database constraints | Test-Validation-001 |
| §11.10(g) | Determination that persons who develop, maintain, or use electronic record/signature systems have the education, training, and experience to perform their assigned tasks | • Training record management<br>• Role assignment requires training<br>• Quarterly competency review | training_records table<br>role_assignments table | Test-Training-001 |
| §11.10(h) | Establishment of and adherence to written policies | • This document (ERC-001)<br>• Cryptographic Standards Policy<br>• RBAC Model | docs/compliance/ directory | Policy-Review-001 |
| §11.10(i) | Establishment of adequate controls over systems documentation | • Version-controlled schemas<br>• Migration tracking<br>• Change control process | Git version control<br>prisma/migrations/ | Doc-Control-001 |
| §11.10(j) | Controls for open systems | • mTLS for inter-service communication<br>• TLS 1.3 for external APIs<br>• API authentication (JWT) | HSM TLS certificates<br>API Gateway config | Test-TLS-001 |
| §11.10(k) | Controls for systems documentation | • ADR documentation<br>• API documentation (OpenAPI)<br>• Database schema docs | docs/architecture/adrs/<br>openapi.yaml | Doc-Complete-001 |
| §11.50 | Signature manifestations | • Printed name, date/time, meaning<br>• PDF export includes signature block | exportWorkOrderToPDF() signature block<br>ElectronicSignature table | Test-Signature-001 |
| §11.70 | Signature/record linking | • ECDSA P-256 cryptographic binding<br>• Document hash in signature | calculateRecordHash()<br>ElectronicSignature.cryptoHash | Test-Binding-001 |

Overall Compliance Status: 13/13 controls implemented (100%)

9.2 HIPAA Security Rule Compliance Matrix

| Citation | Requirement | System Control | Implementation Evidence | Test Reference |
|---|---|---|---|---|
| §164.312(b) | Audit controls: hardware, software, and/or procedural mechanisms that record and examine activity in information systems | • Automated audit trail<br>• Immutable audit log<br>• Continuous integrity monitoring | audit_trail table<br>verifyAuditTrailIntegrity() | Test-Audit-002 |
| §164.312(c)(1) | Integrity controls: mechanisms to authenticate that ePHI has not been altered or destroyed | • SHA-256 hash chain<br>• Daily hash verification<br>• Anomaly detection | calculateRecordHash()<br>Daily cron job | Test-Integrity-002 |
| §164.312(c)(2) | Mechanism to authenticate ePHI | • Cryptographic signatures<br>• Hash verification on retrieval | verifyRecordIntegrity() | Test-Auth-001 |
| §164.316(b)(1)(i) | Time limit for retention | • Configurable retention policies<br>• 6-year default for PHI | retention_policies table | Test-Retention-001 |
| §164.316(b)(2)(i) | Retain documentation for 6 years | • 6-year retention for policy docs<br>• Version-controlled policies | Git history<br>Document control | Policy-Retention-001 |
| §164.316(b)(2)(ii) | Make documentation available to workforce and HHS | • Export functionality<br>• Auditor read-only access | exportRecordPackage()<br>AUDITOR role | Test-Export-002 |

Overall Compliance Status: 6/6 controls implemented (100%)
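
The §164.312(b) and §164.312(c)(1) rows rest on the same mechanism: a SHA-256 hash chain over the audit trail, re-verified on a schedule. A self-contained sketch of the idea behind `verifyAuditTrailIntegrity()`; the entry fields and the `GENESIS` sentinel are illustrative, not the `audit_trail` schema:

```typescript
import { createHash } from "node:crypto";

// Illustrative audit entry; field names do not mirror the audit_trail schema.
interface AuditEntry {
  seq: number;
  action: string;
  userName: string;
  prevHash: string; // hash of the previous entry, or the GENESIS sentinel
  hash: string;     // SHA-256 over this entry's fields, chaining in prevHash
}

function entryHash(e: Omit<AuditEntry, "hash">): string {
  return createHash("sha256")
    .update(`${e.seq}|${e.action}|${e.userName}|${e.prevHash}`)
    .digest("hex");
}

// Append-only: each new entry commits to the hash of its predecessor.
function appendEntry(chain: AuditEntry[], action: string, userName: string): AuditEntry[] {
  const prevHash = chain.length > 0 ? chain[chain.length - 1].hash : "GENESIS";
  const partial = { seq: chain.length, action, userName, prevHash };
  return [...chain, { ...partial, hash: entryHash(partial) }];
}

// Recompute every link; any edit, deletion, or reorder breaks the chain.
function verifyAuditTrailIntegrity(chain: AuditEntry[]): boolean {
  return chain.every((e, i) =>
    e.hash === entryHash(e) &&
    e.prevHash === (i === 0 ? "GENESIS" : chain[i - 1].hash)
  );
}
```

Because each hash commits to the previous one, tampering with any historical entry invalidates every later link, which is what makes the daily re-verification meaningful.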

### 9.3 SOC 2 Type II Trust Service Criteria Compliance Matrix

| Criterion | Control Objective | System Control | Evidence | Test Reference |
|-----------|-------------------|----------------|----------|----------------|
| CC6.1 | Logical access: restrict logical access | • RBAC with 8 roles<br>• Session management<br>• MFA for critical roles | Permission matrix<br>`sessionConfigs` | SOC2-Access-001 |
| CC6.7 | Encryption key management | • HSM for signing keys<br>• KMS for data keys<br>• Automated key rotation | Crypto Standards Policy<br>HSM Integration Architecture | SOC2-Crypto-001 |
| CC7.2 | System monitoring: detect anomalies | • Continuous integrity monitoring<br>• Anomaly detection rules<br>• Security alerting | Prometheus metrics<br>`anomalyRules` | SOC2-Monitor-001 |
| CC8.1 | Change management: authorize changes | • Version control for records<br>• Change requires approval<br>• Audit trail for all changes | `updateWorkOrder()` function<br>Audit trail | SOC2-Change-001 |
Overall Compliance Status: 4/4 controls implemented (100%)
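
The CC6.1 permission-matrix check ultimately reduces to a role-to-permission lookup. A sketch using a hypothetical four-role subset; the role names, permission strings, and the AUDITOR read-plus-export grant are assumptions for illustration, not the shipped 8-role matrix:

```typescript
// Hypothetical subset of roles and permissions; names are illustrative.
type Role = "ADMIN" | "QA_MANAGER" | "OPERATOR" | "AUDITOR";
type Permission = "record:create" | "record:update" | "record:read" | "record:export";

// Each role maps to an explicit allow-set; anything absent is denied.
const permissionMatrix: Record<Role, ReadonlySet<Permission>> = {
  ADMIN: new Set<Permission>(["record:create", "record:update", "record:read", "record:export"]),
  QA_MANAGER: new Set<Permission>(["record:update", "record:read", "record:export"]),
  OPERATOR: new Set<Permission>(["record:create", "record:read"]),
  // Read-only plus export, mirroring the auditor access described in §9.2.
  AUDITOR: new Set<Permission>(["record:read", "record:export"]),
};

// Deny-by-default lookup: true only when the role explicitly holds the permission.
function hasPermission(role: Role, permission: Permission): boolean {
  return permissionMatrix[role].has(permission);
}
```

The design choice worth noting is deny-by-default: the matrix enumerates grants, so a new permission string is unreachable until a role explicitly receives it.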

### 9.4 ALCOA+ Principles Compliance Matrix

| Principle | Requirement | System Control | Implementation Evidence |
|-----------|-------------|----------------|-------------------------|
| Attributable | Record clearly identifies who performed action | • `created_by`, `modified_by` fields<br>• User name captured in audit trail | `WorkOrder` table schema<br>`audit_trail.user_name` |
| Legible | Record is readable and permanent | • UTF-8 text encoding<br>• PDF/A-2b export format<br>• Embedded fonts | PDF export implementation |
| Contemporaneous | Record created at time of activity | • Server-generated timestamps<br>• Microsecond precision<br>• UTC timezone | `created_at`, `modified_at` (default `NOW()`) |
| Original | First recording or certified copy | • Immutable primary record<br>• Version history preserved<br>• Export includes integrity hash | Append-only storage<br>Version control |
| Accurate | Record is correct and complete | • Database constraints<br>• Application validation<br>• Required field enforcement | `WorkOrderSchema` (Zod)<br>`CHECK` constraints |
| Complete | All data present at time of activity | • No null values for required fields<br>• Full audit trail<br>• Context metadata | `NOT NULL` constraints<br>`metadata` JSONB field |
| Consistent | Data recorded in expected sequence | • Monotonic version numbers<br>• Timestamp ordering<br>• Hash chain | `version` field incrementing<br>Hash chain validation |
| Enduring | Record preserved throughout retention | • Archival to S3 Glacier<br>• PDF/A-2b for long-term storage<br>• 7-10 year retention | Archival cron job<br>Retention policies |
| Available | Record retrievable when needed | • Full-text search<br>• Advanced filtering<br>• Export in multiple formats | `searchRecords()` API<br>Export endpoints |

Overall Compliance Status: 9/9 principles implemented (100%)
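
The "Consistent" row's two checks, monotonic version numbers and timestamp ordering, can be verified mechanically over a record's version history. A sketch; the field names are illustrative rather than the actual schema:

```typescript
// Illustrative version entry; real records also carry content hashes.
interface RecordVersion {
  version: number;     // monotonic, starting at 1
  createdAtMs: number; // server-generated UTC timestamp, epoch milliseconds
}

// ALCOA+ "Consistent": versions increment by exactly one and
// timestamps never move backwards.
function isConsistentVersionChain(versions: RecordVersion[]): boolean {
  return versions.every((v, i) =>
    i === 0
      ? v.version === 1
      : v.version === versions[i - 1].version + 1 &&
        v.createdAtMs >= versions[i - 1].createdAtMs
  );
}
```

Timestamps use `>=` rather than `>` because two server-side writes can legitimately share a clock tick; the version number, not the timestamp, is the authoritative ordering.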


## 10. Appendices

### Appendix A: Glossary

| Term | Definition |
|------|------------|
| ALCOA+ | Attributable, Legible, Contemporaneous, Original, Accurate, Complete, Consistent, Enduring, Available — data integrity principles |
| Append-Only Storage | Storage pattern where records are never updated or deleted, only new versions appended |
| Audit Trail | Tamper-evident chronological record of system activities and changes |
| Cryptographic Hash Chain | Series of records where each record includes the hash of the previous record, creating a tamper-evident chain |
| DEK | Data Encryption Key — symmetric key used to encrypt data (short-lived, ephemeral) |
| Electronic Signature | Computer data compilation executed by an individual to authenticate identity and approval |
| Envelope Encryption | Two-tier encryption: DEK encrypts data, KEK encrypts the DEK |
| GxP | Good Practice quality guidelines (GMP, GCP, GLP, etc.) — regulatory standards |
| Hash | Fixed-size output of a cryptographic hash function (e.g., SHA-256) used for integrity verification |
| HSM | Hardware Security Module — tamper-resistant device for cryptographic key protection |
| Immutable Record | Record that cannot be modified after creation (append-only, version-controlled) |
| KEK | Key Encryption Key — key used to wrap/unwrap other keys |
| Legal Hold | Preservation of records beyond normal retention due to litigation or investigation |
| RBAC | Role-Based Access Control — access permissions based on user roles |
| Retention Period | Duration records must be preserved per regulatory requirements |
| Soft Delete | Marking a record as deleted without physical removal (preserves it for audit) |
| Temporal Access | Time-bound access grant that expires after a specified duration |
| Version Chain | Linked sequence of record versions with cryptographic integrity |
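
The DEK, KEK, and envelope-encryption entries above fit together as a two-tier scheme: a fresh DEK encrypts each payload, and the KEK only ever encrypts DEKs. A self-contained sketch using AES-256-GCM for both tiers; in production the KEK would never leave the KMS/HSM, so the local `Buffer` here is purely illustrative:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Everything needed to later recover the plaintext, except the KEK itself.
interface Envelope {
  ciphertext: Buffer;
  dataIv: Buffer;
  dataTag: Buffer;
  wrappedDek: Buffer;
  keyIv: Buffer;
  keyTag: Buffer;
}

function encryptEnvelope(plaintext: Buffer, kek: Buffer): Envelope {
  const dek = randomBytes(32); // ephemeral per-record data key
  // Tier 1: encrypt the data with the DEK.
  const dataIv = randomBytes(12);
  const dataCipher = createCipheriv("aes-256-gcm", dek, dataIv);
  const ciphertext = Buffer.concat([dataCipher.update(plaintext), dataCipher.final()]);
  const dataTag = dataCipher.getAuthTag();
  // Tier 2: wrap the DEK with the KEK.
  const keyIv = randomBytes(12);
  const keyCipher = createCipheriv("aes-256-gcm", kek, keyIv);
  const wrappedDek = Buffer.concat([keyCipher.update(dek), keyCipher.final()]);
  const keyTag = keyCipher.getAuthTag();
  return { ciphertext, dataIv, dataTag, wrappedDek, keyIv, keyTag };
}

function decryptEnvelope(env: Envelope, kek: Buffer): Buffer {
  // Unwrap the DEK first, then decrypt the data with it.
  const keyDecipher = createDecipheriv("aes-256-gcm", kek, env.keyIv);
  keyDecipher.setAuthTag(env.keyTag);
  const dek = Buffer.concat([keyDecipher.update(env.wrappedDek), keyDecipher.final()]);
  const dataDecipher = createDecipheriv("aes-256-gcm", dek, env.dataIv);
  dataDecipher.setAuthTag(env.dataTag);
  return Buffer.concat([dataDecipher.update(env.ciphertext), dataDecipher.final()]);
}
```

Rotating the KEK then only requires re-wrapping the stored DEKs, not re-encrypting the record payloads themselves.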

### Appendix B: References

| Document | Location | Description |
|----------|----------|-------------|
| FDA 21 CFR Part 11 | https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?CFRPart=11 | Electronic Records; Electronic Signatures |
| HIPAA Security Rule | https://www.hhs.gov/hipaa/for-professionals/security/index.html | §164.312 Technical Safeguards |
| SOC 2 Trust Service Criteria | https://us.aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report | AICPA Trust Services Criteria |
| ALCOA+ Principles | WHO TRS 996 Annex 5 | Data integrity principles for GxP records |
| E-Signature Architecture | `docs/architecture/17-e-signature-architecture.md` | BIO-QMS e-signature technical design |
| Cryptographic Standards Policy | `docs/compliance/crypto-standards-policy.md` | CODITECT-BIO-CRYPTO-001 |
| HSM Integration Architecture | `docs/compliance/hsm-integration-architecture.md` | HSM key management |
| RBAC Model | `docs/compliance/21-rbac-model.md` | Role definitions and permission matrix |
### Appendix C: Related Documents

| Document | Location | Purpose |
|----------|----------|---------|
| System Validation Plan | `docs/validation/system-validation-plan.md` | IQ/OQ/PQ validation approach |
| Database Schema Documentation | `prisma/schema.prisma` | Complete data model |
| API Documentation | `openapi.yaml` | REST API specification |
| Architecture Decision Records | `docs/architecture/adrs/` | Key architectural decisions |
| Security Architecture | `docs/architecture/security-architecture.md` | 5-layer authorization, zero-trust network |
| Incident Response Plan | `docs/security/incident-response-plan.md` | Security incident procedures |

### Appendix D: Change Log

| Version | Date | Author | Changes | Approval |
|---------|------|--------|---------|----------|
| 0.1.0 | 2026-02-14 | Security Architect | Initial draft based on D.2.2 requirements | N/A (draft) |
| 0.2.0 | 2026-02-15 | Compliance Officer | Added ALCOA+ compliance mapping | N/A (draft) |
| 1.0.0 | 2026-02-16 | CISO | Final review, approved for publication | Pending executive approval |

Document ID: CODITECT-BIO-ERC-001 Version: 1.0.0 Classification: Internal - Restricted Next Review Date: 2027-02-16 Policy Owner: Chief Information Security Officer Document Location: docs/compliance/electronic-record-controls.md Approval Status: Draft (pending executive signature)

Confidentiality Notice: This document contains proprietary information and is intended solely for authorized personnel of CODITECT Biosciences. Unauthorized distribution is prohibited.


Copyright 2026 AZ1.AI Inc. All rights reserved. Developer: Hal Casteel, CEO/CTO Product: CODITECT-BIO-QMS | Part of the CODITECT Product Suite Classification: Internal - Confidential


END OF DOCUMENT