
Validation Test Execution Framework

Document ID: CODITECT-BIO-VAL-FRAMEWORK-001
Version: 1.0.0
Effective Date: 2026-02-16
Classification: Internal - Restricted
Owner: Validation Engineering Team


Document Control

Approval History

| Role | Name | Signature | Date |
|------|------|-----------|------|
| Validation Lead | [Pending] | [Digital Signature] | YYYY-MM-DD |
| QA Manager | [Pending] | [Digital Signature] | YYYY-MM-DD |
| Quality Director | [Pending] | [Digital Signature] | YYYY-MM-DD |
| Engineering Director | [Pending] | [Digital Signature] | YYYY-MM-DD |

Revision History

| Version | Date | Author | Changes | Approval Status |
|---------|------|--------|---------|-----------------|
| 1.0.0 | 2026-02-16 | Validation Team | Initial release | Draft |

Distribution List

  • Validation Engineering Team
  • QA Engineering Team
  • Backend Engineering Team
  • DevOps Team
  • Quality Assurance Management
  • Regulatory Affairs

Review Schedule

| Review Type | Frequency | Next Review Date | Responsible Party |
|-------------|-----------|------------------|-------------------|
| Annual Review | 12 months | 2027-02-16 | Validation Lead |
| Post-Deployment Review | Per release | N/A | QA Manager |
| Technology Update Review | As needed | N/A | Engineering Director |

1. Executive Summary

1.1 Purpose

This document establishes the comprehensive automated test execution framework for validation of the CODITECT Biosciences Quality Management System (BIO-QMS). The framework provides:

  1. Automated IQ/OQ/PQ Test Execution - Full installation, operational, and performance qualification testing with zero manual intervention
  2. Evidence Capture - Automated collection of screenshots, logs, database states, and API responses
  3. Data Integrity Verification - Cryptographic validation of audit trails, record immutability, and hash chain integrity
  4. Regression Testing - Comprehensive re-validation suite for system updates and deployment verification
  5. Continuous Compliance - CI/CD integration ensuring every deployment meets validation requirements

1.2 Scope

In Scope:

  • IQ (Installation Qualification): 15+ automated test cases
  • OQ (Operational Qualification): 50+ automated test cases
  • PQ (Performance Qualification): 15+ automated test cases
  • Screenshot automation via Playwright
  • Evidence packaging and storage
  • Hash chain verification
  • Regression suite execution
  • CI/CD validation gates

Out of Scope:

  • Test protocol authoring (covered in D.2.1)
  • Manual test execution procedures
  • Validation review workflows (covered in D.2.5)
  • Production deployment procedures

1.3 Regulatory Alignment

| Framework | Standard | Compliance Requirement |
|-----------|----------|------------------------|
| FDA 21 CFR Part 11 | §11.10(a) | System validation requirements |
| GAMP 5 | Category 5 | Custom software validation |
| ISPE Baseline Guide Vol. 5 | - | Computer system validation lifecycle |
| ISO/IEC 25010 | Quality Model | Software quality characteristics |

1.4 Success Criteria

  • 80%+ Test Automation - Minimum 80% of all test cases fully automated
  • 15-Minute Evidence Retrieval - Any validation artifact retrievable within 15 minutes
  • Zero Manual Screenshots - 100% automated screenshot capture at all verification points
  • 100% Traceability - Complete bidirectional trace from requirements to evidence
  • 99.9% Regression Pass Rate - Less than 0.1% false failures in regression suite

2. Architecture Overview

2.1 Framework Components

                Validation Test Framework
                ─────────────────────────

┌───────────────┐   ┌───────────────┐   ┌───────────────┐
│ IQ Test Suite │   │ OQ Test Suite │   │ PQ Test Suite │
│   15+ cases   │   │   50+ cases   │   │   15+ cases   │
└───────┬───────┘   └───────┬───────┘   └───────┬───────┘
        │                   │                   │
        └───────────────────┼───────────────────┘
                            │
        ┌───────────────────┴───────────────────┐
        │         Test Execution Engine         │
        │  - Jest Test Runner                   │
        │  - Playwright Browser Automation      │
        │  - Custom Reporters & Collectors      │
        └───────────────────┬───────────────────┘
                            │
        ┌───────────────────┴───────────────────┐
        │       Evidence Collection Layer       │
        │  - Screenshot Capture                 │
        │  - Log Aggregation                    │
        │  - Database State Snapshots           │
        │  - API Response Recording             │
        │  - Timing/Performance Metrics         │
        └───────────────────┬───────────────────┘
                            │
        ┌───────────────────┴───────────────────┐
        │        Integrity Verification         │
        │  - Hash Chain Validation              │
        │  - Merkle Tree Generation             │
        │  - Digital Signature                  │
        │  - Tamper Detection                   │
        └───────────────────┬───────────────────┘
                            │
        ┌───────────────────┴───────────────────┐
        │        Evidence Storage (GCS)         │
        │  - Tamper-Evident Archive             │
        │  - 7-Year Retention                   │
        │  - Cryptographic Verification         │
        └───────────────────────────────────────┘

2.2 Technology Stack

| Component | Technology | Version | Purpose |
|-----------|------------|---------|---------|
| Test Runner | Jest | 29.7+ | Test execution, assertions, reporting |
| Browser Automation | Playwright | 1.40+ | UI testing, screenshot capture |
| Language | TypeScript | 5.3+ | Type-safe test implementation |
| Database Client | Prisma | 5.7+ | Database state verification |
| API Client | Axios | 1.6+ | RESTful API testing |
| Performance Testing | k6 | 0.48+ | Load testing, throughput measurement |
| Evidence Storage | Google Cloud Storage | - | Tamper-evident artifact storage |
| Hash Generation | Node.js crypto | - | SHA-256 hashing, HMAC chains |
| Digital Signatures | @noble/curves | 1.3+ | ECDSA P-256 signatures |
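The HMAC-chain approach listed above can be sketched with Node's built-in crypto module: each audit entry's hash covers the serialized entry plus the previous entry's hash, so altering any historical entry invalidates every subsequent link. This is an illustrative sketch only; the names (`AuditEntry`, `chainHash`, `verifyChain`) and the JSON serialization are assumptions, not the production hash-verifier API, which would also need a canonical serialization of entries.

// hash-chain sketch (illustrative, not the production API)
import { createHmac } from 'node:crypto';

interface AuditEntry {
  id: string;
  action: string;
  timestamp: string;
}

const GENESIS_HASH = '0'.repeat(64);

// HMAC-SHA256 over the serialized entry concatenated with the previous hash.
function chainHash(entry: AuditEntry, previousHash: string, key: string): string {
  return createHmac('sha256', key)
    .update(JSON.stringify(entry))
    .update(previousHash)
    .digest('hex');
}

// Verify an entire trail by recomputing every link in order.
function verifyChain(
  entries: Array<AuditEntry & { hash: string }>,
  key: string,
): boolean {
  let previous = GENESIS_HASH;
  for (const { hash, ...entry } of entries) {
    if (chainHash(entry, previous, key) !== hash) return false;
    previous = hash;
  }
  return true;
}

Because each link depends on the one before it, tamper detection is a single linear pass rather than a per-record comparison against backups.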

2.3 Directory Structure

backend/
├── tests/
│   ├── validation/
│   │   ├── iq/                          # Installation Qualification
│   │   │   ├── system-prerequisites.test.ts
│   │   │   ├── database-connectivity.test.ts
│   │   │   ├── encryption-verification.test.ts
│   │   │   ├── tls-certificate.test.ts
│   │   │   └── ... (15+ test files)
│   │   ├── oq/                          # Operational Qualification
│   │   │   ├── rbac/
│   │   │   │   ├── role-permissions.test.ts
│   │   │   │   ├── cross-tenant-isolation.test.ts
│   │   │   │   └── sod-enforcement.test.ts
│   │   │   ├── workflows/
│   │   │   │   ├── state-transitions.test.ts
│   │   │   │   ├── approval-chain.test.ts
│   │   │   │   └── rejection-handling.test.ts
│   │   │   ├── signatures/
│   │   │   │   ├── e-signature-creation.test.ts
│   │   │   │   ├── signature-binding.test.ts
│   │   │   │   └── re-authentication.test.ts
│   │   │   ├── audit-trail/
│   │   │   │   ├── audit-recording.test.ts
│   │   │   │   ├── hash-chain.test.ts
│   │   │   │   └── immutability.test.ts
│   │   │   └── ... (50+ test files)
│   │   ├── pq/                          # Performance Qualification
│   │   │   ├── load-testing/
│   │   │   │   ├── concurrent-users.test.ts
│   │   │   │   ├── api-throughput.test.ts
│   │   │   │   └── database-performance.test.ts
│   │   │   ├── scalability/
│   │   │   │   ├── document-upload.test.ts
│   │   │   │   ├── search-response.test.ts
│   │   │   │   └── report-generation.test.ts
│   │   │   └── ... (15+ test files)
│   │   ├── utils/
│   │   │   ├── evidence-collector.ts    # Evidence capture utilities
│   │   │   ├── screenshot-manager.ts    # Playwright screenshot automation
│   │   │   ├── hash-verifier.ts         # Integrity verification
│   │   │   ├── test-data-factory.ts     # Test data generation
│   │   │   └── gcs-uploader.ts          # Evidence storage
│   │   ├── fixtures/
│   │   │   ├── test-users.json
│   │   │   ├── test-workflows.json
│   │   │   └── expected-results.json
│   │   └── reports/
│   │       ├── validation-reporter.ts   # Custom Jest reporter
│   │       └── evidence-packager.ts     # Evidence package assembly
│   └── jest.config.validation.js        # Validation-specific Jest config
└── scripts/
    └── validation/
        ├── run-iq.sh                    # IQ execution wrapper
        ├── run-oq.sh                    # OQ execution wrapper
        ├── run-pq.sh                    # PQ execution wrapper
        ├── run-regression.sh            # Full regression suite
        └── upload-evidence.sh           # GCS evidence upload
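The tree references a validation-specific Jest configuration (jest.config.validation.js). A minimal sketch of what such a config might contain follows, shown in TypeScript form; the option names are standard Jest, but the specific values and reporter paths are assumptions, not the project's actual configuration.

// Hypothetical sketch of the validation Jest config (assumed values)
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  // Only pick up validation suites, never unit tests.
  testMatch: ['<rootDir>/tests/validation/**/*.test.ts'],
  // Evidence ordering depends on serial execution (run-iq.sh passes --maxWorkers=1).
  maxWorkers: 1,
  // Qualification steps may be slow; never let them time out silently.
  testTimeout: 120_000,
  reporters: [
    'default',
    'jest-junit',
    '<rootDir>/tests/validation/reports/validation-reporter.ts',
  ],
  collectCoverage: false,
};

export default config;

Keeping this config separate from the unit-test config prevents validation runs from being diluted by fast-path settings such as parallel workers or short timeouts.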

3. Installation Qualification (IQ)

3.1 IQ Test Categories

| Category | Test Count | Automation | Description |
|----------|------------|------------|-------------|
| System Prerequisites | 3 | 100% | Node.js, PostgreSQL, Redis versions |
| Application Startup | 2 | 100% | NestJS bootstrap, health check |
| Database Connectivity | 4 | 100% | Connection, migrations, RLS policies |
| External Services | 3 | 100% | KMS, GCS, SMTP connectivity |
| Security Configuration | 3 | 100% | TLS certificates, encryption keys |
| Total | 15 | 100% | - |

3.2 IQ Test Specifications

3.2.1 System Prerequisites Verification

Test ID: IQ-PREREQ-001
Test Case: Node.js Version Verification

// tests/validation/iq/system-prerequisites.test.ts

import { describe, test, expect, afterAll } from '@jest/globals';
import { execSync } from 'child_process';
import { EvidenceCollector } from '../utils/evidence-collector';

describe('IQ-PREREQ-001: Node.js Version Verification', () => {
  const evidence = new EvidenceCollector('IQ-PREREQ-001');

  test('Node.js version matches deployment specification', async () => {
    // Capture system state before verification
    const nodeVersion = execSync('node --version').toString().trim();
    const expectedVersion = process.env.EXPECTED_NODE_VERSION || 'v20.11.0';

    evidence.captureSystemInfo({
      nodeVersion,
      expectedVersion,
      timestamp: new Date().toISOString(),
    });

    // Verify major version match (v20.x.x)
    const actualMajor = nodeVersion.split('.')[0];
    const expectedMajor = expectedVersion.split('.')[0];

    expect(actualMajor).toBe(expectedMajor);

    // Record pass result
    await evidence.recordResult({
      testCaseId: 'IQ-PREREQ-001',
      testStep: '1.1',
      expectedResult: `Node.js version ${expectedVersion} or compatible`,
      actualResult: `Node.js version ${nodeVersion} detected`,
      passFail: 'pass',
    });
  });

  afterAll(async () => {
    await evidence.upload();
  });
});

Test ID: IQ-PREREQ-002
Test Case: PostgreSQL Version and Configuration

// tests/validation/iq/database-connectivity.test.ts

import { PrismaClient } from '@prisma/client';
import { EvidenceCollector } from '../utils/evidence-collector';

describe('IQ-PREREQ-002: PostgreSQL Version and Configuration', () => {
  const prisma = new PrismaClient();
  const evidence = new EvidenceCollector('IQ-PREREQ-002');

  test('PostgreSQL version meets minimum requirements', async () => {
    const result = await prisma.$queryRaw<[{ version: string }]>`
      SELECT version();
    `;

    const versionString = result[0].version;
    evidence.captureDatabaseQuery({
      query: 'SELECT version();',
      result: versionString,
      timestamp: new Date().toISOString(),
    });

    // Expect PostgreSQL 15.x or higher
    const versionMatch = versionString.match(/PostgreSQL (\d+\.\d+)/);
    expect(versionMatch).toBeTruthy();

    const majorVersion = parseFloat(versionMatch![1]);
    expect(majorVersion).toBeGreaterThanOrEqual(15.0);

    await evidence.recordResult({
      testCaseId: 'IQ-PREREQ-002',
      testStep: '1.1',
      expectedResult: 'PostgreSQL 15.0 or higher',
      actualResult: `PostgreSQL ${majorVersion} detected`,
      passFail: 'pass',
    });
  });

  test('Row-Level Security (RLS) is enabled globally', async () => {
    const rlsStatus = await prisma.$queryRaw<[{ enabled: boolean }]>`
      SELECT current_setting('row_security') = 'on' AS enabled;
    `;

    evidence.captureDatabaseQuery({
      query: "SELECT current_setting('row_security');",
      result: rlsStatus[0].enabled,
      timestamp: new Date().toISOString(),
    });

    expect(rlsStatus[0].enabled).toBe(true);

    await evidence.recordResult({
      testCaseId: 'IQ-PREREQ-002',
      testStep: '1.2',
      expectedResult: 'Row-level security enabled',
      actualResult: `RLS status: ${rlsStatus[0].enabled}`,
      passFail: 'pass',
    });
  });

  afterAll(async () => {
    await prisma.$disconnect();
    await evidence.upload();
  });
});

3.2.2 Database Migration Status

Test ID: IQ-DB-001
Test Case: All Migrations Applied Successfully

// tests/validation/iq/database-migrations.test.ts

import { execSync } from 'child_process';
import { EvidenceCollector } from '../utils/evidence-collector';

describe('IQ-DB-001: Database Migration Status', () => {
  const evidence = new EvidenceCollector('IQ-DB-001');

  test('No pending migrations', async () => {
    // Run Prisma migration status
    let migrationOutput: string;
    try {
      migrationOutput = execSync('npx prisma migrate status', {
        encoding: 'utf-8',
      });
    } catch (error: any) {
      migrationOutput = error.stdout || error.stderr;
    }

    evidence.captureCommandOutput({
      command: 'npx prisma migrate status',
      output: migrationOutput,
      timestamp: new Date().toISOString(),
    });

    // Verify no pending migrations
    expect(migrationOutput).toContain('Database schema is up to date');
    expect(migrationOutput).not.toContain('pending migration');

    await evidence.recordResult({
      testCaseId: 'IQ-DB-001',
      testStep: '2.1',
      expectedResult: 'Database schema is up to date',
      actualResult: 'All migrations applied successfully',
      passFail: 'pass',
    });
  });

  test('Database schema hash matches deployment manifest', async () => {
    const schemaHash = execSync(
      'npx prisma migrate diff --from-schema-datamodel prisma/schema.prisma --to-schema-datasource prisma/schema.prisma --script | sha256sum',
      { encoding: 'utf-8' }
    )
      .split(' ')[0]
      .trim();

    const expectedHash = process.env.EXPECTED_SCHEMA_HASH || '';

    evidence.captureHash({
      entity: 'database-schema',
      algorithm: 'SHA-256',
      hash: schemaHash,
      expectedHash,
      timestamp: new Date().toISOString(),
    });

    expect(schemaHash).toBe(expectedHash);

    await evidence.recordResult({
      testCaseId: 'IQ-DB-001',
      testStep: '2.2',
      expectedResult: `Schema hash: ${expectedHash}`,
      actualResult: `Schema hash: ${schemaHash}`,
      passFail: 'pass',
    });
  });

  afterAll(async () => {
    await evidence.upload();
  });
});

3.2.3 External Service Connectivity

Test ID: IQ-EXT-001
Test Case: KMS Connectivity and Key Access

// tests/validation/iq/external-services.test.ts

// Three levels up from tests/validation/iq/ to reach backend/src/
import { KmsService } from '../../../src/security/kms.service';
import { EvidenceCollector } from '../utils/evidence-collector';

describe('IQ-EXT-001: KMS Connectivity and Key Access', () => {
  const kmsService = new KmsService();
  const evidence = new EvidenceCollector('IQ-EXT-001');

  test('KMS service reachable and authenticated', async () => {
    const healthCheck = await kmsService.healthCheck();

    evidence.captureApiResponse({
      service: 'Google Cloud KMS',
      endpoint: '/health',
      responseStatus: healthCheck.status,
      responseTime: healthCheck.responseTimeMs,
      timestamp: new Date().toISOString(),
    });

    expect(healthCheck.status).toBe('connected');
    expect(healthCheck.responseTimeMs).toBeLessThan(500);

    await evidence.recordResult({
      testCaseId: 'IQ-EXT-001',
      testStep: '3.1',
      expectedResult: 'KMS health check returns connected status',
      actualResult: `KMS status: ${healthCheck.status} (${healthCheck.responseTimeMs}ms)`,
      passFail: 'pass',
    });
  });

  test('Encryption key accessible and operational', async () => {
    const testPlaintext = 'VALIDATION_TEST_PAYLOAD_IQ-EXT-001';
    const encrypted = await kmsService.encrypt(testPlaintext);
    const decrypted = await kmsService.decrypt(encrypted);

    evidence.captureEncryptionRoundtrip({
      plaintext: testPlaintext,
      encrypted: encrypted.substring(0, 32) + '...',
      decrypted,
      match: decrypted === testPlaintext,
      timestamp: new Date().toISOString(),
    });

    expect(decrypted).toBe(testPlaintext);

    await evidence.recordResult({
      testCaseId: 'IQ-EXT-001',
      testStep: '3.2',
      expectedResult: 'Encrypt/decrypt round-trip successful',
      actualResult: 'Plaintext recovered successfully',
      passFail: 'pass',
    });
  });

  afterAll(async () => {
    await evidence.upload();
  });
});

3.2.4 TLS Certificate Verification

Test ID: IQ-SEC-001
Test Case: TLS 1.3 Enforcement and Certificate Validity

// tests/validation/iq/tls-certificate.test.ts

import * as tls from 'tls';
import { EvidenceCollector } from '../utils/evidence-collector';

describe('IQ-SEC-001: TLS Certificate Verification', () => {
  const evidence = new EvidenceCollector('IQ-SEC-001');
  const apiUrl = process.env.API_URL || 'https://localhost:3000';

  test('TLS 1.3 enforced on API endpoint', async () => {
    const hostname = new URL(apiUrl).hostname;
    const port = parseInt(new URL(apiUrl).port || '443', 10);

    // Resolve the negotiated protocol outside the connect callback so that
    // connection errors reject the promise and fail the test instead of
    // surfacing as unhandled exceptions.
    const protocol = await new Promise<string | null>((resolve, reject) => {
      const tlsSocket = tls.connect(
        { host: hostname, port, minVersion: 'TLSv1.3' },
        () => {
          const cipher = tlsSocket.getCipher();

          evidence.captureTlsInfo({
            hostname,
            port,
            protocol: tlsSocket.getProtocol(),
            cipher: cipher?.name || 'unknown',
            timestamp: new Date().toISOString(),
          });

          resolve(tlsSocket.getProtocol());
          tlsSocket.end();
        }
      );
      tlsSocket.on('error', reject);
    });

    expect(protocol).toBe('TLSv1.3');

    await evidence.recordResult({
      testCaseId: 'IQ-SEC-001',
      testStep: '4.1',
      expectedResult: 'TLS 1.3 protocol enforced',
      actualResult: `Negotiated protocol: ${protocol}`,
      passFail: 'pass',
    });
  });

  afterAll(async () => {
    await evidence.upload();
  });
});

3.3 IQ Execution Script

#!/bin/bash
# scripts/validation/run-iq.sh
# Automated IQ execution with evidence collection

set -euo pipefail

# Configuration
EVIDENCE_DIR="./evidence/iq-$(date +%Y%m%d-%H%M%S)"
REPORT_DIR="./reports/validation"
IQ_TEST_PATTERN="tests/validation/iq/**/*.test.ts"

echo "==================================================="
echo "  CODITECT BIO-QMS Installation Qualification"
echo "  Execution Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "==================================================="

# Create evidence directories
mkdir -p "$EVIDENCE_DIR"
mkdir -p "$REPORT_DIR"

# Set environment variables for test execution
export EVIDENCE_OUTPUT_DIR="$EVIDENCE_DIR"
export VALIDATION_MODE="IQ"
export JEST_JUNIT_OUTPUT_DIR="$REPORT_DIR"

# Run IQ test suite
echo ""
echo "[1/4] Running IQ Test Suite..."
npm run test:validation:iq -- \
  --testPathPattern="$IQ_TEST_PATTERN" \
  --reporters=default \
  --reporters=jest-junit \
  --reporters=./tests/validation/reports/validation-reporter.ts \
  --coverage=false \
  --maxWorkers=1

# Generate evidence manifest
echo ""
echo "[2/4] Generating Evidence Manifest..."
node scripts/validation/generate-evidence-manifest.js \
  --input "$EVIDENCE_DIR" \
  --output "$EVIDENCE_DIR/MANIFEST.json"

# Compute Merkle root
echo ""
echo "[3/4] Computing Evidence Integrity Hash..."
node scripts/validation/compute-merkle-root.js \
  --input "$EVIDENCE_DIR" \
  --output "$EVIDENCE_DIR/MERKLE-ROOT.json"

# Upload to GCS
echo ""
echo "[4/4] Uploading Evidence Package to GCS..."
gsutil -m cp -r "$EVIDENCE_DIR" \
  "gs://coditect-bio-qms-validation-evidence/iq/$(date +%Y-%m-%d)/"

echo ""
echo "==================================================="
echo "  IQ Execution Complete"
echo "  Evidence Location: $EVIDENCE_DIR"
echo "  Report Location: $REPORT_DIR/junit.xml"
echo "==================================================="
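Step [3/4] above invokes compute-merkle-root.js. A minimal sketch of the core computation it might perform is shown below: hash every evidence file, then pairwise-hash the results up to a single root, so that a change to any file changes the root. The function names and the odd-leaf handling (duplicating the last hash) are assumptions; the actual script may differ.

// Hypothetical core of compute-merkle-root.js (illustrative sketch)
import { createHash } from 'node:crypto';

function sha256(data: Buffer | string): string {
  return createHash('sha256').update(data).digest('hex');
}

// Fold a list of leaf hashes (one per evidence file) into a single root.
function merkleRoot(leafHashes: string[]): string {
  if (leafHashes.length === 0) throw new Error('no evidence files');
  let level = leafHashes;
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // Duplicate the last hash when the level has an odd count.
      const left = level[i];
      const right = level[i + 1] ?? left;
      next.push(sha256(left + right));
    }
    level = next;
  }
  return level[0];
}

Storing only the root in MERKLE-ROOT.json is enough to detect tampering anywhere in the package: re-hashing the files at audit time must reproduce the same root.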

4. Operational Qualification (OQ)

4.1 OQ Test Categories

| Category | Test Count | Automation | Description |
|----------|------------|------------|-------------|
| RBAC Enforcement | 12 | 100% | Role permissions, SOD, cross-tenant isolation |
| Workflow State Machines | 15 | 100% | Valid/invalid transitions, guard conditions |
| E-Signature | 8 | 95% | Creation, binding, verification, re-auth |
| Encryption/Decryption | 5 | 100% | Field encryption, key rotation, round-trip |
| API Endpoint Validation | 8 | 100% | All critical endpoints, error handling |
| Audit Trail Recording | 7 | 100% | Entry creation, immutability, hash chain |
| Total | 55 | 99% | - |

4.2 RBAC Verification Tests

4.2.1 Role Permission Matrix

Test ID: OQ-RBAC-001
Test Case: Comprehensive Role Permission Validation

// tests/validation/oq/rbac/role-permissions.test.ts

import { describe, test, expect, beforeAll, afterAll } from '@jest/globals';
import { TestUserFactory } from '../../utils/test-data-factory';
import { ApiClient } from '../../utils/api-client';
import { EvidenceCollector } from '../../utils/evidence-collector';
import { ScreenshotManager } from '../../utils/screenshot-manager';

describe('OQ-RBAC-001: Role Permission Matrix Validation', () => {
  const evidence = new EvidenceCollector('OQ-RBAC-001');
  const screenshots = new ScreenshotManager('OQ-RBAC-001');
  const testUsers = new TestUserFactory();

  // Permission matrix from reference: 22-rbac-permissions-matrix.md
  const permissionMatrix = [
    {
      operation: 'Create WO (DRAFT)',
      allowed: ['ORIGINATOR', 'ASSIGNER', 'SYS_OWNER'],
      denied: ['ASSIGNEE', 'QA', 'VENDOR', 'ADMIN', 'AUDITOR'],
    },
    {
      operation: 'Approve (System Owner)',
      allowed: ['SYS_OWNER'],
      denied: ['ORIGINATOR', 'ASSIGNER', 'ASSIGNEE', 'QA', 'VENDOR', 'ADMIN', 'AUDITOR'],
    },
    {
      operation: 'Approve (QA — regulatory only)',
      allowed: ['QA'],
      denied: ['ORIGINATOR', 'ASSIGNER', 'ASSIGNEE', 'SYS_OWNER', 'VENDOR', 'ADMIN', 'AUDITOR'],
    },
    // ... 25+ total permission combinations
  ];

  beforeAll(async () => {
    // Create test users for each role
    await testUsers.createAllRoles();
  });

  permissionMatrix.forEach((permission) => {
    describe(`Operation: ${permission.operation}`, () => {
      permission.allowed.forEach((role) => {
        test(`${role} can perform ${permission.operation}`, async () => {
          const user = testUsers.getUser(role);
          const client = new ApiClient(user.token);

          // Attempt operation
          const response = await client.performOperation(permission.operation);

          // Capture screenshot of successful operation
          await screenshots.capture(`${role}-${permission.operation}-success`, {
            context: { role, operation: permission.operation },
          });

          evidence.captureApiResponse({
            role,
            operation: permission.operation,
            responseStatus: response.status,
            expectedStatus: 'success',
            timestamp: new Date().toISOString(),
          });

          expect(response.status).toBe(200);

          await evidence.recordResult({
            testCaseId: 'OQ-RBAC-001',
            testStep: `${role}.${permission.operation}`,
            expectedResult: `${role} successfully performs ${permission.operation}`,
            actualResult: `HTTP ${response.status}`,
            passFail: 'pass',
          });
        });
      });

      permission.denied.forEach((role) => {
        test(`${role} cannot perform ${permission.operation}`, async () => {
          const user = testUsers.getUser(role);
          const client = new ApiClient(user.token);

          // Attempt operation (expect failure)
          const response = await client.performOperation(permission.operation);

          // Capture screenshot of denied operation
          await screenshots.capture(`${role}-${permission.operation}-denied`, {
            context: { role, operation: permission.operation },
          });

          evidence.captureApiResponse({
            role,
            operation: permission.operation,
            responseStatus: response.status,
            expectedStatus: 'forbidden',
            timestamp: new Date().toISOString(),
          });

          expect(response.status).toBe(403);

          await evidence.recordResult({
            testCaseId: 'OQ-RBAC-001',
            testStep: `${role}.${permission.operation}.denied`,
            expectedResult: `${role} receives 403 Forbidden`,
            actualResult: `HTTP ${response.status}`,
            passFail: 'pass',
          });
        });
      });
    });
  });

  afterAll(async () => {
    await screenshots.upload();
    await evidence.upload();
    await testUsers.cleanup();
  });
});

4.2.2 Cross-Tenant Isolation

Test ID: OQ-RBAC-006
Test Case: Multi-Tenant Data Isolation

// tests/validation/oq/rbac/cross-tenant-isolation.test.ts

import { PrismaClient } from '@prisma/client';
import { TestUserFactory } from '../../utils/test-data-factory';
import { EvidenceCollector } from '../../utils/evidence-collector';

describe('OQ-RBAC-006: Cross-Tenant Isolation', () => {
  const prisma = new PrismaClient();
  const evidence = new EvidenceCollector('OQ-RBAC-006');
  const testUsers = new TestUserFactory();

  test('Tenant A user cannot see Tenant B work orders', async () => {
    // Create test work orders in two tenants
    const tenantA = await testUsers.createTenant('TenantA');
    const tenantB = await testUsers.createTenant('TenantB');

    const woTenantA = await prisma.workOrder.create({
      data: {
        tenantId: tenantA.id,
        summary: 'Tenant A Work Order',
        status: 'DRAFT',
        // ... other fields
      },
    });

    const woTenantB = await prisma.workOrder.create({
      data: {
        tenantId: tenantB.id,
        summary: 'Tenant B Work Order',
        status: 'DRAFT',
        // ... other fields
      },
    });

    // Attempt cross-tenant access as a Tenant A user
    const userA = testUsers.getUser('ORIGINATOR', tenantA.id);

    // Set RLS context for Tenant A. set_config() is used rather than
    // SET LOCAL because SET does not accept bind parameters and SET LOCAL
    // only takes effect inside a transaction; the setting must also apply
    // to the same pooled connection that runs the queries below.
    await prisma.$queryRaw`
      SELECT set_config('app.tenant_id', ${tenantA.id}, false);
    `;

    // Query all work orders (should only return Tenant A)
    const visibleWOs = await prisma.workOrder.findMany({
      where: { tenantId: tenantA.id },
    });

    evidence.captureDatabaseQuery({
      query: 'SELECT * FROM work_orders WHERE tenant_id = $1',
      params: [tenantA.id],
      resultCount: visibleWOs.length,
      expectedCount: 1,
      timestamp: new Date().toISOString(),
    });

    expect(visibleWOs.length).toBe(1);
    expect(visibleWOs[0].id).toBe(woTenantA.id);
    expect(visibleWOs.find((wo) => wo.id === woTenantB.id)).toBeUndefined();

    // Attempt direct access to Tenant B work order by ID
    const directAccess = await prisma.workOrder.findUnique({
      where: { id: woTenantB.id },
    });

    expect(directAccess).toBeNull(); // RLS policy blocks access

    await evidence.recordResult({
      testCaseId: 'OQ-RBAC-006',
      testStep: '6.1',
      expectedResult: 'Tenant A user sees only Tenant A work orders',
      actualResult: `${visibleWOs.length} work order(s) visible, Tenant B WO blocked`,
      passFail: 'pass',
    });
  });

  afterAll(async () => {
    await prisma.$disconnect();
    await evidence.upload();
  });
});

4.3 Workflow State Machine Tests

4.3.1 Valid Transition Sequence

Test ID: OQ-SM-001
Test Case: Happy Path DRAFT → COMPLETED

// tests/validation/oq/workflows/state-transitions.test.ts

import { WorkflowOrchestrator } from '../../utils/workflow-orchestrator';
import { EvidenceCollector } from '../../utils/evidence-collector';
import { ScreenshotManager } from '../../utils/screenshot-manager';

describe('OQ-SM-001: Valid State Transition Sequence', () => {
  const workflow = new WorkflowOrchestrator();
  const evidence = new EvidenceCollector('OQ-SM-001');
  const screenshots = new ScreenshotManager('OQ-SM-001');

  test('Complete workflow: DRAFT → PLANNED → SCHEDULED → IN_PROGRESS → PENDING_REVIEW → APPROVED → COMPLETED', async () => {
    // Step 1: Create work order in DRAFT
    const wo = await workflow.createWorkOrder({
      summary: 'OQ-SM-001 Test Work Order',
      status: 'DRAFT',
    });

    await screenshots.capture('step-1-draft-created', {
      context: { workOrderId: wo.id, status: 'DRAFT' },
    });

    expect(wo.status).toBe('DRAFT');

    // Step 2: Transition DRAFT → PLANNED
    await workflow.transition(wo.id, 'DRAFT', 'PLANNED', {
      jobPlanId: 'test-job-plan',
      scheduleId: 'test-schedule',
    });

    await screenshots.capture('step-2-planned', {
      context: { workOrderId: wo.id, status: 'PLANNED' },
    });

    const woPlanned = await workflow.getWorkOrder(wo.id);
    expect(woPlanned.status).toBe('PLANNED');

    // Step 3: Transition PLANNED → SCHEDULED
    await workflow.transition(wo.id, 'PLANNED', 'SCHEDULED', {
      assigneeId: 'test-assignee',
    });

    await screenshots.capture('step-3-scheduled', {
      context: { workOrderId: wo.id, status: 'SCHEDULED' },
    });

    const woScheduled = await workflow.getWorkOrder(wo.id);
    expect(woScheduled.status).toBe('SCHEDULED');

    // Step 4: Transition SCHEDULED → IN_PROGRESS (with assignee e-signature)
    await workflow.transition(wo.id, 'SCHEDULED', 'IN_PROGRESS', {
      signatureRequired: true,
      signerId: 'test-assignee',
      meaning: 'I acknowledge responsibility for this task',
    });

    await screenshots.capture('step-4-in-progress', {
      context: { workOrderId: wo.id, status: 'IN_PROGRESS' },
    });

    const woInProgress = await workflow.getWorkOrder(wo.id);
    expect(woInProgress.status).toBe('IN_PROGRESS');

    // Step 5: Transition IN_PROGRESS → PENDING_REVIEW
    await workflow.transition(wo.id, 'IN_PROGRESS', 'PENDING_REVIEW', {
      executionNotes: 'Work completed successfully',
    });

    await screenshots.capture('step-5-pending-review', {
      context: { workOrderId: wo.id, status: 'PENDING_REVIEW' },
    });

    const woPendingReview = await workflow.getWorkOrder(wo.id);
    expect(woPendingReview.status).toBe('PENDING_REVIEW');

    // Step 6: Transition PENDING_REVIEW → APPROVED (System Owner + QA approval)
    await workflow.transition(wo.id, 'PENDING_REVIEW', 'APPROVED', {
      approvals: [
        {
          role: 'SYSTEM_OWNER',
          approverId: 'test-system-owner',
          signatureRequired: true,
          meaning: 'I approve this change to the validated system',
        },
        {
          role: 'QA',
          approverId: 'test-qa',
          signatureRequired: true,
          meaning: 'QA approval for regulatory work order',
        },
      ],
    });

    await screenshots.capture('step-6-approved', {
      context: { workOrderId: wo.id, status: 'APPROVED' },
    });

    const woApproved = await workflow.getWorkOrder(wo.id);
    expect(woApproved.status).toBe('APPROVED');

    // Step 7: Transition APPROVED → COMPLETED
    await workflow.transition(wo.id, 'APPROVED', 'COMPLETED', {
      postApprovalTasksCompleted: true,
    });

    await screenshots.capture('step-7-completed', {
      context: { workOrderId: wo.id, status: 'COMPLETED' },
    });

    const woCompleted = await workflow.getWorkOrder(wo.id);
    expect(woCompleted.status).toBe('COMPLETED');

    // Verify audit trail completeness
    const auditTrail = await workflow.getAuditTrail(wo.id);
    expect(auditTrail.length).toBeGreaterThanOrEqual(7); // At least one entry per transition

    await evidence.recordResult({
      testCaseId: 'OQ-SM-001',
      testStep: 'full-workflow',
      expectedResult: 'All transitions succeed, audit trail complete',
      actualResult: `Work order completed successfully with ${auditTrail.length} audit entries`,
      passFail: 'pass',
    });
  });

  afterAll(async () => {
    await screenshots.upload();
    await evidence.upload();
  });
});

4.3.2 Invalid Transition Rejection

Test ID: OQ-SM-002
Test Case: Invalid State Transition Detection

// tests/validation/oq/workflows/invalid-transitions.test.ts

import { WorkflowOrchestrator } from '../../utils/workflow-orchestrator';
import { EvidenceCollector } from '../../utils/evidence-collector';

describe('OQ-SM-002: Invalid Transition Rejection', () => {
  const workflow = new WorkflowOrchestrator();
  const evidence = new EvidenceCollector('OQ-SM-002');

  test('Attempt DRAFT → COMPLETED (skipping intermediate states)', async () => {
    const wo = await workflow.createWorkOrder({
      summary: 'OQ-SM-002 Invalid Transition Test',
      status: 'DRAFT',
    });

    // Attempt invalid transition
    let errorThrown = false;
    let errorMessage = '';

    try {
      await workflow.transition(wo.id, 'DRAFT', 'COMPLETED', {});
    } catch (error: any) {
      errorThrown = true;
      errorMessage = error.message;
    }

    evidence.captureApiResponse({
      operation: 'transition DRAFT → COMPLETED',
      responseStatus: 422,
      errorMessage,
      timestamp: new Date().toISOString(),
    });

    expect(errorThrown).toBe(true);
    expect(errorMessage).toContain('Invalid state transition');
    expect(errorMessage).toContain('DRAFT → COMPLETED');
    expect(errorMessage).toContain('Valid transitions from DRAFT: [PLANNED, CANCELLED]');

    await evidence.recordResult({
      testCaseId: 'OQ-SM-002',
      testStep: '2.1',
      expectedResult: '422 error with valid transitions listed',
      actualResult: errorMessage,
      passFail: 'pass',
    });
  });

  afterAll(async () => {
    await evidence.upload();
  });
});
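The guard that OQ-SM-002 exercises can be sketched as a simple transition map plus an assertion. The map below mirrors the states exercised in OQ-SM-001 and the error text asserted in OQ-SM-002, but the full map and the rejection-path transitions are assumptions about the production orchestrator, not its actual implementation.

// Illustrative sketch of the orchestrator's transition guard (assumed map)
const VALID_TRANSITIONS: Record<string, string[]> = {
  DRAFT: ['PLANNED', 'CANCELLED'],
  PLANNED: ['SCHEDULED', 'CANCELLED'],
  SCHEDULED: ['IN_PROGRESS', 'CANCELLED'],
  IN_PROGRESS: ['PENDING_REVIEW'],
  PENDING_REVIEW: ['APPROVED', 'IN_PROGRESS'], // rejection returns to execution
  APPROVED: ['COMPLETED'],
  COMPLETED: [],
  CANCELLED: [],
};

function assertValidTransition(from: string, to: string): void {
  const allowed = VALID_TRANSITIONS[from] ?? [];
  if (!allowed.includes(to)) {
    throw new Error(
      `Invalid state transition: ${from} → ${to}. ` +
        `Valid transitions from ${from}: [${allowed.join(', ')}]`,
    );
  }
}

Listing the valid targets in the error message is what allows OQ-SM-002 to assert on the exact remediation text rather than a generic failure.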

4.4 E-Signature Verification

4.4.1 E-Signature Creation and Binding

Test ID: OQ-SIG-001
Test Case: Valid E-Signature with Re-Authentication

// tests/validation/oq/signatures/e-signature-creation.test.ts

import { SignatureService } from '../../utils/signature-service';
import { EvidenceCollector } from '../../utils/evidence-collector';
import { ScreenshotManager } from '../../utils/screenshot-manager';

describe('OQ-SIG-001: E-Signature Creation with Re-Authentication', () => {
  const sigService = new SignatureService();
  const evidence = new EvidenceCollector('OQ-SIG-001');
  const screenshots = new ScreenshotManager('OQ-SIG-001');

  test('Create e-signature with all required components', async () => {
    // Simulate user re-authentication
    const authToken = await sigService.reAuthenticate({
      userId: 'test-user',
      password: 'test-password',
    });

    await screenshots.capture('step-1-reauth-prompt', {
      context: { userId: 'test-user' },
    });

    expect(authToken).toBeTruthy();

    // Create e-signature
    const signature = await sigService.createSignature({
      signerId: 'test-user',
      entityType: 'WORK_ORDER',
      entityId: 'test-wo-001',
      meaning: 'I approve this change to the validated system',
      authToken,
    });

    await screenshots.capture('step-2-signature-created', {
      context: { signatureId: signature.id },
    });

    evidence.captureSignature({
      signatureId: signature.id,
      signerId: signature.signerId,
      meaning: signature.meaning,
      signedAt: signature.signedAt,
      algorithm: signature.algorithm,
      timestamp: new Date().toISOString(),
    });

    // Verify signature components per 21 CFR Part 11 §11.50
    expect(signature.signerId).toBe('test-user');
    expect(signature.signedAt).toBeTruthy();
    expect(signature.meaning).toBe('I approve this change to the validated system');
    expect(signature.algorithm).toBe('ECDSA-SHA256');

    // Verify signature is bound to specific entity (Part 11 §11.70)
    expect(signature.entityType).toBe('WORK_ORDER');
    expect(signature.entityId).toBe('test-wo-001');

    await evidence.recordResult({
      testCaseId: 'OQ-SIG-001',
      testStep: '3.1',
      expectedResult: 'E-signature created with signer ID, timestamp, meaning, and entity binding',
      actualResult: `Signature ${signature.id} created successfully`,
      passFail: 'pass',
    });
  });

  test('Signature without re-authentication is rejected', async () => {
    let errorThrown = false;
    let errorMessage = '';

    try {
      await sigService.createSignature({
        signerId: 'test-user',
        entityType: 'WORK_ORDER',
        entityId: 'test-wo-002',
        meaning: 'I approve this change',
        authToken: 'INVALID_TOKEN',
      });
    } catch (error: any) {
      errorThrown = true;
      errorMessage = error.message;
    }

    await screenshots.capture('step-3-reauth-required', {
      context: { error: errorMessage },
    });

    evidence.captureApiResponse({
      operation: 'create signature without re-auth',
      responseStatus: 401,
      errorMessage,
      timestamp: new Date().toISOString(),
    });

    expect(errorThrown).toBe(true);
    expect(errorMessage).toContain('re-authentication required');

    await evidence.recordResult({
      testCaseId: 'OQ-SIG-001',
      testStep: '3.2',
      expectedResult: '401 Unauthorized — re-authentication required',
      actualResult: errorMessage,
      passFail: 'pass',
    });
  });

  afterAll(async () => {
    await screenshots.upload();
    await evidence.upload();
  });
});

4.5 Audit Trail Integrity

4.5.1 Hash Chain Verification

Test ID: OQ-AUDIT-006
Test Case: Audit Trail Hash Chain Integrity

// tests/validation/oq/audit-trail/hash-chain.test.ts

import { AuditService } from '../../utils/audit-service';
import { HashVerifier } from '../../utils/hash-verifier';
import { EvidenceCollector } from '../../utils/evidence-collector';

describe('OQ-AUDIT-006: Hash Chain Integrity', () => {
const auditService = new AuditService();
const hashVerifier = new HashVerifier();
const evidence = new EvidenceCollector('OQ-AUDIT-006');

test('Verify hash chain for 100 consecutive audit entries', async () => {
// Fetch the audit trail for a pre-seeded work order with 100+ state transitions
const workOrderId = 'test-wo-hash-chain';
const auditEntries = await auditService.getAuditTrail(workOrderId);

expect(auditEntries.length).toBeGreaterThanOrEqual(100);

// Verify each entry's hash includes previous entry's hash
let chainValid = true;
let firstInvalidEntry = null;

for (let i = 1; i < auditEntries.length; i++) {
const currentEntry = auditEntries[i];
const previousEntry = auditEntries[i - 1];

// Reconstruct hash: HMAC-SHA256(current_data + previous_hash)
const reconstructedHash = hashVerifier.computeChainHash(
currentEntry,
previousEntry.hash
);

if (reconstructedHash !== currentEntry.hash) {
chainValid = false;
firstInvalidEntry = i;
break;
}
}

evidence.captureHashChain({
workOrderId,
totalEntries: auditEntries.length,
chainValid,
firstInvalidEntry,
timestamp: new Date().toISOString(),
});

expect(chainValid).toBe(true);
expect(firstInvalidEntry).toBeNull();

await evidence.recordResult({
testCaseId: 'OQ-AUDIT-006',
testStep: '6.1',
expectedResult: 'Hash chain valid for all entries',
actualResult: `Verified ${auditEntries.length} entries, chain intact`,
passFail: 'pass',
});
});

test('Detect tampered audit entry', async () => {
// Create test audit entry
const originalEntry = await auditService.createEntry({
entityType: 'WORK_ORDER',
entityId: 'test-wo-tamper',
action: 'STATUS_CHANGE',
performedBy: 'test-user',
previousValue: { status: 'DRAFT' },
newValue: { status: 'PLANNED' },
});

// Attempt to modify entry (should fail due to database trigger)
let tamperAttemptFailed = false;
let errorMessage = '';

try {
await auditService.updateEntry(originalEntry.id, {
newValue: { status: 'COMPLETED' }, // Attempt tampering
});
} catch (error: any) {
tamperAttemptFailed = true;
errorMessage = error.message;
}

evidence.captureTamperAttempt({
entryId: originalEntry.id,
tamperAttemptFailed,
errorMessage,
timestamp: new Date().toISOString(),
});

expect(tamperAttemptFailed).toBe(true);
expect(errorMessage).toContain('Audit trail records are immutable');

await evidence.recordResult({
testCaseId: 'OQ-AUDIT-006',
testStep: '6.2',
expectedResult: 'Tamper attempt blocked by database trigger',
actualResult: errorMessage,
passFail: 'pass',
});
});

afterAll(async () => {
await evidence.upload();
});
});

5. Performance Qualification (PQ)

5.1 PQ Test Categories

| Category | Test Count | Automation | Description |
| --- | --- | --- | --- |
| Concurrent User Load | 3 | 100% | 50, 100, 500 concurrent users |
| Document Operations | 3 | 100% | Upload, download, search under load |
| Workflow Throughput | 3 | 100% | State transitions per second |
| Database Performance | 3 | 100% | Query response times at scale |
| Search Performance | 2 | 100% | Full-text search on 100K+ documents |
| Report Generation | 2 | 100% | Large report generation time |
| **Total** | **16** | **100%** | - |

5.2 Concurrent User Load Testing

5.2.1 k6 Load Test Script

Test ID: PQ-PERF-001
Test Case: 100 Concurrent Users Sustained Load

// tests/validation/pq/load-testing/concurrent-users.k6.js

import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate, Trend } from 'k6/metrics';

// Custom metrics
const errorRate = new Rate('errors');
const apiResponseTime = new Trend('api_response_time');

export const options = {
stages: [
{ duration: '2m', target: 100 }, // Ramp up to 100 users over 2 minutes
{ duration: '30m', target: 100 }, // Sustain 100 users for 30 minutes
{ duration: '2m', target: 0 }, // Ramp down
],
thresholds: {
http_req_duration: ['p(95)<500'], // 95th percentile < 500ms
errors: ['rate<0.01'], // Error rate < 1%
},
};

const BASE_URL = __ENV.API_URL || 'http://localhost:3000';

export default function () {
// Scenario 1: User authentication
const loginRes = http.post(`${BASE_URL}/api/auth/login`, {
email: `testuser${__VU}@example.com`,
password: 'ValidPassword123!',
});

check(loginRes, {
'login successful': (r) => r.status === 200,
'received auth token': (r) => r.json('accessToken') !== undefined,
});

errorRate.add(loginRes.status !== 200);
apiResponseTime.add(loginRes.timings.duration);

if (loginRes.status !== 200) {
return; // Exit if login failed
}

const authToken = loginRes.json('accessToken');
const headers = {
Authorization: `Bearer ${authToken}`,
'Content-Type': 'application/json',
};

// Scenario 2: List work orders
const listWoRes = http.get(`${BASE_URL}/api/work-orders`, { headers });

check(listWoRes, {
'list work orders successful': (r) => r.status === 200,
'received work orders array': (r) => Array.isArray(r.json('data')),
});

errorRate.add(listWoRes.status !== 200);
apiResponseTime.add(listWoRes.timings.duration);

sleep(1);

// Scenario 3: Create work order
const createWoRes = http.post(
`${BASE_URL}/api/work-orders`,
JSON.stringify({
summary: `PQ Test Work Order ${__VU}-${__ITER}`,
detail: 'Performance qualification load test',
sourceType: 'AUTOMATION',
}),
{ headers }
);

check(createWoRes, {
'create work order successful': (r) => r.status === 201,
'received work order ID': (r) => r.json('id') !== undefined,
});

errorRate.add(createWoRes.status !== 201);
apiResponseTime.add(createWoRes.timings.duration);

sleep(2);

// Scenario 4: Fetch audit trail
if (createWoRes.status === 201) {
const workOrderId = createWoRes.json('id');
const auditRes = http.get(`${BASE_URL}/api/audit-trail?entityId=${workOrderId}`, {
headers,
});

check(auditRes, {
'fetch audit trail successful': (r) => r.status === 200,
'audit entries returned': (r) => r.json('data').length > 0,
});

errorRate.add(auditRes.status !== 200);
apiResponseTime.add(auditRes.timings.duration);
}

sleep(3);
}

export function handleSummary(data) {
return {
'/tmp/pq-perf-001-summary.json': JSON.stringify(data, null, 2),
'/tmp/pq-perf-001-summary.html': htmlReport(data),
};
}

function htmlReport(data) {
// Generate HTML report for evidence package
return `
<!DOCTYPE html>
<html>
<head>
<title>PQ-PERF-001: 100 Concurrent Users Load Test</title>
<style>
body { font-family: Arial, sans-serif; margin: 20px; }
table { border-collapse: collapse; width: 100%; }
th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
th { background-color: #4CAF50; color: white; }
.pass { color: green; font-weight: bold; }
.fail { color: red; font-weight: bold; }
</style>
</head>
<body>
<h1>PQ-PERF-001: 100 Concurrent Users Load Test</h1>
<h2>Summary</h2>
<table>
<tr><th>Metric</th><th>Value</th><th>Threshold</th><th>Status</th></tr>
<tr>
<td>Total Requests</td>
<td>${data.metrics.http_reqs.values.count}</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Failed Requests</td>
<td>${data.metrics.http_req_failed.values.passes}</td>
<td>&lt; 1%</td>
<td class="${data.metrics.errors.values.rate < 0.01 ? 'pass' : 'fail'}">
${data.metrics.errors.values.rate < 0.01 ? 'PASS' : 'FAIL'}
</td>
</tr>
<tr>
<td>P95 Response Time</td>
<td>${data.metrics.http_req_duration.values['p(95)']} ms</td>
<td>&lt; 500 ms</td>
<td class="${data.metrics.http_req_duration.values['p(95)'] < 500 ? 'pass' : 'fail'}">
${data.metrics.http_req_duration.values['p(95)'] < 500 ? 'PASS' : 'FAIL'}
</td>
</tr>
</table>
<h2>Test Execution</h2>
<p><strong>Test ID:</strong> PQ-PERF-001</p>
<p><strong>Execution Date:</strong> ${new Date().toISOString()}</p>
<p><strong>Duration:</strong> ${data.state.testRunDurationMs / 1000} seconds</p>
</body>
</html>
`;
}
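The ramp/sustain/ramp-down stages above are fixed at 100 users; the 50- and 500-user tests in the PQ category table could reuse the same script by deriving the stages from a parameter. A minimal sketch (the `stagesFor` helper name is hypothetical, mirroring the `options.stages` shape above):

```javascript
// Hypothetical helper: derive the ramp/sustain/ramp-down stages for a
// given user tier, so one k6 script covers the 50/100/500-user tests.
function stagesFor(targetUsers) {
  return [
    { duration: '2m', target: targetUsers },  // ramp up
    { duration: '30m', target: targetUsers }, // sustain
    { duration: '2m', target: 0 },            // ramp down
  ];
}

// e.g. export const options = { stages: stagesFor(Number(__ENV.TARGET_USERS) || 100), ... };
console.log(stagesFor(500)[1].target); // 500
```

The tier would then be selected at invocation time, e.g. `k6 run -e TARGET_USERS=500 concurrent-users.k6.js`.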

5.2.2 PQ Execution Wrapper

// tests/validation/pq/load-testing/concurrent-users.test.ts

import { execSync } from 'child_process';
import { EvidenceCollector } from '../../utils/evidence-collector';

describe('PQ-PERF-001: Concurrent User Load Testing', () => {
const evidence = new EvidenceCollector('PQ-PERF-001');

test('100 concurrent users sustained for 30 minutes', async () => {
// Run k6 load test
const k6Output = execSync(
'k6 run tests/validation/pq/load-testing/concurrent-users.k6.js',
{
encoding: 'utf-8',
env: {
...process.env,
API_URL: process.env.VALIDATION_API_URL,
},
}
);

evidence.captureCommandOutput({
command: 'k6 run concurrent-users.k6.js',
output: k6Output,
timestamp: new Date().toISOString(),
});

// Parse k6 summary JSON
const summaryPath = '/tmp/pq-perf-001-summary.json';
const summary = JSON.parse(require('fs').readFileSync(summaryPath, 'utf-8'));

const p95ResponseTime = summary.metrics.http_req_duration.values['p(95)'];
const errorRate = summary.metrics.errors.values.rate;

evidence.capturePerformanceMetrics({
testCaseId: 'PQ-PERF-001',
concurrentUsers: 100,
duration: '30 minutes',
p95ResponseTime,
errorRate,
totalRequests: summary.metrics.http_reqs.values.count,
timestamp: new Date().toISOString(),
});

// Verify thresholds
expect(p95ResponseTime).toBeLessThan(500);
expect(errorRate).toBeLessThan(0.01);

await evidence.recordResult({
testCaseId: 'PQ-PERF-001',
testStep: '1.1',
expectedResult: 'P95 < 500ms, error rate < 1%',
actualResult: `P95: ${p95ResponseTime}ms, error rate: ${(errorRate * 100).toFixed(2)}%`,
passFail: 'pass',
});
}, 2400000); // 40-minute timeout

afterAll(async () => {
await evidence.upload();
});
});

5.3 Database Performance at Scale

5.3.1 Query Performance with 500K Work Orders

Test ID: PQ-PERF-004
Test Case: Query Performance at Data Volume

// tests/validation/pq/database-performance/query-performance.test.ts

import { PrismaClient } from '@prisma/client';
import { TestDataFactory } from '../../utils/test-data-factory';
import { EvidenceCollector } from '../../utils/evidence-collector';

describe('PQ-PERF-004: Query Performance at Data Volume', () => {
const prisma = new PrismaClient();
const dataFactory = new TestDataFactory();
const evidence = new EvidenceCollector('PQ-PERF-004');

beforeAll(async () => {
// Seed database with 500K work orders
await dataFactory.seedWorkOrders(500000);
}, 600000); // 10-minute timeout for seeding

test('Query P95 < 100ms with 500K work orders', async () => {
const queryTimings = [];

// Execute 1000 random queries
for (let i = 0; i < 1000; i++) {
const startTime = Date.now();

await prisma.workOrder.findMany({
where: { status: 'IN_PROGRESS' },
take: 50,
orderBy: { createdAt: 'desc' },
});

const duration = Date.now() - startTime;
queryTimings.push(duration);
}

// Calculate P95
queryTimings.sort((a, b) => a - b);
const p95Index = Math.floor(queryTimings.length * 0.95);
const p95 = queryTimings[p95Index];

evidence.capturePerformanceMetrics({
testCaseId: 'PQ-PERF-004',
metric: 'query-p95',
totalRecords: 500000,
queryCount: 1000,
p95ResponseTime: p95,
timestamp: new Date().toISOString(),
});

expect(p95).toBeLessThan(100);

await evidence.recordResult({
testCaseId: 'PQ-PERF-004',
testStep: '4.1',
expectedResult: 'Query P95 < 100ms',
actualResult: `P95: ${p95}ms`,
passFail: 'pass',
});
});

afterAll(async () => {
await prisma.$disconnect();
await evidence.upload();
});
});

6. Screenshot Automation

6.1 Screenshot Manager Implementation

// tests/validation/utils/screenshot-manager.ts

import { chromium, Browser, Page } from 'playwright';
import * as path from 'path';
import * as fs from 'fs';

export interface ScreenshotOptions {
context?: Record<string, any>;
fullPage?: boolean;
annotate?: boolean;
}

export class ScreenshotManager {
private testCaseId: string;
private browser: Browser | null = null;
private page: Page | null = null;
private screenshots: string[] = [];
private evidenceDir: string;

constructor(testCaseId: string) {
this.testCaseId = testCaseId;
this.evidenceDir = path.join(
process.env.EVIDENCE_OUTPUT_DIR || './evidence',
testCaseId,
'screenshots'
);
fs.mkdirSync(this.evidenceDir, { recursive: true });
}

async initialize(): Promise<void> {
this.browser = await chromium.launch({
headless: true,
args: ['--no-sandbox', '--disable-dev-shm-usage'],
});

this.page = await this.browser.newPage({
viewport: { width: 1920, height: 1080 },
});
}

async capture(
stepName: string,
options: ScreenshotOptions = {}
): Promise<string> {
if (!this.page) {
await this.initialize();
}

const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const filename = `${this.testCaseId}_${stepName}_${timestamp}.png`;
const filepath = path.join(this.evidenceDir, filename);

// Capture screenshot
await this.page!.screenshot({
path: filepath,
fullPage: options.fullPage ?? true,
});

// Optionally annotate with timestamp and context
if (options.annotate) {
await this.annotateScreenshot(filepath, {
timestamp: new Date().toISOString(),
testCaseId: this.testCaseId,
stepName,
context: options.context,
});
}

this.screenshots.push(filepath);

// Create metadata sidecar
const metadataPath = filepath.replace('.png', '.metadata.json');
fs.writeFileSync(
metadataPath,
JSON.stringify(
{
testCaseId: this.testCaseId,
stepName,
timestamp: new Date().toISOString(),
filepath,
url: this.page!.url(),
context: options.context,
},
null,
2
)
);

return filepath;
}

async captureElement(
selector: string,
stepName: string,
options: ScreenshotOptions = {}
): Promise<string> {
if (!this.page) {
await this.initialize();
}

const element = this.page!.locator(selector);
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const filename = `${this.testCaseId}_${stepName}_element_${timestamp}.png`;
const filepath = path.join(this.evidenceDir, filename);

await element.screenshot({ path: filepath });
this.screenshots.push(filepath);

return filepath;
}

private async annotateScreenshot(
filepath: string,
metadata: Record<string, any>
): Promise<void> {
// Use sharp or canvas to add timestamp annotation
// (Implementation depends on image processing library)
// For now, metadata is stored in sidecar JSON
}

async navigateTo(url: string): Promise<void> {
if (!this.page) {
await this.initialize();
}
await this.page!.goto(url, { waitUntil: 'networkidle' });
}

async close(): Promise<void> {
if (this.browser) {
await this.browser.close();
this.browser = null;
this.page = null;
}
}

async upload(): Promise<void> {
// Upload all screenshots to GCS
const { execSync } = require('child_process');

this.screenshots.forEach((filepath) => {
const gcsPath = `gs://coditect-bio-qms-validation-evidence/${this.testCaseId}/screenshots/${path.basename(filepath)}`;

execSync(`gsutil cp "${filepath}" "${gcsPath}"`, { stdio: 'inherit' });

// Upload metadata sidecar
const metadataPath = filepath.replace('.png', '.metadata.json');
if (fs.existsSync(metadataPath)) {
const gcsMetadataPath = gcsPath.replace('.png', '.metadata.json');
execSync(`gsutil cp "${metadataPath}" "${gcsMetadataPath}"`, {
stdio: 'inherit',
});
}
});

console.log(`✓ Uploaded ${this.screenshots.length} screenshots to GCS`);
}
}

6.2 Screenshot Usage Example

// Example usage in OQ test

import { ScreenshotManager } from '../../utils/screenshot-manager';

describe('OQ-RBAC-001: Role Permission Matrix', () => {
const screenshots = new ScreenshotManager('OQ-RBAC-001');

beforeAll(async () => {
await screenshots.initialize();
await screenshots.navigateTo('https://validation.bio-qms.local/login');
});

test('ORIGINATOR can create work order', async () => {
// Login as ORIGINATOR
await screenshots.navigateTo('https://validation.bio-qms.local/work-orders/new');

// Capture screenshot of create form
await screenshots.capture('originator-create-form', {
context: { role: 'ORIGINATOR', operation: 'create_work_order' },
annotate: true,
});

// Fill form and submit
// ...

// Capture screenshot of success message
await screenshots.capture('originator-create-success', {
context: { workOrderId: 'WO-12345' },
annotate: true,
});
});

afterAll(async () => {
await screenshots.close();
await screenshots.upload();
});
});

7. Data Integrity Verification

7.1 Hash Chain Verifier

// tests/validation/utils/hash-verifier.ts

import * as crypto from 'crypto';

export interface AuditEntry {
id: string;
entityType: string;
entityId: string;
action: string;
performedBy: string;
performedAt: string;
previousValue: any;
newValue: any;
hash: string;
}

export class HashVerifier {
/**
* Compute HMAC-SHA256 hash for audit entry chaining
*/
computeChainHash(currentEntry: Omit<AuditEntry, 'hash'>, previousHash: string): string {
const data = JSON.stringify({
id: currentEntry.id,
entityType: currentEntry.entityType,
entityId: currentEntry.entityId,
action: currentEntry.action,
performedBy: currentEntry.performedBy,
performedAt: currentEntry.performedAt,
previousValue: currentEntry.previousValue,
newValue: currentEntry.newValue,
previousHash,
});

return crypto.createHmac('sha256', process.env.AUDIT_CHAIN_SECRET!).update(data).digest('hex');
}

/**
* Verify entire audit trail hash chain
*/
verifyAuditChain(entries: AuditEntry[]): { valid: boolean; firstInvalidIndex: number | null } {
if (entries.length === 0) {
return { valid: true, firstInvalidIndex: null };
}

for (let i = 1; i < entries.length; i++) {
const currentEntry = entries[i];
const previousEntry = entries[i - 1];

const expectedHash = this.computeChainHash(
{
id: currentEntry.id,
entityType: currentEntry.entityType,
entityId: currentEntry.entityId,
action: currentEntry.action,
performedBy: currentEntry.performedBy,
performedAt: currentEntry.performedAt,
previousValue: currentEntry.previousValue,
newValue: currentEntry.newValue,
},
previousEntry.hash
);

if (expectedHash !== currentEntry.hash) {
return { valid: false, firstInvalidIndex: i };
}
}

return { valid: true, firstInvalidIndex: null };
}

/**
* Verify record immutability (detect tampering)
*/
async verifyRecordImmutability(
recordId: string,
originalHash: string,
currentHash: string
): Promise<boolean> {
return originalHash === currentHash;
}

/**
* Generate Merkle tree root for evidence package
*/
computeMerkleRoot(fileHashes: string[]): string {
if (fileHashes.length === 0) {
return '';
}

if (fileHashes.length === 1) {
return fileHashes[0];
}

let currentLevel = fileHashes;

while (currentLevel.length > 1) {
const nextLevel: string[] = [];

for (let i = 0; i < currentLevel.length; i += 2) {
const left = currentLevel[i];
const right = i + 1 < currentLevel.length ? currentLevel[i + 1] : left;

const combined = crypto.createHash('sha256').update(left + right).digest('hex');

nextLevel.push(combined);
}

currentLevel = nextLevel;
}

return currentLevel[0];
}
}
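As a standalone illustration of the chaining scheme, the sketch below re-implements the HMAC chain over three sample entries, verifies it, and shows that tampering is detected. The `SECRET` constant, simplified `Entry` shape, and all-zeros genesis anchor are illustrative only; the production verifier keys the HMAC with `AUDIT_CHAIN_SECRET` and hashes the full `AuditEntry` payload.

```typescript
import * as crypto from 'crypto';

// Illustrative secret and genesis anchor (assumptions, not production values).
const SECRET = 'demo-secret';
const GENESIS = '0'.repeat(64);

interface Entry {
  id: string;
  data: string;
  hash: string;
}

// Mirrors HashVerifier.computeChainHash: HMAC-SHA256 over the entry payload
// plus the previous entry's hash.
function chainHash(payload: { id: string; data: string }, previousHash: string): string {
  return crypto
    .createHmac('sha256', SECRET)
    .update(JSON.stringify({ ...payload, previousHash }))
    .digest('hex');
}

// Build a three-entry chain, each hash anchored to its predecessor.
const entries: Entry[] = [];
let prev = GENESIS;
for (const [id, data] of [
  ['1', 'DRAFT -> PLANNED'],
  ['2', 'PLANNED -> IN_PROGRESS'],
  ['3', 'IN_PROGRESS -> COMPLETED'],
]) {
  const hash = chainHash({ id, data }, prev);
  entries.push({ id, data, hash });
  prev = hash;
}

// Recompute every hash from its predecessor; return the first invalid index.
function verify(chain: Entry[]): number | null {
  let previousHash = GENESIS;
  for (let i = 0; i < chain.length; i++) {
    if (chainHash({ id: chain[i].id, data: chain[i].data }, previousHash) !== chain[i].hash) {
      return i;
    }
    previousHash = chain[i].hash;
  }
  return null;
}

console.log(verify(entries)); // null (chain intact)
entries[1].data = 'PLANNED -> CANCELLED'; // simulate tampering
console.log(verify(entries)); // 1 (first invalid index)
```

Because each hash covers the previous hash, editing any entry invalidates every subsequent link, which is what OQ-AUDIT-006 relies on.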

8. Regression Suite Execution

8.1 Regression Test Strategy

Execution Triggers:

  • Every deployment to staging
  • Every deployment to production
  • Weekly scheduled run (Sunday 00:00 UTC)
  • On-demand via CI/CD pipeline

Scope:

  • Full IQ suite (15 tests)
  • Full OQ suite (55 tests)
  • Targeted PQ suite (5 critical performance tests)

Pass Criteria:

  • 100% IQ pass rate (zero tolerance for infrastructure failures)
  • 98% OQ pass rate (max 1 flaky test allowed)
  • 100% PQ pass rate within ±10% variance
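A minimal sketch of how these criteria could be evaluated programmatically (the `SuiteResult` shape and `regressionGate` name are hypothetical; real counts would come from the parsed JUnit XML, and the ±10% PQ variance check is omitted for brevity):

```typescript
// Hypothetical gate check implementing the pass criteria above.
interface SuiteResult {
  total: number;
  passed: number;
}

function regressionGate(iq: SuiteResult, oq: SuiteResult, pq: SuiteResult): boolean {
  const rate = (s: SuiteResult) => s.passed / s.total;
  const iqOk = rate(iq) === 1.0; // 100% IQ: zero tolerance
  const oqOk = rate(oq) >= 0.98; // >= 98% OQ: at most one flaky failure in 55
  const pqOk = rate(pq) === 1.0; // 100% PQ within performance thresholds
  return iqOk && oqOk && pqOk;
}

console.log(regressionGate({ total: 15, passed: 15 }, { total: 55, passed: 54 }, { total: 5, passed: 5 })); // true
console.log(regressionGate({ total: 15, passed: 14 }, { total: 55, passed: 55 }, { total: 5, passed: 5 })); // false
```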

8.2 Regression Execution Script

#!/bin/bash
# scripts/validation/run-regression.sh
# Full validation regression suite

set -euo pipefail

EXECUTION_DATE=$(date +%Y-%m-%d)
EVIDENCE_DIR="./evidence/regression-${EXECUTION_DATE}"
REPORT_DIR="./reports/regression"

echo "=============================================="
echo " CODITECT BIO-QMS Regression Suite "
echo " Execution Date: ${EXECUTION_DATE} "
echo "=============================================="

mkdir -p "$EVIDENCE_DIR"
mkdir -p "$REPORT_DIR"

# Each suite uses `|| VAR=$?` so a failing run does not abort the script
# under `set -e` before all suites have executed and results are aggregated.
# jest-junit output paths are set via JEST_JUNIT_OUTPUT_FILE (the reporter
# does not read Jest's --outputFile flag).

# Step 1: Run IQ tests
echo ""
echo "[1/4] Running IQ Tests (15 test cases)..."
IQ_EXIT_CODE=0
JEST_JUNIT_OUTPUT_FILE="$REPORT_DIR/iq-results.xml" \
npm run test:validation:iq -- \
--reporters=default \
--reporters=jest-junit || IQ_EXIT_CODE=$?

# Step 2: Run OQ tests
echo ""
echo "[2/4] Running OQ Tests (55 test cases)..."
OQ_EXIT_CODE=0
JEST_JUNIT_OUTPUT_FILE="$REPORT_DIR/oq-results.xml" \
npm run test:validation:oq -- \
--reporters=default \
--reporters=jest-junit || OQ_EXIT_CODE=$?

# Step 3: Run targeted PQ tests
echo ""
echo "[3/4] Running PQ Tests (5 critical tests)..."
PQ_EXIT_CODE=0
JEST_JUNIT_OUTPUT_FILE="$REPORT_DIR/pq-results.xml" \
npm run test:validation:pq -- \
--testNamePattern="PQ-(PERF-001|PERF-004|SLA-001)" \
--reporters=default \
--reporters=jest-junit || PQ_EXIT_CODE=$?

# Step 4: Generate regression report
echo ""
echo "[4/4] Generating Regression Report..."
node scripts/validation/generate-regression-report.js \
--iq-results "$REPORT_DIR/iq-results.xml" \
--oq-results "$REPORT_DIR/oq-results.xml" \
--pq-results "$REPORT_DIR/pq-results.xml" \
--output "$REPORT_DIR/regression-summary.html"

# Exit with failure if any suite failed
if [ $IQ_EXIT_CODE -ne 0 ] || [ $OQ_EXIT_CODE -ne 0 ] || [ $PQ_EXIT_CODE -ne 0 ]; then
echo ""
echo "❌ REGRESSION SUITE FAILED"
echo " IQ Exit Code: $IQ_EXIT_CODE"
echo " OQ Exit Code: $OQ_EXIT_CODE"
echo " PQ Exit Code: $PQ_EXIT_CODE"
exit 1
else
echo ""
echo "✅ REGRESSION SUITE PASSED"
exit 0
fi

8.3 CI/CD Integration

# .github/workflows/validation-regression.yml

name: Validation Regression Suite

on:
  push:
    branches: [main, staging]
  schedule:
    - cron: '0 0 * * 0' # Every Sunday at 00:00 UTC
  workflow_dispatch: # Manual trigger

jobs:
  validation-regression:
    runs-on: ubuntu-latest
    timeout-minutes: 120

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20.11.0'

      - name: Install dependencies
        run: npm ci

      - name: Setup test database
        run: |
          docker run -d -p 5432:5432 \
            -e POSTGRES_PASSWORD=validation_test \
            -e POSTGRES_DB=bio_qms_validation \
            postgres:15

      - name: Run database migrations
        env:
          DATABASE_URL: postgresql://postgres:validation_test@localhost:5432/bio_qms_validation
        run: npx prisma migrate deploy

      - name: Run validation regression suite
        env:
          DATABASE_URL: postgresql://postgres:validation_test@localhost:5432/bio_qms_validation
          VALIDATION_API_URL: http://localhost:3000
          EVIDENCE_OUTPUT_DIR: ./evidence/regression-${{ github.run_id }}
        run: bash scripts/validation/run-regression.sh

      - name: Upload evidence to GCS
        if: always()
        uses: google-github-actions/upload-cloud-storage@v1
        with:
          path: ./evidence/regression-${{ github.run_id }}
          destination: coditect-bio-qms-validation-evidence/regression/${{ github.run_id }}

      - name: Publish test results
        if: always()
        uses: dorny/test-reporter@v1
        with:
          name: Validation Regression Results
          path: ./reports/regression/*.xml
          reporter: jest-junit

      - name: Notify on failure
        if: failure()
        uses: 8398a7/action-slack@v3
        with:
          status: failure
          text: 'Validation regression suite failed'
          webhook_url: ${{ secrets.SLACK_WEBHOOK_URL }}

9. Evidence Collection Utilities

9.1 Evidence Collector Implementation

// tests/validation/utils/evidence-collector.ts

import * as fs from 'fs';
import * as path from 'path';
import { execSync } from 'child_process';

export interface TestResult {
testCaseId: string;
testStep: string;
expectedResult: string;
actualResult: string;
passFail: 'pass' | 'fail';
timestamp?: string;
}

export class EvidenceCollector {
private testCaseId: string;
private evidenceDir: string;
private results: TestResult[] = [];
private artifacts: string[] = [];

constructor(testCaseId: string) {
this.testCaseId = testCaseId;
this.evidenceDir = path.join(
process.env.EVIDENCE_OUTPUT_DIR || './evidence',
testCaseId
);
fs.mkdirSync(this.evidenceDir, { recursive: true });
}

async recordResult(result: TestResult): Promise<void> {
this.results.push({
...result,
timestamp: result.timestamp || new Date().toISOString(),
});
}

captureSystemInfo(info: Record<string, any>): void {
const filepath = path.join(this.evidenceDir, 'system-info.json');
this.writeArtifact(filepath, JSON.stringify(info, null, 2));
}

captureDatabaseQuery(query: {
query: string;
result: any;
timestamp: string;
}): void {
const filepath = path.join(
this.evidenceDir,
`db-query-${Date.now()}.json`
);
this.writeArtifact(filepath, JSON.stringify(query, null, 2));
}

captureApiResponse(response: Record<string, any>): void {
const filepath = path.join(
this.evidenceDir,
`api-response-${Date.now()}.json`
);
this.writeArtifact(filepath, JSON.stringify(response, null, 2));
}

captureCommandOutput(output: {
command: string;
output: string;
timestamp: string;
}): void {
const filepath = path.join(
this.evidenceDir,
`command-output-${Date.now()}.txt`
);
this.writeArtifact(filepath, `Command: ${output.command}\n\n${output.output}`);
}

captureHash(hash: Record<string, any>): void {
const filepath = path.join(this.evidenceDir, `hash-${Date.now()}.json`);
this.writeArtifact(filepath, JSON.stringify(hash, null, 2));
}

capturePerformanceMetrics(metrics: Record<string, any>): void {
const filepath = path.join(
this.evidenceDir,
`performance-metrics-${Date.now()}.json`
);
this.writeArtifact(filepath, JSON.stringify(metrics, null, 2));
}

private writeArtifact(filepath: string, content: string): void {
fs.writeFileSync(filepath, content, 'utf-8');
this.artifacts.push(filepath);

// Compute SHA-256 hash for artifact
const hash = execSync(`sha256sum "${filepath}"`, { encoding: 'utf-8' })
.split(' ')[0]
.trim();

// Create metadata sidecar
const metadataPath = `${filepath}.metadata.json`;
fs.writeFileSync(
metadataPath,
JSON.stringify(
{
testCaseId: this.testCaseId,
filepath,
sha256: hash,
timestamp: new Date().toISOString(),
},
null,
2
)
);
}

async upload(): Promise<void> {
// Generate evidence manifest
const manifest = {
testCaseId: this.testCaseId,
executionDate: new Date().toISOString(),
results: this.results,
artifacts: this.artifacts.map((filepath) => ({
path: filepath,
sha256: execSync(`sha256sum "${filepath}"`, { encoding: 'utf-8' })
.split(' ')[0]
.trim(),
})),
};

const manifestPath = path.join(this.evidenceDir, 'MANIFEST.json');
fs.writeFileSync(manifestPath, JSON.stringify(manifest, null, 2));

// Upload entire evidence directory to GCS
const gcsPath = `gs://coditect-bio-qms-validation-evidence/${this.testCaseId}`;
execSync(`gsutil -m cp -r "${this.evidenceDir}" "${gcsPath}"`, {
stdio: 'inherit',
});

console.log(`✓ Uploaded evidence package to ${gcsPath}`);
}
}
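`writeArtifact` shells out to `sha256sum`, which assumes a Linux runner. A portable sketch using Node's `crypto` module instead (`sha256File` is a hypothetical helper, shown hashing a throwaway temp file):

```typescript
import * as crypto from 'crypto';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Portable SHA-256 digest: no dependency on the sha256sum binary.
// Assumption: artifacts fit in memory; for very large files, stream
// fs.createReadStream through crypto.createHash instead.
function sha256File(filepath: string): string {
  return crypto.createHash('sha256').update(fs.readFileSync(filepath)).digest('hex');
}

// Example: hash a throwaway file.
const tmp = path.join(os.tmpdir(), 'evidence-demo.txt');
fs.writeFileSync(tmp, 'hello evidence');
console.log(sha256File(tmp).length); // 64 (hex-encoded SHA-256)
```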

10. Jest Configuration for Validation

// backend/tests/jest.config.validation.js

module.exports = {
displayName: 'validation',
preset: 'ts-jest',
testEnvironment: 'node',
roots: ['<rootDir>/tests/validation'],
testMatch: ['**/*.test.ts'],
moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json'],
collectCoverageFrom: [
'src/**/*.{ts,tsx}',
'!src/**/*.d.ts',
'!src/**/*.spec.ts',
],
coverageDirectory: '<rootDir>/coverage/validation',
coverageReporters: ['text', 'lcov', 'html'],
setupFilesAfterEnv: ['<rootDir>/tests/validation/setup.ts'],
testTimeout: 120000, // 2 minutes default timeout
maxWorkers: 1, // Sequential execution for validation tests
reporters: [
'default',
['jest-junit', {
outputDirectory: './reports/validation',
outputName: 'junit.xml',
classNameTemplate: '{filepath}',
titleTemplate: '{title}',
ancestorSeparator: ' › ',
}],
['<rootDir>/tests/validation/reports/validation-reporter.ts', {}],
],
};

11. Cross-References

This framework integrates with:

  • D.2.1: Validation Test Protocols - Protocol templates for IQ/OQ/PQ
  • D.2.2: Validation Record Controls - Electronic record integrity
  • D.2.3: Validation Signature Controls - E-signature requirements
  • D.2.4: Validation Evidence Package - Evidence storage and packaging
  • D.2.5: Validation Review and Approval - QA review procedures
  • 18-state-machine-specification.md - Workflow state definitions
  • 22-rbac-permissions-matrix.md - Permission verification matrix
  • 70-validation-protocol-templates.md - Test case specifications

Document ID: CODITECT-BIO-VAL-FRAMEWORK-001
Version: 1.0.0
Classification: Internal - Restricted
Next Review Date: 2027-02-16
Framework Owner: Validation Engineering Lead
Document Location: docs/compliance/validation-test-framework.md
Approval Status: Draft (pending QA approval)

Confidentiality Notice: This document contains proprietary information and is intended solely for authorized personnel of CODITECT Biosciences. Unauthorized distribution is prohibited.


END OF DOCUMENT