
Algorithmic Impact Assessment (AIA)

Ethical, Legal, and Societal Risk Evaluation


Document Control

| Field | Details |
| --- | --- |
| Document Type | Risk Assessment / Compliance Record |
| Prerequisite | System must be classified as High or Critical Risk |
| Completed By | Product Owner & Lead Data Scientist (with Privacy/Legal support) |
| Version | v2.0 |
| Framework Alignment | EU AI Act FRIA (Article 27), NIST AI RMF 2.0, ISO/IEC 42001 |

1. Context & Scope

| Field | Response |
| --- | --- |
| Project Name | |
| Inventory ID | |
| Assessment Date | |
| Risk Tier | [High / Critical] |
| Reason for High Risk | |
| EU AI Act High-Risk Category | |
| Assessor Name(s) | |
| Review Board Liaison | |

2. Affected Stakeholders

2.1 Primary Subjects

Who will be impacted by the decisions or content generated by this system?

  • Employees / Job Applicants
  • Customers / Consumers
  • The General Public
  • Vulnerable Populations (specify below)
  • Business Partners / Vendors
  • Other: _______________

Vulnerable Populations Affected:

  • Children (under 18)
  • Elderly
  • Patients / Medical contexts
  • Persons with disabilities
  • Asylum seekers / Migrants
  • Low-income individuals
  • Other: _______________

2.2 Nature of Impact

Describe how subjects are affected. Does the system grant/deny a benefit? Monitor behavior? Make recommendations?

[Narrative description - be specific about decisions made and their consequences]

2.3 Scale of Impact

| Metric | Value |
| --- | --- |
| Number of individuals affected (estimated) | |
| Geographic scope | |
| Frequency of decisions | |
| Duration of impact per decision | |

2.4 Stakeholder Consultation

Have you consulted with affected groups or their representatives?

  • Yes - User Research conducted
  • Yes - Employee representative consultation
  • Yes - Customer focus groups
  • Yes - External advisory input
  • No

If Yes, attach findings. If No, explain why:

[Explanation]

3. Fairness & Non-Discrimination

3.1 Protected Attributes Assessment

Does the model use—or could it infer—any of the following?

| Protected Attribute | Used Directly | Could Be Inferred | Justification |
| --- | --- | --- | --- |
| Race / Ethnicity | [ ] | [ ] | |
| Gender / Sex | [ ] | [ ] | |
| Age | [ ] | [ ] | |
| Disability Status | [ ] | [ ] | |
| Religion / Political Belief | [ ] | [ ] | |
| National Origin | [ ] | [ ] | |
| Sexual Orientation | [ ] | [ ] | |
| Socio-economic Status | [ ] | [ ] | |
| Pregnancy / Family Status | [ ] | [ ] | |

3.2 Proxy Variable Analysis

Even if protected attributes are removed, are there proxy variables that correlate with them?

| Variable | Potential Proxy For | Correlation Assessed | Action Taken |
| --- | --- | --- | --- |
| Zip Code / Postal Code | Race, Income | [ ] | |
| Credit Score | Race, Income | [ ] | |
| Education Level | Socio-economic | [ ] | |
| Name / Language | Ethnicity, National Origin | [ ] | |
| Browsing History | Various | [ ] | |
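The "Correlation Assessed" column can be backed by a simple statistical check. The sketch below, using hypothetical toy data, computes Cramér's V to measure association between a candidate proxy (zip code) and a protected attribute; values near 1 indicate the variable can effectively stand in for the attribute and should be treated as a proxy.

```python
from collections import Counter
from itertools import product
from math import sqrt

def cramers_v(x, y):
    """Cramér's V association between two categorical variables (0 = none, 1 = perfect)."""
    n = len(x)
    cx, cy = Counter(x), Counter(y)
    joint = Counter(zip(x, y))
    chi2 = 0.0
    for a, b in product(cx, cy):
        expected = cx[a] * cy[b] / n          # expected count under independence
        observed = joint.get((a, b), 0)
        chi2 += (observed - expected) ** 2 / expected
    k = min(len(cx), len(cy)) - 1
    return sqrt(chi2 / (n * k)) if k > 0 else 0.0

# Hypothetical toy data: zip code vs. self-reported demographic group
zips   = ["90001", "90001", "90002", "90002", "90001", "90002"]
groups = ["A", "A", "B", "B", "A", "B"]
print(round(cramers_v(zips, groups), 2))  # → 1.0: zip code fully encodes the group
```

In practice this check would run over the full training dataset per candidate variable; any V above a team-chosen threshold feeds the "Action Taken" column.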

3.3 Bias Testing Strategy

How have you tested for disparate impact?

| Testing Method | Conducted | Results | Threshold Met |
| --- | --- | --- | --- |
| Demographic Parity | [ ] | | |
| Equalized Odds | [ ] | | |
| Predictive Parity | [ ] | | |
| Counterfactual Fairness | [ ] | | |
| Disparate Impact Ratio (≥0.8) | [ ] | | |

  • We have not tested for bias → STOP: Compliance Violation - Testing Required
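The Disparate Impact Ratio row can be computed directly from outcome data. A minimal sketch, using hypothetical approval outcomes, that applies the four-fifths rule (a ratio below 0.8 flags potential disparate impact):

```python
def selection_rate(outcomes):
    """Fraction of favourable outcomes (1 = selected/approved) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    The four-fifths rule flags values below 0.8."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi > 0 else 1.0

# Hypothetical outcomes: 40% approval in one group vs. 80% in the other
protected = [1] * 4 + [0] * 6
reference = [1] * 8 + [0] * 2
print(disparate_impact_ratio(protected, reference))  # → 0.5, below the 0.8 threshold
```

The same pattern extends to the other metrics in the table; demographic parity, for example, is just the difference rather than the ratio of the two selection rates.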

3.4 Bias Mitigation

If bias was detected, what mitigation was applied?

| Mitigation Applied | Description | Effectiveness |
| --- | --- | --- |
| Pre-processing (data) | | |
| In-processing (algorithm) | | |
| Post-processing (output) | | |
| None required | | |

4. Human Autonomy & Transparency

4.1 User Notification

How are users notified that AI is involved?

  • Explicit disclaimer / Pop-up
  • Terms of Service reference
  • Watermarking (for generated content)
  • Verbal/written disclosure by staff
  • No notification → Requires Justification

EU AI Act Transparency Compliance:

  • Users informed of AI interaction (Article 50)
  • Emotion recognition use disclosed (if applicable)
  • Deepfake content labeled (if applicable)

4.2 Explainability Assessment

If a user asks "Why was I rejected/flagged?", can we provide a specific reason?

| Level | Available | Method |
| --- | --- | --- |
| Global Explanation (general model behavior) | [ ] | |
| Local Explanation (specific decision) | [ ] | |
| Counterfactual ("what would change the outcome") | [ ] | |
| Black Box (cannot explain) | [ ] | Requires Executive Waiver |
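A counterfactual explanation answers "what would change the outcome" by finding the smallest feature change that flips the decision. A minimal sketch against a hypothetical linear scoring model; the weights, features, and approval threshold are illustrative assumptions, not a real model:

```python
def score(features, weights, bias=0.0):
    """Hypothetical linear model; the decision rule approves when score >= 0."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def counterfactual(features, weights, bias, idx, step=1.0, max_steps=1000):
    """Raise feature `idx` in `step` increments until the rejection flips
    to an approval; returns the flipping value, or None if never reached."""
    f = list(features)
    for _ in range(max_steps):
        if score(f, weights, bias) >= 0:
            return f[idx]
        f[idx] += step
    return None

weights = [0.5, -0.2]            # e.g. income (k$), debt ratio (%)
applicant = [40.0, 150.0]        # score = 20 - 30 = -10 → rejected
print(counterfactual(applicant, weights, 0.0, idx=0))  # → 60.0: income at which approval flips
```

This yields a user-facing statement like "your application would have been approved at an income of 60k, all else equal", which is the kind of specific reason the question above asks for.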

4.3 Contestability & Redress

What is the process for a human to appeal the AI's decision?

| Element | Available | Description |
| --- | --- | --- |
| Appeal mechanism exists | [ ] | |
| Human review available | [ ] | |
| Timeframe for appeal | [ ] | |
| Clear communication of rights | [ ] | |
| Automated re-evaluation option | [ ] | |

Appeal Process Description:

[Describe the "Human Review" workflow]

5. Privacy & Surveillance

5.1 Data Minimization

| Question | Response | Justification |
| --- | --- | --- |
| Is all data collected necessary for the function? | [Yes/No] | |
| Could the same result be achieved with less data? | [Yes/No] | |
| Is there a data retention limit? | [Yes/No] | Duration: |

5.2 Re-identification Risk

| Risk Level | Assessment |
| --- | --- |
| High | Data could easily be combined to identify individuals |
| Medium | Some re-identification risk with effort |
| Low | Properly anonymized/aggregated |
| N/A | No personal data involved |

Mitigation measures:

[Describe anonymization, pseudonymization, or other measures]
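One concrete way to ground the High/Medium/Low rating above is k-anonymity: the size of the smallest group of records sharing the same quasi-identifier values in a release. A sketch with hypothetical records; k = 1 means at least one record is unique and easily re-identified:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns.
    Higher k means stronger protection against linkage attacks."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(classes.values())

# Hypothetical released dataset with generalized zip and age band
rows = [
    {"zip": "900**", "age_band": "30-39", "dx": "flu"},
    {"zip": "900**", "age_band": "30-39", "dx": "asthma"},
    {"zip": "902**", "age_band": "40-49", "dx": "flu"},
]
print(k_anonymity(rows, ["zip", "age_band"]))  # → 1: the 902** record is unique
```

A release would typically be generalized or suppressed until k reaches a team-chosen floor (k ≥ 5 is a common starting point) before being rated Low risk.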

5.3 Surveillance Assessment

| Question | Response |
| --- | --- |
| Does system track user behavior in real-time? | [Yes/No] |
| Does it monitor employee performance? | [Yes/No] |
| Does it track location? | [Yes/No] |
| Does it analyze biometrics? | [Yes/No] |
| Is continuous monitoring involved? | [Yes/No] |

If Yes to any, ensure compliance with:

  • Employee notification and consent
  • GDPR Article 22 automated decision-making requirements
  • Local labor law requirements

6. Human Agency & Automation Bias

6.1 Human-in-the-Loop Reality

Will humans actually exercise independent judgment, or will they rubber-stamp AI decisions?

| Scenario | Assessment |
| --- | --- |
| Frequency of human disagreement with AI | [Frequently / Rarely / Never] |
| Time available for human review | [Adequate / Limited / Insufficient] |
| Training on AI limitations provided | [Yes / No] |
| Authority to override without penalty | [Yes / No] |
| Metrics tracking override rate | [Yes / No] |

6.2 Automation Bias Mitigation

| Mitigation | Implemented |
| --- | --- |
| Mandatory review time before action | [ ] |
| AI confidence threshold for human escalation | [ ] |
| Regular training on AI limitations | [ ] |
| Override tracking and analysis | [ ] |
| Rotating human reviewers | [ ] |
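Override tracking and analysis can be as simple as logging (AI decision, human decision) pairs and watching the disagreement rate. A sketch with a hypothetical 2% alert floor; a reviewer who virtually never disagrees may be rubber-stamping rather than reviewing:

```python
def override_rate(reviews):
    """Share of AI recommendations the human reviewer changed.
    `reviews` is a list of (ai_decision, human_decision) pairs."""
    if not reviews:
        return 0.0
    return sum(ai != human for ai, human in reviews) / len(reviews)

def rubber_stamp_alert(reviews, floor=0.02):
    """Flag reviewers whose override rate falls below the floor —
    a possible sign of automation bias rather than model perfection."""
    return override_rate(reviews) < floor

# Hypothetical review log: 2 overrides in 100 decisions
log = [("deny", "deny")] * 98 + [("deny", "approve")] * 2
print(override_rate(log))       # → 0.02
print(rubber_stamp_alert(log))  # → False: exactly at the 2% floor
```

The useful signal is the trend per reviewer over time, not any single value; a rate that decays toward zero after onboarding is the classic automation-bias pattern.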

7. Safety & Security

7.1 Safety Assessment

| Hazard | Present | Mitigation |
| --- | --- | --- |
| Physical safety risk | [ ] | |
| Psychological harm potential | [ ] | |
| Critical infrastructure impact | [ ] | |
| Emergency service disruption | [ ] | |

7.2 Security Controls

| Control | Implemented | Details |
| --- | --- | --- |
| Access control (least privilege) | [ ] | |
| Audit logging | [ ] | |
| Encryption (transit/rest) | [ ] | |
| Adversarial testing completed | [ ] | |
| Incident response plan | [ ] | |

8. Environmental & Societal Impact

8.1 Environmental Considerations

| Factor | Assessment |
| --- | --- |
| Training compute carbon footprint | |
| Inference energy consumption | |
| Sustainability measures | |

8.2 Broader Societal Impact

| Impact Area | Assessment | Mitigation |
| --- | --- | --- |
| Employment displacement | | |
| Concentration of power | | |
| Misinformation potential | | |
| Digital divide implications | | |

9. Agentic AI Assessment (If Applicable)

9.1 Autonomy Assessment

| Capability | Present | Controls |
| --- | --- | --- |
| Autonomous decision-making | [ ] | |
| Tool access (APIs, databases) | [ ] | |
| External communication | [ ] | |
| Self-modification | [ ] | |
| Multi-agent coordination | [ ] | |

9.2 Action Boundary Verification

| Boundary | Defined | Enforced | Tested |
| --- | --- | --- | --- |
| Permitted actions list | [ ] | [ ] | [ ] |
| Prohibited actions list | [ ] | [ ] | [ ] |
| Rate limits | [ ] | [ ] | [ ] |
| Approval gates | [ ] | [ ] | [ ] |
| Kill switch | [ ] | [ ] | [ ] |
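Several of the boundaries above (permitted-actions list, rate limits, kill switch) can be enforced at a single gate sitting in front of every tool call the agent makes. A minimal sketch; the class and action names are illustrative assumptions, not part of any particular agent framework:

```python
import time

class ActionGate:
    """Checks an agent's requested action against an allowlist,
    a per-minute rate limit, and an operator kill switch."""

    def __init__(self, permitted, max_per_minute):
        self.permitted = set(permitted)
        self.max_per_minute = max_per_minute
        self.timestamps = []
        self.killed = False

    def kill(self):
        self.killed = True  # kill switch: blocks all further actions

    def allow(self, action, now=None):
        now = time.monotonic() if now is None else now
        if self.killed or action not in self.permitted:
            return False
        # keep only calls inside the trailing 60-second window
        self.timestamps = [t for t in self.timestamps if now - t < 60]
        if len(self.timestamps) >= self.max_per_minute:
            return False  # rate limit exceeded
        self.timestamps.append(now)
        return True

gate = ActionGate(permitted={"read_db", "send_report"}, max_per_minute=2)
print(gate.allow("read_db", now=0.0))    # → True
print(gate.allow("delete_db", now=1.0))  # → False: not on the permitted list
gate.kill()
print(gate.allow("read_db", now=2.0))    # → False: kill switch engaged
```

The "Tested" column then maps naturally to unit tests over this gate: one per boundary, exercised before each deployment.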

10. Residual Risk Summary

After applying all controls, rate the remaining risk:

| Risk Category | Inherent Risk | Mitigation Applied | Residual Risk |
| --- | --- | --- | --- |
| Bias / Discrimination | [L/M/H] | | [L/M/H] |
| Lack of Explainability | [L/M/H] | | [L/M/H] |
| Privacy Violation | [L/M/H] | | [L/M/H] |
| Reputational Harm | [L/M/H] | | [L/M/H] |
| Safety Harm | [L/M/H] | | [L/M/H] |
| Regulatory Non-Compliance | [L/M/H] | | [L/M/H] |
| Automation Bias | [L/M/H] | | [L/M/H] |
| Agentic Risk | [L/M/H] | | [L/M/H] |

11. Final Determination

11.1 Review Board Decision

  • APPROVED - Proceed to Deployment
  • APPROVED WITH CONDITIONS - See conditions below
  • DEFERRED - Additional assessment required
  • REJECTED - Risk outweighs business value

11.2 Conditions of Approval (if applicable)

| Condition | Owner | Due Date |
| --- | --- | --- |
| | | |

11.3 Mandatory Ongoing Requirements

| Requirement | Frequency | Owner |
| --- | --- | --- |
| Bias monitoring | | |
| Performance review | | |
| Stakeholder feedback | | |
| AIA refresh | | |

12. Approvals

| Role | Name | Date | Signature |
| --- | --- | --- | --- |
| Lead Assessor | | | |
| Business Owner | | | |
| AI Risk Officer | | | |
| Privacy Officer | | | |
| Legal Counsel | | | |

13. Document History

| Version | Date | Author | Changes |
| --- | --- | --- | --- |
| | | | |

14. Attachments

| Document | Attached |
| --- | --- |
| Bias Testing Report | [ ] |
| Stakeholder Consultation Summary | [ ] |
| Privacy Impact Assessment | [ ] |
| Security Assessment | [ ] |
| Human Oversight Design | [ ] |

Next Step: Proceed to Artifact 8: Implementation Plan (30-60-90 Days)


CODITECT AI Risk Management Framework

Document ID: AI-RMF-07 | Version: 2.0.0 | Status: Active


AZ1.AI Inc. | CODITECT Platform

Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001


This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.

Repository: coditect-ai-risk-management-framework Last Updated: 2026-01-15 Owner: AZ1.AI Inc. | Lead: Hal Casteel