Algorithmic Impact Assessment (AIA)
Ethical, Legal, and Societal Risk Evaluation
Document Control
| Field | Details |
|---|---|
| Document Type | Risk Assessment / Compliance Record |
| Prerequisite | System must be classified as High or Critical Risk |
| Completed By | Product Owner & Lead Data Scientist (with Privacy/Legal support) |
| Version | v2.0 |
| Framework Alignment | EU AI Act FRIA (Article 27), NIST AI RMF 2.0, ISO/IEC 42001 |
1. Context & Scope
| Field | Response |
|---|---|
| Project Name | |
| Inventory ID | |
| Assessment Date | |
| Risk Tier | [High / Critical] |
| Reason for High Risk | |
| EU AI Act High-Risk Category | |
| Assessor Name(s) | |
| Review Board Liaison | |
2. Affected Stakeholders
2.1 Primary Subjects
Who will be impacted by the decisions or content generated by this system?
- Employees / Job Applicants
- Customers / Consumers
- The General Public
- Vulnerable Populations (specify below)
- Business Partners / Vendors
- Other: _______________
Vulnerable Populations Affected:
- Children (under 18)
- Elderly
- Patients / Medical contexts
- Persons with disabilities
- Asylum seekers / Migrants
- Low-income individuals
- Other: _______________
2.2 Nature of Impact
Describe how subjects are affected. Does the system grant/deny a benefit? Monitor behavior? Make recommendations?
[Narrative description - be specific about decisions made and their consequences]
2.3 Scale of Impact
| Metric | Value |
|---|---|
| Number of individuals affected (estimated) | |
| Geographic scope | |
| Frequency of decisions | |
| Duration of impact per decision | |
2.4 Stakeholder Consultation
Have you consulted with affected groups or their representatives?
- Yes - User Research conducted
- Yes - Employee representative consultation
- Yes - Customer focus groups
- Yes - External advisory input
- No
If Yes, attach findings. If No, explain why:
[Explanation]
3. Fairness & Non-Discrimination
3.1 Protected Attributes Assessment
Does the model use—or could it infer—any of the following?
| Protected Attribute | Used Directly | Could Be Inferred | Justification |
|---|---|---|---|
| Race / Ethnicity | [ ] | [ ] | |
| Gender / Sex | [ ] | [ ] | |
| Age | [ ] | [ ] | |
| Disability Status | [ ] | [ ] | |
| Religion / Political Belief | [ ] | [ ] | |
| National Origin | [ ] | [ ] | |
| Sexual Orientation | [ ] | [ ] | |
| Socio-economic Status | [ ] | [ ] | |
| Pregnancy / Family Status | [ ] | [ ] | |
3.2 Proxy Variable Analysis
Even if protected attributes are removed, are there proxy variables that correlate with them?
| Variable | Potential Proxy For | Correlation Assessed | Action Taken |
|---|---|---|---|
| Zip Code / Postal Code | Race, Income | [ ] | |
| Credit Score | Race, Income | [ ] | |
| Education Level | Socio-economic | [ ] | |
| Name / Language | Ethnicity, National Origin | [ ] | |
| Browsing History | Various | [ ] | |
3.3 Bias Testing Strategy
How have you tested for disparate impact?
| Testing Method | Conducted | Results | Threshold Met |
|---|---|---|---|
| Demographic Parity | [ ] | ||
| Equalized Odds | [ ] | ||
| Predictive Parity | [ ] | ||
| Counterfactual Fairness | [ ] | ||
| Disparate Impact Ratio (≥0.8) | [ ] |
- We have not tested for bias → STOP: Compliance Violation - Testing Required
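The disparate impact ratio in the table above (the "four-fifths rule") is straightforward to compute: each group's selection rate divided by the highest group's rate, flagged when it falls below 0.8. A minimal sketch, with invented hiring data for illustration:

```python
from collections import defaultdict

def disparate_impact_ratio(outcomes, groups, positive=1):
    """Selection rate of each group divided by the highest group's
    selection rate; the four-fifths rule flags ratios below 0.8."""
    pos, tot = defaultdict(int), defaultdict(int)
    for y, g in zip(outcomes, groups):
        tot[g] += 1
        pos[g] += (y == positive)
    rates = {g: pos[g] / tot[g] for g in tot}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical hiring outcomes: 1 = selected
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
ratios = disparate_impact_ratio(outcomes, groups)
print(ratios)   # group B falls below the 0.8 threshold here
```

Results like these belong in the attached Bias Testing Report, together with the sample sizes behind each rate.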
3.4 Bias Mitigation
If bias was detected, what mitigation was applied?
| Mitigation Applied | Description | Effectiveness |
|---|---|---|
| Pre-processing (data) | ||
| In-processing (algorithm) | ||
| Post-processing (output) | ||
| None required |
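Of the three mitigation families above, post-processing is the easiest to illustrate: adjust the decision cutoff per group so selection rates converge on a common target. This is a simplified sketch under assumed data, not the only (or necessarily the right) mitigation for a given system; group-specific thresholds also carry legal implications that Legal Counsel should review.

```python
def equalizing_thresholds(scores_by_group, target_rate):
    """Post-processing sketch: for each group, pick the score cutoff
    whose selection rate matches target_rate as closely as possible."""
    cuts = {}
    for g, scores in scores_by_group.items():
        s = sorted(scores, reverse=True)
        k = max(1, round(target_rate * len(s)))  # how many to select
        cuts[g] = s[k - 1]                       # lowest selected score
    return cuts

# Hypothetical model scores per group
scores_by_group = {
    "A": [0.9, 0.8, 0.7, 0.6],
    "B": [0.7, 0.6, 0.5, 0.4],
}
cuts = equalizing_thresholds(scores_by_group, target_rate=0.5)
print(cuts)   # each group selects its top half, at different absolute cutoffs
```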
4. Human Autonomy & Transparency
4.1 User Notification
How are users notified that AI is involved?
- Explicit disclaimer / Pop-up
- Terms of Service reference
- Watermarking (for generated content)
- Verbal/written disclosure by staff
- No notification → Requires Justification
EU AI Act Transparency Compliance:
- Users informed of AI interaction (Article 50)
- Emotion recognition disclosed (if applicable)
- Deep fake labeled (if applicable)
4.2 Explainability Assessment
If a user asks "Why was I rejected/flagged?", can we provide a specific reason?
| Level | Available | Method |
|---|---|---|
| Global Explanation (general model behavior) | [ ] | |
| Local Explanation (specific decision) | [ ] | |
| Counterfactual ("what would change outcome") | [ ] | |
| Black Box (cannot explain) | [ ] | Requires Executive Waiver |
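The counterfactual row above asks "what would change the outcome?". For simple scoring models this can be answered by perturbing one feature until the decision flips. The sketch below uses an invented toy credit rule purely for illustration; real systems would use an explainability library appropriate to the model class.

```python
def counterfactual(model, x, feature, step=1.0, max_steps=100):
    """Naive counterfactual search: nudge one feature upward until the
    model's decision flips, returning the value that changes the outcome
    (or None if no flip is found within max_steps)."""
    for i in range(1, max_steps + 1):
        x2 = dict(x)
        x2[feature] = x[feature] + i * step
        if model(x2) != model(x):
            return x2[feature]
    return None

# Toy credit rule (illustrative only): approve if income - 2*debt >= 50
model = lambda f: f["income"] - 2 * f["debt"] >= 50
applicant = {"income": 40, "debt": 5}
needed = counterfactual(model, applicant, "income")
print(f"an income of {needed} would change the outcome")
```

An answer of this form ("your application would have been approved at income X") is exactly the kind of local explanation the table asks you to evidence.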
4.3 Contestability & Redress
What is the process for a human to appeal the AI's decision?
| Element | Available | Description |
|---|---|---|
| Appeal mechanism exists | [ ] | |
| Human review available | [ ] | |
| Timeframe for appeal | [ ] | |
| Clear communication of rights | [ ] | |
| Automated re-evaluation option | [ ] | |
Appeal Process Description:
[Describe the "Human Review" workflow]
5. Privacy & Surveillance
5.1 Data Minimization
| Question | Response | Justification |
|---|---|---|
| Is all data collected necessary for the function? | [Yes/No] | |
| Could the same result be achieved with less data? | [Yes/No] | |
| Is there a data retention limit? | [Yes/No] | Duration: |
5.2 Re-identification Risk
| Risk Level | Assessment |
|---|---|
| High | Data could easily be combined to identify individuals |
| Medium | Some re-identification risk with effort |
| Low | Properly anonymized/aggregated |
| N/A | No personal data involved |
Mitigation measures:
[Describe anonymization, pseudonymization, or other measures]
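One concrete way to evidence the re-identification rating above is a k-anonymity check: the size of the smallest group of records sharing the same quasi-identifiers. The records and column names below are invented for illustration.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns.
    k = 1 means at least one record is unique on those columns and
    therefore at elevated re-identification risk."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(classes.values())

# Hypothetical "anonymized" records
records = [
    {"zip": "021*", "age_band": "30-39", "diagnosis": "flu"},
    {"zip": "021*", "age_band": "30-39", "diagnosis": "cold"},
    {"zip": "946*", "age_band": "40-49", "diagnosis": "flu"},
]
k = k_anonymity(records, ["zip", "age_band"])
print(f"k = {k}")   # k = 1 → a unique record remains identifiable
```

A k of 1 would support a "High" rating above even though direct identifiers were removed; typical mitigations are coarser generalization or suppression of the unique records.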
5.3 Surveillance Assessment
| Question | Response |
|---|---|
| Does system track user behavior in real-time? | [Yes/No] |
| Does it monitor employee performance? | [Yes/No] |
| Does it track location? | [Yes/No] |
| Does it analyze biometrics? | [Yes/No] |
| Is continuous monitoring involved? | [Yes/No] |
If Yes to any, ensure compliance with:
- Employee notification and consent
- GDPR Article 22 automated decision-making requirements
- Local labor law requirements
6. Human Agency & Automation Bias
6.1 Human-in-the-Loop Reality
Will humans actually exercise independent judgment, or will they rubber-stamp AI decisions?
| Scenario | Assessment |
|---|---|
| Frequency of human disagreement with AI | [Frequently / Rarely / Never] |
| Time available for human review | [Adequate / Limited / Insufficient] |
| Training on AI limitations provided | [Yes / No] |
| Authority to override without penalty | [Yes / No] |
| Metrics tracking override rate | [Yes / No] |
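The override-rate metric in the last row above is simple to compute from review logs, and a rate stuck near zero across many cases is a warning sign of rubber-stamping. A minimal sketch, with an invented log format:

```python
def override_rate(review_log):
    """Share of reviewed cases where the human decision differed from
    the AI recommendation."""
    overrides = sum(1 for rec in review_log if rec["human"] != rec["ai"])
    return overrides / len(review_log)

# Hypothetical review log
log = [
    {"ai": "deny",    "human": "deny"},
    {"ai": "deny",    "human": "approve"},   # override
    {"ai": "approve", "human": "approve"},
    {"ai": "deny",    "human": "deny"},
]
rate = override_rate(log)
print(f"override rate: {rate:.0%}")
```

The absolute rate matters less than the trend: a drop over time, with no change in model quality, suggests reviewers are deferring rather than judging.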
6.2 Automation Bias Mitigation
| Mitigation | Implemented |
|---|---|
| Mandatory review time before action | [ ] |
| AI confidence threshold for human escalation | [ ] |
| Regular training on AI limitations | [ ] |
| Override tracking and analysis | [ ] |
| Rotating human reviewers | [ ] |
7. Safety & Security
7.1 Safety Assessment
| Hazard | Present | Mitigation |
|---|---|---|
| Physical safety risk | [ ] | |
| Psychological harm potential | [ ] | |
| Critical infrastructure impact | [ ] | |
| Emergency service disruption | [ ] | |
7.2 Security Controls
| Control | Implemented | Details |
|---|---|---|
| Access control (least privilege) | [ ] | |
| Audit logging | [ ] | |
| Encryption (transit/rest) | [ ] | |
| Adversarial testing completed | [ ] | |
| Incident response plan | [ ] | |
8. Environmental & Societal Impact
8.1 Environmental Considerations
| Factor | Assessment |
|---|---|
| Training compute carbon footprint | |
| Inference energy consumption | |
| Sustainability measures | |
8.2 Broader Societal Impact
| Impact Area | Assessment | Mitigation |
|---|---|---|
| Employment displacement | ||
| Concentration of power | ||
| Misinformation potential | ||
| Digital divide implications |
9. Agentic AI Assessment (If Applicable)
9.1 Autonomy Assessment
| Capability | Present | Controls |
|---|---|---|
| Autonomous decision-making | [ ] | |
| Tool access (APIs, databases) | [ ] | |
| External communication | [ ] | |
| Self-modification | [ ] | |
| Multi-agent coordination | [ ] | |
9.2 Action Boundary Verification
| Boundary | Defined | Enforced | Tested |
|---|---|---|---|
| Permitted actions list | [ ] | [ ] | [ ] |
| Prohibited actions list | [ ] | [ ] | [ ] |
| Rate limits | [ ] | [ ] | [ ] |
| Approval gates | [ ] | [ ] | [ ] |
| Kill switch | [ ] | [ ] | [ ] |
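The boundaries in the table above only count as "Enforced" if they are checked in code on every action, not merely documented. The sketch below shows one shape such enforcement can take: an allowlist, an approval gate, and a sliding-window rate limit. All names here are illustrative, not a real agent framework.

```python
import time
from collections import deque

class ActionGuard:
    """Minimal action-boundary enforcement sketch: a permitted-actions
    allowlist, an approval gate for sensitive actions, and a
    sliding-window rate limit."""
    def __init__(self, allowed, needs_approval, max_per_minute):
        self.allowed = set(allowed)
        self.needs_approval = set(needs_approval)
        self.max_per_minute = max_per_minute
        self.history = deque()   # timestamps of allowed actions

    def check(self, action, approved=False, now=None):
        now = time.monotonic() if now is None else now
        if action not in self.allowed:
            return "blocked: not on permitted-actions list"
        if action in self.needs_approval and not approved:
            return "blocked: human approval required"
        # drop timestamps older than the 60-second window
        while self.history and now - self.history[0] > 60:
            self.history.popleft()
        if len(self.history) >= self.max_per_minute:
            return "blocked: rate limit exceeded"
        self.history.append(now)
        return "allowed"

guard = ActionGuard(allowed={"read_db", "send_email"},
                    needs_approval={"send_email"},
                    max_per_minute=2)
print(guard.check("read_db", now=0.0))     # allowed
print(guard.check("delete_db", now=1.0))   # blocked: not on permitted-actions list
print(guard.check("send_email", now=2.0))  # blocked: human approval required
```

The kill switch in the table is the complement of this: an external control that empties the allowlist entirely, which should be tested on the same cadence as the boundaries themselves.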
10. Residual Risk Summary
After applying all controls, rate the remaining risk:
| Risk Category | Inherent Risk | Mitigation Applied | Residual Risk |
|---|---|---|---|
| Bias / Discrimination | [L/M/H] | | [L/M/H] |
| Lack of Explainability | [L/M/H] | | [L/M/H] |
| Privacy Violation | [L/M/H] | | [L/M/H] |
| Reputational Harm | [L/M/H] | | [L/M/H] |
| Safety Harm | [L/M/H] | | [L/M/H] |
| Regulatory Non-Compliance | [L/M/H] | | [L/M/H] |
| Automation Bias | [L/M/H] | | [L/M/H] |
| Agentic Risk | [L/M/H] | | [L/M/H] |
11. Final Determination
11.1 Review Board Decision
- APPROVED - Proceed to Deployment
- APPROVED WITH CONDITIONS - See conditions below
- DEFERRED - Additional assessment required
- REJECTED - Risk outweighs business value
11.2 Conditions of Approval (if applicable)
| Condition | Owner | Due Date |
|---|---|---|
11.3 Mandatory Ongoing Requirements
| Requirement | Frequency | Owner |
|---|---|---|
| Bias monitoring | ||
| Performance review | ||
| Stakeholder feedback | ||
| AIA refresh |
12. Approvals
| Role | Name | Date | Signature |
|---|---|---|---|
| Lead Assessor | |||
| Business Owner | |||
| AI Risk Officer | |||
| Privacy Officer | |||
| Legal Counsel |
13. Document History
| Version | Date | Author | Changes |
|---|---|---|---|
14. Attachments
| Document | Attached |
|---|---|
| Bias Testing Report | [ ] |
| Stakeholder Consultation Summary | [ ] |
| Privacy Impact Assessment | [ ] |
| Security Assessment | [ ] |
| Human Oversight Design | [ ] |
Next Step: Proceed to Artifact 8: Implementation Plan (30-60-90 Days)
CODITECT AI Risk Management Framework
Document ID: AI-RMF-07 | Version: 2.0.0 | Status: Active
AZ1.AI Inc. | CODITECT Platform
Framework Alignment: NIST AI RMF 2.0 | EU AI Act | ISO/IEC 42001
This document is part of the CODITECT AI Risk Management Framework. For questions or updates, contact the AI Governance Office.
Repository: coditect-ai-risk-management-framework Last Updated: 2026-01-15 Owner: AZ1.AI Inc. | Lead: Hal Casteel