What is an AI Impact Assessment?
A deep dive into the AIA
By Dr. Christina Catenacci
Nov 8, 2024
Key Points
AIAs involve organizations examining their programs or activities that may have an impact on individuals, communities, or ecosystems, assessing the risks associated with their deployment of AI, and making solid plans for mitigating those risks
Examples of AIAs exist in both the public and private sectors
AIAs (which assess and mitigate risks arising from the deployment of AI) differ from PIAs (which assess and mitigate risks to individuals' privacy) and AHRIAs (which assess and mitigate risks to individuals' human rights)
Some people call it an AI Impact Assessment, and others call it an Algorithmic Impact Assessment. But what is the AIA? And how does it differ from other types of assessments, such as Privacy Impact Assessments and AI Human Rights Impact Assessments? This article answers these questions.
What is an AIA?
Plainly put, when people talk about AIAs, they are talking about examining programs or activities that may have an impact on individuals, communities, or ecosystems, assessing the risks associated with their deployment of AI, and setting out plans for mitigating those risks.
An example in the public sector
Let us take an example. In Canada, since April 1, 2020, government departments have been required to complete an AIA pursuant to the Directive on Automated Decision-Making (Directive). The purpose of the Directive is to ensure that automated decision systems are deployed in a manner that reduces risks to clients, federal institutions, and Canadian society, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law.
The Directive applies to any system, tool, or statistical model used to make an administrative decision or a related assessment about a client. It applies only to automated decision systems in production and excludes systems operating in test environments.
To that end, the Algorithmic Impact Assessment tool supports the Directive. The tool is a questionnaire that determines the impact level of an automated decision system. It is composed of 51 risk questions and 34 mitigation questions. Assessment scores are based on many factors, including the system's design, algorithm, decision type, impact, and data.
In addition, the AIA was developed based on best practices in consultation with both internal and external stakeholders. It was also developed in the open and is available to the public for sharing and reuse under an open licence.
The AIA is designed to help departments and agencies better understand and manage the risks associated with automated decision systems. It is composed of questions in various formats that assess the areas of risk and the mitigation measures in place to manage the risks identified. After this, there is a section on scoring with respect to both risks and mitigation measures. Once the scoring is completed, the impacts of automating an administrative decision are classified into four levels, ranging from Level I (little impact) to Level IV (very high impact). The AIA is available as an online questionnaire, which should be completed at the beginning of the design phase of a project. It should then be completed a second time, prior to the production of the system, to validate that the results accurately reflect the system that was built. The revised AIA should be released on the Open Government Portal as the final results.
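To make the scoring mechanics concrete, here is a minimal sketch in Python of how a questionnaire of this kind can map raw scores to the four impact levels. The thresholds and the mitigation adjustment below are illustrative assumptions, not the official formula used by the government tool.

```python
# Hypothetical sketch of how an AIA-style questionnaire can map scores to
# impact levels. The thresholds and the mitigation adjustment below are
# illustrative assumptions, not the official Government of Canada formula.

def impact_level(risk_score: float, max_risk: float,
                 mitigation_score: float, max_mitigation: float) -> str:
    """Classify an automated decision system into one of four impact levels."""
    # Assumption: strong mitigation (at least 80% of attainable points)
    # earns a modest reduction of the raw risk score.
    adjusted = risk_score
    if max_mitigation and mitigation_score / max_mitigation >= 0.80:
        adjusted = risk_score * 0.85

    pct = adjusted / max_risk  # fraction of the maximum attainable risk score
    if pct <= 0.25:
        return "Level I (little impact)"
    if pct <= 0.50:
        return "Level II (moderate impact)"
    if pct <= 0.75:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

# Example: 45 of 100 risk points, with 28 of 34 mitigation points
print(impact_level(45, 100, 28, 34))  # Level II (moderate impact)
```

The underlying idea matches the tool's design: answers to risk questions drive the score up, while documented mitigation measures can pull the final impact level back down.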
Certain requirements are connected to each impact level (from I to IV); these requirements are referred to as Impact Level Requirements. For instance, when it comes to notice, Level I has no requirements, but Levels II to IV require notice to be posted through all service delivery channels. Level IV has the extra requirement of publishing documentation about the automated decision system on relevant websites.
Another example is human-in-the-loop requirements. Levels I and II do not require direct human involvement, but Levels III and IV require specific human intervention points during the decision-making process, and the final decision must be made by a human. The Assistant Deputy Minister is responsible for completing and releasing the final results of an AIA prior to the production of any automated decision system, and for applying the relevant Impact Level Requirements as determined by the AIA. Finally, the AIA should be reviewed and updated on a scheduled basis, and whenever the functionality or scope of the system changes.
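Viewed as data, the Impact Level Requirements are essentially a lookup table keyed by impact level. The sketch below encodes only the two requirement families discussed above (notice and human involvement), paraphrased for illustration; the Directive's appendices are the authoritative source.

```python
# Illustrative lookup of the Impact Level Requirements discussed above,
# paraphrased from the Directive. Only the notice and human-in-the-loop
# requirement families are shown; the Directive's appendices are authoritative.
IMPACT_LEVEL_REQUIREMENTS = {
    "I": {
        "notice": "No requirement",
        "human_in_the_loop": "Decisions may be made without direct human involvement",
    },
    "II": {
        "notice": "Notice posted through all service delivery channels",
        "human_in_the_loop": "Decisions may be made without direct human involvement",
    },
    "III": {
        "notice": "Notice posted through all service delivery channels",
        "human_in_the_loop": ("Specific human intervention points required; "
                              "the final decision must be made by a human"),
    },
    "IV": {
        "notice": ("Notice posted through all service delivery channels, plus "
                   "published documentation about the system on relevant websites"),
        "human_in_the_loop": ("Specific human intervention points required; "
                              "the final decision must be made by a human"),
    },
}

print(IMPACT_LEVEL_REQUIREMENTS["III"]["human_in_the_loop"])
```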
An example in the private sector
The above discussion dealt with AIAs and the government. But what about AIAs in the private sector? Are there any tools that can help companies complete an AIA?
One example is the United Kingdom's recent initiative to launch a platform to help businesses manage AI risks and build trust. More specifically, the platform offers guidance and resources, outlining steps for businesses to conduct AIAs, evaluate AI systems, and check data for bias. The platform was announced within days of a government report published in November 2024. The report stated that the goal of the AI Assurance Platform was to help AI developers and deployers navigate the complex AI landscape. The platform will act as a one-stop shop for AI assurance, with tools, services, frameworks, and practices in one place.
The ultimate goal of the platform is for the Department for Science, Innovation and Technology to gradually create a set of accessible tools that enable baseline good practice for the responsible development and deployment of AI. This will support organizations and establish the building blocks of a more robust AI assurance ecosystem, which in turn will help maintain trust in AI technologies.
More precisely, the platform will help identify and mitigate the potential risks and harms posed by AI. The government has focused on capitalising on the growing demand for AI assurance tools and services, and has partnered with industry to develop a new roadmap to help navigate international standards on AI assurance.
Small and medium-sized enterprises will be able to use a self-assessment tool to implement responsible AI management practices in their organisations and make better decisions as they develop and use AI systems. Moreover, the government plans to launch a public consultation to obtain industry feedback. This initiative is fairly new, and the details are not yet solidified.
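The platform's guidance mentions checking data for bias. What that involves varies with the tooling, but a common first pass compares outcome rates across demographic groups in a dataset. Below is a minimal sketch assuming a simple tabular dataset with hypothetical "group" and binary "outcome" columns.

```python
# Minimal sketch of a first-pass data bias check: compare positive-outcome
# rates across groups in a tabular dataset. The column names ("group",
# "outcome") and the records themselves are hypothetical.
from collections import defaultdict

records = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]

totals, positives = defaultdict(int), defaultdict(int)
for row in records:
    totals[row["group"]] += 1
    positives[row["group"]] += row["outcome"]

rates = {group: positives[group] / totals[group] for group in totals}
print(rates)  # roughly {'A': 0.67, 'B': 0.33}: a gap worth investigating
```

A real assurance workflow would go further, for example examining label quality and proxy variables, but even a simple rate comparison like this surfaces gaps worth investigating.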
Another example from the private sector
Microsoft has produced the Responsible AI Impact Assessment Template. Released in June 2022, Microsoft's AIA template was an effort to define a process for assessing the impact an AI system may have on people, organizations, and society. Microsoft decided to share the final output of its research to help other companies.
The sections of the document that can be completed are as follows:
System information
    System profile
    System lifecycle stage
    System description
    System purpose
    System features
    Geographic areas and languages
    Deployment mode
    Intended uses
Intended uses
    Assessment of fitness for purpose
    Stakeholders, potential benefits, and potential harms
    Stakeholders for Goal-driven requirements from the Responsible AI Standard
        Goal A5: Human oversight and control
        Goal T1: System intelligibility for decision making
        Goal T2: Communication to stakeholders
        Goal T3: Disclosure of AI interaction
    Fairness considerations
        Goal F1: Quality of service
        Goal F2: Allocation of resources and opportunities
        Goal F3: Minimization of stereotyping, demeaning, and erasing outputs
    Technology readiness assessment
    Task complexity
    Role of humans
    Deployment environment complexity
Adverse impact
    Restricted Uses
    Unsupported uses
    Known limitations
    Potential impact of failure on stakeholders
    Potential impact of misuse on stakeholders
    Sensitive Uses
Data requirements
    Data requirements
    Existing data sets
Summary of Impact
    Potential harms and preliminary mitigations
    Goal Applicability
Signing off on the Impact Assessment
How are AI Impact Assessments different from other types of assessments?
A Privacy Impact Assessment (PIA) is a risk management process that helps organizations ensure that they meet legislative requirements and identify the impacts their programs and activities will have on individuals' privacy. By contrast, the AIA mainly involves identifying risks associated with the deployment of AI (risks to individuals, communities, and the environment) and mitigating those risks.
Furthermore, the Law Commission of Ontario and the Ontario Human Rights Commission have just created the first AI Human Rights Impact Assessment (AHRIA), available in English and in French, that is based on Canadian human rights law. Announced in November 2024, the purpose of the AHRIA is to:
Strengthen knowledge and understanding of AI and human rights
Provide practical guidance on AI and Canadian human rights law
Identify mitigation strategies to address bias and discrimination from AI systems
The AHRIA is based on the following principles:
Human rights must be at the forefront of AI development
Ontario’s and Canada’s Human Rights Laws apply to AI systems
Assessment of human rights in AI is a multi-faceted process that requires integrated expertise
The AHRIA is one piece of AI governance
The tool is for any organization in the public or private sector that intends to design, implement, or rely on an AI system. It applies throughout Canada and broadly to any algorithm, automated decision-making system, or AI system.
The AHRIA should be completed:
when the idea for the AI system is explored and developed
before the AI system is made available to external parties (for example, before a vendor makes a model or application available to purchasers, or service providers deploy an AI technology for customer service)
within ninety days of a material change in the system
yearly as part of regular maintenance and reviews
The AHRIA has two parts:
Impact and Discrimination: assesses whether the AI system presents human rights harms
    The purpose of the AI system
    Is the AI system at high risk for human rights violations?
    Does the AI system show differential treatment? (see the sketch after this list)
    Is the differential treatment permissible?
    Does the AI system consider accommodation?
    Results
Response and Mitigation: assesses how to minimize risks
    Mitigation strategies
    Internal procedure for assessing human rights
    Disclosure, data quality and explainability
    Consultations
    Testing and review
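To illustrate what the differential-treatment questions can involve in practice, the sketch below computes per-group selection rates and flags large disparities using the "four-fifths" heuristic. That heuristic is borrowed from US employment-selection practice and is shown only as an example screen; it is not prescribed by the AHRIA or by Canadian human rights law.

```python
# Illustrative differential-treatment screen: compute per-group selection
# rates and flag disparities using the "four-fifths" heuristic. This rule of
# thumb comes from US employment-selection practice and is NOT prescribed by
# the AHRIA; it is shown only as an example of a first-pass screen.

def adverse_impact_flags(selection_rates: dict[str, float],
                         threshold: float = 0.8) -> dict[str, bool]:
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate."""
    best = max(selection_rates.values())
    return {group: (rate / best) < threshold
            for group, rate in selection_rates.items()}

# Hypothetical rates of favourable decisions per group
rates = {"Group A": 0.60, "Group B": 0.42}
print(adverse_impact_flags(rates))  # {'Group A': False, 'Group B': True}
```

A flag from a screen like this is only a prompt for the deeper legal analysis the AHRIA calls for, such as whether the differential treatment is permissible and whether accommodation was considered.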
Thus, companies are encouraged to learn more about the AIA and other types of assessments such as the PIA and the AHRIA. Although completing these assessments involves some work, doing so can go a long way toward preventing problems from surfacing in the future.