
What Happened to the Algorithmic Accountability Act

The US's Algorithmic Accountability Act is in effect

Dr. Christina Catenacci

Sept 19, 2024

Key Points:  

 

  • the Algorithmic Accountability Act of 2023 is indeed in effect and contains several requirements for covered entities 

 

  • the penalties for non-compliance are serious: the FTC can enforce the Act and has broad powers to investigate and find violations involving unfair or deceptive acts or practices, and States can bring a civil action on behalf of residents in the State to obtain appropriate relief 

 

  • Canada does not have anything like the Algorithmic Accountability Act of 2023 or the EU’s AI Act 


Some may be wondering about the Algorithmic Accountability Act in the United States. What happened to it? Did it ever pass? Do companies need to comply with it? How is the American approach different from that of the EU and Canada? 

 

What is the Algorithmic Accountability Act? 

 

Generally speaking, the Algorithmic Accountability Act requires businesses that use automated decision systems to make critical decisions to study and report on the impact of those systems on consumers. What are critical decisions? They are decisions that have a significant effect on a consumer’s life, in areas such as housing, educational opportunities, employment, essential utilities, healthcare, family planning, legal services, or financial services. 

 

The Act also establishes the Bureau of Technology to advise the Federal Trade Commission (FTC) about the technological aspects of its functions. 

 

What is the status of the Algorithmic Accountability Act? 

 

This story began in early 2022, in the 117th Congress. Bill HR 6580, the Algorithmic Accountability Act of 2022, was introduced in the House of Representatives and referred to the House Committee on Energy and Commerce. The bill was then referred to the Subcommittee on Consumer Protection and Commerce. 

 

However, the bill stalled at that point; it failed to gain the support it needed to become law. It was not until September 2023 that Bill HR 5628, the Algorithmic Accountability Act of 2023, was introduced in the House in the 118th Congress. Bill HR 5628 was subsequently referred to the House Committee on Energy and Commerce, and later to the Subcommittee on Innovation, Data, and Commerce. 

 

This time, the bill passed in both the House and the Senate, went to the President, and became law. 

 

What does the law require? 

 

The Algorithmic Accountability Act of 2023 contains several definitions, some of which include augmented critical decision process (process), automated decision system (system), biometrics, covered entity, critical decision (decision), deploy, develop, identifying information, impact assessment, passive computing infrastructure, and third-party decision recipient.   

 

This Act applies to covered entities. Under the Act, a covered entity includes any person, partnership, or corporation that deploys any process and meets any of the following criteria (a coverage test is sketched after this list): 

 

  • has greater than $50,000,000 in average annual gross receipts, or is deemed to have greater than $250,000,000 in equity value, over the preceding three tax years 

 

  • possesses, manages, modifies, handles, analyzes, controls, or otherwise uses identifying information about more than 1,000,000 consumers, households, or consumer devices for developing or deploying any system or process 

 

  • is substantially owned, operated, or controlled by a person, partnership, or corporation that meets the above two requirements 

 

  • has greater than $5,000,000 in average annual gross receipts, or is deemed to have greater than $25,000,000 in equity value, over the preceding three tax years 

 

  • deploys any system that is developed for implementation or use, or that the person, partnership, or corporation reasonably expects to be implemented or used, in a process  

 

  • has met any of the above criteria in the preceding three years 
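
To make these thresholds concrete, here is a minimal Python sketch of the coverage test as summarized in the list above. The `EntityProfile` fields, the `is_covered_entity` helper, and the example lender are hypothetical illustrations rather than statutory language; the dollar figures and the one-million-identifier threshold are taken from the criteria above.

```python
from dataclasses import dataclass

@dataclass
class EntityProfile:
    """Hypothetical snapshot of the facts the coverage test looks at."""
    avg_annual_gross_receipts: int      # USD, averaged over the preceding three tax years
    deemed_equity_value: int            # USD, deemed equity value over the same period
    identifiers_used: int               # consumers, households, or devices whose
                                        # identifying information the entity uses
    controlled_by_covered_entity: bool  # substantially owned, operated, or controlled
                                        # by an entity meeting the size/data criteria
    deploys_system_for_acdp: bool       # deploys a system developed or reasonably expected
                                        # to be used in an augmented critical decision process
    met_criteria_in_last_3_years: bool  # met any of the criteria in the last three years

def is_covered_entity(deploys_process: bool, entity: EntityProfile) -> bool:
    """Mirrors the article's summary: the entity deploys an augmented critical
    decision process and satisfies any one of the listed criteria."""
    if not deploys_process:
        return False
    return any([
        entity.avg_annual_gross_receipts > 50_000_000
        or entity.deemed_equity_value > 250_000_000,
        entity.identifiers_used > 1_000_000,
        entity.controlled_by_covered_entity,
        entity.avg_annual_gross_receipts > 5_000_000
        or entity.deemed_equity_value > 25_000_000,
        entity.deploys_system_for_acdp,
        entity.met_criteria_in_last_3_years,
    ])

# Example: a mid-sized lender running an automated credit-decision process
# clears the $5,000,000 receipts threshold and is therefore covered.
lender = EntityProfile(
    avg_annual_gross_receipts=8_000_000,
    deemed_equity_value=10_000_000,
    identifiers_used=120_000,
    controlled_by_covered_entity=False,
    deploys_system_for_acdp=True,
    met_criteria_in_last_3_years=False,
)
print(is_covered_entity(deploys_process=True, entity=lender))  # True
```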

 

Essentially, covered entities must perform impact assessments, both prior to and after deployment, of (1) any deployed system that was developed for implementation or use, or that the covered entity reasonably expects to be implemented or used, in an augmented critical decision process, and (2) any augmented critical decision process itself. 

 

The covered entity must also maintain documentation of any impact assessment performed for three years longer than the duration for which the system or process is deployed (a simple date calculation is sketched after the list below). Some other main requirements include: 

 

  • disclosing status as a covered entity  

 

  • submitting to the FTC an annual summary report for ongoing impact assessment of any deployed system or process (in addition to the initial summary report that is required for new systems or processes) 

 

  • consulting with relevant internal stakeholders (such as employees and ethics teams) and independent external stakeholders (such as civil society groups and technology experts) as frequently as necessary 

 

  • attempting to eliminate or mitigate, in a timely manner, any impact that affects a consumer’s life 
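
As a rough illustration of the retention rule mentioned before this list, the following sketch computes the earliest date on which impact-assessment documentation could be discarded, assuming retention runs for three years from the end of deployment. The `retention_end` helper is a hypothetical name, not anything defined in the Act.

```python
from datetime import date

def retention_end(deployment_end: date) -> date:
    """Earliest date the impact-assessment records may be discarded:
    three years beyond the end of deployment, per the rule above."""
    try:
        return deployment_end.replace(year=deployment_end.year + 3)
    except ValueError:
        # Deployment ended on Feb 29 and the target year is not a leap year.
        return date(deployment_end.year + 3, 3, 1)

# A process retired on 2024-06-30 needs its documentation kept until 2027-06-30.
print(retention_end(date(2024, 6, 30)))  # 2027-06-30
```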


When it comes to the impact assessment, covered entities need to consider several things, depending on whether the system or process is new or ongoing. For example, for new systems or processes, covered entities must: 


  • provide any necessary documentation  

 

  • describe the baseline process being enhanced or replaced by the augmented critical decision process 

 

  • include information regarding any known harm, shortcoming, failure case, or material negative impact on consumers of the previously existing process used to make the critical decision 

 

  • include information on the intended benefits of and need for the process, and 

 

  • describe the intended purpose of the system or process 

 

It is also important to note that covered entities must, in accordance with National Institute of Standards and Technology (NIST) or other Federal Government best practices and standards, perform ongoing testing and evaluation of the privacy risks and privacy-enhancing measures of the system or process. Some examples of this include assessing and documenting the data minimization practices, assessing the information security measures that are in place, and assessing and documenting the current and potential future or downstream positive and negative impacts of these systems or processes. 

 

With respect to training and education, all employees, contractors, and other agents must be trained on any documented material negative impacts on consumers from similar systems or processes, as well as on any improved methods of developing or performing an impact assessment for such systems or processes, based on industry best practices and relevant proposals and publications from experts such as advocates, journalists, and academics. 

 

Covered entities must also maintain and keep updated documentation of any data or other input used to develop, test, maintain, or update the system or process, including its sourcing: metadata about the structure and type of the data, an explanation of the collection methodology, and whether consumers provided informed consent for the inclusion and further use of data or other input information about themselves. Other factors to consider include why the data was used and what alternatives were explored, evaluations of the rights of consumers, and assessments of explainability and transparency. A hypothetical record capturing these factors is sketched below. 
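
To show what such dataset documentation might look like in practice, here is a minimal sketch. The `DatasetRecord` structure, its field names, and the example values are all hypothetical, drawn from the factors listed above rather than from the Act's text.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Hypothetical provenance record for one dataset used to develop, test,
    maintain, or update a system or process, per the factors listed above."""
    name: str
    structure_and_type: str          # metadata about the data's structure and type
    collection_methodology: str      # how the data was gathered
    informed_consent_obtained: bool  # did consumers consent to inclusion and further use?
    purpose: str                     # why this data was used
    alternatives_explored: list[str] = field(default_factory=list)

# Example entry for a hypothetical credit-decision model's training data.
training_data = DatasetRecord(
    name="loan_application_history",
    structure_and_type="tabular; categorical and numeric applicant features",
    collection_methodology="exported from the loan-origination system, 2019-2023",
    informed_consent_obtained=True,
    purpose="train the credit-decision model's approval score",
    alternatives_explored=["synthetic applicant data", "bureau-only features"],
)
```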

   

Covered entities are also responsible for identifying any capabilities, tools, standards, datasets, security protocols, improvements to stakeholder engagement, or other resources that may be necessary or beneficial to improving the system, the process, or the impact assessment itself, in areas such as: performance, including accuracy, robustness, and reliability; fairness, including bias and non-discrimination; transparency, explainability, contestability, and opportunity for recourse; privacy and security; personal and public safety; efficiency and timeliness; cost; or any other area the FTC determines appropriate. 

 

The FTC will publish an annual report summarizing the information in the summary reports it receives, and will maintain a public repository designed to publish a limited subset of the information about each system and process for which it received a summary report. The FTC will also publish guidance and technical assistance. 

 

Most importantly, covered entities should note that the FTC can enforce the Act, as it has broad powers to investigate and find violations involving unfair or deceptive acts or practices. Moreover, States can bring a civil action on behalf of residents in the State to obtain appropriate relief. 

 

How is this Act different from the EU’s Artificial Intelligence Act? And Canada? 

 

As discussed above, the American approach in the Act focuses solely on automated systems and processes deployed to render critical decisions. It is a standalone regime and is quite brief. 

On the other hand, the EU’s Artificial Intelligence Act (AI Act) covers a wider range of AI systems and provides nuanced regulatory requirements that are associated with the level of risk that an AI system poses to the public. In particular, the EU’s AI Act separates AI systems into three categories: 


  • unacceptable risk 

 

  • high-risk 

 

  • low/minimal risk 


Yet there are some similarities between the two approaches: both concern serious decisions that have a significant impact on consumers. 


What about Canada? Unfortunately, Canada does not have a law like the US Act described above or the EU AI Act. That said, there is something similar to the US Act in the Canadian public sector, namely the Directive on Automated Decision-Making (Directive). This Directive requires algorithmic impact assessments for each automated decision system deployed by a federal institution. However, the Directive does not apply to private sector businesses in Canada. 

 

When it comes to the private sector, we are still dealing with Bill C-27, which remains in its infancy and combines an update to PIPEDA’s privacy regime (the CPPA) with a brand-new AI law (AIDA). This legislative process may be delayed even further if there is an early election, which could happen at any time now that the NDP has prematurely ripped up the 2022 supply and confidence deal it struck to support Prime Minister Justin Trudeau’s minority government. 

 

Lastly, it is worth pointing out that neither the proposed CPPA nor AIDA tackles the concept of algorithmic accountability in the same way as the US or the EU. In fact, AIDA is completely lacking when it comes to algorithmic accountability. 
