
Artificial Intelligence Act in the European Union

EU's AI Act came into force on August 1, 2024

By Dr. Christina Catenacci

Sept 6, 2024

Key Points


  1. The AI Act came into force on August 1, 2024, and begins to apply on August 2, 2026 


  2. The more risk there is, the more stringent the obligations in the AI Act 


  3. There are serious administrative fines for noncompliance 



On August 1, 2024, the Artificial Intelligence Act (AI Act) entered into force.  


Pursuant to Article 113, it begins to apply on August 2, 2026. Now referred to as the gold standard in AI regulation, the AI Act is the world's first comprehensive AI law. 


In fact, creating the AI Act was part of the EU’s digital transformation. 


As one of the EU’s priorities, the digital transformation involves the integration of digital technologies by companies. For instance, digital platforms, the Internet of Things, cloud computing, and AI are all among the technologies involved in this digital revolution. In terms of AI, the EU sees several benefits, including improved healthcare, a competitive advantage for businesses, and support for the green economy. 


First proposed in 2021, the AI Act classifies AI applications according to the risk they pose to users. The more risk, the more regulation is required. The main goal is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. One thing is clear: AI systems are to be overseen by humans. 


Since there are different rules for different risk levels, we can set out some of the obligations for providers and users depending on the level of risk from AI: 


  • Unacceptable risk: certain systems pose such a clear threat to people that they are banned outright. These include: cognitive behavioural manipulation of people or specific vulnerable groups; social scoring; biometric identification and categorisation of people; and real-time and remote biometric identification systems such as facial recognition 

 

  • High risk: systems that negatively affect safety or fundamental rights are considered high risk and fall into two categories: AI systems used in products covered by the EU’s product safety legislation, such as toys, cars, and medical devices; and AI systems in specific areas that must be registered in an EU database, such as the management and operation of critical infrastructure, employment, and law enforcement. These systems must be assessed before being placed on the market and throughout their lifecycle 

 

  • Specific transparency risk: systems like chatbots must clearly inform users that they are interacting with a machine, AI-generated content must be labelled, and systems must be designed so that they do not generate illegal content. For example, generative AI (like ChatGPT) must comply with transparency requirements and copyright law  

 

  • Minimal risk: most AI systems, such as spam filters and AI-enabled video games, face no obligations under the AI Act, but companies can voluntarily adopt additional codes of conduct. 


Article 1 clearly sets out the purpose of the AI Act: to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the EU and supporting innovation.  


It lays down: harmonized rules for the placing on the market, the putting into service, and the use of AI systems in the EU; prohibitions of certain AI practices; specific requirements for high-risk AI systems and obligations for operators of such systems; harmonized transparency rules for certain AI systems; harmonized rules for the placing on the market of general-purpose AI models; rules on market monitoring, market surveillance, governance and enforcement; and measures to support innovation, with a particular focus on SMEs, including start-ups. 


The AI Act applies to the following: 


  • providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the EU, irrespective of whether those providers are established or located within the EU or in a third country 

 

  • deployers of AI systems that have their place of establishment or are located within the EU 

 

  • providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the EU 

 

  • importers and distributors of AI systems 

 

  • product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark 

 

  • authorized representatives of providers that are not established in the EU 

 

  • affected persons that are located in the EU 


This provision is very important for Canadians because, similar to the EU’s General Data Protection Regulation (GDPR), the AI Act can apply to organizations located in a third country outside the EU. That is, providers that place on the market or put into service AI systems, or place on the market general-purpose AI models, in the EU need to pay attention to and comply with the requirements of the AI Act. The same is true for providers and deployers of AI systems where the output produced by the AI system is used in the EU. 


Why is this important? In addition to the list of prohibited practices in Article 5 and the numerous obligations set out in the rest of the AI Act, there is a penalty provision in Article 99 that is critical for private-sector entities to understand: noncompliance with the prohibited AI practices referred to in Article 5 is subject to administrative fines of up to €35,000,000 or, if the offender is an undertaking, up to seven percent of its total worldwide annual turnover for the preceding financial year, whichever is higher.  


Moreover, noncompliance with Articles 16, 22, 23, 24, 26, 31, 33, 34, or 50 is subject to administrative fines of up to €15,000,000 or, if the offender is an undertaking, up to three percent of its total worldwide annual turnover for the preceding financial year, whichever is higher. Member States must notify the Commission of the penalties imposed. 
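
To see how the "whichever is higher" rule plays out in practice, here is a minimal illustrative sketch in Python (the function name and the example turnover figure are hypothetical, not part of the AI Act): it compares the flat cap to the relevant percentage of worldwide annual turnover and returns the larger amount, which is the ceiling for an undertaking's fine.

    def max_fine_ceiling_eur(prohibited_practice: bool, worldwide_annual_turnover_eur: float) -> float:
        """Illustrative only: maximum fine ceiling for an undertaking under Article 99.

        Article 5 breaches: up to EUR 35,000,000 or 7% of worldwide annual turnover.
        Breaches of Articles 16, 22, 23, 24, 26, 31, 33, 34, or 50: up to
        EUR 15,000,000 or 3% of worldwide annual turnover.
        In both cases the higher amount applies; the actual fine depends on the
        factors listed below and is set by the competent authorities.
        """
        if prohibited_practice:
            flat_cap_eur, percent_of_turnover = 35_000_000, 7
        else:
            flat_cap_eur, percent_of_turnover = 15_000_000, 3
        return max(flat_cap_eur, worldwide_annual_turnover_eur * percent_of_turnover / 100)

    # Example: an undertaking with EUR 2 billion in worldwide annual turnover
    print(max_fine_ceiling_eur(True, 2_000_000_000))   # 140000000.0 (7% exceeds the EUR 35M cap)
    print(max_fine_ceiling_eur(False, 2_000_000_000))  # 60000000.0 (3% exceeds the EUR 15M cap)
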


When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation must be taken into account, and due regard must be given to the following: 


  • the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system, as well as, where appropriate, the number of affected persons and the level of damage suffered by them 

 

  • whether administrative fines have already been applied by other market surveillance authorities to the same operator for the same infringement 

 

  • whether administrative fines have already been applied by other authorities to the same operator for infringements of other EU or national law, when such infringements result from the same activity or omission constituting a relevant infringement of the Act 

 

  • the size, the annual turnover and market share of the operator committing the infringement 

 

  • any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement 

 

  • the degree of cooperation with the national competent authorities, in order to remedy the infringement and mitigate the possible adverse effects of the infringement 

 

  • the degree of responsibility of the operator taking into account the technical and organizational measures implemented by it 

 

  • the manner in which the infringement became known to the national competent authorities, in particular whether, and if so to what extent, the operator notified the infringement 

 

  • the intentional or negligent character of the infringement 

 

  • any action taken by the operator to mitigate the harm suffered by the affected persons 


Article 100 sets out administrative fines for EU institutions, bodies, offices and agencies (the public sector) falling within the scope of the AI Act. These fines can be hefty too, reaching up to €1,500,000. When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation must be taken into account, and due regard must be given to the following: 


  • the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system concerned, as well as, where appropriate, the number of affected persons and the level of damage suffered by them 

 

  • the degree of responsibility of the EU institution, body, office or agency, taking into account technical and organisational measures implemented by them 

 

  • any action taken by the EU institution, body, office or agency to mitigate the damage suffered by affected persons 

 

  • the degree of cooperation with the European Data Protection Supervisor in order to remedy the infringement and mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously ordered by the European Data Protection Supervisor against the EU institution, body, office or agency concerned with regard to the same subject matter 

 

  • any similar previous infringements by the EU institution, body, office or agency 

 

  • the manner in which the infringement became known to the European Data Protection Supervisor, in particular whether, and if so to what extent, the EU institution, body, office or agency notified the infringement 

 

  • the annual budget of the EU institution, body, office or agency 


As can be seen from the above discussion, we all need to pay attention to the gold standard of AI regulation. And Canadians (and organizations in other third countries) to which the AI Act applies should sit up and ensure that they are in compliance. 


The good news is that there is a grace period to allow organizations to come into compliance. Businesses would be well advised to use this time to learn as much as possible about the regulation and to hire competent professionals to help them comply. 
