
Convention on AI

Ten parties have signed. Will Canada?

By Dr. Christina Catenacci

Nov 1, 2024

Key Points 


 

  • The main principles in the Convention include: human dignity and individual autonomy; transparency and oversight; accountability and responsibility; equality and non-discrimination; privacy and personal data protection; reliability; and safe innovation

 

  • Parties need to adopt or maintain measures to ensure that the activities within the lifecycle of AI systems are consistent with obligations to protect human rights, democracy, and the rule of law; Canada has not yet signed the Convention

In September 2024, the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was opened for signature.


As of the date of publication of this article, the following have signed the Convention: the European Union, the United States, Israel, the United Kingdom, Norway, San Marino, the Republic of Moldova, Iceland, Georgia, and Andorra. Canada is not yet on this list.


What is in the Convention? 


The goal of the Convention is to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy, and the rule of law. Parties to the Convention need to adopt or maintain appropriate legislative, administrative, or other measures to give effect to the provisions set out in the Convention. The measures need to be graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy, and the rule of law throughout the lifecycle of AI systems.  


Under the Convention, an AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. 


The Convention applies to the activities within the lifecycle of AI systems that have the potential to interfere with human rights, democracy, and the rule of law. More specifically, each party must apply the Convention to the activities within the lifecycle of AI systems undertaken by public authorities or private actors acting on their behalf. Also, each party must address the risks and impacts arising from activities within the lifecycle of AI systems by private actors in line with the goal of the Convention. 


Pursuant to Article 3, which deals with scope, each party must specify in a declaration submitted to the Secretary General of the Council of Europe, at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession, how it intends to implement this obligation. For instance, Norway has already done so: 


“In accordance with Article 3, paragraph 1.b, of the Convention, the Kingdom of Norway declares that it shall apply the principles and obligations set forth in Chapters II to VI of this Convention to activities of private actors.” 


There are some exceptions in the Convention: parties are not required to apply the Convention to activities related to the protection of national security interests, including national defense. Similarly, the Convention does not apply to research and development activities regarding AI systems that have not yet been made available for use, unless testing is undertaken. 


General Obligations 


Under the Convention, parties must: 


  • adopt or maintain measures to ensure that the activities within the lifecycle of AI systems are consistent with obligations to protect human rights 

 

  • adopt or maintain measures that seek to ensure that AI systems are not used to undermine the integrity, independence, and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence, and access to justice 

 

  • adopt or maintain measures that seek to protect its democratic processes in the context of activities within the lifecycle of AI systems, including individuals’ fair access to and participation in public debate, as well as their ability to freely form opinions


Principles 


Some of the main principles that are referred to in the Convention include: 


  • human dignity and individual autonomy 

 

  • transparency and oversight 

 

  • accountability and responsibility 

 

  • equality and non-discrimination 

 

  • privacy and personal data protection 

 

  • reliability 

 

  • safe innovation 


Remedies, Safeguards, and Mitigation of Risks 


Parties must also adopt or maintain measures to ensure the availability of accessible and effective remedies for violations of human rights resulting from the activities within the lifecycle of AI systems. More specifically, the measures need to: 


  • ensure that relevant information regarding AI systems that have the potential to significantly affect human rights, and their relevant usage, is documented, provided to bodies authorised to access that information and, where appropriate and applicable, made available or communicated to affected persons 


  • ensure that the information is sufficient for the affected persons to contest the decision(s) made or substantially informed by the use of the system, and the use of the system itself, and 


  • provide an effective possibility for persons concerned to lodge a complaint with competent authorities 


Another important responsibility is that parties must ensure that, where an AI system significantly impacts the enjoyment of human rights, effective procedural guarantees, safeguards, and rights are available to affected persons, in accordance with applicable international and domestic law. In particular, parties need to ensure that persons interacting with AI systems are notified that they are interacting with such systems rather than with a human. 


Moreover, parties must take into account the principles noted in the Convention and adopt or maintain measures for the identification, assessment, prevention, and mitigation of risks posed by AI systems, considering actual and potential impacts on human rights, democracy, and the rule of law. These measures must be graduated and differentiated, as appropriate, and: 


  • take due account of the context and intended use of AI systems, in particular as concerns risks to human rights, democracy, and the rule of law 

 

  • take due account of the severity and probability of potential impacts 

 

  • consider, where appropriate, the perspectives of relevant stakeholders, in particular persons whose rights may be impacted 

 

  • apply iteratively throughout the activities within the lifecycle of the AI system 

 

  • include monitoring for risks and adverse impacts to human rights, democracy, and the rule of law 

 

  • include documentation of risks, actual and potential impacts, and the risk management approach, and 

 

  • require, where appropriate, testing of AI systems before making them available for first use and when they are significantly modified 


Parties must also adopt or maintain measures that seek to ensure that adverse impacts of AI systems on human rights, democracy, and the rule of law are adequately addressed. Such adverse impacts, and the measures taken to address them, should be documented and should inform the relevant risk management measures described above. Parties also need to assess the need for a moratorium, ban, or other appropriate measures in respect of certain uses of AI systems where they consider such uses incompatible with respect for human rights, the functioning of democracy, or the rule of law. 


Implementation of the Convention  


Parties must secure the implementation of the provisions of the Convention with regard to the following: 


  • non-discrimination 

 

  • the rights of persons with disabilities and children 

 

  • public consultation 

 

  • digital literacy and skills 

 

  • safeguard for existing human rights 

 

  • wider protection than what is stipulated in the Convention 

 

Furthermore, the parties need to consult periodically to facilitate the effective application and implementation of the Convention, the exchange of information on critical developments, and the cooperation of stakeholders. 


Parties also need to report to the Conference of the Parties within the first two years after becoming a party and periodically thereafter. The parties must also cooperate with one another in the realization of the purpose of the Convention. 


There are also oversight mechanisms in the Convention: parties must establish or designate one or more effective mechanisms to oversee compliance with the obligations in the Convention. Each party must ensure that such mechanisms exercise their duties independently and impartially, and that they have the necessary powers, expertise, and resources to effectively fulfil their task of overseeing compliance with the obligations in the Convention. 


What can we take from the Convention? 


This is the first-ever international legally binding treaty in this field. Although Canada may have participated in the negotiation of the Convention, it has not yet signed it. Given that Bill C-27 could die on the order paper at any moment because of an election, it is not clear whether Canada will make further progress on AI regulation in the near future. 
