Understanding the Council of Europe AI Treaty
This article offers a concise overview of the Council of Europe’s (CoE) recent efforts in shaping global artificial intelligence (AI) governance. The newly adopted Framework Convention on AI has important implications for member states and beyond, notably Canada. Organizations and employers beginning to navigate the AI landscape will want to assess the treaty’s widespread impact on businesses worldwide.
What is the Council of Europe?
The Council of Europe (CoE) is an international organization founded in 1949 to promote human rights, democracy and the rule of law in Europe. Based in Strasbourg, it includes 46 member states, among them the 27 European Union (EU) member countries. Distinct from EU bodies such as the European Council and the Council of the EU, the CoE is known for its creation of the European Convention on Human Rights (ECHR) and the establishment of the European Court of Human Rights, which oversees the implementation of the ECHR.
Canada was granted official observer status with the Council of Europe’s Committee of Ministers in 1996, and Canadian parliamentarians were granted the same status with the Parliamentary Assembly of the Council of Europe in 1997. The Deputy Head of Mission of the Mission of Canada to the European Union serves as Canada’s Permanent Observer to the Council of Europe. Canada has also been an observer to the Council of Europe’s European Commission for Democracy through Law (the Venice Commission) since 1991 and joined it as a full member in 2019. The Commission supports Canada’s and the EU’s collective efforts to advance democratic principles, human rights and fundamental freedoms across Europe and internationally.
What is AI?
Artificial intelligence (AI) refers to machine-based systems that make decisions, predictions and recommendations based on input data, enabling computers and machines to simulate human intelligence and problem-solving capabilities. These systems influence both physical and virtual environments and are used across various sectors, including public services and private enterprises. AI encompasses a wide range of technologies, from simple algorithms to complex machine learning models.
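To make this definition concrete, here is a minimal sketch, in Python with scikit-learn, of a machine-based system that produces a decision and a prediction from input data. The loan-screening framing, the feature names and all the numbers are purely illustrative assumptions, not drawn from the treaty or from any real system.

```python
# A minimal sketch of a "machine-based system" that makes predictions
# from input data, using scikit-learn. The loan-approval framing and
# all feature values are hypothetical, purely for illustration.
from sklearn.linear_model import LogisticRegression

# Toy training data: [annual_income_kCAD, years_employed]
X_train = [[40, 1], [85, 6], [30, 0], [95, 10], [55, 3], [70, 8]]
y_train = [0, 1, 0, 1, 0, 1]  # 0 = application declined, 1 = approved

model = LogisticRegression()
model.fit(X_train, y_train)

applicant = [[60, 4]]
print(model.predict(applicant))        # the predicted decision, e.g. [1]
print(model.predict_proba(applicant))  # the probabilities behind that decision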
Why was a treaty necessary?
The development and use of AI pose significant risks to human rights, democracy and the rule of law. Simply put, AI systems can infringe on basic rights such as non-discrimination, freedom of expression, privacy and access to justice, and can deepen socioeconomic inequality. They can also undermine democratic principles: many systems operate without transparency, offering little explanation of what data they use, how they reach their conclusions, or why they may make biased or unsafe decisions. AI can also deceive and evade accountability, for example by generating fake or misleading content such as deepfakes, fake news or fake reviews that can manipulate people. Given these potential negative impacts, the Council of Europe recognized the need for a legally binding international treaty to govern AI responsibly and protect these fundamental values.
How does AI relate to people’s rights?
According to Kate Jones, Associate Fellow, International Law Programme at Chatham House,
“Governments and companies are already deploying AI to assist in making decisions that can have major consequences for the lives of individual citizens and societies.
AI offers far-reaching benefits for human development but also presents risks. These include, among others, further division between the privileged and the unprivileged; erosion of individual freedoms through surveillance; and the replacement of independent thought and judgement with automated control.
Human rights are central to what it means to be human. They were drafted and agreed, with worldwide popular support, to define freedoms and entitlements that would allow every human being to live a life of liberty and dignity. AI, its systems and its processes have the potential to alter the human experience fundamentally. But many sets of AI governance principles produced by companies, governments, civil society and international organizations do not mention human rights at all. This is an error that requires urgent correction.”
You can read more in her paper on AI governance and human rights: https://www.chathamhouse.org/2023/01/ai-governance-and-human-rights
For example, AI can be used to detect social benefits fraud, monitor workplaces and predict criminal behaviour. However, these uses often raise concerns about discrimination, privacy violations, lack of transparency and accountability.
The treaty aims to ensure that AI systems are designed and used in ways that uphold human rights, democracy and the rule of law, providing safeguards and legal remedies for individuals whose rights are affected by AI.
Who is bound by the treaty?
The treaty is legally binding on the states that sign and ratify it, both Council of Europe member states and the non-member states that took part in its development, including Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America and Uruguay. While the convention is primarily aimed at state parties, it indirectly affects businesses and other private entities: they must comply with the national laws and regulations enacted by the signatory states to implement the treaty’s provisions. This approach allows flexibility for different legal systems and respects international human rights obligations.
What does the treaty accomplish, and when was it adopted?
AI regulation has become a focus for dozens of countries, and the European Union and many others are now working to create clearer measures to manage the rising sophistication of artificial intelligence. Most aim to regulate how AI is used rather than to hold back progress in the underlying technology.
The Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law was adopted on May 17, 2024, and will be opened for signature on September 5, 2024, in Vilnius, Lithuania. The treaty establishes a comprehensive legal framework to ensure AI systems are developed and used in ways that respect human rights, democracy and the rule of law. It includes provisions for transparency, accountability, non-discrimination and privacy protection, requiring states to implement these principles through national legislative and regulatory measures.
How might this relate to organizations and employers?
Organizations and employers, particularly those that use or plan to use AI systems and operate across borders, will be significantly affected by the treaty. They will need to ensure that their use of AI complies with the national laws and regulations enacted by signatory states to implement it. This means adopting measures for transparency, accountability and non-discrimination in AI applications, such as employee surveillance or automated decision-making processes. Organizations and employers must also be prepared for potential oversight and be ready to provide legal remedies for customers and employees whose rights might be affected by AI systems.
There are many steps businesses can take when integrating AI into their operations. Organizations can develop processes for monitoring algorithms, compiling high-quality data and explaining the findings of AI algorithms, as sketched in the example below. Leaders could even make AI part of their company culture and routine business discussions, establishing standards to determine acceptable AI technologies.
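As one illustration of what such processes might look like in practice, the sketch below logs each automated decision for later audit and computes per-group approval rates as a first-pass non-discrimination check. It is a minimal sketch under stated assumptions: the field names, the "approved"/"declined" labels and the idea of comparing selection rates are illustrative choices, not requirements set out in the treaty.

```python
# A rough sketch of two practices mentioned above: logging automated
# decisions so they can be explained later, and a simple disparity
# check across groups. All field names and labels are assumptions
# for illustration, not requirements drawn from the treaty itself.
import json
from datetime import datetime, timezone
from collections import defaultdict

AUDIT_LOG = []

def log_decision(inputs: dict, decision: str, model_version: str) -> None:
    """Record every automated decision for transparency and audit."""
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
    })

def selection_rates(records: list, group_key: str) -> dict:
    """Approval rate per group: a first-pass non-discrimination check."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for r in records:
        group = r["inputs"].get(group_key, "unknown")
        totals[group] += 1
        approvals[group] += (r["decision"] == "approved")
    return {g: approvals[g] / totals[g] for g in totals}

# Example: log two hypothetical screening decisions, then compare rates.
log_decision({"region": "A", "score": 72}, "approved", "screener-v1")
log_decision({"region": "B", "score": 70}, "declined", "screener-v1")
print(json.dumps(selection_rates(AUDIT_LOG, "region"), indent=2))
```

Comparing the resulting rates, for instance flagging large gaps between groups for human review, is only a starting point; a real compliance programme would pair such checks with documentation, human oversight and avenues for redress, in line with the treaty’s transparency and remedy provisions.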
Takeaways
The Council of Europe’s AI treaty marks a significant step towards regulating artificial intelligence on an international scale. By addressing the lifecycle of AI systems and their potential risks, the treaty aims to balance innovation with the protection of human rights, democracy and the rule of law. As countries begin to adopt and implement this treaty, it will shape the future governance of AI and ensure that its development aligns with the core values of a democratic society.
Stanford University AI researchers Fei-Fei Li and John Etchemendy make this argument in a 2019 blog post that calls for national and global leadership in regulating artificial intelligence:
“The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer-interaction, psychology, and Science and Technology Studies (STS).”
Mike Thomas, senior features writer at Built In, adds in his article, 12 Risks and Dangers of Artificial Intelligence (AI):
“Balancing high-tech innovation with human-centered thinking is an ideal method for producing responsible AI technology and ensuring the future of AI remains hopeful for the next generation. The dangers of artificial intelligence should always be a topic of discussion, so leaders can figure out ways to wield the technology for noble purposes.”