The AI Act aims to regulate the use of Artificial Intelligence (AI) technology in the EU. The legislation was adopted on June 13, 2024 and will apply gradually to AI systems from February 2025. Its aim is to establish a legal framework ensuring that AI systems used in Europe are safe, transparent and ethical, and that they respect fundamental rights.
The regulation is based on a risk classification that places AI systems in four categories: unacceptable risk, high risk, limited risk and minimal risk. Systems posing an unacceptable risk, such as those designed to manipulate people or exploit their vulnerabilities, will be banned. High-risk systems, used in critical areas such as health and infrastructure, will be subject to stringent requirements in terms of transparency, robustness and security. With this legal framework, the AI Act aims to encourage innovation while protecting people from potential abuse and discrimination related to the use of AI.
Winston Maxwell, law professor at Télécom Paris, and Anne-Sophie Taillandier, Director of the TeraLab platform (see below), look back on the origins and challenges of this regulatory and legal framework.
Why is it necessary to regulate AI?
Anne-Sophie Taillandier: It’s true that for many sectors, these new regulations won’t change much. Health and banking, for example, are already regulated and subject to extremely stringent requirements in terms of digital practices. The AI Act was created in response to the arrival of generative AI, which triggered an AI boom, with systems becoming easily accessible to the general public and widely used.
Winston Maxwell: The AI Act adopts a ‘product safety’ approach, inspired by legislation on medical devices. The Act bans the most extreme AI applications, such as those used to manipulate vulnerable people, but in practice its main impact will be on applications considered high risk. In these cases, the AI system will need to bear ‘CE’ marking, just as you see on the back of a toothbrush or a toy, to demonstrate its compliance with European regulations.
The AI Act also provides for specific exemptions to the prohibitions on applications posing an unacceptable risk. The use of facial recognition tools by the police, for example, is forbidden in principle, but there are exceptions, such as for anti-terrorism operations or criminal investigations into serious crimes.
What was the driving force behind the creation of this regulation?
WM: Big Tech companies operate in their own world, in which technology evolves so quickly that legislation cannot always keep pace. On AI, the European Commission has been very responsive: the AI Act was put forward very quickly to send a clear message that this technology has to comply with European principles.
This Act was also introduced to avoid fragmented national regulations. Given the political significance of AI, it would be tempting for each EU country to adopt its own rules. By putting forward the draft AI Act early on, the European Commission brought national initiatives together within a single, harmonized European project from the outset.
How will the measures of the AI Act be introduced?
WM: Every provider of a high-risk AI system will need to demonstrate that its product complies with the requirements set out in the AI Act. These requirements will be specified by a set of harmonized standards. The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) are working together on developing these standards, which will then be presented to the European Commission for validation.
AST: The standard-setting process is sometimes long and requires extensive consultations and reviews until a consensus is reached that satisfies EU legislative and regulatory requirements, as was the case with the Data Governance Act (DGA), for example.
WM: Each country should also have its own coordination or supervisory authority, which is not yet the case, even though the French Data Protection Authority (CNIL) has already positioned itself as the institution specializing in AI. This supervisory authority will most likely differ by country and by sector, but in France it may be a body working closely with the Prime Minister, both to guarantee a high level of protection of fundamental rights and to protect our start-ups on the economic front.
How is this regulation perceived by digital players?
AST: Up until now, it was the Wild West, and tech companies benefited from the situation. They all appear to be in favor of regulation, but it is always more complicated in practice, and most of them are naturally not too keen on restrictions. The problem is that these players are also very powerful, and it is not easy to set standards if they are not on board.
WM: On the other hand, some of the tech giants use regulations to their advantage. They see an opportunity to play the ‘compliance’ card and win market share. Complex regulations such as the AI Act require costly compliance procedures that the major players are used to, particularly thanks to the General Data Protection Regulation (GDPR). There is therefore a concern that the regulations will, on the whole, benefit big tech companies at the expense of SMEs, even if the AI Act includes provisions in their favor. In France, this particularly concerns Mistral AI [a French start-up founded in April 2023 that develops large language models rivaling major chatbots such as ChatGPT], and it would be a real shame to jeopardize such a promising project!
Could the restrictions from the AI Act hinder innovation?
WM: In fact, the GDPR already regulates a tremendous number of practices in the field of AI, because many applications involve the use of personal data. When it was first introduced, everyone thought the GDPR would be a hindrance, but nobody has managed to prove that this is the case, because the market itself began moving towards a more responsible approach. The United States does not have its own GDPR, but it has used other legal means to prohibit abusive practices by certain tech companies. Facebook practices that have been singled out in Europe have also been targeted on the other side of the Atlantic, sometimes with even bigger fines.
Today, the GDPR has become a global reference and a great number of countries outside Europe have looked into implementing similar regulations. As for the AI Act, it is still too soon to know whether other countries will follow the European example.
AST: The AI Act does not regulate algorithms as such, but rather their uses and the different levels of risk associated with them. And as we said, some sectors are already monitored and regulated very closely. The DGA might have more of an impact, because it obliges data-sharing systems to respect strict data protection and privacy standards. There are also the Digital Markets Act (DMA) and the Digital Services Act (DSA). Several regulations have to be considered together to ensure that AI systems are responsible. But in fact, even though they seem restrictive, all these regulations actually encourage innovation, because an investment in a European project is subject to a single set of specifications, which facilitates international cooperation.
What’s the current situation?
WM: We’re currently all trying to understand the 144 pages of the AI Act! My colleague Thomas Le Goff has created an educational tool, the AI Act Game, which provides a fun way of studying the text. Beyond understanding the regulation, a great deal of standardization work remains to be done within CEN-CENELEC, because it is going to be challenging to reach a consensus among all stakeholders on the technical specifications for trustworthy AI. Europe has opted for a regulatory approach based on harmonized standards, but we still don’t know whether this approach can address issues related to fundamental rights. It will be a first!
AST: What really matters is to demonstrate that democratic processes have been followed, and that the EU remains vigilant on this issue.
A former associate at Hogan Lovells and a member of the New York and Paris Bars, Winston Maxwell is now a law professor at Télécom Paris. His work in education and research focuses on data and artificial intelligence regulation. He co-directs the ‘Operational AI Ethics’ program at Télécom Paris.
Since 2015, Anne-Sophie Taillandier has been the director of TeraLab, a platform specializing in big data and AI, supported by Institut Mines-Télécom. TeraLab is a trusted third party that puts organizations in contact with research laboratories and innovative companies to help them overcome the scientific and technical obstacles they face. TeraLab also plays a key role in the Gaia-X initiative, a project involving more than 300 organizations in the development of the architecture, technical rules and trusted ecosystem needed for digital data to be shared and made available.