After intense negotiations, the Council presidency and the European Parliament have reached a provisional agreement on the Artificial Intelligence Act (AI Act). The final text is still being prepared, but several principles are already known.
The negotiation of the AI Act has not been an easy one. With the emergence and widespread adoption of foundation models, such as GPT, the negotiators found themselves confronted with significantly differing views: some European countries did not want to hinder these innovations too much, out of fear of being competed out of the market, while others argued that strict(er) legislation was needed. Despite these differing views, the negotiators have reached a provisional agreement on the whole of the AI Act, which will now be drafted into a final text and subsequently translated.
The negotiators agreed to align the definition of an AI system with the definition proposed by the OECD. While the exact wording remains to be seen and may vary slightly, the OECD definition reads as follows: an AI system is "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment." The main differences from the European Parliament's definition are the explicit focus on "inference from input" and the addition that an AI system can have varying degrees of adaptiveness. It remains to be seen whether these additions will also appear in the final text.
Prohibited AI practices and high-risk AI systems
The final text will keep the risk-based approach, built on three categories: (i) prohibited practices, (ii) high-risk AI systems and (iii) limited-risk AI systems. Given the considerable debate around the classification of AI systems as high-risk, it remains to be seen which AI systems will fall into the high-risk category and whether, and how, certain obligations have been modified. The obligations for the different actors in the AI value chain have also been clarified.
General purpose AI systems
General purpose AI systems (GPAI), including foundation models and generative AI systems, will follow a separate classification system that distinguishes between GPAIs posing systemic risks (to which significant obligations will apply) and other GPAIs (to which limited transparency obligations will apply).
The AI Office, as proposed by the European Parliament, has been maintained as an entity within the Commission and will be tasked with overseeing the most advanced AI models. It will receive support from a scientific panel of independent experts.
In addition, an AI Board, as initially proposed by the Commission, comprising member states' representatives, will be set up as a coordination platform and advisory body to the Commission.
Finally, an advisory forum will be set up for stakeholders (such as industry representatives, SMEs, start-ups, civil society and academia) to provide technical expertise to the AI Board.
Fines
From what we know, the fines have been modified slightly. The fine in each case is the higher of the following two amounts:
– 35.000.000 EUR or 7% of the company’s global annual turnover in the previous financial year for use of banned applications (was 40.000.000 EUR or 7% in the European Parliament proposal);
– 15.000.000 EUR or 3% of the company’s global annual turnover for violations of the AI act’s other obligations (was 10.000.000 EUR or 2% in the European Parliament proposal);
– 7.500.000 EUR or 1,5% of the company’s global annual turnover for supply of incorrect information (was 5.000.000 EUR or 1% in the European Parliament proposal).
The European Parliament proposal also foresaw a separate fine for violation of the articles relating to data governance and transparency (4.000.000 EUR or 4%). It seems this has not been retained, so violations of these articles would fall under the fine in the second bullet.
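The "highest of" mechanism above can be made concrete with a short sketch. This is purely illustrative: the function name, the violation labels and the tier mapping are our own, and the amounts reflect the provisional agreement as reported, which may still change in the final text.

```python
# Illustrative sketch of the "higher of fixed amount or turnover percentage"
# fine mechanism in the provisional AI Act agreement (amounts may change).

def max_fine(violation: str, global_annual_turnover: float) -> float:
    """Return the maximum possible fine in EUR for a given violation type:
    the higher of the fixed amount and the turnover-based percentage."""
    tiers = {
        # violation type: (fixed amount in EUR, % of global annual turnover)
        "banned_application": (35_000_000, 0.07),
        "other_obligation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.015),
    }
    fixed, pct = tiers[violation]
    return max(fixed, pct * global_annual_turnover)

# A company with EUR 1 billion global annual turnover using a banned application:
print(max_fine("banned_application", 1_000_000_000))  # 70000000.0 (7% exceeds EUR 35m)
```

For large companies the percentage will typically dominate, while for smaller companies the fixed amount sets the floor.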
Entry into force
The AI Act is expected to be formally approved and published in the Official Journal in the first half of next year. Most obligations will take effect 24 months after the AI Act's entry into force, with certain exceptions for specific provisions that are expected to apply at an earlier date.
Do not hesitate to contact us if you have questions!
Curious to learn more about AI and the law? Read our previous blog posts:
Artificial Intelligence: an introduction to our series of blogposts
AI and ethics: ethical challenges connected to AI
AI and ethics – is the EU fulfilling its own ambitions?
The current proposal of the AI act summarized
AI and (product) liability
Can Artificial Intelligence systems claim authorship under copyright law? – EY Law Belgium
The impact of the GDPR on AI – EY Law Belgium