Questions about this article?
Talk to the authors

Kelly Matthyssens
Counsel
Digital Law | ICT
On 12 July 2024, the Artificial Intelligence Act (AI Act) was published in the Official Journal of the European Union. This means that the AI Act will enter into force on 1 August 2024.
The AI Act seeks to promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety and fundamental rights. Because the AI Act takes the form of a Regulation, it will apply directly in all member states of the European Union, creating a harmonized internal market. It classifies AI systems according to their risk and imposes obligations that depend on that risk and on the role an operator plays in the AI value chain. It also lays down specific obligations for general-purpose AI models, establishes the necessary governance at Union and national level, and provides for penalties in case of infringement of its rules.
The AI Act provides for a phased application of its obligations. The bulk of the obligations will become applicable on 2 August 2026, with four important exceptions.
The general provisions (subject matter, scope, definitions and the principle of AI literacy) and the prohibition of certain AI practices will already become applicable on 2 February 2025.
The following provisions will become applicable on 2 August 2025:
- the rules on notifying authorities and notified bodies;
- the obligations for providers of general-purpose AI models;
- the governance framework at Union and national level;
- the confidentiality obligations; and
- the provisions on penalties, with the exception of the fines for providers of general-purpose AI models.
Finally, on 2 August 2027, the classification as high-risk will become applicable to AI systems that are intended to be used as a safety component of a product, or that are themselves a product, covered by certain Union harmonization legislation, where that product must undergo a third-party conformity assessment. From that moment, operators of such AI systems will also be subject to the corresponding obligations for high-risk AI systems. Note, however, that the AI Act also identifies key areas in which AI systems will be considered high-risk; those provisions will become applicable on 2 August 2026.
Given this phased application, it is essential for companies that develop AI systems, that use AI systems in their internal operations (e.g. for HR purposes) or as part of their business (e.g. AI systems that help make decisions about natural persons, or chatbots on websites), or that import or distribute AI systems (e.g. resellers), to set out a roadmap towards compliance now, so that the relevant deadlines can be met.
Action Points