Newsfeed
October 27, 2025
The European Union’s Artificial Intelligence Act (AI Act) carries significant implications for General-Purpose AI (GPAI) models.
GPAI models are characterised by their ability to perform a wide range of distinct tasks and display significant generality. They are not designed to perform a specific task but, rather, can be adapted and integrated into various downstream applications.
Common examples include OpenAI’s GPT series, which can generate text, translate languages, and answer questions on multiple topics, and image and video generation models, such as DALL-E and Midjourney.
The versatility of GPAI is both its strength and a source of regulatory concern, which is why the AI Act imposes specific rules on the development and deployment of these models.
The obligations for GPAI models came into effect on 2 August 2025; models placed on the market before that date have until 2 August 2027 to comply.
These obligations follow the AI Act’s approach to risk mitigation where the majority of obligations are placed on the providers of AI models, as opposed to the mere deployers (i.e. users of a third-party model).
Moreover, the Act distinguishes between so-called standard GPAI models and those that pose systemic risk, i.e. models that meet certain technical criteria indicating that they could have a significant impact on society or the economy.
All GPAI models are subject to the following core obligations:
- drawing up and maintaining technical documentation of the model, including its training and testing process;
- making information and documentation available to downstream providers that intend to integrate the model into their own AI systems;
- putting in place a policy to comply with EU copyright law, including the text-and-data-mining opt-out; and
- publishing a sufficiently detailed summary of the content used for training the model.
To the extent that a GPAI model is considered to pose systemic risk, its provider must comply with the following additional obligations:
- performing model evaluations, including conducting and documenting adversarial testing;
- assessing and mitigating possible systemic risks at Union level;
- tracking, documenting and reporting serious incidents to the AI Office and, where relevant, to national competent authorities; and
- ensuring an adequate level of cybersecurity protection for the model and its physical infrastructure.
To navigate this new regulatory reality, providers of GPAI models can look to the European Commission’s voluntary Code of Practice, the Guidelines for providers of GPAI models and the detailed template for summarising training data.
The focus has now shifted decisively from policy to active enforcement and compliance, shaping the future of responsible GPAI in the EU. Failure to adhere to the AI Act’s obligations carries severe fines of up to €15 million or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. This underscores how crucial it is to fully grasp the risks and responsibilities involved in providing and deploying AI tools before implementing them.
This article was first published in “The Corporate Times” on 26/10/2025