Hot on the heels of the publication of the EU's voluntary Code of Practice for general-purpose AI (GPAI) models, the European Commission has now published its guidelines clarifying the scope of certain obligations set out in the EU AI Act, as required by Article 96(1) EU AI Act. This development comes ahead of the EU AI Act's obligations for GPAI models becoming applicable on 2 August 2025, in advance of the phased implementation of the rest of the EU AI Act.
Importantly, the guidelines provide more colour on what a GPAI model is, who is a ‘provider’, when a GPAI model will be ‘placed’ on the EU market and how model creators can estimate the amount of computational resource used to train a GPAI model. They also provide clarity on the role the AI Office will play and how it will work with providers from a compliance perspective. Whilst the guidelines are not binding (the Court of Justice of the EU retains the ability to interpret the provisions of the EU AI Act definitively), they offer a useful insight into current Commission thinking and complement the voluntary Code of Practice issued earlier this month (discussed in our recent insight here).
What is a GPAI model?
Recognising the rapid pace of innovation in the AI space, the EU AI Act and the guidelines do not attempt to set out a precise checklist of capabilities that a model must possess, or tasks it must perform, to be considered a GPAI model. Instead, they use the amount of computational resource used to train the model as a key parameter for determining whether a model is a GPAI model, measured in FLOP (floating-point operations). Models trained using more than 10²³ FLOP which can generate language (in text or audio), text-to-image or text-to-video are considered to be GPAI models; this threshold corresponds to the approximate amount of compute typically used (at the time of writing) to train a model with one billion parameters on a large amount of data. Models trained using more than 10²⁵ FLOP will be presumed to be GPAI models with systemic risk. The guidelines also include a range of examples of models likely to be in (and out of) scope of the GPAI model rules. The Commission also recognises that model development is an iterative process, which will impact: (i) the documentation required to be maintained over the lifecycle of a model (including the copyright policy and the summary of the content used to train the model); and (ii) the obligations to carry out systemic risk assessment and mitigation.
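By way of illustration only (our own back-of-the-envelope arithmetic, not part of the guidelines), the widely used approximation of roughly 6 FLOP per model parameter per training token shows why the 10²³ FLOP figure corresponds to a model of around one billion parameters trained on a large amount of data. The token count used below is an illustrative assumption:

```python
# Back-of-the-envelope sketch (not from the guidelines): training compute
# is commonly approximated as 6 * parameters * training tokens.

def training_flop(parameters: float, tokens: float) -> float:
    """Approximate training compute in FLOP using the 6 * N * D heuristic."""
    return 6 * parameters * tokens

# Illustrative assumption: a ~1-billion-parameter model trained on ~2e13 tokens.
flop = training_flop(1e9, 2e13)
print(f"{flop:.1e} FLOP")  # 1.2e+23 -- around the 10^23 FLOP figure

# Indicative comparison against the thresholds discussed above (the FLOP
# figure is one parameter of the assessment, not the sole test):
print(flop > 1e23)  # True  -- in the region of the GPAI threshold
print(flop > 1e25)  # False -- below the systemic-risk presumption
```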
Model providers must also notify the Commission without undue delay and, in any event, within two weeks, once it is known (or foreseeable) that a model has, or might have, ‘high impact capabilities’ and should therefore be considered a model with systemic risk.
Who is a provider?
The guidelines include a list of indicative examples of persons who will be considered a provider, including: the developer; a person who commissions the development of a model from a third party and places it on the market; and a person who uploads a model to an online repository. Where a model is developed by a consortium, the coordinator of the consortium may be the provider, although this will be assessed on a case-by-case basis.
When is a GPAI model ‘placed’ on the market?
A model will be placed on the market in the EU when it is first made available for distribution or use, whether for payment or free of charge. This includes where the model forms part of a software library or package, is made available via an API, is uploaded to a public catalogue or hub for direct download, is made available via a cloud computing service, or is integrated into a chatbot or a mobile application.

The guidelines also address the role of upstream actors, where a GPAI model is integrated into an AI system and that AI system is then placed on the EU market or put into service in the EU by a different, downstream entity. The upstream actor will likely come within the scope of the EU AI Act as a provider of a GPAI model, unless it has clearly excluded the distribution and use of the model in the EU (including its integration into AI systems that are intended to be placed on the EU market or put into service in the EU). Where the upstream actor has done this, the downstream actor that integrates the model into a system and places that system on the EU market, or puts it into service in the EU, will be considered the provider of the model. Parties acting downstream as modifiers of GPAI models will be considered a ‘provider’ of the modified GPAI model only if the modification leads to a significant change in the model’s generality, capabilities, or systemic risk.
Note that there are some exemptions from EU AI Act obligations for providers of GPAI models released under a free and open-source licence ‘that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available’, as long as the model is not a general-purpose AI model with systemic risk. The guidelines note that if a licence is limited to non-commercial or research-only use, the model will not fall within the exemption. They also discuss what will count as a ‘free’ licence, identifying that monetisation should be understood to include not only the provision of the model for a price but also other types of monetisation strategy (such as charging for technical support, training or maintenance).
Estimating compute resource
Annex 1 provides guidance on interpreting the training compute thresholds and on devising a robust estimate of the computational resource used to train a model, either by tracking graphics processing unit (GPU) usage or by estimating operations directly based on the relevant model’s architecture. It also includes a number of worked examples.
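As a rough sketch of the hardware-tracking approach (the accelerator count, peak throughput and utilisation rate below are illustrative assumptions, not figures from Annex 1), cumulative training compute can be estimated from logged GPU usage:

```python
# Hardware-based sketch (illustrative figures only): cumulative compute is
# roughly GPUs * training time * peak throughput * achieved utilisation.

def hardware_flop_estimate(num_gpus: int, training_days: float,
                           peak_flop_per_sec: float, utilisation: float) -> float:
    """Estimate training compute in FLOP from GPU usage records."""
    seconds = training_days * 24 * 3600
    return num_gpus * seconds * peak_flop_per_sec * utilisation

# e.g. 1,000 accelerators for 30 days at a peak of 1e15 FLOP/s, 40% utilisation:
estimate = hardware_flop_estimate(1000, 30, 1e15, 0.4)
print(f"{estimate:.1e} FLOP")  # ~1.0e+24
```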
Enforcement approach
For providers of GPAI models that adhere to a code of practice assessed as adequate, the Commission will focus its enforcement activities on monitoring their adherence to the code. Providers that do not adhere to such a code of practice will be expected to demonstrate how they comply with their obligations under the EU AI Act via other adequate means, and will have to report the measures they have implemented to the AI Office. The AI Office will take a ‘collaborative, staged, and proportionate’ approach and expects providers to report proactively, facilitating compliance and timely market placement.

Given these timelines, there is a recognition that providers of models placed on the market before 2 August 2025 may face various challenges in complying with their obligations under the EU AI Act by 2 August 2027. The guidelines therefore note that such providers are not required to conduct retraining or unlearning of models where it is not possible to do this for actions performed in the past, where some of the information about the training data is not available, or where its retrieval would cause the provider a disproportionate burden. However, such instances must be clearly disclosed and justified in the copyright policy and in the summary of the content used for training. Importantly, the Commission’s enforcement powers only come into force from 2 August 2026.