The EU's General-Purpose AI Code of Practice: Key Takeaways

The EU has now issued its draft General-Purpose AI (GPAI) Code of Practice, which consists of three chapters covering transparency, copyright, and safety and security.

The Code is a voluntary guidance tool that supplements understanding of the obligations laid out in the EU's AI Act. It aims to ensure that GPAI models placed on the EU market, including the most powerful ones, are safe and transparent, and it sets out methods by which providers of GPAI models, and of GPAI models with systemic risk, can demonstrate compliance with the AI Act's relevant obligations.

The Code serves as a starting point for EU AI Act compliance. Following its publication, the European Commission will assess the adequacy of the Code and then supplement it with guidelines on GPAI models, which will be published before the rules applicable to providers of GPAI models come into force. These guidelines will clarify: (i) what constitutes a GPAI model; (ii) which GPAI models pose a systemic risk; and (iii) who is a 'provider' of a GPAI model.

Transparency

The transparency chapter illustrates how developers may choose to comply with Article 53 of the EU AI Act (the obligation to keep up-to-date model documentation for GPAI models) and Article 55 (obligations on providers of GPAI models with systemic risk).

It also includes a model documentation form that Code signatories can use to compile the compliance information required by the AI Act, with fields covering training data, the computational resources used for training, and energy consumption.
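
The exact fields are prescribed by the form in the Code itself; purely as an illustrative sketch of the kind of internal record a provider might keep, the Python below uses hypothetical field names (training_data_description, training_compute_flops, energy_consumption_kwh) that are not taken from the Code.

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class ModelDocumentation:
        # Hypothetical fields loosely mirroring the Code's documentation form;
        # the authoritative field list is in the Code's transparency chapter.
        model_name: str
        provider: str
        training_data_description: str   # sources and curation of training data
        training_compute_flops: float    # total compute used for training
        energy_consumption_kwh: float    # energy consumed during training

    doc = ModelDocumentation(
        model_name="example-gpai-model",
        provider="Example AI Ltd",
        training_data_description="Publicly available web text, filtered for lawful access",
        training_compute_flops=1.2e24,
        energy_consumption_kwh=3.5e6,
    )

    # Serialise for internal record-keeping or onward sharing.
    print(json.dumps(asdict(doc), indent=2))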

Copyright

The Code sets out a reminder that Code users should:

  • maintain a copyright policy;
  • copy and extract only lawfully accessible copyright works when using web crawler technology to source data;
  • identify and comply with rights reservation notices when using web crawler technology (a minimal example of honouring one such mechanism appears after this list);
  • implement appropriate and proportionate technical safeguards to mitigate the risk of copyright-infringing outputs, and prohibit copyright-infringing uses of the GPAI model in their terms of use; and
  • designate a point of contact and enable a complaints mechanism, so copyright holders can contact the model developer if concerned about the use of their copyright works by a GPAI model.
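
The Code does not mandate a particular technical protocol for rights reservations, but robots.txt is one widely used machine-readable mechanism. As a minimal sketch, assuming a hypothetical crawler identity of "ExampleGPAICrawler/1.0", Python's standard library can check whether a page may be fetched before any copying takes place:

    from urllib.robotparser import RobotFileParser

    # robots.txt is one machine-readable way a site can reserve rights
    # against crawling; the Code itself does not prescribe a protocol.
    parser = RobotFileParser()
    parser.set_url("https://example.com/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt

    user_agent = "ExampleGPAICrawler/1.0"  # hypothetical crawler identity
    url = "https://example.com/articles/some-page"

    if parser.can_fetch(user_agent, url):
        print(f"{url} may be crawled by {user_agent}")
    else:
        print(f"{url} is disallowed for {user_agent}; skip it")

Other reservation mechanisms (for example, metadata-based opt-outs) would need their own checks alongside this one.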

Safety and Security

The longest chapter of the Code concerns safety and security. It promotes the adoption of a state-of-the-art Safety and Security Framework involving a full systemic risk assessment and mitigation process, with a tiered approach to the measures to be implemented, by reference to the nature of the systemic risk identified during the risk analysis. It reiterates the need for post-market monitoring, as well as the need to adapt the risk assessment process in response to the risks identified.

The Framework should be treated as a dynamic document, with the risk assessment undertaken at least every 12 months (and more frequently if there is concern about the adequacy of the Framework or the signatory's adherence to it). The chapter also provides granular detail on systemic risk identification (including modelling and systemic risk estimation) and on the development of systemic risk scenarios and systemic risk acceptance criteria.

It also provides examples of safety mitigations that may be appropriate, such as filtering and cleaning training data, changing the behaviour of a model, staging access to the model, and offering tools to other deployers of the model to mitigate systemic risks.
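
The Code names these mitigations only at that level of generality. What follows is a minimal, hypothetical sketch of one form that filtering and cleaning of training data could take (a blocklist of disallowed terms plus a deduplication pass), not a technique prescribed by the Code; real pipelines are far more sophisticated.

    # Hypothetical training-data filtering pass: drop records containing
    # blocklisted terms and remove exact duplicates.
    BLOCKLIST = {"example-disallowed-term", "another-disallowed-term"}

    def clean_corpus(records: list[str]) -> list[str]:
        seen: set[str] = set()
        cleaned = []
        for text in records:
            lowered = text.lower()
            if any(term in lowered for term in BLOCKLIST):
                continue  # mitigation: exclude flagged content from training
            if text in seen:
                continue  # deduplicate exact matches
            seen.add(text)
            cleaned.append(text)
        return cleaned

    corpus = ["safe sample text", "safe sample text", "contains example-disallowed-term"]
    print(clean_corpus(corpus))  # -> ['safe sample text']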

Code signatories will need to provide the AI Office with a Safety and Security Model Report before placing a model on the market, containing details of systemic risk identification, analysis and mitigations. The Code also suggests clear allocation of responsibilities for systemic risk oversight and assurance, with oversight undertaken at senior management level where appropriate.

Another component of the safety obligation is the need to report serious incidents to the AI Office, in line with the timelines specified in the EU AI Act (i.e. within 2 days if the GPAI model has led to a serious and irreversible disruption of the management or operation of critical infrastructure, or within 5 days of a serious cybersecurity breach).
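
As a trivial worked illustration of those timelines, using hypothetical category names and assuming the deadline runs in calendar days from the moment the provider becomes aware of the incident (the Act's precise trigger points should be checked against the text):

    from datetime import datetime, timedelta

    # Hypothetical mapping of incident categories to reporting windows,
    # reflecting the 2-day and 5-day timelines mentioned above.
    REPORTING_WINDOWS = {
        "critical_infrastructure_disruption": timedelta(days=2),
        "serious_cybersecurity_breach": timedelta(days=5),
    }

    def reporting_deadline(category: str, became_aware: datetime) -> datetime:
        return became_aware + REPORTING_WINDOWS[category]

    aware = datetime(2025, 9, 1, 9, 0)
    print(reporting_deadline("serious_cybersecurity_breach", aware))  # -> 2025-09-06 09:00:00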

The appendix to this chapter also includes a non-exhaustive list of the types and nature of systemic risks which a GPAI model creator should consider, as well as the sources of systemic risks stemming from a model's capabilities (e.g. offensive cyber capabilities, ability to evade human oversight) or its propensities (e.g. misalignment with human intent, lack of performance reliability, lawlessness, discriminatory bias).   

EU AI Act implementation - Reminder of key deadlines

  • 2 August 2025: Providers placing GPAI models on the EU market must comply with the EU AI Act's obligations. This includes a notification requirement for providers of GPAI models with systemic risk, who must notify the EU's AI Office without delay.
  • August 2025 - August 2026: Implementation phase, with the AI Office collaborating closely with providers, in particular those who adhere to the Code, to ensure that models can continue to be placed on the EU market without delays.
  • 2 August 2026: Full enforcement of the EU AI Act provisions by the EU authorities.
  • 2 August 2027: GPAI models placed on the market before 2 August 2025 must be brought into compliance with the AI Act's obligations by this date.

The AI Office will review the Code at least every two years and will consider the need for periodic updates in response to technological developments. 
