The European Union has published a third draft of its Code of Practice for general-purpose AI (GPAI) model makers, aimed at helping them comply with the EU AI Act's provisions. The revised guidelines, released on Tuesday, are expected to be the last revision before the final version is adopted in the coming months.
The Code of Practice is a critical component of the EU's risk-based rulebook for AI, which includes a subset of obligations specifically applicable to the most powerful AI model makers. These obligations cover areas such as transparency, copyright, and risk mitigation, with penalties for non-compliance reaching up to 3% of global annual turnover.
The latest draft is billed as having a "more streamlined structure with refined commitments and measures" compared to earlier iterations, based on feedback received on the second draft published in December. The revised Code is broken down into sections covering commitments for GPAIs, detailed guidance for transparency and copyright measures, and safety and security obligations applicable to the most powerful models.
On transparency, the guidance includes an example of a model documentation form GPAIs might be expected to fill in to ensure downstream deployers of their technology have access to key information for their own compliance. However, the copyright section remains a contentious area, with language such as "best efforts," "reasonable measures," and "appropriate measures" suggesting data-mining AI giants may have wiggle room to continue grabbing protected information to train their models.
Notably, language from an earlier iteration of the Code, which stated GPAIs should provide a single point of contact and complaint handling for rightsholders, has been removed. Instead, the current text merely requires signatories to designate a point of contact for communication with affected rightsholders and provide easily accessible information about it.
The draft also suggests GPAIs may be able to refuse to act on copyright complaints by rightsholders if the complaints are deemed "manifestly unfounded or excessive, in particular because of their repetitive character." As a result, creatives who use AI tools to detect copyright issues and automate the filing of complaints against Big AI could see those complaints ignored.
In related news, the U.S. administration has been applying pressure on the EU to dilute its AI regulations. At the Paris AI Action summit, U.S. Vice President JD Vance dismissed the need for regulation, warning that overregulation could kill innovation. Meanwhile, the EU has moved to kill off one AI safety initiative and trailed an incoming "omnibus" package of simplifying reforms to existing rules.
Despite this pressure, the European Commission is producing clarifying guidance that will shape how the law applies, including definitions for GPAIs and their responsibilities. This could offer a pathway for lawmakers to respond to U.S. lobbying efforts to deregulate AI.
The EU is accepting written feedback on the latest draft until March 30, 2025, and has launched a website to boost the Code's accessibility. Further feedback, working group discussions, and workshops will feed into the process of turning the third draft into final guidance, with experts hoping to achieve greater "clarity and coherence" in the final adopted version of the Code.