Lawyers at William Fry say that the final text of an EU code of practice on general-purpose AI (GPAI) models leaves some important questions unresolved.
GPAI models are advanced AI systems trained on vast datasets; examples include OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama family of models.
The General-Purpose AI Code of Practice, published last week, covers transparency, safety and security, and copyright.
The European Commission describes the code as a “voluntary tool”, prepared by independent experts, aimed at helping the industry to comply with its legal obligations under the AI Act.
In a note on the firm’s website, however, William Fry partners Barry Scannell and Leo Moore raise questions about the voluntary nature of the code.
They point out that developers who fall short of compliance after signing will still be deemed to be acting “in good faith”, and that the EU’s AI Office will support, rather than penalise, them.
This grace period runs until 2 August 2026, after which fines may be imposed under the AI Act.
“This potentially sets up a two-tier system. While signatories are shielded from regulatory scrutiny for a year, even if non-compliant, non-signatories have no such protection.
“Those who do not sign the ‘voluntary’ code face immediate legal risk,” William Fry states.
“Crucially, the EU has not yet published the detailed guidelines and templates to determine how many of these measures work in day-to-day operations,” the firm adds.
The commission is due to publish additional guidelines to clarify key concepts for GPAI models later this month.
The William Fry lawyers say that the code’s copyright chapter is the clearest example of how it has shifted from a voluntary promise to a quasi-regulatory framework.
They point out that earlier drafts required only “reasonable efforts” to exclude websites that routinely infringe copyright, but the final text states that providers must now actively exclude such sources.
“However, the final draft removes an earlier measure that would have required developers to check the provenance of protected content acquired through third-party datasets,” the lawyers add, describing the deletion of this check as leaving “a practical blind spot”.
“For rights holders, this means there is still a risk that unlawful content can enter training pipelines under the cover of third-party sourcing,” William Fry states.
The firm’s lawyers say that, while this presents a risk, rights holders remain free to act if they believe their content has been misused, even where a provider is a signatory.
William Fry adds that the code’s chapter on transparency has evolved from the general principles of earlier drafts into specific operational standards, while the section on safety and security now also contains “clear, enforceable expectations”.
The firm’s lawyers say that the gap between the code’s final wording and the missing implementation details creates practical challenges.
“Without the official summary template for training data, finalised technical standards, or clearer best-practice guidelines, developers and rights holders are left to interpret broad obligations where mistakes may have costly consequences,” they conclude.