Power move or roadblock? What the EU's Code of Practice means for AI Companies
Published on Jul 24, 2025
By Shalini Kurapati


If you work in AI, or even just follow the space, you’ve probably heard of the EU AI Act. August 2, 2025, marks a key deadline under this new regulation, as the first major obligations come into force. For a brief recap and timeline, see here.

These initial rules primarily affect general-purpose AI (GPAI) models such as the GPT, Claude, LLaMA, and Gemini families of models. In the weeks leading up to the deadline, the AI policy world has been active: industry groups pushed for delays, some tech leaders raised concerns about regulatory overreach, and debate centered especially on the Code of Practice.

Despite calls to postpone, the European Commission held its ground. In July 2025, it released the Code as planned, along with detailed guidelines for how GPAI providers can align with the AI Act’s early requirements.

What is the Code of Practice for GPAI models?

The General Purpose AI Code of Practice is a voluntary compliance tool developed by the European Commission to guide AI providers ahead of binding requirements under the EU AI Act. Focused on transparency, risk management, copyright, and systemic risk mitigation, it provides a structured path to compliance with Articles 53 and 55 of the Act.

Although non-binding for now, the Code is designed to be a bridge between today’s unregulated foundation model development and the legally binding rules that will fully kick in between 2025 and 2027.

It complements the GPAI guidelines published in July 2025, which define providers, systemic risk, and open-source exemptions. Together, they offer the clearest picture yet of how Europe expects GPAI to be governed.

The AI Act’s GPAI obligations officially start applying on August 2, 2025, but companies will be given grace periods to meet them:

  • New models launched after this date must comply by August 2026
  • Existing models (like GPT-4, Claude, or LLaMA) must comply by August 2027

Who’s signing it, and who isn’t?

Several major AI companies have embraced the Code (last checked on 21 July):

  • OpenAI and Anthropic have publicly committed to sign
  • Mistral has signed the code
  • Microsoft has expressed willingness to sign

These signatories seem to see value in shaping the standards early and building regulatory goodwill.

In contrast, Meta has refused. Calling the Code "overreach," Meta argues it creates legal uncertainty and imposes unnecessary constraints. This has set it apart as the first major holdout and one willing to gamble on a more confrontational approach.

Meanwhile, even companies like Google, which voiced early concerns, have not rejected the Code outright. Meta's stance may prove risky if others fall in line and the Commission uses the Code as a compliance benchmark.

Data and transparency at the core

Beyond safety and security, a foundational aspect of the EU's GPAI Code of Practice is its requirement for data transparency and governance. Providers must publish high-level summaries of their training datasets, detailing sources, collection methods, and licensing terms to demonstrate lawful and responsible use. The Code also mandates clear compliance with EU copyright law: providers must avoid pirated and opted-out content and implement formal policies to respect rights holders.
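
The Code does not prescribe a specific technical mechanism for honoring opt-outs, but one common machine-readable signal is a site's robots.txt file. As a minimal, illustrative sketch (the crawler name and URLs below are hypothetical, and the EU's text-and-data-mining opt-out regime is broader than robots.txt alone), a data collection pipeline might gate each fetch like this:

```python
# Minimal sketch of honoring machine-readable crawl opt-outs using only the
# Python standard library. Illustrative assumptions: the crawler name and
# URLs are hypothetical, and robots.txt is just one possible opt-out signal.
from urllib.robotparser import RobotFileParser

CRAWLER_NAME = "ExampleTrainingCrawler/1.0"  # hypothetical user agent

def may_collect(page_url: str, robots_url: str) -> bool:
    """Return True only if the site's robots.txt permits fetching page_url."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(CRAWLER_NAME, page_url)

if __name__ == "__main__":
    url = "https://example.com/articles/some-piece"
    if may_collect(url, "https://example.com/robots.txt"):
        print(f"OK to include {url} in the training corpus")
    else:
        print(f"Rights holder opted out; skipping {url}")
```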

Central to this framework is a standard Model Documentation Form, used to record key information such as model purpose, limitations, training data provenance, compute usage, and performance metrics. This documentation must be maintained and made available to regulators and downstream users.
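
To make the documentation requirement concrete, here is a minimal sketch of how a provider might represent such a record in code. The `ModelDocumentation` class and its field names are illustrative assumptions on my part, not the official form published by the Commission:

```python
# Illustrative sketch only: field names are assumptions, not the official
# Model Documentation Form defined by the European Commission.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    intended_purpose: str
    known_limitations: list[str]
    training_data_sources: list[str]      # high-level provenance summary
    data_collection_methods: str          # e.g. crawling policy, opt-out handling
    copyright_policy_url: str             # link to the provider's copyright policy
    training_compute_flops: float         # approximate training compute
    evaluation_metrics: dict[str, float]  # benchmark name -> score

    def to_json(self) -> str:
        """Serialize the record for regulators and downstream users."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical example record for a fictional model.
doc = ModelDocumentation(
    model_name="ExampleLM-7B",
    provider="Example AI GmbH",
    intended_purpose="General-purpose text generation",
    known_limitations=["May produce factual errors", "English-centric training data"],
    training_data_sources=["Licensed news corpora", "Public-domain books"],
    data_collection_methods="Web crawl honoring robots.txt and rights-holder opt-outs",
    copyright_policy_url="https://example.com/copyright-policy",
    training_compute_flops=1.2e23,
    evaluation_metrics={"MMLU": 0.62, "HellaSwag": 0.79},
)
print(doc.to_json())
```

Keeping the record as structured data rather than free-form prose makes the "regularly updated" requirement cheap to satisfy: the same object can be versioned, diffed, and re-exported whenever the model or its training data changes.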

By mandating regularly updated documentation and transparency around training data and model risks, the EU is setting a clear direction for how AI systems must be built and governed.

So, is the Code of Practice a power move or a roadblock?

For companies prepared to invest in responsible AI, it is clearly a power move. It offers structure, foresight, and the chance to shape how trust is built into the foundations of AI. For others, it will become the standard they scramble to meet.

At Clearbox AI, a company focused on responsible data solutions, we especially welcome the Code’s emphasis on data transparency and governance. It affirms what we have long stood for: that trustworthy AI begins with trustworthy data.

Tags:

blogpost, news
Shalini Kurapati, PhD, is an expert in data governance, privacy, and responsible AI. As co-founder and CEO of Clearbox AI, she focuses on building transparent and compliant data solutions.