The governance of Artificial Intelligence (AI) in Europe is marred by structural power imbalances. This is especially clear in the fourth and final version of the European Union’s Code of Practice for General Purpose AI (the Code), a voluntary compliance mechanism under the EU AI Act for companies that develop foundation models, like Google’s Gemini, OpenAI’s ChatGPT or Anthropic’s Claude. These models are designed to perform a wide range of tasks across domains, but their broad applicability and opaque nature increase the risk of large-scale harm. General Purpose AI models can lead to unintended harmful consequences, enable disinformation or surveillance, and amplify existing biases, as ARTICLE 19 and other civil society organisations have documented repeatedly.
Recognising these potential harms, the EU AI Office launched a consultation process to develop an appropriate governance framework. As participants in the consultation and the drafting of the Code, ARTICLE 19 advocated for a framework grounded in international human rights law. We called for the inclusion of a comprehensive taxonomy of systemic risks, strong risk mitigation and governance throughout the model lifecycle, public-facing disclosure and transparency requirements to the EU AI Office, and a clear tiered system of accountability between model and downstream providers.
After three rounds of consultation, and an imbalanced process that disadvantaged civil society participation, the final Code represents a compromise in which corporate interests largely prevailed.
Fundamental tension: speed or safeguards?
The Code emerges amid a fundamental tension in AI governance: the push for rapid deployment and innovation (amplified by the Draghi report on Europe’s economic growth challenges) versus the equally vital need for transparency, accountability, and human rights protections. This push and pull creates a constant balancing act: move too quickly, and risk embedding harms before they’re understood or mitigated; move too cautiously, and risk leaving innovation in the hands of a few powerful actors without adequate oversight. The outcome will determine not only how AI is governed, but who it serves, who it harms, and whether its deployment upholds or undermines democratic values.
The signs are troubling. Before the ink on the Code dried, Meta stated that it would not sign it, calling it too onerous. On the other hand, companies such as Anthropic, Mistral and OpenAI have agreed to sign onto the Code. The Commission now faces a stark choice between regulatory credibility and industry buy-in – both are needed for success.
At the same time, civil society consensus is emphatic that the Code does not go far enough. The EU’s Code reveals exactly how this balancing act between speed and safeguards plays out in practice – and the results favour corporate interests.
A process designed to exclude
A deep power imbalance shaped the Code’s development process from the start. Civil society organisations faced significant barriers: tight timelines, limited transparency around decision-making, and inadequate opportunities to contest or revise drafts once industry preferences were embedded. Many contributions, especially those rooted in human rights, were sidelined or softened. This made it difficult for civil society to meaningfully advocate for enforceable safeguards or robust mechanisms for transparency and accountability. In a process claiming to be ‘multi-stakeholder’, civil society was relegated to a consultative rather than co-decisive role, with little meaningful weight given to its contributions.
With greater access, resources, and influence, large tech companies were not only well-represented in working groups but also granted informal channels of influence through closed-door meetings that continued even after formal consultations ended. This privileged access enabled them to shape key provisions and weaken stronger regulatory safeguards, as evidenced by softened language and provisions favouring voluntary compliance; such changes were often framed as necessary for ‘technical feasibility’ or to promote ‘innovation’.
As a result, the Code relies heavily on voluntary self-regulation by profit-driven organisations, placing excessive trust in companies’ goodwill even when it conflicts with preventing societal harm.
What the Code gets right, with caveats
- Fundamental rights as systemic risks: threats to fundamental rights are explicitly included as systemic risks in the appendix of the Code, meaning AI model providers must consider them when developing their models. This brings the Code into alignment with the EU AI Act, an improvement on previous drafts in which these risks were listed only as an optional consideration. However, identifying these risks and implementing safeguards remains largely at providers’ discretion, raising concerns about consistency and enforcement. Without clear, enforceable guidelines, the protection of fundamental rights risks remaining a secondary priority rather than a core responsibility.
- External evaluation requirements: signatories are required to include any available external evaluation in their Model Report. Despite strong pushback from model providers, they will need to present model reports that either: (1) include external evaluation results, or (2) provide a written justification for the absence of an external evaluation that demonstrates the qualifications of the evaluators used. In effect, this requirement allows providers to opt out of external evaluation by simply submitting a written justification – a clear loophole that weakens the effectiveness of the whole provision. What counts as an adequate ‘justification’ is not clearly defined, which could easily lead to vague, inconsistent enforcement or rubber-stamping that allows companies to avoid independent scrutiny while still appearing compliant with the Code.
- Lifecycle risk mitigation: the Code introduces targeted measures designed to foster a safer AI model ecosystem: (1) It emphasises risk mitigation throughout the entire model lifecycle, requiring AI providers to view safety as an ongoing, iterative responsibility rather than a one-time checklist completed before market release. (2) It also mandates disclosure of uncertainties about model use and integration, encouraging proactive development practices.
Critical gaps and weaknesses
- Narrow scope: the Code applies only to models developed above specific computational thresholds, ignoring the reality that many smaller models can still cause significant harm. Models falling below the threshold, such as those developed by DeepSeek, can still cause harm when deployed. By basing regulation on arbitrary technical criteria rather than on real-world impact on individuals, the Code risks leaving people unprotected against the harms and risks of AI.
- Weak incident reporting: the Code’s approach to serious incident reporting incentivises a race to the bottom. Rather than raising standards, the Code lets companies off the hook by demanding less than what many already have in place. For example, it includes vague language contending that ‘relevant information about serious incidents cannot be kept track of’, which gives providers an easy excuse to avoid adopting practices already recognised as industry best practice. Such measures risk undermining accountability, eroding public trust, and weakening the overall effectiveness of the Code.
- Weak whistleblower protections: the Code fails to provide meaningful protections for whistleblowers. Because it is not grounded in the EU’s Whistleblowing Directive, the Code lacks clear, enforceable safeguards that encourage and protect insiders who expose misconduct within AI companies. As such, the Code effectively discourages insiders from coming forward, perpetuating a culture of silence that endangers the right to freedom of expression. This gap undermines efforts to hold AI providers accountable.
The real test: standardisation
The ongoing tug-of-war between industry and human rights advocates is far from over. The EU now moves into the standardisation phase of the Code, a process which could either reinforce or undermine the EU’s commitment to upholding human rights.
Standardisation is inherently political – it will determine whose interests the rules ultimately serve. Well-resourced and strategically positioned industry actors, particularly large technology companies, will likely continue to dominate this process. They will steer the development of standards toward minimising the disruption of their current business models by advocating for flexibility and procedural checklists. These approaches allow them to appear compliant without making meaningful efforts to ensure that their services and products are safe and serve to uphold human rights. Civil society organisations, with far fewer resources, will push for human rights considerations to be embedded in technical standards. Without broad, diverse, and inclusive participation, these standards threaten to overlook critical risks, entrench existing inequalities, and exacerbate systemic harms.
Moving beyond empty promises
The Code represents both progress and compromise – a more robust tool than we have had before, but one significantly weakened by corporate influence. Each time regulation is diluted, it signals a choice to prioritise rapid deployment over careful oversight. While this may accelerate AI adoption, it does so at the expense of public trust, safety, and the human rights principles that should guide ethical AI governance.
The test for European AI governance will be whether human rights advocates can gain lasting ground in the ongoing standardisation process, and whether industry’s push for speed and flexibility can be held in check. For AI companies claiming to respect fundamental rights, it is time to move beyond rhetoric and demonstrate transparent, verifiable action that matches their commitments.