AI Regulation Divide: Google Embraces EU Code, Meta Rejects Rules

Aug 1, 2025

A significant split is emerging among major tech players regarding compliance with the European Union's ambitious AI regulatory framework. As the August 2, 2025, deadline looms for initial compliance with the EU AI Act's provisions for general-purpose AI (GPAI) models, Google has publicly committed to signing the EU's voluntary Code of Practice, while Meta has firmly rejected it.

The EU AI Act, the world's first comprehensive AI legislation, aims to regulate AI systems based on their potential risk levels. To help companies prepare for and understand these regulations, the European Commission also developed a voluntary Code of Practice for General Purpose AI (GPAI). This Code serves as a practical guide, offering insights on how to implement the Act's principles, particularly concerning transparency, copyright compliance, and systemic risk assessment for advanced models.

Google's Stance: Cautious Support for Collaboration

Google, a leading developer of powerful AI models like Gemini, has announced its decision to sign the voluntary Code of Practice. This places Google alongside other major AI developers like OpenAI and Anthropic, who have already endorsed the framework.

Google's President of Global Affairs and Chief Legal Officer, Kent Walker, stated that the company hopes the Code will "promote European citizens' and businesses' access to secure, first-rate AI tools." However, Google's commitment is not without its reservations. Walker also warned that the AI Act and Code "risk slowing Europe's development and deployment of AI," specifically citing concerns over potential departures from EU copyright law, processes that could delay approvals, and requirements that might expose trade secrets, potentially harming Europe's competitiveness. Despite these concerns, Google appears committed to active participation and dialogue with the EU's AI Office to ensure a proportionate and responsive regulatory environment.

Meta's Rejection: "Overreach" and Innovation Fears

In stark contrast, Meta, the parent company of Facebook and Instagram and developer of the Llama AI models, has explicitly refused to sign the EU's voluntary Code of Practice. Joel Kaplan, Meta's Chief Global Affairs Officer, has been a vocal critic of the framework, arguing that "Europe is heading down the wrong path on AI."

Kaplan contends that the Code "introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." Meta's primary concern is that these rules will "throttle the development and deployment of frontier AI models in Europe" and "stunt European companies looking to build businesses on top of them." Essentially, Meta fears that the EU's approach is overly burdensome and will stifle innovation within the continent.

The Wider Implications of the Divide

This divergence highlights a fundamental tension in global AI governance: the balance between fostering innovation and ensuring safety and ethical development.

For the EU: Google's signing is a significant win, lending credibility to its regulatory efforts and suggesting that major players are willing to engage. It strengthens the EU's ambition to set a global standard for AI regulation. Companies that sign the voluntary code are likely to face less regulatory scrutiny.

For Tech Companies: The differing stances reflect diverse business models and risk appetites. Companies like Google and OpenAI, which are heavily invested in large-scale AI deployment, may see value in working with regulators to shape the rules. Meta, on the other hand, appears to be taking a more defiant stance, prioritizing unhindered innovation. Those who do not sign may face closer regulatory inspection under the binding provisions of the AI Act.

For the Global AI Landscape: The EU AI Act, with its tiered, risk-based approach, is being closely watched worldwide. The reactions of tech giants like Google and Meta will influence how other jurisdictions consider regulating AI, potentially leading to a fragmented global regulatory landscape or, conversely, inspiring similar frameworks.

Once the August 2 deadline passes, the EU AI Act's first set of obligations for general-purpose AI models takes effect, including requirements for technical documentation, training data summaries, copyright compliance policies, and systemic risk assessments for the most powerful models. The coming months will reveal whether Meta's defiance leads to increased scrutiny or whether the EU's framework proves adaptable enough to accommodate varied industry perspectives.
