Safety in the Backseat: Dozens of Nations Decline Strict Safety Commitments in Latest Global AI Pledge
Feb 21, 2026
A major effort to establish global guardrails for artificial intelligence hit a significant roadblock this week. While the India AI Impact Summit 2026 concluded with the adoption of the "New Delhi Declaration," a substantial number of participating nations—and nearly all major military powers—steered clear of specific, binding safety commitments.
The move signals a definitive shift in the global AI narrative from the "safety-first" caution of the 2023 Bletchley Park era toward a "deployment-first" race for economic and military dominance.
1. The New Delhi Declaration: A "Generic" Consensus
While the Indian government hailed the summit as a "grand success" with 86 nations and two international organizations signing the final declaration, the document has been heavily criticized by safety advocates for its lack of teeth.
Voluntary Only: The declaration emphasizes "voluntary, non-binding initiatives" and knowledge-sharing rather than regulatory mandates.
The Participation Gap: Of the more than 110 countries that sent delegations to New Delhi, roughly 24 to 30 opted not to sign the pledge at all, citing concerns over national sovereignty and "regulatory drag."
Shift to "Impact": The focus of the summit was deliberately moved away from "existential risk" to "societal impact," prioritizing AI for agriculture, healthcare, and economic growth in the Global South.
2. The REAIM Blowout: Military Safety Refused
Simultaneously, the third Responsible AI in the Military Domain (REAIM) summit in Spain (Feb 4–5, 2026) saw an even more dramatic divide.
The Opt-Outs: Only 35 out of 85 attending countries signed a commitment to 20 basic principles for AI in warfare.
Heavyweights Side-lined: Both the United States and China refused to sign the military safety pledge, with officials from both nations arguing that binding constraints would create a "prisoner’s dilemma" that could leave them vulnerable to adversaries.
Human Control: The rejected principles included a baseline requirement for "meaningful human control" over lethal autonomous weapons—a clause many nations were unwilling to formalize as a legal obligation.
3. The U.S. Stance: "Total Rejection" of Global Governance
The most vocal opposition to centralized safety standards came from the United States. Speaking at the New Delhi summit, White House technology adviser Michael Kratsios delivered a blunt message to the international community:
"The Trump Administration totally rejects global governance of AI. We will not allow international bureaucracies or centralized controls to strangle American innovation or sacrifice our national self-determination."
The U.S. has instead pivoted toward bilateral "Opportunity Partnerships," such as the one signed with India on Friday, which focuses on entrepreneurship and infrastructure rather than safety testing or risk mitigation.
4. The Scientific Warning: The 2026 Safety Report
This diplomatic retreat comes just weeks after the release of the International AI Safety Report 2026 (Feb 3), which warned that the gap between AI capabilities and safety measures is widening dangerously.
Biological Risks: The report found that newer models can now outperform domain experts in troubleshooting virology lab protocols, lowering the barrier for biological weapon design.
Deepfake Surge: Incidents of AI-generated fraud and non-consensual imagery have risen by over 300% in the last 12 months.
U.S. Withdrawal: In a symbolic blow, the U.S. government officially withheld its support from the report’s conclusions for the first time, labeling the findings "risk-obsessed."