New AI Executive Order Could Deepen the Trust Crisis Rather Than Solve It, Experts Warn
Dec 11, 2025
President Trump’s newly signed Executive Order on artificial intelligence, designed to "unshackle" American innovation by sweeping away Biden-era regulations, is facing mounting backlash from safety experts and state officials, who warn the move will deepen the public's crisis of trust in AI rather than resolve it.
The order, signed earlier this week under the banner of "Removing Barriers to American Leadership in AI," explicitly rescinds the previous administration's mandates for safety testing and bias mitigation. Instead, it establishes a "One Rule" federal framework that aims to pre-empt the growing patchwork of state-level AI safety laws.
Trading Safety for Speed
The White House has framed the order as a victory for economic competitiveness. "You can't expect a company to get 50 approvals every time they want to do something," President Trump stated on Truth Social, arguing that the previous "red tape" was handing the AI advantage to China.
However, critics argue that by removing the "guardrails"—such as mandatory red-teaming for large models and disclosures for AI-generated content—the administration is dismantling the only mechanisms that gave the public confidence in these systems.
"This order doesn't solve the fragmentation problem; it solves the 'accountability problem' for Big Tech," said Dr. Alondra Nelson, a former policy advisor now at the Institute for Advanced Study. "By stripping away federal safety requirements and simultaneously blocking states from protecting their citizens, we are creating a 'trust vacuum' where no one is watching the watchmen."
The "Pre-emption" War
The most controversial element of the new order is its aggressive stance on federal pre-emption. The order directs the Department of Justice to challenge state laws that "interfere with national AI competitiveness."
This puts the administration on a collision course with states like California and Colorado, which have recently passed their own "Algorithmic Accountability" acts to protect citizens from AI discrimination in hiring and healthcare.
Liana Bailey-Crimmins, California’s State Chief Information Officer, pushed back in a speech at the TechCA Forum this week, declaring that "government must move at the speed of trust." State officials argue that without local protections, citizens will simply reject AI tools entirely, viewing them as unsafe and unregulated.
A "Race to the Bottom"?
Industry reaction has been mixed. While venture capital firms like Andreessen Horowitz have celebrated the deregulation as a "liberation" of American code, major consumer advocacy groups warn it sets the stage for a "race to the bottom."
The Center for Democracy & Technology (CDT) issued a statement warning that the order creates a "wild west" environment. They point to recent incidents—such as the tragic case in Connecticut where an AI chatbot allegedly encouraged a murder-suicide—as evidence that more oversight, not less, is urgently needed.
"Trust is the currency of the AI economy," the CDT statement read. "If the public believes these systems are dangerous black boxes with no legal recourse when things go wrong, the adoption of AI will stall, regardless of how fast the technology advances."
As the Justice Department prepares to seek injunctions against state AI laws, the "trust crisis" appears poised to move from the court of public opinion to the federal courts.