A newly published investigation argues that the federal government is rushing into artificial intelligence the same way it rushed into cloud computing a decade ago, with the same structural vulnerabilities still in place.
Summary
- ProPublica reporter Renee Dudley draws on years of federal cybersecurity reporting to outline three cautionary lessons as the Trump administration pushes agencies to rapidly adopt AI tools from OpenAI, Google, and xAI at cut-rate government pricing
- The first lesson: so-called free or cheap tech deals eventually lock agencies in; the second: oversight programs like FedRAMP have been gutted and lack resources to vet what they approve; the third: the third-party auditors rating AI providers are paid by those same providers
- The White House is framing AI adoption as urgent and competitive, mirroring language the Obama administration used to push cloud computing, a transition ProPublica’s reporting found was riddled with cybersecurity failures
ProPublica’s Renee Dudley published an investigation on April 6 arguing that as the Trump administration encourages federal agencies to rapidly adopt AI from major tech companies, it is repeating the patterns that plagued Washington’s transition to cloud computing, where speed trumped security, oversight was defunded, and the government eventually became deeply dependent on contractors it had little leverage over.
The White House has positioned AI as a national competitiveness imperative. Agencies can now access OpenAI’s ChatGPT for $1, Google’s Gemini for 47 cents per user, and xAI’s Grok for 42 cents. The framing, Dudley writes, closely mirrors the language used when the Obama administration declared cloud computing a transformational priority in the early 2010s.
Lesson one: There is no such thing as a free lunch. ProPublica’s investigation found that Microsoft’s pledge in 2021 to give the federal government $150 million in security services was, in practice, a lock-in mechanism. After agencies adopted the free upgrades, switching to a competitor would have been costly and disruptive. “It was successful beyond what any of us could have imagined,” one former Microsoft salesperson told ProPublica. As crypto.news has reported, Microsoft and OpenAI have since clashed over the terms of their own AI partnership, a signal of how fraught big-tech AI contracts can be even among the parties involved.
Lesson two: Oversight programs require actual resources. The Federal Risk and Authorization Management Program, known as FedRAMP, was created in 2011 to vet cloud computing services before federal agencies were allowed to use them. ProPublica found that one cloud vendor wore down FedRAMP over five years to win approval for a major cloud product despite serious cybersecurity reservations. That was before DOGE. FedRAMP now says it operates “with an absolute minimum of support staff” and “limited customer service.” A GSA spokesperson defended the program, saying it “operates with strengthened oversight and accountability mechanisms,” but former employees told ProPublica it functions as a rubber stamp.
Lesson three: Independent reviews are only so independent. As FedRAMP’s in-house capacity has shrunk, third-party auditing firms have assumed more of the vetting function. Those firms are paid by the same cloud companies they are rating. Agencies, often understaffed, lack the capacity to conduct their own thorough reviews and largely rely on those ratings. As crypto.news has noted, the broader concern among observers is that governments are consistently slower to govern transformative technology than the companies deploying it.
A Pattern the White House Has Not Addressed
The GSA has acknowledged that AI “usage costs can grow quickly without proper monitoring and management controls” and has advised agencies to set usage limits and review consumption reports. But the underlying structural issues remain: underfunded oversight bodies, vendor-dependent reviews, and agencies with little leverage once adoption becomes entrenched.
Dudley’s conclusion is pointed: “The implications of this downsizing for federal cybersecurity are far-reaching” as agencies take on AI tools that process sensitive government data under the same weakened oversight framework that struggled to manage the cloud.