Federal Government Accelerates AI Use: Three Essential Cautionary Tales
The federal government is rapidly adopting artificial intelligence (AI) to enhance operations, but the push carries significant lessons from past technology transitions. This article examines three cautionary tales that highlight the risks agencies should weigh as they implement AI.
Cautionary Tale 1: Beware of Free Offers
Early in the Biden administration, a series of cyberattacks attributed to nation-state actors prompted calls for stronger federal cybersecurity. In 2021, Microsoft CEO Satya Nadella pledged $150 million in technical services aimed at improving government security, including a "free" security upgrade for federal customers.
Fast forward to last year, when the Trump administration introduced agreements allowing agencies to access AI tools at nominal prices:
- OpenAI’s ChatGPT for $1
- Google’s Gemini for 47 cents
- xAI’s Grok for 42 cents
This pricing strategy aimed to speed the acquisition of advanced AI capabilities. But nominally free offers deserve scrutiny: hidden costs often emerge later, and once an introductory period ends, agencies may find themselves locked into expensive subscriptions.
Cautionary Tale 2: Resource Limitations on Oversight
In 2011, the Obama administration created the Federal Risk and Authorization Management Program (FedRAMP) to ensure the security of cloud services used by federal agencies. Yet investigations suggest that FedRAMP has struggled to stand up to powerful tech companies like Microsoft. Over several years, Microsoft won authorization for its GCC High product despite cybersecurity concerns, in part because FedRAMP lacked the resources to push back.
Today, FedRAMP operates with minimal staff and has absorbed budget cuts, raising doubts about its ability to oversee AI technologies effectively. As agencies increasingly adopt AI, a hollowed-out oversight body poses real risks.
Cautionary Tale 3: The Illusion of Independence in Reviews
The federal government relies on third-party assessors to validate the security claims of cloud service providers. However, these assessors are typically paid by the very companies they evaluate, creating potential conflicts of interest. Investigations revealed that some assessors endorsed products without fully vetting them, raising serious questions about the reliability of their evaluations.
With FedRAMP's oversight capabilities diminished, independent assessments matter more than ever. Yet reliance on assessors financially tied to the companies they evaluate can bias findings, effectively returning the government to a pre-FedRAMP era. That leaves agencies to vet products themselves, a responsibility they often neglect for lack of personnel and resources.
As federal agencies move toward AI adoption, these cautionary tales underscore the need for vigilance and scrutiny. Policymakers must ensure that adequate oversight and resources are in place so that new technology is integrated safely and efficiently, and history does not repeat itself.