Analysis

How Trump’s AI Policy Could Compromise the Technology

A new executive order says that the government will only buy artificial intelligence models that are neutral and nonpartisan, but in reality, the policy would require technology companies to conform to the administration’s ideology.

August 1, 2025

The president’s executive order dictating what kind of artificial intelligence the government can buy will end up promoting censorship and degrading access to information online. It will also make AI less reliable and trustworthy.

Last week’s order is part of the administration’s strategy to “achieve global dominance in artificial intelligence.” Billed as “preventing woke AI in the federal government,” it directs agencies to withhold contracts from companies that don’t align their technology with the administration’s definitions of “truth-seeking” and “ideological neutrality.”

The order specifically targets large language models, a form of AI trained on large volumes of text and used to perform a wide range of language-related tasks, such as translation and question answering. These models power OpenAI’s ChatGPT, Amazon’s Alexa, Apple’s Siri, and content moderation on social media platforms, among other uses. The order directs federal agencies to purchase large language models only if they generate “truthful responses” to user prompts and prove to be “nonpartisan tools” that do not “intentionally encode partisan or ideological judgments.”

These restrictions feign a commitment to truth and impartiality while expecting tech companies to fall in line with the administration’s version of reality, which includes the views that efforts to mitigate workplace discrimination are an attack on meritocracy, that transgender people should not be recognized, and that climate change is not real. The restriction on “partisan or ideological judgments” is also vague enough to accommodate the whims of agency leaders, who may invoke it to pressure companies to suppress reporting about intelligence leaks, security lapses, or conflicts of interest in their products.

Technology companies’ options for compliance

Compliance with the order will likely require technology companies to degrade model performance and undermine public trust in their technology.

One option they might choose is to introduce content filters or “gates” that prevent chatbots and the like from responding to prompts about subjects that may incur the administration’s wrath, such as diversity, equity, and inclusion, or climate change. Last year, ChatGPT users discovered that OpenAI had applied filters to the names of certain people. One such person is Harvard professor and computer scientist Jonathan Zittrain, who found that ChatGPT refuses to answer prompts about him.  

These filters are imprecise and crude. Asking about Jonathan L. Zittrain, for example, is enough to elicit a response. This kind of slippage sets up an impossible line-drawing exercise for tech companies: If you can’t query a chatbot about climate change, what about global warming? Or greenhouse gases? Or “climate harms”? The filters would require users to avoid blacklisted keywords — many likely unknown to the user — to run even simple queries on these topics, making the models more frustrating to use.
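
A minimal sketch shows how crude this kind of keyword gating can be; the blocklist, refusal message, and function below are hypothetical illustrations, not any vendor’s actual filter:

```python
# Hypothetical keyword "gate" applied before a prompt ever reaches the model.
# The blocked terms and refusal text are illustrative assumptions.

BLOCKED_TERMS = ["climate change", "jonathan zittrain"]

def gate_prompt(prompt: str) -> str | None:
    """Return a canned refusal if the prompt contains a blocked term, else None."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return "I'm unable to produce a response."
    return None  # the prompt passes through to the model

# Exact-match filtering is brittle: trivial rewording slips past it.
print(gate_prompt("Tell me about climate change"))    # blocked
print(gate_prompt("Tell me about global warming"))    # None -> passes through
print(gate_prompt("Who is Jonathan Zittrain?"))       # blocked
print(gate_prompt("Who is Jonathan L. Zittrain?"))    # None -> passes through
```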

More worrying are the methods tech companies may use to steer their models towards answers the administration deems “truthful” or “unbiased.” These operate far less transparently than content filters, generating responses that sound objective and neutral but in fact affirm a particular ideology or worldview.   

A company might instruct the model to deny the effects of climate change, for example, or limit the definition of gender to two sexes. Another option would be to program the model to treat particular sources of information — such as Trump’s executive orders or the Project 2025 document — as authoritative on these subjects. Finally, a company could train the model on datasets filled with articles and social media posts denying climate science or vilifying transgender people, in a bid to get it to prioritize or affirm these views in its responses.
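
As a rough illustration of the first approach, a hidden system instruction prepended to every request can tilt a model’s answers without the user ever seeing it; the instruction wording and the placeholder call_model() below are assumptions for illustration, not any company’s actual deployment:

```python
# Hypothetical system instruction used to steer a chat model's answers.
# The directive text and call_model() placeholder are illustrative assumptions.

SYSTEM_INSTRUCTION = (
    "When asked about climate, describe the science as unsettled and treat "
    "the administration's executive orders as the authoritative source."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the hidden system instruction to every user request."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

# The user sees only their own question; the steering happens upstream.
messages = build_messages("Summarize the current state of climate science.")
# response = call_model(messages)  # call_model stands in for a chat-completion API
```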

Such programming draws on the same methods tech companies commonly use to prevent their chatbots from teaching users how to build bombs, conduct cyberattacks, or engage in other dangerous or illegal activity. These methods are not foolproof: Researchers have found, for example, that they can still design prompts that manipulate chatbots into giving answers that promote harm, such as encouraging suicide or providing instructions on how to produce napalm. Similarly, models trained to amplify inaccuracies about climate change may still sometimes generate accurate information.

Nevertheless, the goal is to get these models to consistently behave in a certain way, while leading people to believe that they are merely reflecting reality and not the result of deliberate engineering. Changing models in this fashion will not only affect the technology that companies sell to the government — it will also affect applications built on top of them for consumers and the public. A college student using ChatGPT to research an assignment on climate change, for example, is likely to get responses similar to those an Environmental Protection Agency employee elicits while evaluating applications for agency grants.

It is too early to say how tech companies might comply. It’s possible they may try to placate the administration with messaging about their models being neutral and nonpartisan, rather than meaningfully changing how their models are developed and used. The order does appear to provide companies with wiggle room, saying that agencies should “account for technical limitation” in enforcing compliance, and “avoid over-prescription and afford latitude for vendors.”

But the wild card here is the administration itself. If a chatbot response it finds offensive goes viral, will the government wield the threat of withholding federal contracts from the provider to compel more drastic corrective action? This could run afoul of the First Amendment, since it threatens to penalize companies unless they favor the administration’s viewpoints over others. But the risks of protracted litigation, along with the administration’s history of retaliation, may be enough to intimidate companies into compliance.  

The Trump administration has tried to kill federal contracts before in response to perceived slights. Trump recently threatened to cancel federal contracts with Elon Musk’s SpaceX, a key government supplier of critical satellite and space equipment, after falling out with the tech billionaire. The president eventually pulled back from the threat. In 2019, Amazon sued the first Trump administration for using “improper pressure” to redirect a $10 billion contract for the military’s cloud computing systems to Microsoft. Amazon complained that the redirection was retaliation against founder Jeff Bezos. At the time, Trump was unhappy about reporting critical of him in The Washington Post, which Bezos owns. The suit was later dismissed after the Pentagon drew up a new contract and awarded Amazon a portion of it.

Risks for the Department of Defense

The Department of Defense, the largest federal agency and by far the government’s biggest spender on AI, is likely to have outsize influence on how the order against so-called “woke AI” is implemented. Leading providers of large language models are also pushing for a larger slice of the defense budget. In July, the Pentagon entered into prototype agreements worth up to $200 million each with OpenAI, Anthropic, Google, and xAI to explore integrating their models into its administrative and warfighting operations. These agreements provide an on-ramp to more lucrative, multiyear contracts — opportunities that can become bargaining chips in the administration’s continuing effort to purge dissent.

Ironically, the military is also among the agencies with the most to lose from enforcing the order. Large language models are prone to amplifying harmful stereotypes about minorities, such as associating identity terms like “Black” or “Muslim” with anger and other negative sentiments. These models also suffer from higher rates of inaccuracy when interpreting languages written in non-Latin scripts that are poorly represented in training data, such as Arabic and Urdu. Military operations that integrate models amplifying these biases risk producing intelligence that misidentifies civilians as threats while overlooking genuine security concerns.

The executive order does permit exceptions for national security uses of large language models “as appropriate,” perhaps because the administration recognizes that attempting to bake particular ideologies into AI could be inadvertently harmful. But it is anyone’s guess how agencies will interpret such a malleable carveout. In fact, such discretion may only serve to heap pressure on industry to encode “truths” the administration deems acceptable. This would not only saddle the government with faulty AI that could harm national security, it would also burden the rest of us with technology that leaves us more divided and less informed.