From Iran to Venezuela, the U.S. military is rolling out an “AI-first” approach to warfare. But the recent dispute between the Defense Department and artificial intelligence company Anthropic raises questions about whether the military’s deployment of the technology is effective, safe, and lawful.
What is the dispute between the Pentagon and Anthropic?
Anthropic wanted the military to promise that it would not use its AI model, Claude, in weapons that can identify and fire on targets without human input — commonly referred to as “fully autonomous weapons.” The company also sought to prohibit the use of Claude to spy on Americans, particularly by analyzing location records, financial information, and other large datasets that the military has purchased on the commercial market.
The request came after the military reportedly used Claude in its attack on Venezuela and the capture of Nicolás Maduro in January. Claude is a foundation model, meaning it is trained on massive datasets to perform a wide range of general tasks, such as generating and summarizing text, writing code, and analyzing images.
The Pentagon refused to agree to these restrictions, then blacklisted Anthropic from defense contracting by designating the company a “supply chain risk.” The company has challenged this action as a violation of due process and the First Amendment, among other grounds.
How has the U.S. military used AI in Iran?
The Pentagon is reportedly using AI to generate hundreds of recommendations for targets in Iran, pinpoint their location, prioritize their importance, and even evaluate whether the targeting is legal.
One of the AI systems it is using, the Maven Smart System, is the culmination of a decade of collaboration between the Defense Department and the tech industry to enhance intelligence analysis, surveillance, and targeting. The system enables the military to comb through massive amounts of information from satellites, data brokers, the military’s own drones and sensors, and social media to pick out people and objects of interest.
The system also integrates Anthropic’s Claude, which the military is using not just to speed up this target analysis but also to generate other types of intelligence and to simulate battlefield scenarios.
How much is the military spending on AI?
The Defense Department has allocated at least $75 billion to AI-driven programs since 2016, as explained in a Brennan Center report. The actual total could be far larger, as this figure does not cover programs that are secret or those where the extent of AI use is unclear.
In addition to surveillance and targeting, the military is investing heavily in the development of autonomous weapons, which can select targets and take lethal action with varying degrees of human involvement. The Ukraine war has been transformed by rapid innovation in small drone warfare, increasing pressure on the U.S. military to keep up. The Pentagon requested $13.4 billion for these types of systems for 2026 alone.
The military’s spending also includes as much as $9 billion on data centers and computing capabilities customized for its security needs. This is the infrastructure that keeps the military’s AI and tech systems online.
The amounts will almost certainly grow as the Pentagon continues its AI-first approach.
Which companies have received contracts to develop military AI?
Much of the Pentagon’s AI-related procurement spending has so far gone to data analytics giant Palantir and to Anduril, which manufactures AI-powered drones.
Palantir and Anduril recorded their largest-ever annual defense revenue in 2025 — $903 million and $912 million, respectively. Palantir is the lead contractor on the Maven Smart System, which has been used in Iran, Iraq, Syria, Ukraine, and Yemen.
Anduril specializes in autonomous systems such as drones, surveillance towers, and technology to repel autonomous weapons deployed by adversaries. Its drones are powered by the company’s proprietary AI, enabling them to navigate hostile environments, communicate with each other, and potentially even strike targets with little to no human involvement.
Last July, the Pentagon struck deals with Anthropic and three other companies — OpenAI, xAI, and Google — to develop military applications of their foundation models.
What are the biggest concerns about the military’s use of AI?
The rush to adopt AI threatens to displace human expertise and judgment in life-and-death decisions, jeopardizing troops and civilians alike. Anyone who has used AI chatbots, for example, knows that they frequently make mistakes, both obvious errors and ones that are harder to detect.
AI is prone to inaccuracy in the military context as well. In 2024, Maven’s algorithms could reportedly identify a tank correctly about 60 percent of the time in good weather, and accuracy dropped to only 30 percent in snowy conditions. Foundation models also produce false or misleading analysis while presenting it persuasively, which makes commanders and analysts more likely to accept their recommendations, especially in the heat of battle.
Thus, even if humans are making final decisions, relying on AI for target selection or justification can lead to incorrect outcomes — and in military situations, these mistakes can have deadly consequences. Media investigations have found, for example, that the Israel Defense Forces failed to sufficiently corroborate AI-recommended targets for strikes in Gaza, in part because their analysts were under immense pressure to approve them quickly.
AI also enables the military to collect and piece together location information, social media posts, and other data to recreate people’s movements, associations, and habits at scale. This form of mass surveillance threatens privacy and civil liberties. When it generates sensitive insights about Americans, it undermines their Fourth and First Amendment rights. The inferences the technology makes can be misleading and riddled with bias, such as when it flags satire or humor as genuine security threats, or associates protected characteristics, like Black and Muslim identities, with violence and other negative sentiments.
What are the risks of the military’s dependence on commercial AI technology?
Ceding ownership of technological capabilities to tech firms limits the Pentagon’s visibility and control over the inner workings of the software powering its most sensitive systems. The military routinely looks to industry for resources and expertise it doesn’t have, but it risks becoming too dependent on privately owned and managed technology for its AI needs.
This opacity makes it difficult for the military to inspect proprietary targeting algorithms for hidden biases that lead to the misidentification of civilians as military objectives. In 2025, for example, the Army warned that a battlefield communications system designed by Palantir and Anduril was a “black box” that makes it impossible to tell whether unauthorized users can access its applications and data. The Army has apparently mitigated the problem, but the episode raises the question of whether other systems suffer from similar vulnerabilities.
Are there safeguards to make sure the military uses AI responsibly?
There are some safeguards, but too few, and those that exist are inadequate.
While Congress has the power to regulate the military, it has done very little on AI use. The White House has tried to fill in the gaps, issuing a National Security Memorandum in 2024 that outlines guardrails on using AI in national security, such as testing to identify and minimize privacy risks. But the memorandum also gives agencies broad discretion to waive these safeguards, including if they “create an unacceptable impediment to critical agency operations.”
The Pentagon has its own directive on autonomous weapons, including weapons with AI-enabled functions. The directive does not ban weapons that can identify and fire on targets without human involvement. Instead, it requires senior Defense Department leaders to review whether these weapons allow for the “appropriate levels of human judgment over the use of force” before signing off on their use. This standard may be satisfied as long as there is broader human input in decisions about where and how to use such weapons.
The directive also requires testing, training, and other protocols to minimize operational failures and civilian harm. But the Pentagon’s cuts to oversight raise doubts about its ability to comply: It has, for example, halved the number of staff at the Office of the Director of Operational Test and Evaluation, which oversees much of this testing, and shuttered most of its civilian protection efforts.
Finally, the military is buying up bulk commercial datasets — which include the personal and sensitive information of Americans — without judicial oversight. Internal rules issued by the Office of the Director of National Intelligence do not meaningfully restrict this practice.
How can Congress better regulate the military’s AI use?
Congress should ensure that the Pentagon explains not just how it uses AI but also how much it’s spending on the technology, as well as known risks and failures of the systems it acquires. Lawmakers should mandate testing and evaluation of AI that poses risks to the safety of military personnel, to Americans’ privacy rights and civil liberties, and to civilians’ lives. These requirements should apply both before and during the technology’s use.
Congress must restrict the use of autonomous weapons, in accordance with the laws of war, such as the prohibition against weapons that are inherently indiscriminate. It should also enact stronger privacy protections. A good start would be to pass the Fourth Amendment Is Not For Sale Act, which would bar government agencies from buying certain types of sensitive data belonging to Americans without legal process.
These safeguards need strong enforcement and oversight. Congress should reverse staffing cuts and increase the budget of the Office of the Director of Operational Test and Evaluation. And it should seriously evaluate how outsourcing AI capabilities to a handful of tech companies could affect the nation’s security.