Research Report

The Business of Military AI

The Pentagon has been spending tens of billions of dollars to adopt new technologies at breakneck speed. Without oversight and safeguards, military applications of artificial intelligence could jeopardize civil liberties and lives.

Illustration with a collage of binary code, the White House, drone bombers, a drone operator, and a satellite targeting image.
Lincoln Agnew
March 11, 2026

In a world at war on five continents and amid intensifying great-power competition, the U.S. Department of Defense (DOD) has forged extensive partnerships with the tech industry to modernize the military’s capabilities.1 Rapid technological advances have made artificial intelligence (AI) a compelling tool for mining data and speeding up decision-making on the battlefield. AI may also prove useful in a wide range of support functions, from helping the military forecast weapons maintenance and repair needs to organizing supply lines during conflict.

Military leaders have credited Project Maven — a pilot project with leading tech companies to develop AI that sifts through and interprets drone and satellite footage — with greatly reducing the time it takes to identify and strike targets.2 And the Defense Innovation Unit (DIU), the Pentagon organization serving as a bridge between Silicon Valley and the military, has fast-tracked contracts for experimental drone technology, which the DIU promises will make combat operations “less expensive” and “put fewer people in the line of fire.”3

Many claims about the technology’s effectiveness, however, remain untested, and risks to soldiers and civilians remain unaddressed. AI errors can cascade into system failures that misidentify civilians as targets while overlooking genuine threats — jeopardizing mission objectives, combat safety, and compliance with the laws of war. These failures could happen even with humans in the loop: commanders and weapons operators are generally supposed to independently verify AI-generated targets, but in reality they may become too willing to defer to algorithmic recommendations. Additionally, greater reliance on AI reduces the lives of individuals to blips and data points on a screen, which could desensitize soldiers to acts of killing and destruction.

Despite these dangers, tech and government leaders insist that the military is not moving fast enough. Shyam Sankar, Palantir Technologies’ chief technology officer, has expressed alarm that “the West has empirically lost deterrence” against China because of excessive regulation and bureaucracy.4 Secretary of Defense Pete Hegseth has said that the acquisition of new weapons technology is “unacceptably slow”; he has directed a slew of changes to put the military “on a wartime footing.”5 This push coincides with an unprecedented expansion of the department’s annual budget to $1 trillion for fiscal year 2026, more than a 13 percent increase from fiscal year 2025.6

The Pentagon’s race to adopt AI is poised to deepen its already substantial reliance on the technology. Since Project Maven’s launch in 2017, and particularly since 2020, Pentagon contracts awarded to tech companies that specialize in building and supporting AI systems have grown exponentially. The two companies leading this growth — data analytics giant Palantir Technologies and autonomous systems manufacturer Anduril Industries — have grown their defense revenue faster than most comparable government contractors. Industry leaders are also taking on a greater policymaking role, particularly when it comes to the acquisition, testing, and oversight of the very technologies in which they have a financial stake.

Responsible innovation requires the government to strike the right balance between speed and caution. The military must be able to make levelheaded, evidence-based assessments about the proper role of any new technology in filling capability gaps. It must also conduct the due diligence and testing needed to ensure that newly adopted technologies are safe and effective and do not infringe on fundamental rights.

Yet the accelerating use of AI in warfighting has not been met with commensurate urgency to reckon with its dangers. It has been subject to minimal transparency, insulating it from meaningful public scrutiny and legislative oversight. Even the most basic information about the types of systems the Pentagon is adopting, the degree to which they are effective and safe, and the extent to which their use adheres to the laws of war and other guardrails is often hidden from Congress and the public. The proprietary nature of many of these systems also raises questions about whether the Pentagon itself has access to the data necessary to conduct meaningful due diligence and monitor performance.

Further, there are few safeguards to ensure a proper accounting of the costs and risks of AI warfare. In addition to the acquisition overhaul that Hegseth is leading, the Pentagon has sharply curtailed agency-wide efforts to test and evaluate major weapons systems and assess the risks of civilian harm, making it more difficult to verify that AI-augmented systems will work as promised and without excessive collateral damage. Rules that President Joe Biden’s administration introduced to manage AI risk — which were inadequate to begin with — may be further weakened under President Donald Trump.

Implementing regulations and oversight throughout the acquisition, training, refinement, and deployment of an AI program can spell the difference between success and failure. Autonomous drones sent by U.S. tech start-ups to help Ukraine in its fight against Russia, for example, proved to be error-prone, difficult to repair, and easily foiled by relatively basic electronic jamming techniques.7 Ukraine’s reliance on these drones has been limited, but if the U.S. military were to embed such risky and unreliable AI tools into its core combat functions, it could put civilians and humanitarian workers in the crosshairs. In Gaza, too, inaccuracies in AI-generated intelligence about the identity and location of militants have informed Israeli strikes that killed scores of civilians,8 while facial recognition errors have contributed to the wrongful arrest and interrogation of Palestinians.9

This report documents the military’s expanded use of AI, the tech industry’s role in pushing for even greater adoption, and the risks posed by ineffective regulation. Part I identifies the areas of warfighting where the DOD is making the heftiest investments in AI. Part II is a deep dive into defense procurement data to show how the department directs these investments increasingly toward a handful of tech firms with the resources to develop and support AI systems. Part III traces the growing policymaking impact and influence of these firms. Part IV analyzes gaps and loopholes in the existing patchwork of rules governing how the military acquires and uses AI. Part V examines how the rush to adopt AI without meaningful safeguards or independent oversight could burden the military with ineffective, unsafe systems that also inflict excessive civilian harm and infringe on privacy and civil liberties. Part VI offers a roadmap of checks and balances at this transformative moment for the future of war.