Mon Jan 20 2025

The Pentagon claims that artificial intelligence is accelerating its "kill chain."

Prominent artificial intelligence developers, such as OpenAI and Anthropic, are walking a fine line in selling software to the U.S. military: they want to help the Pentagon become more efficient without letting their technology be used to harm people. Today their tools are not used as weapons, but they are giving the Department of Defense a "significant advantage" in identifying, tracking, and assessing threats, according to Dr. Radha Plumb, the Pentagon's Chief Digital and AI Officer.

Plumb said AI is helping the Pentagon speed up its execution of the "kill chain," the military process of identifying, tracking, and eliminating threats through a complex network of sensors and platforms. Generative AI is proving useful in the planning and strategy phases of that process. The relationship between the Pentagon and AI developers is still new: in 2024, OpenAI, Anthropic, and Meta revised their usage policies to let U.S. intelligence and defense agencies use their AI systems, while still prohibiting the technology from being used to harm humans.

This new openness has set off a kind of "speed dating" between AI companies and defense contractors. Meta, for example, partnered with Lockheed Martin and Booz Allen to bring its Llama models to defense agencies, Anthropic teamed up with Palantir in November, and OpenAI signed a deal with Anduril in December. As generative AI proves its usefulness in the military realm, it could prompt Silicon Valley to loosen its AI usage policies and allow more military applications.

Plumb also noted that generative AI can be useful for simulating different scenarios, letting commanders creatively explore response options and potential trade-offs in threat situations. It remains unclear, however, exactly which technology the Pentagon is using for this work, since employing generative AI in the "kill chain" could run counter to the usage policies of several leading AI model developers.

Anthropic, for its part, prohibits the use of its models to develop systems designed to cause harm to or loss of human life. Its CEO, Dario Amodei, has defended a balanced approach to AI in defense contexts, arguing that neither a total prohibition nor unrestricted use is a sensible option.

In recent months, a debate has flared over whether AI weapons should be allowed to make life-and-death decisions. While some argue that weapons of this kind are already in use, Plumb dismissed the idea that the Pentagon would buy fully autonomous weapons, reaffirming that humans will always be involved in decisions to use force.

The notion of "autonomy" in this context is a topic of debate, as it raises questions about at what point automated systems, whether AI coding agents or autonomous vehicles, operate independently. Plumb indicated that thinking of these systems as capable of making life-or-death decisions autonomously is an overly simplistic view. In reality, the use of these systems by the Pentagon is perceived as an active collaboration between humans and machines.

Military partnerships have not always been well received in Silicon Valley. In the past, Amazon and Google employees were fired after protesting military contracts. The response from the AI community, however, has been relatively subdued. Some researchers, such as Anthropic's Evan Hubinger, argue that military use of AI is inevitable and that working directly with the military is the best way to ensure it is used responsibly.

How AI companies and the military work together, and how that relationship is regulated, will remain crucial questions as defense applications of the technology advance.