How AI Companies Shifted from Opposing to Supporting Military Collaboration
In 2024, the leading AI firms reversed course and opened their tools to military use, revealing a striking shift in tech ethics.
In January 2024, OpenAI quietly dropped its ban on military use of its AI. Soon after, reports emerged that the company was working with the Pentagon on several projects. In November, when Donald Trump was reelected as US president, Meta announced that the U.S. government and its allies could use its Llama models for defense. A few days later, Anthropic said its models, too, could be used by the military, and that it would partner with the defense contractor Palantir. By the end of the year, OpenAI had announced its own partnership with the defense startup Anduril. Finally, in February 2025, Google revised its guidelines to permit uses of its AI tools that could cause harm to people. In the span of a single year, concerns about the risks of advanced AI seemed to evaporate, and military use of AI became normalized.
One reason for this shift is the enormous cost of developing AI models. Economists have long noted that the defense sector can accelerate technological development. David J. Teece argued in 2018 that procurement by the U.S. Defense Department helps the technologies it buys mature quickly. Defense contracts also tend to offer flexible budgets and require long-term commitments, making the military an attractive customer for technology companies. That helps explain why an AI startup might find military work appealing. It does not, however, explain why all of the leading AI labs made similar decisions at nearly the same time.
The past few years have transformed how businesses compete: the emphasis has shifted from free markets to geopolitical concerns. To understand this change, we need to look at how governments and big tech companies have been intertwined. For decades, the relationship between tech companies and governments was mutually beneficial, fueling growth and innovation.
In recent years, however, this alignment between tech and political leaders has begun to fracture. A series of events in the 2010s eroded that once-strong connection, establishing new rules of the game in both the U.S. and China.
The Silicon Valley Consensus: Until the mid-2010s, political and tech leaders shared a set of beliefs known as the Silicon Valley Consensus. They agreed on how technology could best serve the world, convinced that the free global flow of communication and data would benefit everyone.
The consensus suited both sides because each believed technology would help create a world order favorable to the U.S. While tech leaders may have held genuinely idealistic views, government officials were thinking mostly in terms of practical power and state interests. The result was light-touch regulation that left tech companies largely free to act as they pleased.
During Bill Clinton's presidency, policy was deliberately designed to keep the digital economy largely untaxed and unregulated, encouraging businesses to self-regulate instead. To many, the reasoning seemed clear: any regulation risked slowing innovation and undermining U.S. technological power.