A growing coalition of employees from Google and OpenAI is publicly supporting Anthropic’s refusal to grant the U.S. Department of Defense unrestricted access to its artificial intelligence technology. More than 300 Google workers and 60 OpenAI employees have signed an open letter urging their companies’ leadership to stand in solidarity with Anthropic.
The Core Dispute: Surveillance and Autonomous Weapons
The standoff centers on the Pentagon’s demand for full access, which Anthropic has resisted due to concerns over the potential misuse of AI for domestic mass surveillance and the development of fully autonomous weaponry. This isn’t simply a business disagreement; it’s a fundamental ethical clash over how powerful AI should be deployed.
The signatories of the letter argue that the Pentagon is attempting to pit these tech giants against each other through fear, hoping one will cave while the others remain silent. The letter explicitly calls on executives at Google and OpenAI to uphold Anthropic’s “red lines” against these controversial applications.
Internal Support and Public Statements
While formal responses from Google and OpenAI leadership remain pending, indications suggest internal sympathy for Anthropic’s position. OpenAI CEO Sam Altman stated in a CNBC interview that he doesn’t believe the Pentagon should be using coercive measures like the Defense Production Act (DPA) against tech companies. An OpenAI spokesperson has further confirmed the company aligns with Anthropic’s stance against autonomous weapons and mass surveillance.
Google DeepMind’s Chief Scientist Jeff Dean also voiced opposition on X, stating that mass surveillance violates the Fourth Amendment and risks political misuse.
Existing Military Access and the Threat of Force
The military currently uses AI tools like X’s Grok, Google’s Gemini, and OpenAI’s ChatGPT for non-classified tasks. Negotiations are underway to extend access to classified operations. However, Anthropic, despite having an existing partnership with the Pentagon, has drawn a firm line against its AI being used for surveillance or fully autonomous weapons systems.
Defense Secretary Pete Hegseth has threatened Anthropic with designation as a “supply chain risk” or forced compliance via the DPA if the company doesn’t yield. This signals a willingness to use significant leverage to secure access to cutting-edge AI capabilities.
The situation highlights the growing tension between the tech industry’s ethical commitments and the military’s demand for advanced tools. This isn’t just about one company; it’s a test case for how AI development will be governed, and for whether ethical constraints can withstand government pressure.