OpenAI CEO Sam Altman used a company-wide meeting this week to address one of the most uncomfortable realities of his firm’s new relationship with the U.S. military. Speaking candidly to employees, Altman made clear that while the Defense Department values OpenAI’s technical expertise, it has no interest in receiving the company’s moral or strategic opinions about how that technology is applied on the battlefield. The message was direct: OpenAI does not get to make operational decisions.
The all-hands gathering was Altman’s first opportunity to respond to employee concerns following a late-Friday agreement that granted the Pentagon access to OpenAI’s artificial intelligence models for use within its classified network. The deal moved quickly and caught many inside the company off guard, prompting a wave of internal questions about what exactly OpenAI had signed up for.
The Anthropic rivalry looms large
The agreement came on the heels of a public standoff involving Anthropic, OpenAI’s rival in the AI industry. Anthropic had pushed back against the Defense Department over how its technology could be used, objecting specifically to applications involving mass surveillance of American citizens and the deployment of fully autonomous weapons systems operating without human oversight.
Reports also surfaced suggesting Anthropic had raised concerns about whether its technology played any role in a military operation involving Venezuelan President Nicolas Maduro. Anthropic denied engaging in discussions about specific operations with the Defense Department, but the episode sharpened the contrast between the two companies and their approaches to government partnerships.
Altman indicated during the staff meeting that this tension may have contributed to the Pentagon’s frustration with Anthropic and, by extension, shaped the terms under which OpenAI entered its own agreement. He also said he is continuing to push the Defense Department to drop its designation of Anthropic as a supply-chain risk, an unusual label that has historically been reserved for foreign adversaries rather than American companies.
Altman’s deal came with conditions
When the Pentagon agreement was first announced, Altman stated publicly that it reflected OpenAI’s core principles, which include a prohibition on domestic mass surveillance and a requirement that human beings remain responsible for decisions involving the use of force. That framing suggested the deal carried meaningful guardrails.
But Altman later acknowledged the rollout appeared rushed, describing the situation in candid terms that hinted at internal dissatisfaction. He confirmed that OpenAI was working with the department to sharpen the language of the agreement and make the company’s principles more explicit within it. That includes protections meant to prevent intelligence agencies from using OpenAI’s services in ways the company considers off-limits, particularly when it comes to surveillance of Americans on domestic soil.
What this means for AI in the military
The broader implications of OpenAI’s Pentagon partnership extend well beyond the company itself. As artificial intelligence becomes increasingly embedded in national security infrastructure, the question of who governs its use and under what conditions is becoming one of the defining debates of the moment.
Altman’s message to staff reflects the tension at the heart of that debate. Technology companies building powerful AI systems are being pulled deeper into government and military work, often faster than their internal cultures are prepared for. Employees who joined AI labs to build products for consumers or researchers are now grappling with the reality that their work may inform decisions made in classified environments they will never see.
For Altman, the challenge is not only managing the Pentagon relationship but also holding together a workforce that has strong and varied opinions about where the line should be drawn. His willingness to speak directly to staff, even when the answers are uncomfortable, suggests he understands that silence on these questions carries its own risks.