There’s one other question beneath all this: Should it be up to tech companies to ban things that are legal but that they find morally objectionable? The federal government certainly viewed Anthropic’s willingness to play this role as unacceptable. On Friday evening, eight hours before the US launched strikes in Tehran, Defense Secretary Pete Hegseth issued harsh remarks on X. “Anthropic delivered a master class in arrogance and betrayal,” he wrote, echoing President Trump’s order for the federal government to stop working with the AI company after Anthropic sought to keep its model Claude from being used for autonomous weapons or mass domestic surveillance. “The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose,” Hegseth wrote.
But unless OpenAI’s full contract reveals more, it’s hard not to see the company as sitting on an ideological seesaw, promising that it has leverage it will proudly use to do what it sees as the right thing while deferring to the law as the main backstop for what the Pentagon can do with its tech.
There are three things to watch here. One is whether this position will be enough for OpenAI’s own employees. With AI companies spending so heavily on talent, it’s possible that some at OpenAI see in Altman’s justification an unforgivable compromise.
Second, there is the scorched-earth campaign that Hegseth has promised to wage against Anthropic. Going far beyond simply canceling the federal government’s contract with the company, he announced that it would be classified as a supply chain risk, and that “no contractor, supplier, or partner that does business with the US military may conduct any business activity with Anthropic.” There is serious debate about whether this death blow is legally possible, and Anthropic has said it will sue if the threat is pursued. OpenAI has also come out against the move.
Lastly, how will the Pentagon replace Claude, the one AI model it actively uses in classified operations (including some in Venezuela), while it escalates strikes against Iran? Hegseth gave the agency six months to do so, during which the military will phase in OpenAI’s models as well as those from Elon Musk’s xAI.
But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out may be anything but easy. Even when the monthslong feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to abandon lines in the sand that they had once drawn, with new tensions in the Middle East as the first testing ground.
If you have information to share about how this is unfolding, reach out to me via Signal (username: jamesodonnell.22).
