We believe deeply in the existential importance of using AI to defend the US and other democracies, and to defeat our autocratic adversaries.
Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models within the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to offer custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of which have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage.
Anthropic understands that the Department of War, not private corporations, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
Nevertheless, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
- Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, it is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the federal government can buy detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life, automatically and at massive scale.
- Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, skilled troops exhibit every day. They must be deployed with proper guardrails, which do not exist today.
To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces so far.
The Department of War has stated that it will only contract with AI companies that accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk” (a label reserved for US adversaries, never before applied to an American company) and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
Regardless, these threats do not change our position: we cannot in good conscience accede to their request.
It is the Department’s prerogative to select contractors most aligned with its vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters, with our two requested safeguards in place. Should the Department decide to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will remain available on the expansive terms we have proposed for as long as required.
We remain ready to continue our work to support the national security of the United States.
