Thoughts on America's AI Action Plan

Anthropic
Today, the White House released "Winning the Race: America's AI Action Plan," a comprehensive strategy to maintain America's advantage in AI development. We're encouraged by the plan's focus on accelerating AI infrastructure and federal adoption, as well as strengthening safety testing and security coordination. Many of the plan's recommendations reflect Anthropic's response to the Office of Science and Technology Policy's (OSTP) earlier request for information. While the plan positions America for AI advancement, we believe strict export controls and AI development transparency standards remain crucial next steps for securing American AI leadership.

Accelerating AI infrastructure and adoption

The Action Plan prioritizes AI infrastructure and adoption, consistent with Anthropic's submission to OSTP in March.

We applaud the Administration's commitment to streamlining data center and energy permitting to meet AI's power needs. As we stated in our OSTP submission and at the Pennsylvania Energy and Innovation Summit, without adequate domestic energy capacity, American AI developers may be forced to relocate operations overseas, potentially exposing sensitive technology to foreign adversaries. Our recently published "Build AI in America" report details the steps the Administration can take to accelerate the buildout of our nation's AI infrastructure, and we look forward to working with the Administration on measures to expand domestic energy capacity.

The Plan's recommendations to increase the federal government's adoption of AI also include proposals that are closely aligned with Anthropic's policy priorities and proposals to the White House. These include:

  • Tasking the Office of Management and Budget (OMB) with addressing resource constraints, procurement limitations, and programmatic obstacles to federal AI adoption.
  • Launching a Request for Information (RFI) to identify federal regulations that impede AI innovation, with OMB coordinating reform efforts.
  • Updating federal procurement standards to remove barriers that prevent agencies from deploying AI systems.
  • Promoting AI adoption across defense and national security applications through public-private collaboration.

Democratizing AI's benefits

We're aligned with the Action Plan's focus on ensuring broad participation in, and benefit from, AI's continued development and deployment.

The Action Plan's continuation of the National AI Research Resource (NAIRR) pilot ensures that students and researchers across the country can participate in and contribute to the advancement of the AI frontier. We have long supported the NAIRR and are proud of our partnership with the pilot program. Further, the Action Plan's emphasis on rapid retraining programs for displaced workers and pre-apprenticeship AI programs recognizes the mistakes of prior technological transitions and demonstrates a commitment to delivering AI's benefits to all Americans.

Complementing these proposals are our efforts to understand how AI is transforming, and how it will transform, our economy. The Economic Index and the Economic Futures Program aim to provide researchers and policymakers with the data and tools they need to ensure AI's economic benefits are broadly shared and its risks appropriately managed.

Promoting safe AI development

Powerful AI systems will be developed in the coming years. The plan's emphasis on defending against the misuse of powerful AI models and preparing for future AI-related risks is appropriate and welcome. Specifically, we commend the Administration's prioritization of supporting research into AI interpretability, AI control systems, and adversarial robustness. These are essential lines of research that should be supported to help us manage powerful AI systems.

We're glad the Action Plan affirms the National Institute of Standards and Technology's Center for AI Standards and Innovation's (CAISI) essential work to evaluate frontier models for national security concerns, and we look forward to continuing our close partnership with them. We encourage the Administration to continue to invest in CAISI. As we noted in our submission, advanced AI systems are demonstrating concerning improvements in capabilities relevant to biological weapons development. CAISI has played a leading role in developing testing and evaluation capabilities to address these risks. We encourage focusing these efforts on the most novel and acute national security risks that AI systems may pose.

The need for a national standard

Beyond testing, we believe basic AI development transparency requirements, such as public reporting on safety testing and capability assessments, are essential for responsible AI development. Leading AI model developers should be held to basic and publicly verifiable standards for assessing and managing the catastrophic risks posed by their systems. Our proposed framework for frontier model transparency focuses on these risks. We would have liked to see the plan do more on this topic.

Leading labs, including Anthropic, OpenAI, and Google DeepMind, have already implemented voluntary safety frameworks, which demonstrates that responsible development and innovation can coexist. In fact, with the launch of Claude Opus 4, we proactively activated ASL-3 protections to prevent misuse for chemical, biological, radiological, and nuclear (CBRN) weapons development. This precautionary step shows that, far from slowing innovation, robust safety protections help us build better, more reliable systems.

We share the Administration's concern about overly prescriptive regulatory approaches creating an inconsistent and burdensome patchwork of laws. Ideally, these transparency requirements would come from the federal government in the form of a single national standard. However, consistent with our stated belief that a ten-year moratorium on state AI laws is too blunt an instrument, we continue to oppose proposals aimed at preventing states from enacting measures to protect their residents from potential harms caused by powerful AI systems if the federal government fails to act.

Maintaining strong export controls

The Action Plan states that "denying our foreign adversaries access to [advanced AI compute] . . . is a matter of both geostrategic competition and national security." We strongly agree. That is why we are concerned by the Administration's recent reversal on exports of the Nvidia H20 chip to China.

AI development has been defined by scaling laws: the intelligence and capability of a system is determined by the scale of its compute, energy, and data inputs during training. While these scaling laws continue to hold, the latest and most capable reasoning models have demonstrated that AI capability also scales with the amount of compute made available to a system working on a given task, or "inference." The amount of compute available during inference is limited by a chip's memory bandwidth. While the H20's raw computing power is exceeded by chips made by Huawei, as Commerce Secretary Lutnick and Under Secretary Kessler recently testified, Huawei continues to struggle with production volume, and no domestically produced Chinese chip matches the H20's memory bandwidth.
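To see why memory bandwidth, rather than raw computing power, bounds inference speed, consider the standard back-of-envelope estimate: generating each token requires streaming every model weight from memory once, so a chip's bandwidth caps its decode throughput. The sketch below is illustrative only and is not from the Action Plan or Anthropic's submission; the model size, weight precision, and bandwidth figures are hypothetical round numbers, not specifications of any particular chip.

```python
# Hypothetical roofline estimate: single-stream decode throughput is
# bounded by (memory bandwidth) / (bytes of model weights), because
# each generated token reads every weight from memory once.
# KV-cache traffic and batching effects are ignored for simplicity.

def decode_tokens_per_second(params_billion: float,
                             bytes_per_param: float,
                             bandwidth_tb_s: float) -> float:
    """Upper bound on tokens/sec for a memory-bandwidth-bound model."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    bandwidth_bytes_per_s = bandwidth_tb_s * 1e12
    return bandwidth_bytes_per_s / model_bytes

# Illustrative numbers only: a 70B-parameter model stored in 8-bit
# weights (1 byte each) on a chip with 4 TB/s of memory bandwidth.
ceiling = decode_tokens_per_second(70, 1.0, 4.0)
print(f"{ceiling:.1f} tokens/sec ceiling")  # ~57.1 tokens/sec
```

Doubling memory bandwidth doubles this ceiling regardless of FLOPS, which is why a chip with modest raw compute but high bandwidth can still be valuable for inference workloads.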

As a result, the H20 provides unique and significant computing capabilities that would otherwise be unavailable to Chinese firms, and it would compensate for China's otherwise severe shortage of AI chips. Permitting export of the H20 to China would squander an opportunity to extend American AI dominance just as a new phase of competition is starting. Furthermore, exports of U.S. AI chips will not divert the Chinese Communist Party from its quest for self-reliance in the AI stack.

To that end, we strongly encourage the Administration to maintain controls on the H20 chip. These controls are consistent with the export controls recommended by the Action Plan and are essential to securing and growing America's AI lead.

Looking ahead

The alignment between many of our recommendations and the AI Action Plan demonstrates a shared understanding of AI's transformative potential and the urgent actions needed to sustain American leadership.

We look forward to working with the Administration to implement these initiatives while ensuring appropriate attention to catastrophic risks and maintaining strong export controls. Together, we can ensure that powerful AI systems are developed safely in America, by American firms, reflecting American values and interests.

For more details on our policy recommendations, see our full submission to OSTP, our ongoing work on responsible AI development, and our recent report on increasing domestic energy capacity.
