Comments on U.S. National AI Research Resource Interim Report



By Irene Solaiman

In late June 2022, Hugging Face submitted a response to the White House Office of Science and Technology Policy and National Science Foundation’s Request for Information on a roadmap for implementing the National Artificial Intelligence Research Resource (NAIRR) Task Force’s interim report findings. As a platform working to democratize machine learning by empowering people of all backgrounds to contribute to AI, we strongly support NAIRR’s efforts.

In our response, we encourage the Task Force to:

  • Appoint Technical and Ethical Experts as Advisors

    • Technical experts with a track record of ethical innovation should be prioritized as advisors; they will calibrate NAIRR not only on what is technically feasible, implementable, and necessary for AI systems, but also on how to avoid exacerbating harmful biases and other malicious uses of AI systems. Dr. Margaret Mitchell, one of the most distinguished technical experts and ethics practitioners in the AI field and Hugging Face’s Chief Ethics Scientist, is a natural example of an external advisor.
  • Resource (Model and Data) Documentation Standards

    • NAIRR-provided standards and templates for system and dataset documentation will ease accessibility and serve as a checklist. This standardization should ensure readability across audiences and backgrounds. Model Cards are a widely adopted structure for documentation that can be a strong template for AI models.
  • Make ML Accessible to Interdisciplinary, Non-Technical Experts

    • NAIRR should provide educational resources as well as easily understandable interfaces and low- or no-code tools so that all relevant experts can conduct complex tasks, such as training an AI model. For example, Hugging Face’s AutoTrain empowers anyone, regardless of technical skill, to train, evaluate, and deploy a natural language processing (NLP) model.
  • Monitor Open-Source and Open-Science Resources for High Misuse and Malicious Use Potential

    • Harm must be defined by NAIRR and its advisors and continually updated, but should encompass egregious and harmful biases, political disinformation, and hate speech. NAIRR should also invest in legal expertise to craft Responsible AI Licenses so it can take action should an actor misuse resources.
  • Empower Diverse Researcher Perspectives via Accessible Tooling and Resources

    • Tooling and resources must be available and accessible to different disciplines, as well as to the many languages and perspectives needed to drive responsible innovation. This means, at minimum, providing resources in multiple languages, which could be based on the most spoken languages in the U.S. The BigScience Research Workshop, a community of over 1,000 researchers from different disciplines hosted by Hugging Face and the French government, is an excellent example of empowering perspectives from over 60 countries to build one of the most powerful open-source multilingual language models.
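To illustrate the documentation standard recommended above: a Hugging Face Model Card is a markdown file led by a YAML metadata header. The sketch below is a hypothetical template, not an official NAIRR standard; the model name, dataset ID, and section contents are placeholders:

```markdown
---
language: en
license: apache-2.0
datasets:
  - example-org/example-dataset   # hypothetical dataset ID
metrics:
  - accuracy
---

# Model Card: example-sentiment-model (hypothetical)

## Intended Use
Sentiment classification of English product reviews; not intended for
medical, legal, or other high-stakes decisions.

## Limitations and Biases
Trained on web text; may reproduce harmful stereotypes, and performance
may degrade on dialects underrepresented in the training data.

## Evaluation
Report metrics on a held-out test split here, including disaggregated
results across demographic groups where applicable.
```

A shared template like this doubles as a checklist: a section left empty is a visible signal that an evaluation or limitations analysis has not yet been done.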

Our memo goes into further detail for each recommendation. We look forward to more resources that make AI broadly accessible in a responsible manner.



