On June 12th, Hugging Face submitted a response to the US Department of Commerce NTIA request for information on AI Accountability policy. In our response, we stressed the role of documentation and transparency norms in driving AI accountability processes, as well as the need to draw on the full range of expertise, perspectives, and skills of the technology’s many stakeholders to address the daunting prospects of a technology whose unprecedented growth poses more questions than any single entity can answer.
Hugging Face’s mission is to “democratize good machine learning”. We understand the term “democratization” in this context to mean making Machine Learning systems not only easier to develop and deploy, but also easier for their many stakeholders to understand, interrogate, and critique. To that end, we have worked to foster transparency and inclusion through our education efforts, focus on documentation, community guidelines, and approach to responsible openness, as well as by developing no- and low-code tools that allow people of all technical backgrounds to analyze ML datasets and models. We believe this helps everyone interested better understand the limitations of ML systems and how they can safely be leveraged to best serve users and those affected by these systems. These approaches have already proven their utility in promoting accountability, especially in the larger multidisciplinary research endeavors we’ve helped organize, including BigScience (see our blog series on the social stakes of the project) and the more recent BigCode project (whose governance is described in more detail here).
Concretely, we make the following recommendations for accountability mechanisms:
- Accountability mechanisms should focus on all stages of the ML development process. The societal impact of a full AI-enabled system depends on choices made at every stage of development in ways that are impossible to fully predict, and assessments that focus only on the deployment stage risk incentivizing surface-level compliance that fails to address deeper issues until they have caused significant harm.
- Accountability mechanisms should combine internal requirements with external access and transparency. Internal requirements such as good documentation practices shape more responsible development and provide clarity on developers’ responsibility for enabling safer and more reliable technology. External access to internal processes and development choices is still necessary to verify claims and documentation, and to empower the many stakeholders of the technology who sit outside its development chain to meaningfully shape its evolution and promote their interests.
- Accountability mechanisms should invite participation from the broadest possible set of contributors, including developers working directly on the technology, multidisciplinary research communities, advocacy organizations, policy makers, and journalists. Understanding the transformative impact of the rapid adoption of ML technology is a task beyond the capacity of any single entity, and will require leveraging the full range of skills and expertise of our broad research community, its direct users, and affected populations.
We believe that prioritizing transparency, both in the ML artifacts themselves and in the outcomes of their assessment, will be integral to meeting these goals. You can find our more detailed response addressing these points here.
