One of the important things to understand about “ethics” in AI is that it is about values. Ethics doesn’t tell you what’s right or wrong; it provides a vocabulary of values – transparency, safety, justice – and frameworks for prioritizing among them. This summer, we were able to take our understanding of values in AI to legislators in the E.U., U.K., and U.S., to help shape the future of AI regulation. This is where ethics shines: helping carve out a path forward when laws aren’t yet in place.
Consistent with Hugging Face’s core values of openness and accountability, we’re sharing a collection of what we’ve said and done here. This includes our CEO Clem’s testimony to U.S. Congress and statements at the U.S. Senate AI Insight Forum; our guidance on the E.U. AI Act; our comments to the NTIA on AI Accountability; and our Chief Ethics Scientist Meg’s comments to the Democratic Caucus. Common to many of these discussions were questions about why openness in AI can be beneficial, and we share a collection of our answers to this question here.
Consistent with our core value of democratization, we’ve also spent a lot of time speaking publicly, and have been privileged to speak with journalists in order to help explain what’s currently happening in the world of AI. This includes:
- Comments from Sasha on AI’s energy use and carbon emissions (The Atlantic, The Guardian (twice), New Scientist, The Weather Network, the Wall Street Journal (twice)), as well as penning part of a Wall Street Journal op-ed on the topic; thoughts on AI doomsday risk (Bloomberg, The Times, Futurism, Sky News); details on bias in generative AI (Bloomberg, NBC, Vox); addressing how marginalized workers create the data for AI (The Globe and Mail, The Atlantic); highlighting the effects of sexism in AI (VICE); and providing insights in MIT Technology Review on AI text detection, open model releases, and AI transparency.
- Comments from Nathan on the state of the art in language models and open releases (WIRED, VentureBeat, Business Insider, Fortune).
- Comments from Meg on AI and misinformation (CNN, al Jazeera, the New York Times); the need for fair handling of artists’ work in AI (Washington Post); advancements in generative AI and their relationship to the greater good (Washington Post, VentureBeat); how journalists can better shape the evolution of AI with their reporting (CJR); explaining the basic statistical concept of perplexity in AI (Ars Technica); and highlighting patterns of sexism (Fast Company).
- Comments from Irene on understanding the regulatory landscape of AI (MIT Technology Review, Barron’s).
- Comments from Yacine on open source and AI legislation (VentureBeat, TIME), as well as copyright issues (VentureBeat).
- Comments from Giada on the concepts of AI “singularity” (Popular Mechanics) and AI “sentience” (RFI, Radio France); thoughts on the perils of artificial romance (Analytics India Magazine); and explaining value alignment (The Hindu).
Some of our talks released this summer include Giada’s TED presentation on whether “ethical” generative AI is possible (the automated English translation subtitles are great!); Yacine’s presentations on Ethics in Tech at the Markkula Center for Applied Ethics and on Responsible Openness at the Workshop on Responsible and Open Foundation Models; Katie’s chat about generative AI in health; and Meg’s presentation for London Data Week on Building Better AI in the Open.
Of course, we’ve also made progress on our regular work (our “work work”). The fundamental value of approachability has emerged across our work as we have focused on how to shape AI in a way that is informed by society and human values, where everyone feels welcome. This includes a new course on AI audio from Maria and others; a resource from Katie on Open Access clinical language models; a tutorial from Nazneen and others on Responsible Generative AI; our FAccT papers on The Gradient of Generative AI Release (video) and Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML (video); as well as workshops on Mapping the Risk Surface of Text-to-Image AI with a participatory, cross-disciplinary approach and Assessing the Impacts of Generative AI Systems Across Modalities and Society (video).
We have also moved forward with our goals of fairness and justice through bias and harm testing, recently applied to the new Hugging Face multimodal model IDEFICS. We have worked on how to operationalize transparency responsibly, including updating our Content Policy (spearheaded by Giada). We have advanced our support of language diversity on the Hub by using machine learning to improve metadata (spearheaded by Daniel), and our support of rigour in AI by adding more descriptive statistics to datasets (spearheaded by Polina) to foster a better understanding of what AI learns and how it can be evaluated.
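As a rough illustration of what such descriptive statistics can look like in practice, here is a minimal sketch using the `datasets` library; the dataset name and the particular statistics computed are assumptions for the example, not the Hub’s actual implementation:

```python
# Minimal sketch: simple descriptive statistics over a text dataset from the Hub.
# Illustrative only; the dataset ("imdb") and the chosen statistics are assumptions.
from datasets import load_dataset
import numpy as np

ds = load_dataset("imdb", split="train")  # any text dataset on the Hub works here

# Word counts per example, as one simple way to characterize what a model would see.
lengths = [len(example["text"].split()) for example in ds]

print(f"examples: {len(lengths)}")
print(f"mean length (words): {np.mean(lengths):.1f}")
print(f"median length (words): {np.median(lengths):.1f}")
print(f"95th percentile length (words): {np.percentile(lengths, 95):.0f}")
```

Even simple summaries like these help readers judge what a dataset covers and, by extension, what an AI system trained on it is likely to learn.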
Drawing from our experiences this past season, we now provide a collection of many of the resources at Hugging Face that are particularly useful in current AI ethics discourse, available here: https://huggingface.co/society-ethics.
Finally, we’ve been surprised and delighted by public recognition for many of the society & ethics regulars, including both Irene and Sasha being selected for MIT’s 35 Innovators Under 35 (Hugging Face makes up ¼ of the AI 35 Under 35!); Meg being included in lists of influential AI innovators (WIRED, Fortune); and Meg and Clem’s selection for TIME’s 100 Most Influential People in AI. We’re also very sad to say goodbye to our colleague Nathan, who has been instrumental in our work connecting ethics to reinforcement learning for AI systems. As a parting gift, he has provided further details on the challenges of operationalizing ethical AI in RLHF.
Thanks for reading!
— Meg, on behalf of the Ethics & Society regulars at Hugging Face
