In 15 TED Talk-style presentations, MIT faculty recently discussed their pioneering research that incorporates social, ethical, and technical considerations and expertise, each project supported by seed grants established by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. The call for proposals last summer was met with nearly 70 applications. A committee with representatives from every MIT school and the college convened to select the winning projects, which received up to $100,000 in funding.
“SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to spark bold, creative thinking around the complex challenges and possibilities in this space,” said Nikos Trichakis, co-associate dean of SERC and the J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we felt it vital to not only showcase the breadth and depth of the research that’s shaping the future of ethical computing, but to invite the community to be part of the conversation as well.”
“What you’re seeing here is sort of a collective community judgment about the most exciting work, in terms of research, within the social and ethical responsibilities of computing being done at MIT,” said Caspar Hare, co-associate dean of SERC and professor of philosophy.
The full-day symposium on May 1 was organized around four key themes: responsible health-care technology, artificial intelligence governance and ethics, technology in society and civic engagement, and digital inclusion and social justice. Speakers delivered thought-provoking presentations on a broad range of topics, including algorithmic bias, data privacy, the social implications of artificial intelligence, and the evolving relationship between humans and machines. The event also featured a poster session, where student researchers showcased projects they worked on throughout the year as SERC Scholars.
Highlights from the MIT Ethics of Computing Research Symposium in each of the theme areas, many of which are available to watch on YouTube, included:
Making the kidney transplant system fairer
Policies regulating the organ transplant system in the US are made by a national committee, and a policy often takes more than six months to create and then years to implement, a timeline that many on the waiting list simply can’t survive.
Dimitris Bertsimas, vice provost for open learning, associate dean of business analytics, and Boeing Professor of Operations Research, shared his latest work in analytics for fair and efficient kidney transplant allocation. Bertsimas’ new algorithm examines criteria like geographic location, mortality, and age in just 14 seconds, a monumental change from the usual six hours.
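The presentation summarized here doesn’t spell out the formulation, but the core idea, scoring donor-candidate pairs on weighted criteria and then optimizing the match, can be sketched in a few lines of Python. Everything below (the criteria, the weights, the data) is a hypothetical stand-in; the real system handles far richer policy constraints:

```python
# A minimal sketch, NOT the actual UNOS/Bertsimas model: score each
# donor-candidate pair on weighted criteria, then solve the resulting
# assignment problem exactly. All data and weights here are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(seed=0)
n_donors, n_candidates = 5, 8

# Hypothetical per-pair criteria, each scaled to [0, 1], higher is better.
geo_proximity = rng.random((n_donors, n_candidates))    # e.g., shorter transport distance
medical_urgency = rng.random((n_donors, n_candidates))  # e.g., candidate mortality risk
age_match = rng.random((n_donors, n_candidates))        # e.g., donor/candidate age compatibility

# Hypothetical policy weights; in practice these encode committee policy choices.
score = 0.3 * geo_proximity + 0.5 * medical_urgency + 0.2 * age_match

# Match each kidney to at most one candidate, maximizing the total weighted score.
donor_idx, candidate_idx = linear_sum_assignment(score, maximize=True)
for d, c in zip(donor_idx, candidate_idx):
    print(f"donor {d} -> candidate {c} (score {score[d, c]:.2f})")
```

The point of the sketch is the shape of the computation: once each pairing has a single policy-weighted score, finding the best overall match is a classic assignment problem that solvers handle in milliseconds, which is what makes rapid policy simulation possible.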
Bertsimas and his team work closely with the United Network for Organ Sharing (UNOS), a nonprofit that manages most of the national donation and transplant system through a contract with the federal government. During his presentation, Bertsimas shared a video from James Alcorn, senior policy strategist at UNOS, who offered this poignant summary of the new algorithm’s impact:
“This optimization radically changes the turnaround time for evaluating these different simulations of policy scenarios. It used to take us a couple of months to look at a handful of different policy scenarios, and now it takes a matter of minutes to look at hundreds and hundreds of scenarios. We’re able to make these changes much more rapidly, which ultimately means that we can improve the system for transplant candidates much more rapidly.”
The ethics of AI-generated social media content
As AI-generated content becomes more prevalent across social media platforms, what are the implications of disclosing (or not disclosing) that any part of a post was created by AI? Adam Berinsky, Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, a PhD student in the Department of Political Science, explored this question in a session that examined recent studies on the impact of various labels on AI-generated content.
In a series of surveys and experiments affixing labels to AI-generated posts, the researchers looked at how specific words and descriptions affected users’ perception of deception, their intent to engage with the post, and ultimately their belief in whether the post was true or false.
“The big takeaway from our initial set of findings is that one size doesn’t fit all,” said Péloquin-Skulski. “We found that labeling AI-generated images with a process-oriented label reduces belief in both false and true posts. This is quite problematic, as labeling is intended to reduce people’s belief in false information, not necessarily true information. This suggests that labels combining both process and veracity might be better at countering AI-generated misinformation.”
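The logic of the experiment can be made concrete with a small, hypothetical simulation; none of the numbers below come from the actual studies. It randomizes whether a true post carries a process-oriented label and tests whether the label shifts belief:

```python
# A hypothetical illustration of the experimental logic, not the authors' data:
# compare mean belief in a TRUE post shown with vs. without a process-oriented
# "AI-generated" label, using a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Simulated belief ratings on a 1-7 scale for two randomized groups.
belief_unlabeled = rng.normal(loc=5.0, scale=1.2, size=200).clip(1, 7)
belief_labeled = rng.normal(loc=4.4, scale=1.2, size=200).clip(1, 7)

effect = belief_labeled.mean() - belief_unlabeled.mean()
t_stat, p_value = stats.ttest_ind(belief_labeled, belief_unlabeled)
print(f"label effect on belief in a true post: {effect:+.2f} "
      f"(t = {t_stat:.2f}, p = {p_value:.3f})")
```

In this simulated setup, a negative effect on a true post is exactly the problem Péloquin-Skulski describes: a label meant to flag the generation process also suppresses belief in accurate content.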
Using AI to increase civil discourse online
“Our research aims to address how people increasingly want to have a say in the organizations and communities they belong to,” Lily Tsai explained in a session on experiments in generative AI and the future of digital democracy. Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, Toshiba Professor of Media Arts and Sciences, and a larger team.
Online deliberative platforms have recently been rising in popularity across the US in both public- and private-sector settings. Tsai explained that with technology, it’s now possible for everyone to have a say, but doing so can be overwhelming, or even feel unsafe. First, too much information is available, and second, online discourse has become increasingly “uncivil.”
The group focuses on “how we can build on existing technologies and improve them with rigorous, interdisciplinary research, and how we can innovate by integrating generative AI to enhance the benefits of online spaces for deliberation.” They’ve developed their own AI-integrated platform for deliberative democracy, DELiberation.io, and rolled out four initial modules. All studies have been in the lab so far, but they are also working on a set of forthcoming field studies, the first of which will be in partnership with the government of the District of Columbia.
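The talk summary doesn’t describe how DELiberation.io’s modules are built, but the two problems Tsai names, information overload and incivility, suggest where generative AI could plug in. The following is a purely hypothetical sketch with invented names, showing one way a platform might wrap a model behind narrow, auditable functions rather than open-ended chat:

```python
# A purely hypothetical sketch of one way a deliberation platform might wrap
# a generative model behind a narrow, auditable interface. DELiberation.io's
# actual modules and architecture are not described here; all names are invented.
from dataclasses import dataclass
from typing import Protocol


class TextModel(Protocol):
    """Any text-completion backend (local model, hosted API, etc.)."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class Comment:
    author: str
    text: str


def summarize_thread(model: TextModel, comments: list[Comment]) -> str:
    """Condense a long discussion so participants aren't overwhelmed by volume."""
    thread = "\n".join(f"{c.author}: {c.text}" for c in comments)
    return model.complete(
        "Summarize the main points of view in this discussion, neutrally:\n" + thread
    )


def civility_suggestion(model: TextModel, draft: str) -> str:
    """Offer a more civil rewording of a draft comment before it is posted."""
    return model.complete(
        "Rewrite this comment to be civil while keeping its substance:\n" + draft
    )
```

Keeping the model behind task-specific functions like these is one way a platform could make its AI use assessable, in the spirit of Tsai’s point below about evaluating downstream outcomes.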
Tsai told the audience, “If you take nothing else from this presentation, I hope you’ll take away this: that we should all be demanding that technologies being developed are assessed to see whether they have positive downstream outcomes, rather than simply focusing on maximizing the number of users.”
A public think tank that considers all aspects of AI
When Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens, a postdoc at the Data + Feminism Lab at MIT, initially submitted their funding proposal, they weren’t intending to develop a think tank but a framework, one that articulated how artificial intelligence and machine learning work could incorporate community methods and utilize participatory design.
In the end, they created Liberatory AI, which they describe as a “rolling public think tank about all aspects of AI.” D’Ignazio and Stevens gathered 25 researchers from a diverse array of institutions and disciplines who authored more than 20 position papers examining the most current academic literature on AI systems and engagement. They intentionally grouped the papers into three distinct themes: the corporate AI landscape, dead ends, and ways forward.
“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve come together to contest the status quo, think bigger-picture, and reorganize resources in this way in hopes of a larger societal transformation,” said D’Ignazio.