Matthew Kearney: Bringing AI and philosophy into dialogue


Matthew Kearney was drawn to MIT by the culture of its cross-country team. Growing up in Austin, Texas, he loved spending time outdoors and playing soccer, but by high school running had become his primary sport. While looking at colleges, he wanted to find a place with both strong academics and a strong team community. After an official visit with the cross-country team, he knew MIT was the place for him.

“It’s truly been a defining part of my MIT experience,” says Kearney. “I love how quirky and fun and peculiar in the best way everyone is, and that atmosphere of doing things a little bit differently. That’s what sold me beyond the obvious academic and research reasons.”

Now a senior and a team captain, Kearney has made the most of his athletic and academic experiences. He arrived at MIT expecting to major in electrical engineering and computer science but fell in love with philosophy after taking 24.02 (Moral Problems and the Good Life). He’s majoring in both while also completing a master’s degree in computer science and engineering.

“The part of philosophy that interests me is thinking about how we want to live our lives as people, what matters to us, what’s valuable to us, and how can we do it in a way that respects the values that matter to other people,” says Kearney. “I’ve really enjoyed more abstract but purposeful thinking to complement the technical rigor of my computer science major.”

Kearney’s interests intersect in the field of artificial intelligence ethics, where he hopes to leverage his interdisciplinary education to thoughtfully examine and design artificial intelligence systems. Following graduation, he’ll pursue a DPhil in computer science at Oxford University as a Rhodes Scholar.

“There’s not a lot of dialogue that goes from the abstract tenets of moral philosophy all the way to the practical building of an AI tool to solve a problem,” says Kearney. “In my DPhil, I want to ask how we can start off with the goal of building certain ethical principles into AI, and how can we bring that down layers of abstraction until we understand what technical tools we can build to help realize those goals.”

Outside the classroom, Kearney has been excelling as well. The cross-country team captured a national title in the fall — not only the first in program history but also the first NCAA team championship won by an MIT athletic team — with Kearney also picking up individual All-American honors.

Human-centered AI design

Kearney is currently working on two research efforts. The first is a project with the Human Systems Lab, where he’s designing downscaling methods to apply to climate data. Most models operate on a global scale, but being able to predict how local regions will be affected could help guide effective policy and provide insight to the people living there.

For his master’s thesis, Kearney is helping to develop a deeper understanding of large language models, which are used to build tools such as ChatGPT. Beyond gaining the technical knowledge, Kearney is also always thinking about the ethical ramifications of these tools.

“Under the hood, people aren’t entirely sure why these models make certain decisions,” says Kearney. “They understand mathematically how it works, but they don’t know why models make individual decisions. The focus of my research has been trying to understand where concepts are located in the networks, and how they are recognized and transformed throughout the network. Then we can start to understand both the fairness and ethical questions about the model.”

At a high level, Kearney is interested in picking apart these models to understand them from all angles. He recognizes the immense potential artificial intelligence has to affect many different fields, but he also recognizes the need to wield the technology thoughtfully. This insight was sparked by the class 6.882 (Ethical Machine Learning in Human Deployments), a special subject offered by Assistant Professor Marzyeh Ghassemi in spring 2022.

“My technical education taught me that any problem can be solved if we throw enough engineering and technology at it,” says Kearney, adding that he believes he and many others have blind spots in their technical research. “This class really helped me exit that headspace to see that some problems can only be solved through social-centered, or economic, or political approaches. We need to think about how we can use tools from other disciplines to be thoughtful about how we’re using these technologies.”

Kearney sees an opportunity for his work to make an impact in a range of areas, from health care to bank loans.

“In application areas where we know there is already bias built into the models systemically, they’re susceptible to it carrying over into the model decisions that are made,” says Kearney. “However, these models are going to continue to be used and made, and it’s important that they’re made in a way that, one, we can understand how they’re actually working, and two, we can guarantee fairer outcomes.”

Kearney finds computer science and philosophy to be in constant dialogue with one another, and he is inspired by the pioneers in the field of AI ethics to continue building deliberate systems that make a positive impact on the world.

As he wraps up his time at MIT, Kearney is also looking forward to closing out his final track seasons strong, following the success of the cross-country team.

“This fall was the closest and most tight-knit the team’s ever been,” says Kearney. “We have such incredible talent and coaching this year, and already so many more national qualifiers than we’ve ever had. I’m excited to see what happens and to go out with a bang.”

