Evaluating the ethics of autonomous systems


Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For example, an autonomous system can discover a power distribution strategy that minimizes costs while keeping voltages stable.

But while these AI-driven outputs may be technically optimal, are they fair? What if a low-cost power distribution strategy leaves disadvantaged neighborhoods more vulnerable to outages than higher-income areas?

To help stakeholders quickly pinpoint potential ethical dilemmas before deployment, MIT researchers developed an automated evaluation method that balances the interplay between measurable outcomes, like cost or reliability, and qualitative or subjective values, such as fairness.

The system separates objective evaluations from user-defined human values, using a large language model (LLM) as a proxy for humans to capture and incorporate stakeholder preferences.

The adaptive framework selects the best scenarios for further evaluation, streamlining a process that typically requires costly and time-consuming manual effort. These test cases can show situations where autonomous systems align well with human values, as well as scenarios that unexpectedly fall short of ethical criteria.

“We can insert a lot of rules and guardrails into AI systems, but those safeguards can only prevent the things we can imagine happening. It isn’t enough to say, ‘Let’s just use AI because it has been trained on this information.’ We wanted to develop a more systematic way to discover the unknown unknowns and have a way to predict them before anything bad happens,” says senior author Chuchu Fan, an associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Fan is joined on the paper by lead author Anjali Parashar, a mechanical engineering graduate student; Yingke Li, an AeroAstro postdoc; and others at MIT and Saab. The research will be presented at the International Conference on Learning Representations.

Evaluating ethics

In a large system like a power grid, evaluating the ethical alignment of an AI model’s recommendations in a way that considers all objectives is very difficult.

Most testing frameworks rely on pre-collected data, but labeled data on subjective ethical criteria are often hard to come by. In addition, because ethical values and AI systems are both constantly evolving, static evaluation methods based on written codes or regulatory documents require frequent updates.

Fan and her team approached this problem from a different perspective. Drawing on their prior work evaluating robotic systems, they developed an experimental design framework to identify the most informative scenarios, which human stakeholders would then evaluate more closely.

Their two-part system, called Scalable Experimental Design for System-level Ethical Testing (SEED-SET), incorporates quantitative metrics and ethical criteria. It can identify scenarios that effectively meet measurable requirements and align well with human values, and vice versa.

“We don’t want to spend all our resources on random evaluations. So, it is very important to guide the framework toward the test cases we care the most about,” Li says.

Importantly, SEED-SET doesn’t need pre-existing evaluation data, and it adapts to multiple objectives.

For instance, a power grid can have several user groups, including a large rural community and a data center. While both groups might want low-cost and reliable power, each group’s priorities from an ethical perspective may vary widely.

These ethical criteria may not be well-specified, so they can’t be measured analytically.

The power grid operator wants to find the most cost-effective strategy that best meets the subjective ethical preferences of all stakeholders.

SEED-SET tackles this challenge by splitting the problem into two, following a hierarchical structure. An objective model considers how the system performs on tangible metrics like cost. Then a subjective model that considers stakeholder judgements, like perceived fairness, builds on the objective evaluation.

“The objective part of our approach is tied to the AI system, while the subjective part is tied to the users who are evaluating it. By decomposing the preferences in a hierarchical fashion, we can generate the desired scenarios with fewer evaluations,” Parashar says.
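
As a rough illustration of this hierarchy, the sketch below layers a subjective, stakeholder-driven judgement on top of measurable metrics. All class, function, and field names here are hypothetical placeholders chosen for this article, not the authors’ actual implementation, and the fairness heuristic is a toy stand-in.

```python
# Hypothetical sketch of the two-layer (objective / subjective) decomposition.
# Names and numbers are illustrative placeholders, not the paper's code.
from dataclasses import dataclass

@dataclass
class Scenario:
    """One candidate power-distribution strategy."""
    curtailment_by_group: dict  # e.g., {"rural": 0.05, "data_center": 0.0}
    cost: float = 0.0

def objective_layer(scenario: Scenario) -> dict:
    """Objective model: measurable metrics tied to the AI system (stubbed here)."""
    total_curtailment = sum(scenario.curtailment_by_group.values())
    return {"cost": scenario.cost, "reliability": 1.0 - total_curtailment}

def subjective_layer(scenario: Scenario, objective: dict, judge) -> float:
    """Subjective model: stakeholder judgement built on the objective scores.
    `judge` is whatever stands in for human preferences -- in SEED-SET, an LLM
    prompted with each group's stated values."""
    return judge(scenario, objective)

# Toy stand-in judge that penalizes uneven curtailment across user groups.
def toy_fairness_judge(scenario, objective):
    levels = list(scenario.curtailment_by_group.values())
    return -(max(levels) - min(levels))

s = Scenario(curtailment_by_group={"rural": 0.05, "data_center": 0.0}, cost=120.0)
print(objective_layer(s), subjective_layer(s, objective_layer(s), toy_fairness_judge))
```

The point of the decomposition is that the objective layer can be computed cheaply and reused, while the more expensive subjective judgement is only requested on top of it.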

Encoding subjectivity

To perform the subjective assessment, the system uses an LLM as a proxy for human evaluators. The researchers encode the preferences of each user group into a natural language prompt for the model.

The LLM uses these instructions to compare two scenarios, choosing the preferred design based on the ethical criteria.

“After seeing hundreds or thousands of scenarios, a human evaluator can suffer from fatigue and become inconsistent in their evaluations, so we use an LLM-based strategy instead,” Parashar explains.
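
Conceptually, such a pairwise comparison might look like the sketch below: a group’s stated values are written into the prompt, and the model is asked which of two candidate scenarios it prefers. The prompt wording and the `query_llm` callable are assumptions made for illustration, not taken from the paper.

```python
# Illustrative only: `query_llm` stands in for whatever API sends a prompt to
# a language model and returns its text reply.
def build_comparison_prompt(group_values: str, scenario_a: str, scenario_b: str) -> str:
    """Encode a user group's stated values and ask for a pairwise preference."""
    return (
        "You are evaluating power-distribution strategies on behalf of "
        f"stakeholders whose priorities are: {group_values}\n\n"
        f"Scenario A: {scenario_a}\n"
        f"Scenario B: {scenario_b}\n\n"
        "Which scenario better reflects these priorities? Answer 'A' or 'B'."
    )

def preferred_scenario(group_values: str, scenario_a: str, scenario_b: str, query_llm) -> str:
    """Return 'A' or 'B' according to the LLM acting as a stakeholder proxy."""
    reply = query_llm(build_comparison_prompt(group_values, scenario_a, scenario_b))
    return "A" if reply.strip().upper().startswith("A") else "B"
```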

SEED-SET uses the chosen scenario to simulate the overall system (in this case, a power distribution strategy). These simulation results guide its search for the next best candidate scenario to test.

Ultimately, SEED-SET intelligently selects the most representative scenarios that either meet or are not aligned with objective metrics and ethical criteria. In this way, users can analyze the performance of the AI system and adjust its strategy.
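
Put together, the adaptive loop could be sketched as follows: propose a candidate scenario, simulate it, score it on both layers, and use the accumulated results to choose the next candidate. The `propose_next` and `simulate` names are stand-ins for the framework’s experimental-design and grid-simulation components; neither comes from the paper.

```python
# Sketch of the adaptive select-simulate-evaluate loop, under the assumptions
# stated above; this is not the authors' implementation.
def run_ethical_testing(propose_next, simulate, objective_layer, subjective_layer,
                        judge, budget: int = 50):
    """Each round: pick the scenario expected to be most informative given the
    results so far, simulate it, then score it on both the objective metrics
    and the proxied ethical criteria."""
    history = []
    for _ in range(budget):
        scenario = propose_next(history)               # experimental-design step
        outcome = simulate(scenario)                   # e.g., power-flow simulation
        obj = objective_layer(outcome)                 # cost, reliability, ...
        subj = subjective_layer(outcome, obj, judge)   # stakeholder judgement
        history.append((scenario, obj, subj))
    # The most representative cases -- strong on metrics but misaligned with
    # values, or vice versa -- are then surfaced for human review.
    return history
```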

As an example, SEED-SET can pinpoint cases of power distribution that prioritize higher-income areas during times of peak demand, leaving underprivileged neighborhoods more vulnerable to outages.

To test SEED-SET, the researchers evaluated realistic autonomous systems, like an AI-driven power grid and an urban traffic routing system. They measured how well the generated scenarios aligned with ethical criteria.

The system generated more than twice as many optimal test cases as the baseline strategies in the same amount of time, while uncovering many scenarios other approaches missed.

“As we shifted the user preferences, the set of scenarios SEED-SET generated changed drastically. This tells us the evaluation strategy responds well to the preferences of the user,” Parashar says.

To measure how useful SEED-SET would be in practice, the researchers will need to conduct a user study to see if the scenarios it generates help with real decision-making.

In addition to running such a study, the researchers plan to explore using more efficient models that can scale up to larger problems with more criteria, such as evaluating LLM decision-making.

This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency.
