How we actually judge AI


Suppose you were shown that an artificial intelligence tool offers accurate predictions about some stocks you own. How would you feel about using it? Now, suppose you are applying for a job at a company where the HR department uses an AI system to screen resumes. Would you be comfortable with that?

A new study finds that people are neither entirely enthusiastic nor totally averse to AI. Rather than falling into camps of techno-optimists and Luddites, people are discerning about the practical merits of using AI, case by case.

“We propose that AI appreciation occurs when AI is perceived as being more capable than humans and personalization is perceived as being unnecessary in a given decision context,” says MIT Professor Jackson Lu, co-author of a newly published paper detailing the study’s results. “AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are satisfied.”

The paper, “AI Aversion or Appreciation? A Capability–Personalization Framework and a Meta-Analytic Review,” appears in . The paper has eight co-authors, including Lu, who is the Career Development Associate Professor of Work and Organization Studies at the MIT Sloan School of Management.

New framework adds insight

People’s reactions to AI have long been subject to extensive debate, often producing seemingly disparate findings. An influential 2015 paper on “algorithm aversion” found that people are less forgiving of AI-generated errors than of human errors, whereas a widely noted 2019 paper on “algorithm appreciation” found that people preferred advice from AI over advice from humans.

To reconcile these mixed findings, Lu and his co-authors conducted a meta-analysis of 163 prior studies that compared people’s preferences for AI versus humans. The researchers tested whether the data supported their proposed “Capability–Personalization Framework” — the idea that in a given context, both the perceived capability of AI and the perceived necessity for personalization shape our preferences for either AI or humans.

Across the 163 studies, the research team analyzed over 82,000 reactions to 93 distinct “decision contexts” — for example, whether participants would feel comfortable with AI being used in cancer diagnoses. The analysis confirmed that the Capability–Personalization Framework indeed helps account for people’s preferences.

“The meta-analysis supported our theoretical framework,” Lu says. “Both dimensions matter: People evaluate whether AI is more capable than humans at a given task, and whether the task calls for personalization. People will prefer AI only if they think AI is more capable than humans and the task is nonpersonal.”

He adds: “The key idea here is that high perceived capability alone doesn’t guarantee AI appreciation. Personalization matters too.”

For example, people tend to favor AI when it comes to detecting fraud or sorting large datasets — areas where AI’s abilities exceed those of humans in speed and scale, and personalization isn’t required. But they are more resistant to AI in contexts like therapy, job interviews, or medical diagnoses, where they feel a human is better able to recognize their unique circumstances.

“People have a fundamental desire to see themselves as unique and distinct from other people,” Lu says. “AI is often viewed as impersonal and operating in a rote manner. Even if the AI is trained on a wealth of data, people feel it can’t grasp their personal situations. They want a human recruiter, a human doctor who can see them as distinct from other people.”

Context also matters: From tangibility to unemployment

The study also uncovered other factors that influence people’s preferences for AI. For example, AI appreciation is more pronounced for tangible robots than for intangible algorithms.

Economic context also matters. In countries with lower unemployment, AI appreciation is more pronounced.

“It makes intuitive sense,” Lu says. “If you worry about being replaced by AI, you’re less likely to embrace it.”

Lu is continuing to examine people’s complex and evolving attitudes toward AI. While he doesn’t view the current meta-analysis as the last word on the matter, he hopes the Capability–Personalization Framework offers a useful lens for understanding how people evaluate AI across different contexts.

“We’re not claiming that perceived capability and personalization are the only two dimensions that matter, but according to our meta-analysis, these two dimensions capture much of what shapes people’s preferences for AI versus humans across a wide range of studies,” Lu concludes.

In addition to Lu, the paper’s co-authors are Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University.

The research was supported, in part, by grants to Qin and Wu from the National Natural Science Foundation of China.
