AI can now create a replica of your personality


Led by Joon Sung Park, a Stanford PhD student in computer science, the team recruited 1,000 people who varied by age, gender, race, region, education, and political ideology. Participants were paid up to $100 for taking part. From interviews with them, the team created agent replicas of those individuals. To test how well the agents mimicked their human counterparts, participants completed a series of personality tests, social surveys, and logic games, twice each, two weeks apart; then the agents completed the same exercises. The results were 85% similar.
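As a rough illustration of how a figure like that could be derived, here is a minimal Python sketch that scores an agent's answers against its human's, normalized by how consistently that human answered across the two sessions. The function names and toy data are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: one way a replication score like the article's 85%
# figure could be computed. Function names and toy data are illustrative,
# not taken from the paper.

def agreement(responses_a: list[int], responses_b: list[int]) -> float:
    """Fraction of items on which two sets of survey answers match."""
    matches = sum(a == b for a, b in zip(responses_a, responses_b))
    return matches / len(responses_a)

def normalized_accuracy(human_week_0, human_week_2, agent):
    """Agent-vs-human agreement, normalized by how consistently the
    participant answered across the two sessions two weeks apart."""
    self_consistency = agreement(human_week_0, human_week_2)
    agent_match = agreement(human_week_0, agent)
    return agent_match / self_consistency if self_consistency else 0.0

# Toy data: ten yes/no items answered twice by a human, once by the agent.
human_week_0 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
human_week_2 = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]  # one answer drifted
agent        = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]  # two answers miss

score = normalized_accuracy(human_week_0, human_week_2, agent)
print(f"normalized accuracy: {score:.2f}")  # -> 0.89
```

The normalization step matters: a human who answers the same survey differently two weeks later sets a ceiling on how closely any replica could be expected to match them.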

“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made, that, I think, is ultimately the future,” Park says.

In the paper, the replicas are called simulation agents, and the impetus for creating them is to make it easier for researchers in social science and other fields to conduct studies that would be expensive, impractical, or unethical to do with real human subjects. If you can create AI models that behave like real people, the thinking goes, you can use them to test everything from how well interventions on social media combat misinformation to what behaviors cause traffic jams.

Such simulation agents are slightly different from the agents that dominate the work of leading AI companies today. Called tool-based agents, those are models built to do things for you, not converse with you. For example, they might enter data, retrieve information you have stored somewhere, or, someday, book travel for you and schedule appointments. Salesforce announced its own tool-based agents in September, followed by Anthropic in October, and OpenAI is planning to release some in January, according to press reports.

The two kinds of agents are different but share common ground. Research on simulation agents, like the ones in this paper, is likely to lead to stronger AI agents overall, says John Horton, an associate professor of information technologies at the MIT Sloan School of Management, who founded a company to conduct research using AI-simulated participants.

“This paper is showing how you can do a kind of hybrid: use real humans to generate personas which can then be used programmatically/in-simulation in ways you could not with real humans,” he said in an email.

The research comes with caveats, not the least of which is the danger it points to. Just as image-generation technology has made it easy to create harmful deepfakes of people without their consent, any agent-generation technology raises questions about the ease with which people can build tools to impersonate others online, saying or authorizing things they didn’t intend to say.

The evaluation methods the team used to test how well the AI agents replicated their corresponding humans were also fairly basic. These included the General Social Survey, which collects information on one’s demographics, happiness, behaviors, and more, and assessments of the Big Five personality traits: openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism. Such tests are commonly used in social science research but don’t pretend to capture all the unique details that make us ourselves. The AI agents were also worse at replicating the humans in behavioral tests like the “dictator game,” which is meant to illuminate how participants weigh values such as fairness.
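To make the behavioral gap concrete, here is a small sketch of how agent and human choices in a dictator-style game might be compared. The dollar amounts and the mean-absolute-difference metric are assumptions for illustration, not the paper’s actual protocol.

```python
# Hypothetical sketch of comparing dictator-game behavior: each human (and
# the agent replicating them) decides how much of a $100 pot to give away.
# Amounts and the metric are illustrative assumptions, not the study design.

human_gives = [50, 30, 0, 40, 20]   # dollars each participant gave away
agent_gives = [50, 25, 10, 40, 35]  # what each matched agent gave away

# Mean absolute difference: how far, on average, an agent's generosity
# deviates from that of the human it is meant to replicate.
mad = sum(abs(h - a) for h, a in zip(human_gives, agent_gives)) / len(human_gives)
print(f"mean absolute difference: ${mad:.2f}")  # -> $6.00
```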
