More humanlike behaviors emerged in a series of 30-agent simulations. Despite all of the agents starting with the same personality and the same overall goal (to create an efficient village and protect the community against attacks from other in-game creatures), they spontaneously developed specialized roles within the community, without any prompting. They diversified into roles such as builder, defender, trader, and explorer. Once an agent had begun to specialize, its in-game actions began to reflect its new role. For example, an artist spent more time picking flowers, farmers gathered seeds, and guards built more fences.
“We were surprised to see that if you put [in] the right kind of brain, they can have really emergent behavior,” says Yang. “That’s what we expect humans to have, but don’t expect machines to have.”
Yang’s team also tested whether agents could follow community-wide rules. They introduced a world with basic tax laws and allowed agents to vote for changes to the in-game taxation system. Agents prompted to be pro- or anti-tax were able to influence the behavior of other agents around them, enough that they would then vote to reduce or raise taxes depending on who they’d interacted with.
The team scaled up, pushing the number of agents in each simulation to the maximum the Minecraft server could handle without glitching, up to 1,000 at once in some cases. In one of Altera’s 500-agent simulations, they watched how the agents spontaneously came up with and then spread cultural memes (such as a passion for pranking, or an interest in eco-related issues) among their fellow agents. The team also seeded a small group of agents to try to spread the parody religion Pastafarianism around the different towns and rural areas that made up the in-game world, and watched as these Pastafarian priests converted many of the agents they interacted with. The converts went on to spread Pastafarianism (the word of the Church of the Flying Spaghetti Monster) to nearby towns in the game world.
The way the agents acted might sound eerily lifelike, but really all they’re doing is regurgitating patterns the LLMs have learned from being trained on human-created data on the internet. “The takeaway is that LLMs have a sophisticated enough model of human social dynamics [to] mirror these human behaviors,” says Altera co-founder Andrew Ahn.
In other words, the data makes them excellent mimics of human behavior, but they are by no means “alive.”
But Yang has grander plans. Altera plans to expand into Roblox next, but Yang hopes to eventually move beyond game worlds altogether. Ultimately, his goal is a world in which humans don’t just play alongside AI characters, but also interact with them in their day-to-day lives. His dream is to create a vast number of “digital humans” who really care about us and will work with us to help us solve problems, as well as keep us entertained. “We want to build agents that can really love humans (like dogs love humans, for example),” he says.
This viewpoint, that AI could love us, is pretty controversial in the field, with many experts arguing it’s not possible to recreate emotions in machines using current techniques. AI veteran Julian Togelius, for example, who runs the games-testing company Modl.ai, says he likes Altera’s work, particularly because it lets us study human behavior in simulation.