Google DeepMind is using Gemini to coach agents inside Goat Simulator 3


The researchers claim that SIMA 2 can perform a variety of more complex tasks inside virtual worlds, work out how to solve certain challenges by itself, and chat with its users. It can also improve itself by tackling harder tasks multiple times and learning through trial and error.

“Games have been a driving force behind agent research for quite some time,” Joe Marino, a research scientist at Google DeepMind, said in a press conference this week. He noted that even a simple action in a game, such as lighting a lantern, can involve multiple steps: “It’s a very complex set of tasks you need to solve to progress.”

The ultimate aim is to develop next-generation agents that are able to follow instructions and perform open-ended tasks inside more complex environments than a web browser. In the long term, Google DeepMind wants to use such agents to drive real-world robots. Marino claimed that the skills SIMA 2 has learned, such as navigating an environment, using tools, and collaborating with humans to solve problems, are essential building blocks for future robot companions.

Unlike previous work on game-playing agents such as AlphaGo, which beat a Go grandmaster in 2016, or AlphaStar, which beat 99.8% of ranked human players at the video game StarCraft 2 in 2019, the idea behind SIMA is to train an agent to play an open-ended game without preset goals. Instead, the agent learns to carry out instructions given to it by people.

Humans control SIMA 2 via text chat, by talking to it out loud, or by drawing on the game’s screen. The agent takes in a video game’s pixels frame by frame and figures out what actions it needs to take to carry out its tasks.

Like its predecessor, SIMA 2 was trained on footage of humans playing eight commercial video games, including No Man’s Sky and Goat Simulator 3, as well as three virtual worlds created by the company. The agent learned to match keyboard and mouse inputs to actions.

Hooked up to Gemini, the researchers claim, SIMA 2 is far better at following instructions (asking questions and providing updates as it goes) and figuring out for itself how to perform certain more complex tasks.

Google DeepMind tested the agent inside environments it had never seen before. In one set of experiments, researchers asked Genie 3, the latest version of the firm’s world model, to produce environments from scratch and dropped SIMA 2 into them. They found that the agent was able to navigate those environments and carry out instructions there.
