A four-legged robotic system for playing soccer on various terrains

If you’ve ever played soccer against a robot, it’s a familiar feeling. Sun glistens down on your face as the smell of grass permeates the air. You look around. A four-legged robot is hustling toward you, dribbling with determination.

While the bot doesn’t display a Lionel Messi-like level of skill, it’s an impressive in-the-wild dribbling system nonetheless. Researchers from MIT’s Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), have developed a legged robotic system that can dribble a soccer ball under the same conditions as humans. The bot used a mixture of onboard sensing and computing to traverse different natural terrains such as sand, gravel, mud, and snow, and adapt to their varied impact on the ball’s motion. Like any committed athlete, “DribbleBot” could get up and recover the ball after falling.

Programming robots to play soccer has been an active research area for some time. However, the team wanted to automatically learn how to actuate the legs during dribbling, to enable the discovery of hard-to-script skills for responding to diverse terrains like snow, gravel, sand, grass, and pavement. Enter, simulation.

A robot, ball, and terrain are inside the simulation, a digital twin of the natural world. You can load in the bot and other assets and set physics parameters, and then it handles the forward simulation of the dynamics from there. Four thousand versions of the robot are simulated in parallel in real time, enabling data collection 4,000 times faster than using just one robot. That’s a lot of data.
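The speedup comes from batching: one vectorized update advances every simulated robot at once. The sketch below is illustrative only; the class, the two-dimensional "robots," and the toy contact model are stand-ins, not the lab's actual GPU simulator.

```python
import numpy as np

class BatchedDribbleSim:
    """Toy stand-in for a simulator stepping many robot-and-ball copies at once.

    The real system simulates 4,000 copies in parallel; here each "robot"
    is just a 2-D position so the batching idea stays visible.
    """

    def __init__(self, num_envs=4000, seed=0):
        rng = np.random.default_rng(seed)
        self.robot_pos = rng.uniform(-1.0, 1.0, size=(num_envs, 2))
        self.ball_pos = rng.uniform(-1.0, 1.0, size=(num_envs, 2))

    def step(self, actions, dt=0.02):
        # One array operation advances all environments simultaneously,
        # which is where the ~4,000x data-collection speedup comes from.
        self.robot_pos += actions * dt
        # Toy contact model: the ball drifts in the direction of the push.
        self.ball_pos += 0.5 * actions * dt
        return self.robot_pos, self.ball_pos

sim = BatchedDribbleSim(num_envs=4000)
obs, ball = sim.step(np.ones((4000, 2)))
print(obs.shape)  # (4000, 2): one state per simulated robot
```

Collecting one transition here yields 4,000 training samples, which is why a few simulated "days" of practice fit into far less wall-clock time.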

The robot starts without knowing how to dribble the ball; it just receives a reward when it does, or negative reinforcement when it messes up. So, it’s essentially trying to figure out what sequence of forces it should apply with its legs. “One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behavior,” says MIT PhD student Gabe Margolis, who co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab. “Once we’ve designed that reward, then it’s practice time for the robot: In real time, it’s a couple of days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity.”
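One plausible shape for such a reward is a term that is high when the ball's velocity tracks the commanded velocity, minus a penalty for falling. This is a minimal sketch under that assumption; the function, coefficients, and inputs are hypothetical, not the reward the authors actually used.

```python
import numpy as np

def dribbling_reward(ball_vel, target_vel, fell_over):
    """Hypothetical shaped reward: near 1.0 when the ball's velocity
    matches the commanded velocity, minus a large penalty for falling."""
    tracking = np.exp(-np.linalg.norm(ball_vel - target_vel) ** 2)
    fall_penalty = 5.0 if fell_over else 0.0
    return tracking - fall_penalty

# Perfect tracking while upright vs. a stalled ball plus a fall:
r_good = dribbling_reward(np.array([1.0, 0.0]), np.array([1.0, 0.0]), False)
r_bad = dribbling_reward(np.array([0.0, 0.0]), np.array([1.0, 0.0]), True)
print(r_good > r_bad)  # True
```

The exponential keeps the reward smooth and bounded, which tends to make reinforcement learning more stable than a raw velocity-error penalty.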

The bot could also navigate unfamiliar terrains and recover from falls thanks to a recovery controller the team built into its system. This controller lets the robot get back up after a fall and switch back to its dribbling controller to continue pursuing the ball, helping it handle out-of-distribution disruptions and terrains.
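That switching logic amounts to a small supervisor that hands control to the recovery policy when the robot falls and back to the dribbler once it is upright. The sketch below is a toy illustration of that idea; the state names and function are made up, not the authors' API.

```python
def select_controller(is_fallen, mode):
    """Toy supervisor choosing between the dribbling and recovery policies.

    Mirrors the behavior described in the article: a fall triggers the
    recovery controller, and once upright the robot resumes dribbling.
    """
    if is_fallen:
        return "recovery"
    if mode == "recovery":
        # Robot is upright again: hand control back to the dribbler.
        return "dribble"
    return mode

mode = "dribble"
for fallen in [False, True, True, False, False]:
    mode = select_controller(fallen, mode)
print(mode)  # "dribble": the robot has recovered and resumed play
```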

“If you look around today, most robots are wheeled. But imagine that there’s a disaster scenario, flooding, or an earthquake, and we want robots to aid humans in the search-and-rescue process. We need the machines to go over terrains that aren’t flat, and wheeled robots can’t traverse those landscapes,” says Pulkit Agrawal, MIT professor, CSAIL principal investigator, and director of the Improbable AI Lab. “The whole point of studying legged robots is to go to terrains outside the reach of current robotic systems,” he adds. “Our goal in developing algorithms for legged robots is to provide autonomy in challenging and complex terrains that are currently beyond the reach of robotic systems.”

The fascination with quadruped robots and soccer runs deep: Canadian professor Alan Mackworth first noted the idea in a paper entitled “On Seeing Robots,” presented at VI-92 in 1992. Japanese researchers later organized a workshop on “Grand Challenges in Artificial Intelligence,” which led to discussions about using soccer to promote science and technology. The project was launched as the Robot J-League a year later, and global fervor quickly ensued. Shortly after that, “RoboCup” was born.

Compared to walking alone, dribbling a soccer ball imposes more constraints on DribbleBot’s motion and the terrains it can traverse. The robot must adapt its locomotion to apply forces to the ball in order to dribble. The interaction between the ball and the landscape can differ from the interaction between the robot and the landscape, as with thick grass or pavement. For example, a soccer ball will experience a drag force on grass that is absent on pavement, and an incline will apply an acceleration force, changing the ball’s typical path. However, the bot’s ability to traverse different terrains is often less affected by these differences in dynamics, as long as it doesn’t slip, so the soccer test can be sensitive to variations in terrain that locomotion alone is not.
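Those two effects, surface-dependent drag and slope-induced acceleration, can be captured in a simple planar ball model. The coefficients, the fixed downhill direction, and the function itself are illustrative assumptions, not the dynamics model used in the paper.

```python
import numpy as np

def ball_accel(vel, surface, incline_rad):
    """Toy planar ball model: rolling drag depends on the surface,
    and an incline adds a constant acceleration along the slope.
    Drag coefficients are made up for illustration."""
    drag = {"pavement": 0.1, "grass": 1.2}[surface]
    gravity_along_slope = 9.81 * np.sin(incline_rad)
    slope_dir = np.array([1.0, 0.0])  # assumed fixed downhill direction
    return -drag * vel + gravity_along_slope * slope_dir

v = np.array([2.0, 0.0])  # ball rolling at 2 m/s
a_grass = ball_accel(v, "grass", incline_rad=0.0)
a_pave = ball_accel(v, "pavement", incline_rad=0.0)
print(np.linalg.norm(a_grass) > np.linalg.norm(a_pave))  # True: grass slows the ball faster
```

The same kick therefore produces very different ball trajectories on grass and pavement, which is exactly the variation the dribbling policy must compensate for.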

“Past approaches simplify the dribbling problem, making a modeling assumption of flat, hard ground. The motion is also designed to be more static; the robot isn’t trying to run and manipulate the ball simultaneously,” says Ji. “That’s where harder dynamics enter the control problem. We tackled this by extending recent advances that have enabled better outdoor locomotion into this compound task that combines aspects of locomotion and dexterous manipulation together.”

On the hardware side, the robot has a set of sensors that allow it to perceive the environment, letting it feel where it is, “understand” its position, and “see” some of its surroundings. It has a set of actuators that lets it apply forces and move itself and objects. In between the sensors and actuators sits the computer, or “brain,” tasked with converting sensor data into actions, which it applies through the motors. When the robot is running on snow, it doesn’t see the snow but can feel it through its motor sensors. But soccer is a trickier feat than walking, so the team leveraged cameras on the robot’s head and body for a new sensory modality of vision, in addition to the new motor skill. And then, we dribble.
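The sensor-brain-actuator pipeline described above is the classic sense-compute-act loop. This is a minimal runnable sketch of that loop with stub components; all names and the 12-joint action size are illustrative, not the robot's real interfaces.

```python
def control_loop(sensors, policy, actuators, steps=3):
    """Minimal sense-compute-act loop, one iteration per control tick."""
    for _ in range(steps):
        obs = sensors()        # camera frames plus joint/motor readings
        action = policy(obs)   # the "brain": map observations to commands
        actuators(action)      # apply forces to the world through motors

# Stub components so the loop runs end to end:
log = []

def sensors():
    return {"joints": [0.0] * 12, "image": None}

def policy(obs):
    # Stand-in for the learned controller: zero commands everywhere.
    return [0.0] * len(obs["joints"])

def actuators(action):
    log.append(action)  # a real robot would drive its motors here

control_loop(sensors, policy, actuators, steps=3)
print(len(log))  # 3: one actuator command per loop iteration
```

On the real robot this loop runs many times per second, and the policy is the learned neural network rather than a stub.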

“Our robot can go out into the wild because it carries all of its sensors, cameras, and compute on board. That required some innovations in terms of getting the whole controller to fit onto this onboard compute,” says Margolis. “That’s one area where learning helps, because we can run a lightweight neural network and train it to process noisy sensor data observed by the moving robot. This is in stark contrast with most robots today: Typically a robot arm is mounted on a fixed base and sits on a workbench with a giant computer plugged right into it. Neither the computer nor the sensors are in the robotic arm! So, the whole thing is heavy and hard to move around.”
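To see why a lightweight network fits on embedded compute, note that inference for a small policy is just a few small matrix multiplies. The layer sizes below are invented for illustration; they are not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer policy standing in for the onboard network.
# Sizes are made up: a 48-dim observation mapped to 12 joint commands.
W1 = rng.normal(size=(64, 48))
W2 = rng.normal(size=(12, 64))

def policy(obs):
    h = np.maximum(0.0, W1 @ obs)  # ReLU hidden layer
    return W2 @ h                  # one command per joint

obs = rng.normal(size=48)          # stand-in for noisy sensor readings
action = policy(obs)
print(action.shape)  # (12,)
```

A forward pass of this size costs only a few thousand multiply-adds, comfortably within the budget of a small onboard computer at high control rates.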

There’s still a long way to go in making these robots as agile as their counterparts in nature, and some terrains were challenging for DribbleBot. Currently, the controller is not trained in simulated environments that include slopes or stairs. The robot isn’t perceiving the geometry of the terrain; it’s only estimating its material contact properties, like friction. If there’s a step up, for example, the robot will get stuck; it won’t be able to lift the ball over the step, an area the team wants to explore in the future. The researchers are also excited to apply lessons learned during the development of DribbleBot to other tasks that involve combined locomotion and object manipulation, such as quickly transporting diverse objects from place to place using the legs or arms.

“DribbleBot is an impressive demonstration of the feasibility of such a system in a complex problem space that requires dynamic whole-body control,” says Vikash Kumar, a research scientist at Facebook AI Research who was not involved in the work. “What’s impressive about DribbleBot is that all of its sensorimotor skills are synthesized in real time on a low-cost system using onboard computational resources. While it exhibits remarkable agility and coordination, it’s merely the ‘kick-off’ for the next era. Game on!”

The research is supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the National Science Foundation Institute of Artificial Intelligence and Fundamental Interactions, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator. The paper will be presented at the 2023 IEEE International Conference on Robotics and Automation (ICRA).

