A better way to control shape-shifting soft robots


Imagine a slime-like robot that can seamlessly change its shape to squeeze through narrow spaces, and that could be deployed inside the human body to remove an unwanted item.

While such a robot doesn’t yet exist outside a laboratory, researchers are working to develop reconfigurable soft robots for applications in health care, wearable devices, and industrial systems.

But how can one control a squishy robot that doesn’t have joints, limbs, or fingers that can be manipulated, and instead can drastically alter its entire shape at will? MIT researchers are working to answer that question.

They developed a control algorithm that can autonomously learn how to move, stretch, and shape a reconfigurable robot to complete a specific task, even when that task requires the robot to change its morphology multiple times. The team also built a simulator to test control algorithms for deformable soft robots on a series of challenging, shape-changing tasks.

Their method completed each of the eight tasks they evaluated while outperforming other algorithms. The technique worked especially well on multifaceted tasks. For example, in one test, the robot had to reduce its height while growing two tiny legs to squeeze through a narrow pipe, and then un-grow those legs and extend its torso to open the pipe’s lid.

While reconfigurable soft robots are still in their infancy, such a technique could someday enable general-purpose robots that can adapt their shapes to accomplish diverse tasks.

“When people think about soft robots, they tend to think about robots that are elastic, but return to their original shape. Our robot is like slime and can actually change its morphology. It is very striking that our method worked so well because we are dealing with something very new,” says Boyuan Chen, an electrical engineering and computer science (EECS) graduate student and co-author of a paper on this approach.

Chen’s co-authors include lead author Suning Huang, an undergraduate student at Tsinghua University in China who completed this work while a visiting student at MIT; Huazhe Xu, an assistant professor at Tsinghua University; and senior author Vincent Sitzmann, an assistant professor of EECS at MIT who leads the Scene Representation Group in the Computer Science and Artificial Intelligence Laboratory. The research will be presented at the International Conference on Learning Representations.

Controlling dynamic motion

Scientists often teach robots to complete tasks using a machine-learning approach known as reinforcement learning, a trial-and-error process in which the robot is rewarded for actions that move it closer to a goal.

This can be effective when the robot’s moving parts are consistent and well-defined, like a gripper with three fingers. With a robotic gripper, a reinforcement learning algorithm might move one finger slightly, learning by trial and error whether that motion earns it a reward. Then it would move on to the next finger, and so on.
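The trial-and-error loop described above can be sketched in a few lines. This is a minimal illustration only, not the researchers' code: the one-finger "gripper" with discrete positions, the goal position, and the learning constants are all made-up assumptions.

```python
import random

# Hypothetical one-finger gripper: the finger can take one of 10 discrete
# positions, and one of them (GOAL) closes the grasp.
N_POSITIONS = 10
GOAL = 7
values = [0.0] * N_POSITIONS  # learned value estimate for each action

def reward(position):
    # The closer the finger lands to the grasp position, the higher the reward.
    return -abs(position - GOAL)

ALPHA, EPSILON = 0.5, 0.2  # learning rate and exploration rate
for episode in range(500):
    if random.random() < EPSILON:
        # Explore: try a random action to see what reward it earns.
        action = random.randrange(N_POSITIONS)
    else:
        # Exploit: take the action currently believed to be best.
        action = max(range(N_POSITIONS), key=values.__getitem__)
    # Nudge the estimate for this action toward the observed reward.
    values[action] += ALPHA * (reward(action) - values[action])

best = max(range(N_POSITIONS), key=values.__getitem__)
print(best)  # the learned policy settles on the grasp position, 7
```

The same reward-driven loop scales poorly when the "fingers" become thousands of interacting muscle points, which is the problem the article turns to next.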

But shape-shifting robots, which are controlled by magnetic fields, can dynamically squish, bend, or elongate their entire bodies.

The researchers built a simulator to test control algorithms for deformable soft robots on a series of challenging, shape-changing tasks. Here, a reconfigurable robot learns to elongate and curve its soft body to weave around obstacles and reach a goal.

Image: Courtesy of the researchers

“Such a robot could have thousands of small pieces of muscle to control, so it is very hard to learn in a traditional way,” says Chen.

To solve this problem, he and his collaborators had to think about it differently. Rather than moving each tiny muscle individually, their reinforcement learning algorithm begins by learning to control groups of adjacent muscles that work together.

Then, after the algorithm has explored the space of possible actions by focusing on groups of muscles, it drills down into finer detail to optimize the policy, or action plan, it has learned. In this way, the control algorithm follows a coarse-to-fine methodology.

“Coarse-to-fine means that when you take a random action, that random action is likely to make a difference. The change in the outcome is likely very significant because you coarsely control several muscles at the same time,” Sitzmann says.
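The coarse-to-fine idea can be sketched as a two-stage action grid: one coarse value drives a whole block of muscles, and a later stage adds small per-muscle corrections. The grid sizes, the block-upsampling scheme, and the residual-correction form here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def expand(coarse, fine_shape):
    """Upsample a coarse action grid so each value drives a block of muscles."""
    ry = fine_shape[0] // coarse.shape[0]
    rx = fine_shape[1] // coarse.shape[1]
    # np.kron tiles each coarse entry into a (ry x rx) block of identical commands.
    return np.kron(coarse, np.ones((ry, rx)))

FINE = (16, 16)  # per-muscle action resolution
rng = np.random.default_rng(0)

# Stage 1 (coarse): one action value per 4x4 group of muscles. A random
# action here moves many muscles at once, so it visibly changes the outcome.
coarse_policy = rng.uniform(-1, 1, size=(4, 4))
actions = expand(coarse_policy, FINE)

# Stage 2 (fine): small per-muscle corrections refine the learned plan.
fine_residual = 0.1 * rng.uniform(-1, 1, size=FINE)
actions = actions + fine_residual

print(actions.shape)  # (16, 16): one command per muscle
```

The key property is that early exploration happens in the small coarse grid (16 values) rather than the full muscle grid (256 values), so random actions are informative rather than washed out.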

To enable this, the researchers treat a robot’s action space, or how it can move in a certain area, like an image.

Their machine-learning model uses images of the robot’s environment to generate a 2D action space, which includes the robot and the area around it. They simulate robot motion using what is known as the material-point method, where the action space is covered by points, like image pixels, and overlaid with a grid.

The same way nearby pixels in an image are related (like the pixels that form a tree in a photograph), they built their algorithm to understand that nearby action points have stronger correlations. Points around the robot’s “shoulder” will move similarly when it changes shape, while points on the robot’s “leg” will also move similarly, but differently than those on the “shoulder.”
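One way to see why treating the action space like an image helps: image-style local operations (such as the blur-like filter below) make nearby action points agree with each other. This is a stand-in illustration of spatial locality, assuming a plain mean filter; the actual system learns this correlation with a neural network.

```python
import numpy as np

def smooth(action_map, k=3):
    """Average each action point with its k x k neighborhood, like a blur filter."""
    h, w = action_map.shape
    pad = k // 2
    padded = np.pad(action_map, pad, mode="edge")
    out = np.empty_like(action_map)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

rng = np.random.default_rng(0)
raw = rng.uniform(-1, 1, size=(8, 8))  # independent noise at every action point
correlated = smooth(raw)

def neighbor_gap(m):
    # Average disagreement between vertically adjacent action points.
    return np.abs(np.diff(m, axis=0)).mean()

# After smoothing, neighboring points move together, the way points on the
# same "shoulder" or "leg" should.
print(neighbor_gap(correlated) < neighbor_gap(raw))  # True
```

A convolutional network bakes the same locality assumption into its architecture, which is why an image-like 2D action space is a natural fit for it.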

In addition, the researchers use the same machine-learning model to look at the environment and predict the actions the robot should take, which makes it more efficient.

Building a simulator

After developing this approach, the researchers needed a way to test it, so they created a simulation environment called DittoGym.

DittoGym features eight tasks that evaluate a reconfigurable robot’s ability to dynamically change shape. In one, the robot must elongate and curve its body so it can weave around obstacles to reach a goal point. In another, it must change its shape to mimic letters of the alphabet.

Animation of orange blob shifting into shapes such as a star, and the letters “M,” “I,” and “T.”
In this simulation, the reconfigurable soft robot, trained using the researchers’ control algorithm, must change its shape to mimic objects, like stars, and the letters M-I-T.

Image: Courtesy of the researchers

“Our task selection in DittoGym follows both generic reinforcement learning benchmark design principles and the specific needs of reconfigurable robots. Each task is designed to represent certain properties that we deem important, such as the capability to navigate through long-horizon explorations, the ability to analyze the environment, and interact with external objects,” Huang says. “We believe together they can give users a comprehensive understanding of the flexibility of reconfigurable robots and the effectiveness of our reinforcement learning scheme.”

Their algorithm outperformed baseline methods and was the only technique suitable for completing multistage tasks that required several shape changes.

“We have a stronger correlation between action points that are closer to each other, and I think that is critical to making this work so well,” says Chen.

While it may be many years before shape-shifting robots are deployed in the real world, Chen and his collaborators hope their work inspires other scientists not only to study reconfigurable soft robots but also to think about leveraging 2D action spaces for other complex control problems.
