Train a Quadruped Locomotion Policy and Simulate Cloth Manipulation with NVIDIA Isaac Lab and Newton

Physics plays a vital role in robotic simulation, providing the foundation for accurate virtual representations of robot behavior and interactions within realistic environments. With these simulators, researchers and engineers can train, develop, test, and validate robotic control algorithms and prototype designs in a safe, accelerated, and cost-effective manner.

However, simulation often fails to match reality, a problem known as the sim-to-real gap. Robotics developers need a unified, scalable, and customizable solution to model real-world physics, including support for different types of solvers.

This post walks you through how to train a quadruped robot to move from one point to another and how to set up a multiphysics simulation with an industrial manipulator to fold cloth. This tutorial uses Newton within NVIDIA Isaac Lab.

What’s Newton? 

Newton is an open source, extensible physics engine being developed by NVIDIA, Google DeepMind, and Disney Research, and managed by the Linux Foundation, to advance robot learning and development.

Built on NVIDIA Warp and OpenUSD, Newton enables robots to learn how to handle complex tasks with greater precision, speed, and extensibility. Newton is compatible with robot learning frameworks such as MuJoCo Playground and Isaac Lab. The Newton Solver API provides an interface for various physics engines, including MuJoCo Warp, to operate on the tensor-based data model, allowing easy integration with training environments in Isaac Lab.

Architecture diagram including sections labeled Isaac Lab, Newton, MuJoCo, and Warp.
Figure 1. Newton is a standalone Python package that provides GPU-accelerated interfaces for describing the physical model and state of robotic systems

At the core of Newton are the solver modules for numerical integration and constraint solving. Solvers may be constraint- or force-based, use direct or iterative methods, and may use maximal or reduced coordinate representations.

Using a common interface and shared data model means that whether you run MuJoCo Warp, the Disney Research Kamino solver, or a custom solver, you interact with Newton consistently. This modular approach also enables you to reuse collision handling, inverse kinematics, state management, and time-stepping logic without rewriting application code.
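To make that concrete, below is a minimal sketch of driving a simulation through this common interface. The builder calls and step signature follow the patterns in the Newton examples, but exact names and arguments may vary between releases, so treat this as illustrative rather than definitive.

# Minimal sketch of Newton's solver-agnostic simulation loop
# (illustrative; signatures may differ across Newton versions).
import newton

builder = newton.ModelBuilder()
builder.add_ground_plane()            # static collision geometry
# ... add bodies, joints, and shapes here ...
model = builder.finalize()            # build the GPU-resident data model

state_0, state_1 = model.state(), model.state()
control = model.control()

# Any solver implementing the common interface can be dropped in here.
solver = newton.solvers.SolverXPBD(model)

dt = 1.0 / 60.0
for _ in range(600):
    contacts = model.collide(state_0)
    solver.step(state_0, state_1, control, contacts, dt)
    state_0, state_1 = state_1, state_0   # ping-pong the state buffers

Swapping SolverXPBD for another rigid-body solver leaves the rest of the loop untouched, which is the point of the shared data model.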

For training, Newton provides a tensor-based API that exposes physics states as PyTorch- and NumPy-compatible arrays, enabling efficient batching and seamless integration with robot learning frameworks such as Isaac Lab. Through the Newton Selection API, training scripts can query joint states, apply actions, and feed results back into learning algorithms, all through a single, consistent interface.
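Because Newton state arrays are Warp arrays, they can also be viewed as PyTorch tensors without a copy using wp.to_torch. In the sketch below, the state attribute names (joint_q, joint_qd) are assumptions based on Newton's Warp-derived data model; check the docs for your version.

# Hedged sketch: zero-copy views of Newton joint state as torch tensors.
import warp as wp
import torch

# state_0 comes from model.state(), as in the loop above.
joint_q = wp.to_torch(state_0.joint_q)     # joint positions
joint_qd = wp.to_torch(state_0.joint_qd)   # joint velocities

# Concatenate into a flat observation vector for a policy network.
obs = torch.cat([joint_q, joint_qd])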

MuJoCo Warp, developed by Google DeepMind, is fully integrated as a Newton solver and also powers MJX and Playground in the DeepMind stack. This allows models and benchmarks to move seamlessly across Newton, Isaac Lab, and MuJoCo environments with minimal friction.

Finally, Newton and its associated solvers are released under the Apache 2.0 license, ensuring the community can adopt, extend, and contribute.

What are the highlights of the Newton Beta release?

Highlights of the Newton Beta release include: 

  • MuJoCo Warp, the main Newton solver, is up to 152x faster for locomotion and 313x faster for manipulation than MJX on a GeForce RTX 4090. The NVIDIA RTX PRO 6000 Blackwell Series adds up to 44% more speed for MuJoCo Warp and 75% for MJX.
  • Used as the next-generation Isaac Lab backend, Newton Beta achieves up to 65% faster in-hand dexterous manipulation with MuJoCo Warp versus PhysX.
  • Extended performance and stability of the Vertex Block Descent (VBD) solver for thin deformables such as clothing, as well as the implicit Material Point Method (MPM) solver for granular materials.

How to train a locomotion policy for a quadruped using Newton in Isaac Lab

The new Newton physics engine integration in Isaac Lab unlocks a faster, more robust workflow for robotics research and development.

This section showcases an end-to-end example of training a quadruped locomotion policy, validating its performance across simulators, and preparing it for real-world deployment. We’ll use the ANYmal robot as our case study to demonstrate this powerful train, validate, and deploy process.

Step 1: Train a locomotion policy with Newton

The first step is to set up the repository and train a policy from scratch using one of the reinforcement learning scripts in Isaac Lab. This example trains the ANYmal-D robot to walk on flat rigid terrain using the rsl_rl framework. GPU parallelization enables training across thousands of simultaneous environments for rapid policy convergence.

To start training in headless mode for maximum performance, run the following command:

./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py \
    --task Isaac-Velocity-Flat-Anymal-D-v0 --num_envs 4096 --headless

With the Newton Beta release, you can now use the new lightweight Newton Visualizer to monitor training progress without the performance overhead of the full Omniverse GUI. Simply add the --newton_visualizer flag:

./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py \
    --task Isaac-Velocity-Flat-Anymal-D-v0 --num_envs 4096 --headless \
    --newton_visualizer

After training, you’ll have a policy checkpoint (.pt file) ready for the next stage.
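Before moving on, you can optionally replay the checkpoint in a handful of environments to inspect the learned gait. This assumes the standard rsl_rl play script shipped with Isaac Lab; substitute the path to your own checkpoint.

./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/play.py \
    --task Isaac-Velocity-Flat-Anymal-D-v0 --num_envs 32 \
    --checkpoint <path/to/policy.pt>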

Figure 2. Time-lapse of RL training visualized using the Newton Visualizer

Step 2: Validate the policy with Sim2Sim transfer

Sim2Sim transfer is a critical sanity check to ensure a policy is not overfit to a single physics engine’s specific characteristics. A policy that can successfully transfer between simulators, like PhysX and Newton, has a much higher likelihood of working on a physical robot.

A key challenge is that different physics engines may parse a robot’s USD and order its joints differently. We solve this by remapping the policy’s observations and actions using a simple YAML mapping file, as illustrated in the sketch below.
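Conceptually, the remap boils down to applying a joint-index permutation to the observation and action tensors. The permutation and helper below are hypothetical, for illustration only; the real mapping lives in newton_to_physx_anymal_d.yaml.

# Hypothetical joint-order remap between two physics backends.
import torch

# NEWTON_FROM_PHYSX[i] = index of the PhysX joint corresponding to
# Newton joint i (12 joints for a quadruped; values are made up here).
NEWTON_FROM_PHYSX = [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
PHYSX_FROM_NEWTON = [NEWTON_FROM_PHYSX.index(i) for i in range(12)]

def remap(batch: torch.Tensor, perm: list[int]) -> torch.Tensor:
    # Reorder the joint dimension of a batch of observations or actions.
    return batch[:, perm]

# The policy was trained with Newton's joint ordering, so PhysX
# observations are permuted into Newton order before inference, and the
# resulting actions are permuted back before being applied:
#   obs_newton = remap(obs_physx, NEWTON_FROM_PHYSX)
#   act_physx = remap(policy(obs_newton), PHYSX_FROM_NEWTON)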

To run a policy trained in Newton with PhysX-based Isaac Lab, use the provided transfer script:

./isaaclab.sh -p scripts/newton_sim2sim/rsl_rl_transfer.py \
    --task=Isaac-Velocity-Flat-Anymal-D-v0 \
    --num_envs=32 \
    --checkpoint <path/to/newton_policy.pt> \
    --policy_transfer_file scripts/sim2sim_transfer/config/newton_to_physx_anymal_d.yaml

This transfer script is available through the isaac-sim/IsaacLab GitHub repo.

Step 3: Prepare for Sim2Real deployment

The final step of the workflow is to transfer the policy trained in simulation to a physical robot.

For this example, a policy was trained for the ANYmal-D robot entirely within the standard Isaac Lab environment using the Newton backend. The training process was intentionally limited to observations that would be available from the physical robot’s sensors, such as data from the IMU and joint encoders (that is, no privileged information was used during training).
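As a sketch of what such a restriction looks like in an Isaac Lab manager-based observation config (the imports and term names follow Isaac Lab's public API, but this is an assumption, not the exact configuration used for this deployment):

# Hedged sketch: a hardware-realizable observation group in Isaac Lab.
from isaaclab.managers import ObservationGroupCfg as ObsGroup
from isaaclab.managers import ObservationTermCfg as ObsTerm
from isaaclab.utils import configclass
import isaaclab_tasks.manager_based.locomotion.velocity.mdp as mdp

@configclass
class PolicyCfg(ObsGroup):
    # IMU-derived quantities
    base_ang_vel = ObsTerm(func=mdp.base_ang_vel)
    projected_gravity = ObsTerm(func=mdp.projected_gravity)
    # Joint encoder readings
    joint_pos = ObsTerm(func=mdp.joint_pos_rel)
    joint_vel = ObsTerm(func=mdp.joint_vel_rel)
    # Deliberately no privileged terms (e.g., ground-truth contact forces)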

With the help of NVIDIA partners at the ETH Zurich Robotic Systems Lab (RSL), this policy was then deployed directly to their physical ANYmal robot. The resulting hardware test showed the robot successfully executing a walking gait, demonstrating a direct pathway from training in Isaac Lab to testing on a real-world system (Video 2).

This complete train, validate, and deploy process demonstrates how Newton enables the path from simulation to real-world robotics success.

Multiphysics with the Newton standalone engine

Multiphysics simulation captures coupled interactions between rigid bodies (robot hands, for example) and deformable objects (cloth, for example) within a single framework. This enables more realistic evaluation and data-driven optimization of robot design, control, and task performance.

While Newton works with Isaac Lab, developers can use it directly from Python in standalone mode to experiment with complex physical systems. 

This walkthrough showcases a key feature of Newton: simulating mixed systems with different physical properties. We’ll explore an example of a rigid robot arm manipulating a deformable cloth, highlighting how the Newton API lets you easily combine multiple physics solvers in a single, real-time simulation.

Step 1: Launch the interactive demo

Newton comes with a collection of examples that are easy to run. The Franka robot arm and cloth demo can be launched with a single command from the root of the Newton repository.

First, ensure your environment is set up:

# Set up the uv environment for running Newton examples
uv sync --extra examples

Now, run the material manipulation example:

# Launch the Franka arm and cloth demo
uv run -m newton.examples cloth_franka

This opens an interactive viewer where you can watch the GPU-accelerated simulation in real time. The Franka-cloth demo features a GPU-based VBD cloth solver. It runs at around 30 FPS on an RTX 4090 while guaranteeing penetration-free contact throughout the simulation.

Compared to other GPU-based simulators that also implement penetration-free dynamics, such as GPU-IPC (a GPU-based Incremental Potential Contact solver), this example achieves over 300x higher performance, making it one of the fastest fully penetration-free cloth manipulation demos currently available.

Step 2: Understanding the multiphysics coupling

This demo is a great example of multiphysics, where systems with different dynamical behaviors interact. This is achieved by assigning a specialized solver to each component. Looking at the example_cloth_franka.py file, you can see how the solvers are initialized:

# Initialize a Featherstone solver for the robot
self.robot_solver = SolverFeatherstone(self.model, ...)

# Initialize a Vertex Block Descent (VBD) solver for the cloth
self.cloth_solver = SolverVBD(self.model, ...)

You can easily swap out the robot solver simply by changing SolverFeatherstone to another solver that supports rigid body simulation, such as SolverMuJoCo.
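Assuming the constructor arguments carry over unchanged, the swap is a one-line edit in example_cloth_franka.py:

# Use the MuJoCo Warp-backed solver for the robot instead of Featherstone
# (constructor arguments elided, as in the snippets above).
self.robot_solver = SolverMuJoCo(self.model, ...)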

The magic happens in the simulation loop, where these solvers are coordinated. This example uses one-way coupling, where the rigid body affects the deformable but not the other way around. This is appropriate for the cloth manipulation use case, where the effects of the cloth on the robot dynamics can be neglected. The simulation loop logic is simple:

  • Update the robot: The robot_solver advances the Franka arm’s state. The arm acts as a kinematic object.
  • Detect collisions: The engine checks for collisions between the newly positioned robot and the cloth particles.
  • Update the cloth: The cloth_solver simulates the cloth’s movement, reacting to the collisions from the robot.

# A simplified view of the simulation loop in example_cloth_franka.py

def simulate(self):
    for _step in range(self.sim_substeps):
        
        # 1. Step the robot solver forward
        self.robot_solver.step(self.state_0, self.state_1, ...)

        # 2. Check for contacts between the robot and the cloth
        self.contacts = self.model.collide(self.state_0, ...)

        # 3. Step the cloth solver, passing in robot contact information
        self.cloth_solver.step(self.state_0, self.state_1, ..., self.contacts, ...)

This explicit, user-controlled loop demonstrates the power of the Newton API, giving researchers fine-grained control over how different physical systems are coupled.

The team plans to extend Newton with deeper, more integrated coupling. This includes exploring two-way coupling for scenarios where the dynamic effects each system has on the other are considerable, such as a robot locomoting on deformable materials like soil or mud, where the soil also exerts forces back on the rigid bodies. The team is also envisioning implicit coupling for select solver combinations to more automatically manage the exchange of forces between systems.

How is the ecosystem adopting Newton? 

The Newton open ecosystem is rapidly expanding, with leading universities and companies integrating specialized solvers and workflows. From tactile sensing to cloth simulation and from dexterous manipulation to rough terrain locomotion, these collaborations highlight how Newton provides a common foundation for advancing robot learning and bridging the sim-to-real gap.

The ETH Zurich Robotic Systems Lab (RSL) has been actively leveraging Newton for multiphysics simulation in earthmoving applications, particularly for heavy equipment automation. They use the Newton implicit Material Point Method (MPM) solver to capture granular interactions such as soil, gravel, and stones colliding with rigid machinery.

In parallel, ETH has applied Warp more broadly in robotics and graphics research, including differentiable simulation for deployable locomotion control, trajectory optimization with Gaussian splats (FOCI), and large-scale 3D garment modeling through the GarmentCodeData dataset.

Lightwheel is actively contributing to Newton through SimReady asset development and solver optimization, particularly on deformables such as soil and cables in multiphysics scenarios. The demonstration below shows the implicit MPM solver applied across a large environment to model ANYmal quadruped locomotion over non-rigid terrain composed of multiple materials.

Peking University (PKU) is extending Newton into tactile domains by integrating their IPC-based solver, Taccel, to simulate vision-based tactile sensing for robotic manipulators. By leveraging the Newton GPU-accelerated, differentiable architecture, PKU researchers can model fine-grained contact interactions that are critical for tactile and deformable manipulation.

Style3D is bringing its deep expertise in cloth and soft-body simulation to Newton, enabling high-fidelity modeling of garments and deformable objects with complex interactions. A simplified version of the Style3D solver has already been integrated into Newton, with plans to expose APIs that allow advanced users to run full-scale simulations involving tens of millions of vertices.

The Technical University of Munich (TUM) is leveraging Newton to run trained dexterous manipulation policies, validated on real robots, back in simulation, marking a crucial first step toward closing the loop between sim and real. Training policies with 4,000 parallel environments in MuJoCo Warp already works. The next milestone is to transfer policies to hardware, before extending the framework to fine manipulation using a spatially resolved tactile skin.

Read more about how the TUM AIDX Lab leveraged Warp to speed up their robotics research on learning tactile in-hand manipulation agents. Learn more about how AIDX Lab is using Newton to advance their robot learning research.

Video 8. Newton is used to run trained dexterous manipulation policies, validated on real robots, back in simulation

Get started with Newton

The Newton physics engine delivers the simulation fidelity robotics researchers need, with a modular, extensible, and simulator‑agnostic design that makes it straightforward to couple diverse solvers for robot learning. 

Newton is an open source, community-driven project: developers can use, distribute, and extend it, adding custom solvers and contributing back to the ecosystem.

Learn more about the research being showcased at CoRL and Humanoids, happening September 27–October 2 in Seoul, Korea.

Also, join the 2025 BEHAVIOR Challenge, a robotics benchmark for testing reasoning, locomotion, and manipulation, featuring 50 household tasks and 10,000 tele-operated demonstrations.

Stay up to date by subscribing to our newsletter and following NVIDIA Robotics on LinkedIn, Instagram, X, and Facebook. Explore NVIDIA documentation and YouTube channels, and join the NVIDIA Developer Robotics forum. To start your robotics journey, enroll in our free NVIDIA Robotics Fundamentals courses today.




