Training robot policies from real-world demonstrations is expensive, slow, and susceptible to overfitting, limiting generalization across tasks and environments. A sim-first approach streamlines development, lowers risk and cost, and enables safer, more adaptable deployment.
Isaac Lab 2.3, now generally available, improves humanoid robot capabilities with advanced whole-body control, enhanced imitation learning, and improved locomotion. The update also expands teleoperation for data collection by supporting more devices, like Meta Quest VR and Manus gloves, to speed up the creation of demonstration datasets. Moreover, it features a motion planner-based workflow for generating data in manipulation tasks.
New reinforcement and imitation learning samples
Isaac Lab 2.3 offers new features that support dexterous manipulation tasks, including a dictionary observation space for perception and proprioception, plus Automatic Domain Randomization (ADR) and Population Based Training (PBT) techniques to enable better scaling for RL training. These features build on environments implemented in DexPBT: Scaling up Dexterous Manipulation for Hand-Arm Systems with Population Based Training and Visuomotor Policies to Grasp Anything with Dexterous Hands.
To launch training for the dexterous environment, use the following command:
./isaaclab.sh -p -m torch.distributed.run --nnodes=1 --nproc_per_node=4 \
  scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Dexsuite-Kuka-Allegro-Reorient-v0 \
  --num_envs 40960 --headless --distributed
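Conceptually, ADR expands each randomization range while the policy is succeeding and contracts it when performance drops, keeping training at the edge of the policy's capability. The feedback loop can be sketched in a few lines of Python (function names and thresholds here are illustrative, not the Isaac Lab API):

```python
# Minimal Automatic Domain Randomization (ADR) sketch.
# Names and thresholds are illustrative, not the Isaac Lab API.

def adr_update(ranges, success_rate, expand=0.02, shrink=0.01,
               upper=0.8, lower=0.4):
    """Widen each randomization range when the policy is doing well;
    narrow it when performance drops below a lower threshold."""
    new_ranges = {}
    for name, (lo, hi) in ranges.items():
        if success_rate > upper:        # task too easy: randomize harder
            lo, hi = lo - expand, hi + expand
        elif success_rate < lower:      # task too hard: ease off
            lo, hi = lo + shrink, hi - shrink
        new_ranges[name] = (min(lo, hi), max(lo, hi))
    return new_ranges

ranges = {"object_mass_scale": (0.9, 1.1), "friction": (0.8, 1.2)}
ranges = adr_update(ranges, success_rate=0.9)  # widens both ranges
```

In practice the per-parameter thresholds, step sizes, and evaluated boundary values are tracked per randomization dimension rather than globally.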
Expanding on prior releases, Isaac Lab 2.3 introduces new benchmarking environments with suction grippers, enabling manipulation across both suction and traditional gripper setups. The previous version included a surface gripper sample within the direct workflow. This update adds CPU-based surface gripper support to the manager-based workflow for imitation learning.
To record demonstrations with this sample, use the following command:
./isaaclab.sh -p scripts/tools/record_demos.py --task Isaac-Stack-Cube-UR10-Long-Suction-IK-Rel-v0 \
  --teleop_device keyboard --device cpu
For more details, see the tutorial on interacting with a surface gripper.
Improved teleoperation for dexterous manipulation
Teleoperation in robotics is the remote control of a real or simulated robot by a human operator using an input device over a communication link, enabling remote manipulation and locomotion control.
Isaac Lab 2.3 includes teleoperation support for the Unitree G1 robot, with dexterous retargeting for both the Unitree three-finger hand and the Inspire five-finger hand.
Dexterous retargeting is the technique of translating human hand configurations to robot hand joint positions for manipulation tasks. This enables efficient human-to-robot skill transfer, improves performance on contact-rich in-hand tasks, and yields rich demonstrations for training robust manipulation policies.
The dexterous retargeting workflow takes advantage of the retargeter teleoperation framework built into Isaac Lab, which enables per-task teleoperation device configuration.
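The core retargeting idea can be sketched with a deliberately simple linear mapping: normalized human finger flexion (for example, from glove readings) is interpolated within each robot joint's limits. All names here are illustrative; production retargeters typically solve an optimization over fingertip keypoints rather than a per-joint linear map.

```python
# Minimal dexterous-retargeting sketch: map normalized human finger
# flexion (0 = open, 1 = fully closed) onto robot joint targets by
# linear interpolation within each joint's limits. Illustrative only.

def retarget(flexion, joint_limits):
    """flexion: dict finger -> value in [0, 1];
    joint_limits: dict finger -> (lower, upper) joint angle in radians."""
    targets = {}
    for finger, x in flexion.items():
        lo, hi = joint_limits[finger]
        x = min(max(x, 0.0), 1.0)  # clamp noisy glove readings
        targets[finger] = lo + x * (hi - lo)
    return targets

# A half-closed index finger maps to the middle of its joint range.
targets = retarget({"index": 0.5}, {"index": (0.0, 1.6)})
```

The clamping step matters in practice: raw glove sensor readings routinely overshoot the calibrated range, and passing out-of-limit targets to the controller triggers exactly the at-limit IK warnings discussed below.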
Additional improvements have also been made to upper body control across all bimanual robots, such as the Fourier GR1T2 and the Unitree G1. This has been done by improving the Pink IK (Inverse Kinematics) controller to maintain bimanual robot arms in a more natural posture, reducing unnecessary elbow flare. New environments that allow the robot to rotate its torso are included in this release, extending the robots' reachable space. Additional tuning has been done to improve speed and reduce errors between the end effector and the end effector goal.

The Isaac Lab 2.3 release also includes UI enhancements for more intuitive usage. UI elements have been added to alert teleoperators to inverse kinematics (IK) controller errors, like at-limit joints and no-solve states. A pop-up has also been added to notify teleoperators when demonstration collection has concluded.
Introducing collision-free motion planning for manipulation data generation
SkillGen is a workflow for generating adaptive, collision-free manipulation demonstrations. It combines human-provided subtask segments with GPU-accelerated motion planning to enable learning real-world contact-rich manipulation tasks from a handful of human demonstrations.
Developers can use SkillGen inside Isaac Lab Mimic to generate demonstrations in this latest version of Isaac Lab. SkillGen enables multiphase planning (approach, contact, retreat), supports dynamic object attachment and detachment with appropriate collision sphere management, and synchronizes the world state to respect kinematics and obstacles during skill stitching. Manual subtask "start" and "end" annotations separate contact-rich skills from motion planning segments, ensuring consistent trajectory synthesis for downstream users and reproducible results.
In previous releases, Isaac Lab Mimic used the MimicGen implementation for data generation. SkillGen addresses limitations of MimicGen, and the Isaac Lab 2.3 release now lets you use either SkillGen or MimicGen inside Isaac Lab Mimic.
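The skill-stitching idea can be sketched in a few lines: annotated contact-rich segments are replayed as-is, and gaps between them are bridged by planned transit motions. In this purely illustrative sketch, a straight-line interpolator stands in for the GPU-accelerated, collision-aware planner SkillGen actually uses.

```python
# Skill-stitching sketch: replay annotated contact-rich segments
# verbatim and bridge between them with planned transit motions.
# Linear interpolation stands in for a real collision-free planner.

def plan_transit(start, goal, steps=5):
    """Placeholder planner: straight-line waypoints from start to goal."""
    return [[s + (g - s) * t / steps for s, g in zip(start, goal)]
            for t in range(1, steps + 1)]

def stitch(segments):
    """segments: list of waypoint lists, one per annotated skill."""
    traj = list(segments[0])
    for seg in segments[1:]:
        traj += plan_transit(traj[-1], seg[0])  # bridge to next skill start
        traj += seg
    return traj

# Two annotated skills (2D waypoints) joined by a planned transit.
trajectory = stitch([[[0, 0], [1, 0]], [[2, 1], [3, 1]]])
```

The "start"/"end" annotations mentioned above determine exactly where the replayed segments end and the planner takes over, which is what makes the synthesized trajectories reproducible.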
To run the pipeline using a pre-annotated dataset for two stacking tasks, use the following commands. You can also download the dataset.
Use the following command to launch the vanilla cube stacking task:
./isaaclab.sh -p scripts/imitation_learning/isaaclab_mimic/generate_dataset.py \
  --device cpu \
  --num_envs 1 \
  --generation_num_trials 10 \
  --input_file ./datasets/annotated_dataset_skillgen.hdf5 \
  --output_file ./datasets/generated_dataset_small_skillgen_cube_stack.hdf5 \
  --task Isaac-Stack-Cube-Franka-IK-Rel-Skillgen-v0 \
  --use_skillgen
Use the following command to launch the cube stacking in a bin task:
./isaaclab.sh -p scripts/imitation_learning/isaaclab_mimic/generate_dataset.py \
  --device cpu \
  --num_envs 1 \
  --generation_num_trials 10 \
  --input_file ./datasets/annotated_dataset_skillgen.hdf5 \
  --output_file ./datasets/generated_dataset_small_skillgen_bin_cube_stack.hdf5 \
  --task Isaac-Stack-Cube-Bin-Franka-IK-Rel-Mimic-v0 \
  --use_skillgen


For details about prerequisites and installation, see SkillGen for Automated Demonstration Generation. For policy training and inference, refer to the Imitation Learning workflow in Isaac Lab. For details about commands, see the SkillGen documentation.
Beyond manipulation, humanoids and mobile robots must navigate complex and dynamic spaces safely. Developers can now use the mobility workflow in Isaac Lab to post-train NVIDIA COMPASS, a vision-based mobility pipeline enabling navigation across robot types and environments. The workflow involves synthetic data generation (SDG) in Isaac Sim, mobility model training, and deployment with NVIDIA Jetson Orin or NVIDIA Thor. Cosmos Transfer improves synthetic data to reduce the sim-to-real gap.
By combining NVIDIA Isaac CUDA-accelerated libraries, a robot can localize itself using cuVSLAM, build a map using cuVGL, and understand the scene to generate motion with COMPASS, enabling it to navigate changing environments and obstacles in real time. COMPASS also provides developers a way to generate synthetic data for training advanced Vision Language Action (VLA) foundation models like GR00T N. ADATA, UCR, and Foxlink are integrating COMPASS into their workflows.
Loco-manipulation synthetic data generation for humanoids
Loco-manipulation is the coordinated execution of locomotion and manipulation—robots move their bodies (walk or roll) while concurrently acting on objects (grasping, pushing, pulling), treated as one coupled whole-body task.
This workflow synthesizes robot task demonstrations that couple manipulation and locomotion by integrating navigation with a whole-body controller (WBC). This allows robots to execute complex sequences, such as picking up an object from a table, traversing an area, and placing the object elsewhere.
The system augments demonstrations by randomizing tabletop pick and place locations, destinations, and ground obstacles. The method restructures data collection into pick and place segments separated by locomotion, enabling large-scale loco-manipulation datasets from manipulation-only human demonstrations to train humanoid robots for combined tasks.
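The pick/place randomization rests on a simple frame change: a recorded end-effector waypoint is expressed in the object's frame, then re-applied at a new, randomized object pose, preserving the relative grasp. A 2D sketch (poses as x, y, yaw; all names are illustrative, not the Isaac Lab Mimic API):

```python
import math

# Frame-change sketch behind pick/place randomization: re-express a
# recorded end-effector waypoint relative to the object, then re-apply
# it at a randomized object pose. 2D keeps the rotation math short.

def to_object_frame(ee_xy, obj_pose):
    """World-frame end-effector point -> object-frame coordinates."""
    ox, oy, oyaw = obj_pose
    dx, dy = ee_xy[0] - ox, ee_xy[1] - oy
    c, s = math.cos(-oyaw), math.sin(-oyaw)
    return (c * dx - s * dy, s * dx + c * dy)

def from_object_frame(local_xy, obj_pose):
    """Object-frame coordinates -> world frame at a new object pose."""
    ox, oy, oyaw = obj_pose
    c, s = math.cos(oyaw), math.sin(oyaw)
    lx, ly = local_xy
    return (ox + c * lx - s * ly, oy + s * lx + c * ly)

# A grasp waypoint recorded with the object at the origin transfers to
# a randomized object pose while keeping the same relative grasp.
local = to_object_frame((0.1, 0.0), (0.0, 0.0, 0.0))
new_ee = from_object_frame(local, (1.0, 2.0, math.pi / 2))
```

The same transform applied to every waypoint of a pick or place segment relocates the whole segment; the locomotion in between is then re-planned rather than transformed.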
An example of how to run this augmentation is shown below. Download the sample input dataset.
./isaaclab.sh -p \
  scripts/imitation_learning/disjoint_navigation/generate_navigation.py \
  --device cpu \
  --kit_args="--enable isaacsim.replicator.mobility_gen" \
  --task="Isaac-G1-Disjoint-Navigation" \
  --dataset ./datasets/generated_dataset_g1_locomanip.hdf5 \
  --num_runs 1 \
  --lift_step 70 \
  --navigate_step 120 \
  --enable_pinocchio \
  --output_file ./datasets/generated_dataset_g1_navigation.hdf5
The interface is flexible, letting users adapt it to different embodiments, such as humanoids and mobile manipulators, with the controller of their choice.


Omni PhysX is now open source
Omni PhysX, an extension that provides a connection between USD physics content and the PhysX simulation engine, is now open source. While PhysX and its associated applications are already open source, the previously closed omni.physx layer blocked easy customization and integration. This change will enable developers to customize physics capabilities and build more flexible robotics simulations.
Policy evaluation framework
Evaluating learned robot skills—such as manipulating objects or traversing an area—doesn’t scale when limited to real hardware. Simulation offers a scalable method to evaluate these skills against a multitude of scenarios, tasks, and environments.
However, from sampling simulation-ready assets, to setting up and diversifying environments, to orchestrating and analyzing large-scale evaluations, users must hand-curate several components on top of Isaac Lab to achieve desired results. This leads to fragmented setups with limited scalability, high overhead, and a high barrier to entry.
To address this problem, NVIDIA and Lightwheel are co-developing NVIDIA Isaac Lab – Arena, an open source policy evaluation framework for scalable simulation-based experimentation. Using the framework APIs, developers can streamline and execute complex, large-scale evaluations without system-building. This means they can focus on policy iteration while contributing evaluation methods to the community, accelerating robotics research and development.
This framework provides simplified, customizable task definitions and extensible libraries for metrics, evaluation and diversification. It features parallelized, GPU-accelerated evaluations using Isaac Lab and interoperates with data generation, training, and deployment frameworks for a seamless workflow.
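At its core, such an evaluation sweep rolls a policy out across many scenarios and aggregates metrics. A minimal sketch of that loop (the rollout callback and scenario fields are illustrative stand-ins, not the Arena APIs):

```python
# Evaluation-sweep sketch: run a policy across randomized scenarios
# and aggregate a success-rate metric. The rollout function and
# scenario fields are illustrative, not the Isaac Lab - Arena APIs.

def evaluate(policy, scenarios, rollout):
    """rollout(policy, scenario) -> dict with a boolean 'success' key."""
    results = [rollout(policy, s) for s in scenarios]
    successes = sum(1 for r in results if r["success"])
    return {"episodes": len(results),
            "success_rate": successes / max(len(results), 1)}

# Toy usage: a fake rollout that succeeds on easier scenarios.
scenarios = [{"difficulty": d / 10} for d in range(10)]
report = evaluate(None, scenarios,
                  lambda p, s: {"success": s["difficulty"] < 0.5})
```

In the real framework the rollout step is parallelized across GPU-simulated environments, and the metric library is extensible beyond simple success rates.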
Built on this foundation is a library of sample tasks for manipulation, locomotion, and loco-manipulation. NVIDIA is also collaborating with policy developers and benchmark authors, as well as simulation solution providers like Lightwheel, to enable their evaluations on this framework, while contributing evaluation methods back to the community.


For large-scale evaluation, workloads can be orchestrated with NVIDIA OSMO, a cloud-native platform that schedules and scales robotics and autonomous-machine pipelines across on-prem and cloud compute. Isaac Lab – Arena will be available soon.
Infrastructure support
Isaac Lab 2.3 is supported on NVIDIA RTX PRO Blackwell Servers and on NVIDIA DGX Spark, powered by the NVIDIA GB10 Grace Blackwell Superchip. Both RTX PRO and DGX Spark provide an excellent platform for researchers to experiment, prototype, and run every robot development workload across training, SDG, robot learning, and simulation.
Note that teleoperation with XR/AVP and imitation learning in Isaac Lab Mimic are not supported in Isaac Lab 2.3 on DGX Spark. Developers are expected to have precollected data for humanoid environments, while Franka environments support standard devices like the keyboard and SpaceMouse.
Ecosystem adoption
Leading robotics developers Agility Robotics, Boston Dynamics, Booster Robotics, Dexmate, Figure AI, Hexagon, Lightwheel, General Robotics, maxon, and Skild AI are tapping NVIDIA libraries and open models to advance robot development.
Start with Isaac Lab 2.3
Isaac Lab 2.3 accelerates robot learning by enhancing humanoid control, expanding teleoperation for easier data collection, and automating the generation of complex manipulation and locomotion data.
To get started with the Isaac Lab 2.3 release, visit the GitHub repo and documentation.
To learn more about how Isaac Lab extends GPU-native robotics simulation into large-scale multimodal learning to drive the next wave of breakthroughs in robotics research, see Isaac Lab: A GPU-Accelerated Simulation Framework For Multi-Modal Robot Learning.
Learn more about the research being showcased at CoRL and Humanoids, happening September 27–October 2 in Seoul, Korea.
Also, join the 2025 BEHAVIOR Challenge, a robotics benchmark for testing reasoning, locomotion, and manipulation, featuring 50 household tasks and 10,000 tele-operated demonstrations.
Stay up to date by subscribing to our newsletter and following NVIDIA Robotics on LinkedIn, Instagram, X, and Facebook. Explore NVIDIA documentation and YouTube channels, and join the NVIDIA Developer Robotics forum. To start your robotics journey, enroll in free NVIDIA Robotics Fundamentals courses.
Start with NVIDIA Isaac libraries and AI models for developing physical AI systems.
