Simulation has long been a cornerstone in medical imaging for addressing the data gap. In healthcare robotics, though, it has often been too slow, siloed, or difficult to translate into real-world systems. That is now changing. With the latest advances in GPU-accelerated simulation and digital twins, developers can design, test, and validate robotic workflows entirely in virtual environments – reducing prototyping time from months to days, improving model accuracy, and enabling safer, faster innovation before a single device reaches the operating room.
That is why NVIDIA introduced Isaac for Healthcare earlier this year, a developer framework for AI healthcare robotics that helps developers solve these challenges through integrated data collection, training, and evaluation pipelines that work across both simulation and hardware. Specifically, the Isaac for Healthcare v0.4 release provides users with an end-to-end SO-ARM based starter workflow and a bring-your-own-operating-room tutorial. The SO-ARM starter workflow lowers the barrier for MedTech developers to experience the full workflow from simulation to training to deployment, and to start building and validating autonomous behavior on real hardware right away.
In this post, we’ll walk through the starter workflow and its technical implementation details to help you build a surgical assistant robot in less time than ever before.
SO-ARM Starter Workflow: Building an Embodied Surgical Assistant
The SO-ARM starter workflow introduces a new way to explore surgical assistance tasks and provides developers with a complete end-to-end pipeline for autonomous surgical assistance:
- Collect real-world and synthetic data with SO-ARM using LeRobot
- Post-train GR00T N1.5, evaluate in Isaac Lab, then deploy to hardware
This workflow gives developers a safe, repeatable environment to train and refine assistive skills before moving into the operating room.
Technical Implementation
The workflow implements a three-stage pipeline that integrates simulation and real hardware:
- Data Collection: Mixed simulation and real-world teleoperation demonstrations using SO-101 and LeRobot
- Model Training: Post-training GR00T N1.5 on combined datasets with dual-camera vision
- Policy Deployment: Real-time inference on physical hardware with RTI DDS communication (sketched below)
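To make the deployment stage concrete, here is a minimal sketch of what the real-time control loop looks like. The policy wrapper, observation reader, and action publisher are placeholder callables rather than the workflow's actual API; in the workflow itself, observations and joint commands travel over RTI DDS.

# Minimal sketch of a real-time deployment loop. The helper callables are
# hypothetical placeholders; in the actual workflow, observations and joint
# commands are exchanged over RTI DDS.
import time
from typing import Any, Callable, Dict

def run_inference_loop(
    policy: Any,                                      # post-trained GR00T N1.5 policy wrapper
    read_observation: Callable[[], Dict[str, Any]],   # returns wrist/room frames + joint states
    send_action: Callable[[Any], None],               # publishes joint targets to the follower arm
    hz: float = 10.0,                                 # control rate; tune for your hardware
) -> None:
    period = 1.0 / hz
    while True:
        start = time.time()
        obs = read_observation()           # e.g. {"video.wrist": ..., "video.room": ..., "state": ...}
        action = policy.get_action(obs)    # model predicts the next joint targets
        send_action(action)                # in the workflow, this is an RTI DDS publish
        time.sleep(max(0.0, period - (time.time() - start)))  # hold a fixed control rate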
Notably, over 93% of the data used for policy training was generated synthetically in simulation, underscoring the strength of simulation in bridging the robotic data gap.
Sim-to-Real Mixed Training Approach
The workflow combines simulation and real-world data to address the fundamental challenge that training robots in the real world is expensive and limited, while pure simulation often fails to capture real-world complexities. The approach uses roughly 70 simulation episodes for diverse scenarios and environmental variations, combined with 10-20 real-world episodes for authenticity and grounding. This mixed training creates policies that generalize beyond either domain alone.
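One simple way to realize this mix at training time is to oversample the smaller real-world set so every batch sees both domains. The sketch below uses plain PyTorch utilities with stand-in datasets sized like the episode counts above; it illustrates the balancing idea rather than the workflow's actual training code.

# Sketch: balance ~70 simulation episodes against ~15 real-world episodes
# with plain PyTorch. The TensorDataset stand-ins are illustrative only.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, WeightedRandomSampler

sim_ds = TensorDataset(torch.zeros(70, 8))   # stand-in for the simulation dataset
real_ds = TensorDataset(torch.zeros(15, 8))  # stand-in for the real-world dataset

mixed = ConcatDataset([sim_ds, real_ds])

# Weight samples so each domain is drawn with roughly equal probability,
# preventing the small real-world set from being drowned out by sim data.
weights = torch.cat([
    torch.full((len(sim_ds),), 1.0 / len(sim_ds)),
    torch.full((len(real_ds),), 1.0 / len(real_ds)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(mixed), replacement=True)
loader = DataLoader(mixed, batch_size=16, sampler=sampler)  # feed batches to post-training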
Hardware Requirements
The workflow requires:
- GPU: RT Core-enabled architecture (Ampere or later) with ≥30GB VRAM for GR00T N1.5 inference
- SO-ARM101 Follower: 6-DOF precision manipulator with dual-camera vision (wrist and room). The SO-ARM101 features WOWROBO vision components, including a wrist-mounted camera with a 3D-printed adapter.
- SO-ARM101 Leader: 6-DOF Teleoperation interface for expert demonstration collection
Notably, developers can run simulation, training, and deployment (the three computers typically needed for physical AI) on a single DGX Spark.
Data Collection Implementation
For real-world data collection with SO-ARM101 hardware, or any other version supported in LeRobot:
python /path/to/lerobot-record \
    --robot.type=so101_follower \
    --robot.port= \
    --robot.cameras="{wrist: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, room: {type: opencv, index_or_path: 2, width: 640, height: 480, fps: 30}}" \
    --robot.id=so101_follower_arm \
    --teleop.type=so101_leader \
    --teleop.port= \
    --teleop.id=so101_leader_arm \
    --dataset.repo_id=/surgical_assistance/surgical_assistance \
    --dataset.num_episodes=15 \
    --dataset.single_task="Prepare and hand surgical instruments to surgeon"
For simulation-based data collection:
# With keyboard teleoperation
python -m simulation.environments.teleoperation_record \
    --enable_cameras \
    --record \
    --dataset_path=/path/to/save/dataset.hdf5 \
    --teleop_device=keyboard

# With SO-ARM101 leader arm
python -m simulation.environments.teleoperation_record \
    --port= \
    --enable_cameras \
    --record \
    --dataset_path=/path/to/save/dataset.hdf5
Simulation Teleoperation Controls
For users without physical SO-ARM101 hardware, the workflow provides keyboard-based teleoperation with the following joint controls:
- Joint 1 (shoulder_pan): Q (+) / U (-)
- Joint 2 (shoulder_lift): W (+) / I (-)
- Joint 3 (elbow_flex): E (+) / O (-)
- Joint 4 (wrist_flex): A (+) / J (-)
- Joint 5 (wrist_roll): S (+) / K (-)
- Joint 6 (gripper): D (+) / L (-)
- R Key: Reset recording environment
- N Key: Mark episode as successful
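If you script your own teleoperation on top of this, the bindings reduce to a lookup table from keys to signed joint deltas. The following sketch mirrors the list above; the step size and helper function are illustrative, not the workflow's internals.

# Sketch of the key-to-joint mapping listed above. Step size is illustrative.
JOINT_KEY_BINDINGS = {
    "q": ("shoulder_pan", +1),  "u": ("shoulder_pan", -1),
    "w": ("shoulder_lift", +1), "i": ("shoulder_lift", -1),
    "e": ("elbow_flex", +1),    "o": ("elbow_flex", -1),
    "a": ("wrist_flex", +1),    "j": ("wrist_flex", -1),
    "s": ("wrist_roll", +1),    "k": ("wrist_roll", -1),
    "d": ("gripper", +1),       "l": ("gripper", -1),
}
STEP = 0.05  # per-keypress joint increment (illustrative value)

def key_to_delta(key: str):
    """Translate a key press into (joint_name, signed delta), or None."""
    binding = JOINT_KEY_BINDINGS.get(key.lower())
    if binding is None:
        return None  # "r" (reset) and "n" (mark success) are handled separately
    joint, sign = binding
    return joint, sign * STEP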
Model Training Pipeline
After collecting both simulation and real-world data, convert and combine the datasets for training:
# Convert simulation data to LeRobot format
python -m training.hdf5_to_lerobot \
    --repo_id=surgical_assistance_dataset \
    --hdf5_path=/path/to/your/sim_dataset.hdf5 \
    --task_description="Autonomous surgical instrument handling and preparation"

# Post-train GR00T N1.5 on mixed dataset
python -m training.gr00t_n1_5.train \
    --dataset_path /path/to/your/surgical_assistance_dataset \
    --output_dir /path/to/surgical_checkpoints \
    --data_config so100_dualcam
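Before converting, it can help to sanity-check what the recorded HDF5 file actually contains (episode counts, observation keys, camera resolutions). Here is a minimal sketch using h5py; the group and dataset names vary with the recorder's layout, so treat the printed keys as the source of truth.

# Sketch: inspect a recorded simulation dataset before converting it to
# LeRobot format. Only h5py is assumed; group names depend on the recorder.
import h5py

def print_hdf5_tree(path: str) -> None:
    """Print every group and dataset in the file with shapes and dtypes."""
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset):
                print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
            else:
                print(f"{name}/")
        f.visititems(visit)

print_hdf5_tree("/path/to/your/sim_dataset.hdf5")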
The trained model processes natural language instructions such as “Prepare the scalpel for the surgeon” or “Hand me the forceps” and executes the corresponding robotic actions. With the latest LeRobot release (v0.4.0), you can post-train GR00T N1.5 natively in LeRobot!
End-to-End Sim Collect–Train–Eval Pipelines
Simulation is strongest when it’s part of a loop: collect data → train → evaluate → deploy. Isaac Lab supports this full pipeline:
Generate Synthetic Data in Simulation
- Teleoperate robots using keyboard or hardware controllers
- Capture multi-camera observations, robot states, and actions
- Create diverse datasets with edge cases that would be impossible to collect safely in real environments
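In practice, "diverse datasets" comes down to randomizing the scene for every recorded episode. The sketch below shows the general pattern with illustrative parameter names and ranges; it is not the workflow's actual randomization configuration.

# Sketch: per-episode scene randomization for synthetic data collection.
# Parameter names and ranges are illustrative, not the workflow's config.
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_episode_params() -> dict:
    return {
        "instrument_offset_m": rng.uniform(-0.03, 0.03, size=3),  # jitter instrument poses on the tray
        "table_height_offset_m": rng.uniform(-0.02, 0.02),        # vary table height
        "light_intensity_scale": rng.uniform(0.6, 1.4),           # lighting variation
        "camera_jitter_deg": rng.normal(0.0, 1.5, size=3),        # small extrinsics noise
    }

# One randomized configuration per recorded simulation episode:
episode_params = [sample_episode_params() for _ in range(70)]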
Train and Evaluate Policies
- Deep integration with Isaac Lab’s RL framework for PPO training
- Parallel environments (thousands of simulations running concurrently)
- Built-in trajectory evaluation and success metrics
- Statistical validation across varied scenarios
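On the evaluation side, success flags from many parallel rollouts reduce to a success rate with a confidence interval. The snippet below is a generic illustration of that aggregation, not Isaac Lab's built-in metrics API.

# Sketch: aggregate rollout outcomes into a success rate with a 95%
# normal-approximation confidence interval. Generic Python, not Isaac Lab's API.
import math

def summarize(successes: list[bool]) -> dict:
    n = len(successes)
    rate = sum(successes) / n
    stderr = math.sqrt(rate * (1.0 - rate) / n)
    return {
        "episodes": n,
        "success_rate": rate,
        "ci95": (rate - 1.96 * stderr, rate + 1.96 * stderr),
    }

print(summarize([True] * 870 + [False] * 130))  # e.g. 1,000 parallel evaluation episodes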
Convert Models to TensorRT
- Automatic optimization for production deployment
- Support for dynamic shapes and multi-camera inference
- Benchmarking tools to confirm real-time performance
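As a rough sketch of what the conversion involves, the following builds a TensorRT engine from an exported ONNX policy with FP16 enabled and a dynamic-shape profile. It assumes the TensorRT 8.x Python API and a hypothetical input tensor named "images"; the workflow's own conversion tooling handles these details for you.

# Sketch: build a TensorRT engine from an exported ONNX policy (TensorRT 8.x
# Python API assumed; the input name "images" and shapes are illustrative).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("policy.onnx", "rb") as f:          # assumed export path
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)          # mixed precision for real-time inference

# Dynamic shapes: accept one or two camera frames per call (illustrative profile).
profile = builder.create_optimization_profile()
profile.set_shape("images", (1, 3, 480, 640), (2, 3, 480, 640), (2, 3, 480, 640))
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)
with open("policy.plan", "wb") as f:
    f.write(engine_bytes)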
This reduces the time from experiment to deployment and makes sim-to-real a practical part of day-to-day development.
Getting Started
The Isaac for Healthcare SO-ARM starter workflow is available now. To get started:
- Clone the repository:
  git clone https://github.com/isaac-for-healthcare/i4h-workflows.git
- Choose a workflow: Start with the SO-ARM starter workflow for surgical assistance, or explore the other workflows
- Run the setup: Each workflow includes an automated setup script (for example, tools/env_setup_so_arm_starter.sh)
