Constructing realistic 3D environments for robotics simulation can be a labor-intensive process. Now, with NVIDIA Omniverse NuRec, you can complete the entire process using only a smartphone. This post walks you through each step: from capturing photos with an iPhone, to rebuilding the scene in 3D with 3DGUT, to loading it into NVIDIA Isaac Sim and inserting a robot. To skip the reconstruction (Steps 1-3) and explore this scene directly in Isaac Sim (Step 4), visit NVIDIA Physical AI on Hugging Face.
Step 1: Capture the real-world scene
The first step is to take photos of the real environment you want to reconstruct. This no longer requires special hardware; you can use a regular smartphone camera.
As you walk around your scene, take photos with your phone. A few photogrammetry best practices are outlined below:
Keep lighting and focus good and consistent. Avoid fast motion and blur. If you can, use a faster shutter speed (for example, 1/100 second or faster).
- In the built-in camera app, you can’t set the shutter speed directly, but you can:
- Lock focus/exposure: Long-press on the subject to enable AE/AF Lock, then drag the exposure slider down slightly (−0.3 to −0.7 EV) to keep highlights crisp.
- Stabilize: Use a tripod or lean against a wall; the sharper each frame, the better COLMAP tracks features.
- Avoid auto macro switching: To make sure the focal length doesn’t jump shot-to-shot on iPhone Pro models, turn off Settings → Camera → Auto Macro.
- For manual shutter/ISO control, try any of the following iOS apps that allow fixed shutter plus ISO: Lumina: Manual Camera, Halide, ProCamera, ProCam 8, Moment Pro Camera, Lightroom Mobile.
- Start with 1/120–1/250 second outdoors and ≥1/100 second indoors. Set ISO as low as possible while keeping exposure acceptable.
- Lock white balance to avoid color shifts between frames.
For coverage, make a slow loop around the area and capture multiple heights and angles. A safe rule of thumb is 60% overlap.
Tip: COLMAP expects standard image formats. If your iPhone saves HEIC, either switch to Settings → Camera → Formats → Most Compatible (shoots JPEG), or export/convert to JPG before running COLMAP.
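If your captures are already in HEIC, a short script can batch-convert them before running COLMAP. Below is a minimal sketch, assuming the pillow-heif package is installed and that the photos sit in a heic_captures/ folder (both are assumptions for illustration, not part of the workflow above):
# Convert HEIC captures to JPEG so COLMAP can read them
from pathlib import Path
from PIL import Image
from pillow_heif import register_heif_opener

register_heif_opener()  # lets Pillow open .HEIC files

src = Path("heic_captures")  # hypothetical folder holding the iPhone captures
dst = Path("images")         # the folder COLMAP reads in Step 2
dst.mkdir(exist_ok=True)
for heic in sorted(src.glob("*.HEIC")):
    # Force RGB so JPEG encoding never fails on an alpha channel
    Image.open(heic).convert("RGB").save(dst / f"{heic.stem}.jpg", "JPEG", quality=95)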
Step 2: Generate a sparse reconstruction with COLMAP
Once you have your photos, the next step is to figure out the 3D structure and camera positions from those images. This example uses COLMAP, a popular open source Structure-from-Motion (SfM) and Multi-View Stereo pipeline. COLMAP will create a sparse point cloud of the scene and estimate the camera parameters for each photo.
COLMAP provides the following command-line path. Note, however, that the GUI Automatic Reconstruction is the easier way to get started. For compatibility with 3DGUT, select either the pinhole or simple pinhole camera model.
# Feature detection & extraction
$ colmap feature_extractor
--database_path ./colmap/database.db
--image_path ./images/
--ImageReader.single_camera 1
--ImageReader.camera_model PINHOLE
--SiftExtraction.max_image_size 2000
--SiftExtraction.estimate_affine_shape 1
--SiftExtraction.domain_size_pooling 1
# Feature matching
$ colmap exhaustive_matcher
--database_path ./colmap/database.db
--SiftMatching.use_gpu 1
# Global SFM
$ colmap mapper
--database_path ./colmap/database.db
--image_path ./images/
--output_path ./colmap/sparse
# Visualize for verification
$ colmap gui --import_path ./colmap/sparse/0
--database_path ./colmap/database.db
--image_path ./images/
COLMAP will output a project folder (often containing a database.db, an images/ folder, and a sparse/ directory with the reconstruction data). Once COLMAP completes, you’ll have:
- A sparse point cloud of the scene
- Camera pose data for all of your images
This is the data needed to feed into the 3D reconstruction with 3DGUT (Step 3).
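Before moving on, it’s worth a quick check that most of your photos were registered and that the sparse model looks sane. You can do this in the COLMAP GUI, or, as an optional sketch (the pycolmap bindings are an assumption here, not required by the workflow), from Python:
# Summarize the sparse model: registered images, 3D points, mean reprojection error
import pycolmap  # pip install pycolmap

rec = pycolmap.Reconstruction("./colmap/sparse/0")
print(rec.summary())
If only a small fraction of your images were registered, revisit the capture tips in Step 1 (more overlap, less blur) before training.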
Step 3: Train a dense 3D reconstruction with 3DGUT and export to USD
Now for the magic. The next step uses the 3DGUT algorithm to turn the sparse model and images into a dense, photorealistic 3D scene:
- Set up the 3DGUT environment: The 3DGUT repository requires a Linux system with CUDA 11.8, GCC ≤ 11, and an NVIDIA GPU. To see the official 3DGUT code, visit nv-tlabs/3dgrut on GitHub and follow the instructions to install the required libraries.
- Clone the 3DGUT repo: Using the following commands, clone and install the 3DGUT repo. To confirm the setup before proceeding, run a test reconstruction on one of the sample datasets in the repo to make sure it’s working.
git clone --recursive https://github.com/nv-tlabs/3dgrut.git
cd 3dgrut
chmod +x install_env.sh
./install_env.sh 3dgrut
conda activate 3dgrut
- Prepare the COLMAP outputs: Make sure you know the path to the COLMAP output directory generated in Step 2. This example uses the apps/colmap_3dgut_mcmc.yaml config because it pairs 3DGUT with an MCMC (Markov Chain Monte Carlo) densification strategy. In practice, this samples and densifies Gaussians where the reconstruction is uncertain, sharpening thin structures and edges and improving overall fidelity with only a modest training-time overhead compared to the baseline config.
- Run the 3DGUT training script and export USDZ: With the environment active, you can start training by running the provided train.py script with the COLMAP config. For example, the command might look like the following:
$ conda activate 3dgrut
$ python train.py --config-name apps/colmap_3dgut_mcmc.yaml \
    path=/path/to/colmap/ \
    out_dir=/path/to/out/ \
    experiment_name=3dgut_mcmc \
    export_usdz.enabled=true \
    export_usdz.apply_normalizing_transform=true
When you run the command, 3DGUT will begin training. It will read in your images and COLMAP data and start optimizing a 3D representation. This may take a while depending on your scene size and GPU, anywhere from a few minutes for a small scene to a few hours for a very detailed one.
When the process completes, you should have a dense reconstruction of your scene. The output includes a directory with model checkpoints. Setting the flags export_usdz.enabled=true and export_usdz.apply_normalizing_transform=true also generates a USDZ file.
- export_usdz.enabled=true writes a USDZ of your reconstructed scene, so you can load it straight into Isaac Sim.
- export_usdz.apply_normalizing_transform=true applies a basic normalization (it centers and scales the scene near the origin). It doesn’t guarantee the floor sits exactly at z = 0. In Isaac Sim, you can add a Ground Plane or nudge the scene root (translate/rotate) for alignment.
Exporting the reconstructed scene to USD makes it plug-and-play with Isaac Sim. The produced USDZ file uses a custom USD schema and essentially serves as a packaged USD scene containing all the Gaussian-splatting data of the 3D reconstruction. Note that a standardized schema is under discussion within AOUSD.
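As an optional check before launching Isaac Sim, you can open the exported USDZ with the usd-core Python package and list its top-level prims. This is only a sketch; the file name below is a placeholder for whatever your training run produced:
# Confirm the USDZ opens and inspect its top-level prims
from pxr import Usd  # pip install usd-core

stage = Usd.Stage.Open("/path/to/out/my_scene.usdz")  # placeholder path
for prim in stage.GetPseudoRoot().GetChildren():
    print(prim.GetPath(), prim.GetTypeName())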
Step 4: Deploy the reconstructed scene in Isaac Sim and add a robot
Now for the fun part. With the real-world scene fully reconstructed in USD, it can be used in Isaac Sim to train and test virtual robots.
To load your scene and insert a robot, follow these steps:
1. Launch Isaac Sim (version 5.0 or later, which supports the NuRec/3DGUT features). When Isaac Sim is open, start with an empty stage (File > New).
2. Import your USD scene: From the menu, navigate to File > Import and find the USDZ file of your scene. Alternatively, drag and drop the USDZ file from the Isaac Sim content browser into the scene. When Isaac Sim loads the file, you should see your reconstructed environment appear in the viewport as a collection of colorful points, or Gaussians. It may look almost like a photoreal 3D photo when viewed through the camera.
- Tip: You can use the Isaac Sim navigation controls (WASD keys or right-click and drag) to fly around the scene and inspect it from different angles.
3. Add a ground plane for physics: Your reconstructed scene is just visual geometry (the points/voxels from 3DGUT) with no inherent collision properties. To have a robot move around, add a ground plane so the robot has something to stand on. In Isaac Sim, this is simple: click Create > Physics > Ground Plane. A flat ground will appear (usually at z = 0) covering your scene’s floor area. You may want to scale it (for example, x to 100, y to 100, as shown in Video 1). Adjust Translate / Rotate / Scale so it aligns to the reconstructed floor.
Next, connect a proxy mesh to receive shadows. A proxy mesh is a simple stand-in that grounds objects visually in the scene by providing a surface to cast shadows onto. You’ve already created a mesh, which is your ground plane. To connect it as a proxy, follow these steps:
- Select the NuRec prim (the volume prim under the global xform). In the Raw USD Properties panel, locate the NuRec/Volume section and find the Proxy field.
- Click + to add your proxy mesh prim.
- Select your ground plane again. Make sure the Geometry > Matte Object property is turned on.
4. Insert a robot from Isaac Sim assets: Isaac Sim includes many SimReady robot models. To add one to the scene, navigate to the top menu bar and select Create > Robots. Select a robot, for example the popular Franka Emika Panda robotic arm. You can also choose a mobile robot such as LeatherBack/Carter/TurtleBot, or even a humanoid, depending on what’s available in your Isaac Sim library.
When you click the robot asset of your choice, it will be added to your scene. Use the Move/Rotate tools to position the robot where you want it in your reconstructed scene.
5. Press Play and watch: At this point, the robot should be sitting or standing in your photoreal 3D scene. If it’s a manipulator arm, you can animate it or even run reinforcement learning, depending on your use case. Now that the real-world environment is in Isaac Sim, you can treat it like any other simulation environment. For a scripted alternative to these loading steps, see the sketch below.
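If you prefer scripting, the loading steps above can also be driven from the Script Editor (Window > Script Editor). The snippet below is a minimal sketch using plain USD referencing, not the official workflow: the file path, prim paths, and robot asset path are placeholders to replace with your own, and the ground plane and proxy mesh are still added through the menus as described above.
# Reference the reconstructed scene and a robot onto the current stage
import omni.usd
from pxr import Gf, UsdGeom

stage = omni.usd.get_context().get_stage()

# Reference the exported NuRec scene
scene = stage.DefinePrim("/World/NuRecScene", "Xform")
scene.GetReferences().AddReference("/path/to/out/my_scene.usdz")  # placeholder path

# Reference a robot asset; copy the exact path from the Isaac Sim asset browser
robot = stage.DefinePrim("/World/Robot", "Xform")
robot.GetReferences().AddReference("<robot-asset-path>/franka.usd")  # placeholder path

# Nudge the robot so it sits on the reconstructed floor
UsdGeom.XformCommonAPI(robot).SetTranslate(Gf.Vec3d(0.0, 0.0, 0.0))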
Start reconstructing a scene in NVIDIA Isaac Sim
This simple real-to-sim workflow turns everyday iPhone photos into an interactive, robot-ready scene. It’s as easy as capture → COLMAP → 3DGUT reconstruction → USDZ export → load in NVIDIA Isaac Sim. The workflow replaces long photogrammetry loops with a straightforward, reproducible path to digital twins you can drive, plan, and test quickly.
Ready to start?
- Open a ready-made scene: Grab a NuRec sample from the NVIDIA Physical AI collection on Hugging Face and open it in Isaac Sim.
- Try the real-to-sim reference workflow: To build realistic 3D environments for robot development with Isaac Sim, Isaac ROS, 3DGUT, cuSFM, nvBlox, and FoundationStereo, follow the steps for Reconstructing Scenes from Stereo Camera Data.
- Generate synthetic data: To bring your reconstruction into Isaac Sim, add SimReady assets, generate synthetic data with MobilityGen, and augment data with NVIDIA Cosmos, see Build Synthetic Data Pipelines to Train Smarter Robots with NVIDIA Isaac Sim.
Stay up to date by subscribing to NVIDIA news and following NVIDIA Omniverse on Discord and YouTube.
Get started with developer starter kits to quickly develop and enhance your own applications and services.
Join us for Physical AI and Robotics Day at NVIDIA GTC Washington, D.C. on October 29, 2025 as we bring together developers, researchers, and technology leaders to learn how NVIDIA technologies are accelerating the next era of AI.
