Scalable Real2Sim: Physics-aware asset generation via robotic pick-and-place setups

Objects are placed in the first bin, where the robot picks them up and reconstructs their geometries by moving them in front of a static camera, re-grasping as needed to reduce occlusions. Next, the robot identifies the object's physical parameters by following a trajectory designed to be informative for inertial parameter identification. Finally, it places the object into the second bin and repeats the process with the next object. The extracted geometric and physical parameters are combined to generate a complete, simulatable object description.

Obtaining both visually and physically accurate assets for physics-based simulation can be an expensive, labor-intensive process. In this work, we propose an automated Real2Sim pipeline that generates simulation-ready assets through routine pick-and-place operations. In particular, the robot's joint torque sensors are used to infer the inertial properties of the manipulated object, while an external camera combined with photometric reconstruction techniques (e.g., NeRF, Gaussian Splatting) recovers the object's geometry and visual appearance as a mesh. This pipeline offers a route to scalable and efficient asset generation for robotics simulations.
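To illustrate the idea behind inferring inertial properties from force/torque measurements, the sketch below shows a simplified, hypothetical version of the problem: estimating an object's mass and center of mass from static wrench readings taken at several grasp orientations. All variable names and the simulated-data setup are assumptions for illustration; the actual pipeline identifies the full set of inertial parameters from informative dynamic trajectories, which is not shown here.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(0)
g = np.array([0.0, 0.0, -9.81])          # gravity in the world frame
m_true = 0.5                             # ground-truth mass (kg), for simulation only
c_true = np.array([0.01, -0.02, 0.03])   # ground-truth COM in the sensor frame (m)

# Simulate static force/torque readings at random grasp orientations.
A_rows, b_rows, forces = [], [], []
for _ in range(20):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    R = Q * np.linalg.det(Q)             # proper rotation (det = +1)
    f = m_true * R.T @ g                 # gravity force expressed in the sensor frame
    tau = np.cross(c_true, f)            # torque about the sensor origin: tau = c x f
    forces.append(f)
    A_rows.append(-skew(f))              # tau = c x f = -skew(f) @ c, linear in c
    b_rows.append(tau)

# Mass from the magnitude of the gravity force; COM by linear least squares.
m_est = np.mean([np.linalg.norm(f) for f in forces]) / np.linalg.norm(g)
c_est, *_ = np.linalg.lstsq(np.vstack(A_rows), np.concatenate(b_rows), rcond=None)
```

The key observation is that the measured wrench is linear in the unknown parameters, so identification reduces to least squares; the same structure extends to the full ten inertial parameters (mass, first mass moment, and the six entries of the inertia tensor) once accelerations along a dynamic trajectory are included.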



Jeremy Binagia
Applied Scientist