Images of a real scene taken with a camera commonly differ from synthetic images of a virtual replica of the same scene, despite advances in light transport simulation and calibration. By explicitly co-developing the scanning hardware and rendering pipeline, we achieve negligible per-pixel difference between the real image taken by the camera and the synthesized image on a geometrically complex calibration object with known material properties. This approach provides an ideal test bed for developing data-driven algorithms in the area of 3D reconstruction, as the synthetic data is indistinguishable from real data and can be generated at large scale. Pixel-wise matching also provides an effective way to quantitatively evaluate data-driven reconstruction algorithms.
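The pixel-wise comparison described above can be illustrated with a minimal sketch. The function below is a hypothetical helper (not the paper's exact metric): it computes a per-pixel error map and a scalar RMSE between a photograph and its synthetic rendering, assuming both are float arrays normalized to [0, 1].

```python
import numpy as np

def per_pixel_error(real, synthetic):
    """Per-pixel absolute difference and RMSE between a real photo and a
    synthetic rendering. Illustrative only; the paper's metric may differ."""
    real = np.asarray(real, dtype=np.float64)
    synthetic = np.asarray(synthetic, dtype=np.float64)
    diff = np.abs(real - synthetic)            # per-pixel error map
    rmse = float(np.sqrt(np.mean(diff ** 2)))  # scalar summary statistic
    return diff, rmse

# Example: a rendering that matches the photo up to small noise.
rng = np.random.default_rng(0)
photo = rng.random((4, 4))
render = photo + rng.normal(0.0, 1e-3, photo.shape)
error_map, rmse = per_pixel_error(photo, render)
print(rmse)  # near zero for a faithful virtual replica
```

A near-zero RMSE on a calibration object with known material properties is what allows synthetic and real data to be mixed freely when training reconstruction algorithms.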
We introduce three benchmark problems using the data generated with our system: (1) a benchmark for surface reconstruction from dense point clouds, (2) a denoising procedure tailored to structured light scanning, and (3) a range scan completion algorithm for CAD models. We also provide a large collection of high-resolution scans that allow our system and benchmarks to be used without reproducing the hardware setup. More details on the datasets and the source code repository can be found on the project's website.
Our datasets are available under the CC BY 4.0 license, except for the 3D models of 7 colored 3D-printed objects, which are licensed by their respective creators under various Creative Commons licenses. The work was supported in part through the NYU IT High Performance Computing resources, services, and staff expertise. This work was partially supported by the NSF CAREER award 1652515, the NSF grants IIS-1320635, DMS-1436591, DMS-1821334, OAC-1835712, OIA-1937043, CHS-1908767, and CHS-1901091, a gift from Adobe Research, a gift from nTopology, and a gift from Advanced Micro Devices, Inc.