Dip transform for 3D shape reconstruction

Kfir Aberman, Oren Katzir, Qiang Zhou, Zegang Luo, Andrei Sharf, Chen Greif, Baoquan Chen, Daniel Cohen-Or

Research output: Contribution to journal › Conference article › peer-review

7 Scopus citations

Abstract

The paper presents a novel three-dimensional shape acquisition and reconstruction method based on the well-known Archimedes equality between fluid displacement and submerged volume. By repeatedly dipping a shape in liquid in different orientations and measuring its volume displacement, we generate the dip transform: a novel volumetric shape representation that characterizes the object's surface. The key feature of our method is that it employs fluid displacement as the shape sensor. Unlike optical sensors, the liquid has no line-of-sight requirements; it penetrates cavities and hidden parts of the object, as well as transparent and glossy materials, thus bypassing the visibility and optical limitations of conventional scanning devices. Our new scanning approach is implemented using a dipping robot arm and a bath of water, in which the water elevation is measured. We show results of reconstructing complex 3D shapes and evaluate the quality of the reconstruction with respect to the number of dips.
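The core relation the abstract describes, that submerged volume as a function of dip depth yields a signature of the shape, can be sketched in a few lines. The snippet below is only an illustration of that volume/depth relation on a hypothetical voxel occupancy grid; the paper's actual pipeline dips a physical object with a robot arm and measures water elevation, and its reconstruction algorithm is not reproduced here.

```python
import numpy as np

def dip_curve(voxels, axis=2):
    """Displaced volume as a function of dip depth along one axis.

    `voxels` is a boolean occupancy grid standing in for the scanned
    object (a hypothetical example, not the paper's representation).
    Entry d of the returned curve is the number of occupied voxels
    at or above depth d, i.e. the volume submerged after d layers.
    """
    # Move the dipping axis to the front, then count occupied voxels
    # layer by layer and accumulate: curve[d] = submerged volume.
    occ = np.moveaxis(np.asarray(voxels, dtype=bool), axis, 0)
    per_layer = occ.reshape(occ.shape[0], -1).sum(axis=1)
    return np.cumsum(per_layer)

# Toy example: a 4x4x4 solid cube displaces 16 voxels per layer.
cube = np.ones((4, 4, 4), dtype=bool)
print(dip_curve(cube))  # [16 32 48 64]
```

Dipping the same grid along different axes (or after rotating it) produces different curves, which is why combining many dip orientations, as the paper does, constrains the surface far more than any single curve.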

Original language: English
Article number: 79
Journal: ACM Transactions on Graphics
Volume: 36
Issue number: 4
DOIs
State: Published - 1 Jan 2017
Event: ACM SIGGRAPH 2017 - Los Angeles, United States
Duration: 30 Jul 2017 - 3 Aug 2017

Keywords

  • Data acquisition
  • Shape reconstruction
  • Volume
