Multimodal 3D Shape Reconstruction under Calibration Uncertainty Using Parametric Level Set Methods

Moshe Eliasof, Andrei Sharf, Eran Treister

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

We consider the problem of 3D shape reconstruction from multimodal data, given uncertain calibration parameters. Typically, 3D data modalities can come in diverse forms such as sparse point sets, volumetric slices, and 2D photos. To jointly process these data modalities, we exploit a parametric level set method that utilizes ellipsoidal radial basis functions. This method not only allows us to represent the object analytically and compactly; it also enables us to overcome calibration-related noise that originates from inaccurate acquisition parameters. This essentially implicit regularization leads to a highly robust and scalable reconstruction, surpassing other traditional methods. In our results we first demonstrate the ability of the method to compactly represent complex objects. We then show that our reconstruction method is robust both to a small number of measurements and to noise in the acquisition parameters. Finally, we demonstrate our reconstruction abilities from diverse modalities such as volume slices obtained from liquid displacement (similar to CT scans and X-rays), visual measurements obtained from shape silhouettes, and point clouds.
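To illustrate the representation the abstract describes, the following is a minimal sketch of a parametric level set built from ellipsoidal, compactly supported radial basis functions. The kernel choice (a Wendland C2 function), the function and variable names, and the convention that the shape is the set where the level set function is positive are illustrative assumptions; the paper's exact parameterization may differ.

```python
import numpy as np

def ellipsoidal_rbf(points, center, A):
    """Compactly supported RBF evaluated at anisotropic distance r = ||A (x - c)||.

    The matrix A encodes the ellipsoid's axes and orientation, so each basis
    function can stretch and rotate to fit the local shape geometry.
    """
    r = np.linalg.norm((points - center) @ A.T, axis=-1)
    # Wendland C2 kernel: smooth, and exactly zero for r >= 1 (compact support).
    return np.maximum(1.0 - r, 0.0) ** 4 * (4.0 * r + 1.0)

def level_set(points, centers, mats, weights):
    """phi(x) = sum_i w_i * psi(||A_i (x - c_i)||).

    The reconstructed shape is represented implicitly, e.g. as {x : phi(x) > 0},
    so the object is described compactly by the parameters (c_i, A_i, w_i).
    """
    phi = np.zeros(len(points))
    for c, A, w in zip(centers, mats, weights):
        phi += w * ellipsoidal_rbf(points, c, A)
    return phi

# Toy usage: a single spherical basis function centered at the origin.
pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
phi = level_set(pts, [np.zeros(3)], [np.eye(3)], [1.0])
# phi is 1 at the center and 0 outside the kernel's support radius.
```

Because the representation is parametric, reconstruction reduces to fitting the small set of parameters (centers, ellipsoid matrices, weights) to all measurement modalities jointly, which is what makes the approach compact and robust to noisy acquisition parameters.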
Original language: English
Pages (from-to): 265-290
Number of pages: 26
Journal: SIAM Journal on Imaging Sciences
Volume: 13
Issue number: 1
DOIs
State: Published - 2020

Keywords

  • 3D shape reconstruction
  • parametric level sets
  • dip transform
  • joint reconstruction
  • shape from silhouettes
  • point clouds
  • compactly supported radial basis functions

ASJC Scopus subject areas

  • Mathematics (all)
  • Applied Mathematics

