TY - GEN
T1 - Grassmannian Dimensionality Reduction for Optimized Universal Manifold Embedding Representation of 3D Point Clouds
AU - Haitman, Yuval
AU - Francos, Joseph M.
AU - Scharf, Louis L.
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/1/1
Y1 - 2021/1/1
N2 - Consider a 3-D object and the orbit of equivalent objects generated by the rigid transformation group. The set of possible observations on these equivalent objects is generally a manifold in the ambient space of observations. It has been shown that the rigid transformation universal manifold embedding (RTUME) provides a mapping from the orbit of observations on some object to a single low-dimensional linear subspace of Euclidean space. This linear subspace is invariant to the geometric transformations and hence is a representative of the orbit. In the classification setup, the RTUME subspace extracted from an experimental observation is tested against a set of subspaces representing the different object manifolds, in search of the nearest class. We clarify the way in which level-set functions, computed at each quantization level in an observation, serve as a basis for the invariant subspaces in RTUME. In the presence of observation noise and random sampling patterns of the point clouds, the observations do not lie strictly on the manifold, and the resulting RTUME subspaces are noisy. Inspired by the ideas of Locality Preserving Projections and Grassmannian dimensionality reduction, we derive an optimal companding of the level-set functions, yielding the Grassmannian dimensionality reduction universal manifold embedding (GDRUME). We evaluate the proposed method in a classification task on a noisy version of the ModelNet40 dataset and compare its performance to that of the PointNet classification DNN. We show that in the presence of noise, GDRUME provides highly accurate classification results, while the performance of PointNet is poor.
AB - Consider a 3-D object and the orbit of equivalent objects generated by the rigid transformation group. The set of possible observations on these equivalent objects is generally a manifold in the ambient space of observations. It has been shown that the rigid transformation universal manifold embedding (RTUME) provides a mapping from the orbit of observations on some object to a single low-dimensional linear subspace of Euclidean space. This linear subspace is invariant to the geometric transformations and hence is a representative of the orbit. In the classification setup, the RTUME subspace extracted from an experimental observation is tested against a set of subspaces representing the different object manifolds, in search of the nearest class. We clarify the way in which level-set functions, computed at each quantization level in an observation, serve as a basis for the invariant subspaces in RTUME. In the presence of observation noise and random sampling patterns of the point clouds, the observations do not lie strictly on the manifold, and the resulting RTUME subspaces are noisy. Inspired by the ideas of Locality Preserving Projections and Grassmannian dimensionality reduction, we derive an optimal companding of the level-set functions, yielding the Grassmannian dimensionality reduction universal manifold embedding (GDRUME). We evaluate the proposed method in a classification task on a noisy version of the ModelNet40 dataset and compare its performance to that of the PointNet classification DNN. We show that in the presence of noise, GDRUME provides highly accurate classification results, while the performance of PointNet is poor.
UR - http://www.scopus.com/inward/record.url?scp=85123055970&partnerID=8YFLogxK
U2 - 10.1109/ICCVW54120.2021.00468
DO - 10.1109/ICCVW54120.2021.00468
M3 - Conference contribution
AN - SCOPUS:85123055970
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 4196
EP - 4204
BT - Proceedings - 2021 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021
PB - Institute of Electrical and Electronics Engineers
T2 - 18th IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021
Y2 - 11 October 2021 through 17 October 2021
ER -