Detection and registration of point cloud observations are fundamental problems in 3-D vision. The Universal Manifold Embedding (UME) is a framework that maps an observation to a matrix representation which is covariant with the rigid coordinate transformation of the observation, while its column space is invariant to that transformation. Since point clouds are sets of coordinates with no functional relation imposed on them, adapting the UME framework to point cloud registration requires defining a function that assigns to each point a value invariant under the action of the transformation group. Deep learning methods for point cloud semantic labeling have made it practical to incorporate semantic label information into point cloud detection and registration. We derive analytic tools for evaluating and optimizing UME performance in point cloud detection and registration in the presence of labeling errors, when semantic labeling is employed as the transformation-invariant function defined on the point cloud.