Shape registration in the time of transformers

Abstract

In this paper, we propose a transformer-based procedure for the efficient registration of non-rigid 3D point clouds. The proposed approach is data-driven and adopts, for the first time, the transformer architecture for the registration task. Our method is general and applies to different settings. Given a fixed template with some desired properties (e.g. skinning weights or other animation cues), we can register raw acquired data to it, thereby transferring all the template properties to the input geometry. Alternatively, given a pair of shapes, our method can register the first onto the second (or vice versa), obtaining a high-quality dense correspondence between the two. In both contexts, the quality of our results enables us to target real applications such as texture transfer and shape interpolation. Furthermore, we show that including an estimate of the underlying surface density eases the learning process. By exploiting the potential of this architecture, we can train our model using only a sparse set of ground-truth correspondences (10-20% of the total points). The proposed model and the analysis that we perform pave the way for future exploration of transformer-based architectures for registration and matching applications. Qualitative and quantitative evaluations demonstrate that our pipeline outperforms state-of-the-art methods for deformable and unordered 3D data registration on different datasets and scenarios.
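To give a rough sense of how a transformer can relate a template to a raw scan, the PyTorch sketch below is a minimal illustration, not the architecture from the paper: all module names, feature sizes, the k-NN density proxy, and the sparse-supervision setup are assumptions made here for clarity. Template points cross-attend to scan points and regress their registered positions, and the loss is computed only on a sparse subset of ground-truth correspondences.

```python
import torch
import torch.nn as nn


class CrossAttentionRegistration(nn.Module):
    """Toy transformer that registers a template onto a raw scan.

    Each template point attends to the scan points (cross-attention)
    and regresses its new 3D position on the scan surface.
    """

    def __init__(self, d_model=128, n_heads=4, n_layers=3):
        super().__init__()
        # 3 coordinates + 1 local-density scalar per point (assumed input features)
        self.embed = nn.Linear(4, d_model)
        layer = nn.TransformerDecoderLayer(
            d_model, n_heads, dim_feedforward=256, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 3)  # predicted registered position

    def forward(self, template, scan, template_density, scan_density):
        # template: (B, N, 3), scan: (B, M, 3), densities: (B, N, 1) / (B, M, 1)
        q = self.embed(torch.cat([template, template_density], dim=-1))
        k = self.embed(torch.cat([scan, scan_density], dim=-1))
        feats = self.decoder(tgt=q, memory=k)  # template queries attend to scan
        return template + self.head(feats)     # residual deformation


def knn_density(points, k=8):
    """Crude per-point density proxy: inverse mean distance to k neighbours."""
    d = torch.cdist(points, points)                      # (B, N, N)
    knn = d.topk(k + 1, largest=False).values[..., 1:]   # drop self-distance
    return 1.0 / (knn.mean(dim=-1, keepdim=True) + 1e-6)


# Training step supervised on a sparse subset (~10-20%) of correspondences.
model = CrossAttentionRegistration()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

template = torch.rand(1, 1000, 3)
scan = torch.rand(1, 1200, 3)
gt_idx = torch.randperm(1000)[:150]    # sparsely supervised template points
gt_pos = torch.rand(1, 150, 3)         # their ground-truth positions on the scan

pred = model(template, scan, knn_density(template), knn_density(scan))
loss = (pred[:, gt_idx] - gt_pos).pow(2).sum(-1).mean()
loss.backward()
opt.step()
```

In this toy setup the density scalar is simply concatenated to the coordinates before embedding; the actual way the paper injects the density estimate, and the full attention design, differ from this sketch.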

Publication
Thirty-Fifth Conference on Neural Information Processing Systems (NeurIPS 2021)
Giovanni Trappolini, Postdoctoral Researcher
Luca Cosmo, Assistant Professor
Luca Moschella, PhD Student (Sapienza University of Rome)
Riccardo Marin, Postdoctoral Researcher
Simone Melzi, Assistant Professor (University of Milano-Bicocca)
Emanuele Rodolà, Full Professor