Escaping Plato's Cave: Towards the Alignment of 3D and Text Latent Spaces

Abstract

In this work, we investigate the possibility of a posteriori alignment of representations obtained from uni-modal 3D encoders with respect to text-based feature spaces. We show that naive post-training feature alignment of uni-modal text and 3D encoders results in limited performance. We then focus on extracting subspaces of the corresponding feature spaces and discover that, by projecting learned representations onto well-chosen lower-dimensional subspaces, the quality of alignment becomes significantly higher, leading to improved accuracy on matching and retrieval tasks. Our analysis further sheds light on the nature of these shared subspaces, which roughly separate semantic from geometric data representations. Overall, ours is the first work to establish a baseline for post-training alignment of 3D uni-modal and text feature spaces, and to highlight both the shared and unique properties of 3D data compared to other representations.
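To make the idea of subspace projection followed by post-training alignment concrete, here is a minimal sketch, not the paper's actual pipeline: it uses randomly generated placeholder features, PCA-style subspaces of an assumed size k, an orthogonal Procrustes map fit on an anchor split (one possible choice of linear aligner), and cosine-similarity top-1 retrieval. All dimensions, splits, and names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Placeholder paired features: N shapes from a uni-modal 3D encoder and
# N captions from a text encoder (dimensions chosen arbitrarily).
rng = np.random.default_rng(0)
X3d = rng.standard_normal((1000, 512))   # hypothetical 3D features
Xtxt = rng.standard_normal((1000, 768))  # hypothetical text features

def project_to_subspace(X, k):
    """Center features and project them onto their top-k principal subspace."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt span PCs
    return Xc @ Vt[:k].T

k = 64  # assumed subspace dimensionality
Z3d, Ztxt = project_to_subspace(X3d, k), project_to_subspace(Xtxt, k)

# Post-training alignment: fit an orthogonal map on an "anchor" split,
# then evaluate nearest-neighbour retrieval on the held-out split.
n_anchor = 500
R, _ = orthogonal_procrustes(Z3d[:n_anchor], Ztxt[:n_anchor])
aligned = Z3d[n_anchor:] @ R

# Cosine-similarity retrieval: for each aligned 3D feature, rank all text features.
a = aligned / np.linalg.norm(aligned, axis=1, keepdims=True)
b = Ztxt[n_anchor:] / np.linalg.norm(Ztxt[n_anchor:], axis=1, keepdims=True)
top1 = (a @ b.T).argmax(axis=1) == np.arange(len(a))
print("top-1 retrieval accuracy:", top1.mean())
```

With random features the retrieval accuracy stays near chance; the point is only to show where subspace extraction, alignment, and evaluation sit relative to each other.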

Publication
The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025
Luca Moschella
PhD Student

PhD Student @SapienzaRoma CS | Intern @NVIDIA Toronto Lab | @NNAISENSE

Andrea Santilli
Alumni

PhD Student passionate about natural language processing, representation learning and machine intelligence.

Emanuele Rodolà
Full Professor
Simone Melzi
Assistant Professor

Assistant Professor at the University of Milano-Bicocca