Universal Spectral Adversarial Attacks for Deformable Shapes

Abstract

Machine learning models are known to be vulnerable to adversarial attacks, namely perturbations of the data that lead to wrong predictions despite being imperceptible. However, the existence of “universal” attacks (i.e., unique perturbations that transfer across different data points) has only been demonstrated for images to date. Part of the reason lies in the lack of a common domain for geometric data such as graphs, meshes, and point clouds on which a universal perturbation can be defined. In this paper, we offer a change in perspective and demonstrate the existence of universal attacks for geometric data (shapes). We introduce a computational procedure that operates entirely in the spectral domain, where the attacks take the form of small perturbations to short eigenvalue sequences; the resulting geometry is then synthesized via shape-from-spectrum recovery. Our attacks are universal, in that they transfer across different shapes, different representations (meshes and point clouds), and generalize to previously unseen data.
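The core idea above, perturbing a short sequence of Laplacian eigenvalues rather than the vertex coordinates themselves, can be illustrated with a minimal sketch. The snippet below is not the paper's method: it uses a toy combinatorial graph Laplacian of a cycle graph as a stand-in for a discretized shape, and the names `k` (spectrum length) and `eps` (perturbation budget) are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a discretized shape: a cycle graph on n vertices.
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian

# A "short eigenvalue sequence": the k smallest Laplacian eigenvalues.
k = 5
evals = np.linalg.eigvalsh(L)[:k]

# A small spectral perturbation, scaled onto an eps-ball (illustrative
# budget; the actual attack optimizes this perturbation adversarially).
eps = 1e-2
rng = np.random.default_rng(0)
delta = rng.standard_normal(k)
delta *= eps / np.linalg.norm(delta)
perturbed = evals + delta  # perturbed spectrum fed to shape recovery

print(evals)
print(perturbed)
```

In the paper's pipeline, the perturbed spectrum would then be mapped back to geometry via shape-from-spectrum recovery; that synthesis step is beyond this sketch.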

Publication
Proc. Int’l Conference on Computer Vision and Pattern Recognition (CVPR)
Arianna Rampini
Senior Research Scientist

Luca Cosmo
Assistant Professor

Simone Melzi
Assistant Professor at the University of Milano-Bicocca

Emanuele Rodolà
Full Professor