Publications
COCOLA: Coherence-Oriented Contrastive Learning of Musical Audio Representations
We present COCOLA (Coherence-Oriented Contrastive Learning for Audio), a contrastive learning method for musical audio representations …
Ruben Ciranni, Giorgio Mariani, Michele Mancusi, Emilian Postolache, Giorgio Fabbro, Emanuele Rodolà, Luca Cosmo
arXiv · GitHub
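For background only: the entry above relies on a contrastive objective over coherent audio pairs. The sketch below is a generic InfoNCE-style loss in PyTorch (hypothetical helper name info_nce_loss); COCOLA's actual pairing of coherent musical material and its specific loss are defined in the paper, not here.

import torch
import torch.nn.functional as F

def info_nce_loss(anchor_emb, positive_emb, temperature=0.1):
    # anchor_emb, positive_emb: (B, d) embeddings of paired (coherent) clips.
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)
    logits = anchor @ positive.T / temperature  # (B, B) pairwise similarities
    targets = torch.arange(anchor.size(0), device=anchor.device)
    # Each clip should be most similar to the positive it was paired with.
    return F.cross_entropy(logits, targets)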
Task Singular Vectors: Reducing Task Interference in Model Merging
Task Arithmetic has emerged as a simple yet effective method to merge models without additional training. However, by treating entire …
Antonio Andrea Gargiulo, Donato Crisostomi, Maria Sofia Bucarelli, Simone Scardapane, Fabrizio Silvestri, Emanuele Rodolà
arXiv · GitHub
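For context, the entry above builds on Task Arithmetic, which merges fine-tuned checkpoints by adding their parameter differences back onto the shared pretrained weights. The sketch below (hypothetical helper task_arithmetic_merge, written in PyTorch) illustrates that baseline under the assumption that all checkpoints share one architecture; it is not the Task Singular Vectors method itself.

import torch

def task_arithmetic_merge(base_state, finetuned_states, scaling=0.3):
    # base_state: dict of parameter name -> tensor for the pretrained model.
    # finetuned_states: list of state dicts, one per fine-tuned task model.
    merged = {}
    for name, base_param in base_state.items():
        # Task vector for task t: theta_t - theta_base (computed per tensor).
        task_vectors = [ft[name] - base_param for ft in finetuned_states]
        # Merged weights: theta_base + lambda * sum_t task_vector_t.
        merged[name] = base_param + scaling * torch.stack(task_vectors).sum(dim=0)
    return merged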
ATM: Improving Model Merging by Alternating Tuning and Merging
Model merging has recently emerged as a cost-efficient paradigm for multi-task learning. Among current approaches, task arithmetic …
Luca Zhou, Daniele Solombrino, Donato Crisostomi, Maria Sofia Bucarelli, Fabrizio Silvestri, Emanuele Rodolà
arXiv · GitHub
ResiDual Transformer Alignment with Spectral Decomposition
When transformer networks are examined through the lens of their residual streams, a puzzling property emerges: residual contributions …
Lorenzo Basile, Valentino Maiorca, Luca Bortolussi, Emanuele Rodolà, Francesco Locatello
arXiv
Detecting and Approximating Redundant Computational Blocks in Neural Networks
Deep neural networks often learn similar internal representations, both across different models and within their own layers. While …
Irene Cannistraci, Emanuele Rodolà, Bastian Rieck
arXiv
Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions
Large Language Models (LLMs) represent a significant advancement in artificial intelligence, finding applications across various …
Michele Miranda, Elena Sofia Ruzzetti, Andrea Santilli, Fabio Massimo Zanzotto, Sebastien Bratieres, Emanuele Rodolà
arXiv
Latent Space Translation via Inverse Relative Projection
The emergence of similar representations between independently trained neural models has sparked significant interest in the …
Valentino Maiorca, Luca Moschella, Marco Fumero, Francesco Locatello, Emanuele Rodolà
arXiv
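For context, the relative projection referred to above re-expresses each embedding as its cosine similarities to a fixed set of anchor embeddings, which is what makes independently trained latent spaces comparable. The sketch below (hypothetical helper relative_projection, written in PyTorch) shows only this forward map; the paper's contribution, the inverse projection back to an absolute space, is not reproduced here.

import torch
import torch.nn.functional as F

def relative_projection(embeddings, anchors):
    # embeddings: (N, d) absolute latent codes; anchors: (A, d) anchor codes.
    embeddings = F.normalize(embeddings, dim=-1)
    anchors = F.normalize(anchors, dim=-1)
    # Each row holds the cosine similarities of one sample to all anchors.
    return embeddings @ anchors.T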
From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication
It has been observed that representations learned by distinct neural networks conceal structural similarities when the models are …
Irene Cannistraci, Luca Moschella, Marco Fumero, Valentino Maiorca, Emanuele Rodolà
PDF · URL · ICLR 2024 (spotlight)
GSEdit: Efficient Text-Guided Editing of 3D Objects via Gaussian Splatting
We present GSEdit, a pipeline for text-guided 3D object editing based on Gaussian Splatting models. Our method enables the editing of …
Francesco Palandra, Andrea Sanchietti, Daniele Baieri, Emanuele Rodolà
arXiv
Implicit-ARAP: Efficient Handle-Guided Deformation of High-Resolution Meshes and Neural Fields via Local Patch Meshing
In this work, we present the local patch mesh representation for neural signed distance fields. This technique allows us to discretize …
Daniele Baieri, Filippo Maggioli, Zorah Laehner, Simone Melzi, Emanuele Rodolà
arXiv · GitHub