MERGE3: Efficient Evolutionary Merging on Consumer-grade GPUs
Evolutionary model merging enables the creation of high-performing multi-task models but remains computationally prohibitive for …
Tommaso Mencattini, Adrian R. Minut, Donato Crisostomi, Andrea Santilli, Emanuele Rodolà
arXiv · GitHub
Decoding RNA-RNA Interactions: The Role of Low-Complexity Repeats and a Deep Learning Framework for Sequence-Based Prediction
RNA-RNA interactions (RRIs) are fundamental to gene regulation and RNA processing, yet their molecular determinants remain unclear. In …
Adriano Setti, Giorgio Bini, Valentino Maiorca, Flaminia Pellegrini, Gabriele Proietti, Dimitrios Miltiadis-Vrachnos, Alexandros Armaos, Julie Martone, Michele Monti, Giancarlo Ruocco, Emanuele Rodolà, Irene Bozzoni, Alessio Colantoni, Gian Gaetano Tartaglia
bioRxiv · Code
Humanity's Last Exam
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are …
More than 600 authors, including Donato Crisostomi and Emanuele Rodolà
URL · GitHub · arXiv
COCOLA: Coherence-Oriented Contrastive Learning of Musical Audio Representations
We present COCOLA (Coherence-Oriented Contrastive Learning for Audio), a contrastive learning method for musical audio representations …
Ruben Ciranni, Giorgio Mariani, Michele Mancusi, Emilian Postolache, Giorgio Fabbro, Emanuele Rodolà, Luca Cosmo
arXiv · GitHub
ATM: Improving Model Merging by Alternating Tuning and Merging
Model merging has recently emerged as a cost-efficient paradigm for multi-task learning. Among current approaches, task arithmetic …
Luca Zhou, Daniele Solombrino, Donato Crisostomi, Maria Sofia Bucarelli, Fabrizio Silvestri, Emanuele Rodolà
arXiv · GitHub
ResiDual Transformer Alignment with Spectral Decomposition
When examined through the lens of their residual streams, a puzzling property emerges in transformer networks: residual contributions …
Lorenzo Basile, Valentino Maiorca, Luca Bortolussi, Emanuele Rodolà, Francesco Locatello
arXiv
Detecting and Approximating Redundant Computational Blocks in Neural Networks
Deep neural networks often learn similar internal representations, both across different models and within their own layers. While …
Irene Cannistraci, Emanuele Rodolà, Bastian Rieck
arXiv
Latent Space Translation via Inverse Relative Projection
The emergence of similar representations between independently trained neural models has sparked significant interest in the …
Valentino Maiorca, Luca Moschella, Marco Fumero, Francesco Locatello, Emanuele Rodolà
arXiv
From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication
It has been observed that representations learned by distinct neural networks conceal structural similarities when the models are …
Irene Cannistraci, Luca Moschella, Marco Fumero, Valentino Maiorca, Emanuele Rodolà
PDF · URL
ICLR 2024 spotlight
GSEdit: Efficient Text-Guided Editing of 3D Objects via Gaussian Splatting
We present GSEdit, a pipeline for text-guided 3D object editing based on Gaussian Splatting models. Our method enables the editing of …
Francesco Palandra, Andrea Sanchietti, Daniele Baieri, Emanuele Rodolà
arXiv