Publications
Latent Space Translation via Semantic Alignment
Different neural models often exhibit similar latent spaces when exposed to semantically similar data; however, this inherent …
Valentino Maiorca, Luca Moschella, Antonio Norelli, Marco Fumero, Francesco Locatello, Emanuele Rodolà
PDF · URL · GitHub
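The core idea of the paper above, translating between the latent spaces of two independently trained models by aligning them on a small set of corresponding "anchor" samples, can be sketched with orthogonal Procrustes. This is a simplified illustration, not the paper's full method; the dimensions, anchor count, and noise level are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two models embed the same anchor samples into
# different latent spaces. Here, space B is a rotated, noisy copy of A.
d = 8
anchors_a = rng.normal(size=(32, d))                  # anchors in space A
true_rot = np.linalg.qr(rng.normal(size=(d, d)))[0]   # unknown orthogonal map
anchors_b = anchors_a @ true_rot + 0.01 * rng.normal(size=(32, d))

# Orthogonal Procrustes: find R minimizing ||anchors_a @ R - anchors_b||_F,
# using only the anchor correspondences.
u, _, vt = np.linalg.svd(anchors_a.T @ anchors_b)
R = u @ vt

# Translate a new embedding from space A into space B with the estimated map.
x_a = rng.normal(size=(d,))
x_b_est = x_a @ R
x_b_true = x_a @ true_rot
print(np.linalg.norm(x_b_est - x_b_true))  # small residual
```

With enough anchors, the estimated map recovers the ground-truth transform up to the noise level, so embeddings never seen during alignment translate accurately.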
Leveraging sparse and shared feature activations for disentangled representation learning
Recovering the latent factors of variation of high dimensional data has so far focused on simple synthetic settings. Mostly building on …
Marco Fumero, Florian Wenzel, Luca Zancato, Alessandro Achille, Emanuele Rodolà, Stefano Soatto, Bernhard Schölkopf, Francesco Locatello
URL · PDF
NeurIPS 2023 spotlight
AVEN-GR: Attribute Value Extraction and Normalization using product GRaphs
Getting a good understanding of the user intent is vital for e-commerce applications to surface the right product to a given customer …
Donato Crisostomi, Thomas Ricatte
URL
Mitigating the Burden of Redundant Datasets via Batch-Wise Unique Samples and Frequency-Aware Losses
Datasets used to train deep learning models in industrial settings often exhibit skewed distributions with some samples repeated a …
Donato Crisostomi, Andrea Caciolai, Alessandro Pedrani, Kay Rottmann, Alessandro Manzotti, Enrico Palumbo, Davide Bernardi
URL
Fauno - The Italian Large Language Model that will leave you senza parole!
This paper presents Fauno, the first and largest open-source Italian conversational Large Language Model (LLM). Our goal with Fauno is …
Andrea Bacciu, Giovanni Trappolini, Andrea Santilli, Emanuele Rodolà, Fabrizio Silvestri
PDF · GitHub
Adversarial Permutation Invariant Training for Universal Sound Separation
Universal sound separation consists of separating mixes with arbitrary sounds of different types, and permutation invariant training …
Emilian Postolache, Jordi Pons, Santiago Pascual, Joan Serrà
URL · arXiv
Accelerating Transformer Inference for Translation via Parallel Decoding
Autoregressive decoding limits the efficiency of transformers for Machine Translation (MT). The community proposed specific network …
Andrea Santilli, Silvio Severino, Emilian Postolache, Valentino Maiorca, Michele Mancusi, Riccardo Marin, Emanuele Rodolà
PDF · arXiv · GitHub
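The parallel-decoding idea behind the paper above can be sketched as a Jacobi fixed-point iteration: instead of producing one token per model call, guess the whole target sequence and refresh every position from its predecessor simultaneously until nothing changes. The toy `greedy_next` function below is a hypothetical stand-in for a real transformer's per-position greedy argmax.

```python
# Toy "greedy step": the argmax continuation here depends only on the
# previous token (a stand-in for a real model; the function is invented).
def greedy_next(token: int) -> int:
    return (token * 3 + 1) % 11

def autoregressive_decode(start: int, length: int) -> list[int]:
    # Baseline: one token per step, `length` sequential model calls.
    out = [start]
    for _ in range(length):
        out.append(greedy_next(out[-1]))
    return out

def jacobi_decode(start: int, length: int) -> list[int]:
    # Parallel decoding: initialize the whole sequence, then refresh every
    # position from its predecessor in one sweep, until a fixed point.
    y = [start] + [0] * length
    for _ in range(length):              # at most `length` parallel sweeps
        new = [start] + [greedy_next(t) for t in y[:-1]]
        if new == y:                     # fixed point reached
            break
        y = new
    return y

print(jacobi_decode(2, 6) == autoregressive_decode(2, 6))  # True
```

The fixed point provably matches greedy autoregressive decoding, and convergence takes at most `length` sweeps; the speedup comes when each sweep is one batched parallel model call and the iteration stops early.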
Multimodal Neural Databases
The rise in loosely-structured data available through text, images, and other modalities has called for new ways of querying them. …
Giovanni Trappolini, Andrea Santilli, Emanuele Rodolà, Alon Halevy, Fabrizio Silvestri
PDF · arXiv
Relative representations enable zero-shot latent space communication
Neural networks embed the geometric structure of a data manifold lying in a high-dimensional space into latent representations. …
Luca Moschella, Valentino Maiorca, Marco Fumero, Antonio Norelli, Francesco Locatello, Emanuele Rodolà
PDF · URL · GitHub · Slides
ICLR 2023 notable top 5%
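A minimal sketch of the relative-representation idea from the paper above: describe each sample not by its raw embedding but by its cosine similarities to a fixed set of anchor samples. Because cosine similarity is preserved under orthogonal transformations, two latent spaces that differ by such a transformation yield identical relative representations. The dimensions and anchor count below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_representation(x, anchors):
    # Represent each sample by its cosine similarities to the anchors.
    x = x / np.linalg.norm(x, axis=-1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=-1, keepdims=True)
    return x @ a.T

# Hypothetical: a second model's latent space is a rotation of the first.
d, n_anchors = 16, 10
anchors = rng.normal(size=(n_anchors, d))
x = rng.normal(size=(5, d))
rot = np.linalg.qr(rng.normal(size=(d, d)))[0]   # random orthogonal map

rel_a = relative_representation(x, anchors)
rel_b = relative_representation(x @ rot, anchors @ rot)
print(np.allclose(rel_a, rel_b))  # True: invariant to the rotation
```

This invariance is what lets embeddings from independently trained models be compared zero-shot, as long as both use the same anchor samples.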
Latent Autoregressive Source Separation
Autoregressive models have achieved impressive results over a wide range of domains in terms of generation quality and downstream task …
Emilian Postolache, Giorgio Mariani, Michele Mancusi, Andrea Santilli, Luca Cosmo, Emanuele Rodolà
arXiv · GitHub