Multi-objective Evolutionary Merging Enables Efficient Reasoning Models

Abstract

Reasoning models have demonstrated remarkable capabilities in solving complex problems by leveraging long chains of thought. However, this more deliberate reasoning comes with substantial computational overhead at inference time. The Long-to-Short (L2S) reasoning problem seeks to maintain high accuracy using fewer tokens, but current training-free model merging approaches rely on scalarized, fixed-hyperparameter arithmetic methods that are highly brittle and force suboptimal compromises. To address this gap, we introduce Evo-L2S, a novel framework that formulates L2S reasoning as a multi-objective optimization challenge. By leveraging evolutionary model merging, Evo-L2S explicitly optimizes the trade-off between accuracy and output length to produce a robust Pareto front of merged models. To make this search computationally tractable for large language models, we propose an entropy-based subset sampling technique that drastically reduces the overhead of fitness estimation. Comprehensive experiments across 1.5B, 7B, and 14B parameter scales on six mathematical reasoning benchmarks demonstrate that Evo-L2S can reduce the length of generated reasoning traces by over 50% while preserving, or even improving, the problem-solving accuracy of the original reasoning models.
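To make the core idea concrete, below is a minimal sketch of multi-objective evolutionary merging. This is not the paper's implementation: the per-layer interpolation representation, the surrogate fitness functions, and all helper names are illustrative assumptions. Each candidate is a vector of merge coefficients blending a long-form reasoning model with a shorter-output base model; every generation, the Pareto-optimal candidates under two objectives (higher accuracy, shorter outputs) seed the next population via Gaussian mutation.

```python
# Hedged sketch of multi-objective evolutionary merging (not the paper's code).
# A candidate is a vector of per-layer interpolation coefficients in [0, 1].
# evaluate_accuracy / evaluate_length are hypothetical surrogates: in practice
# each call would decode the merged model on a benchmark subset.
import numpy as np

rng = np.random.default_rng(0)
N_LAYERS, POP, GENS = 8, 24, 30

def evaluate_accuracy(coeffs: np.ndarray) -> float:
    # Placeholder: accuracy peaks when coefficients stay near the reasoning model.
    return float(np.exp(-np.mean((coeffs - 0.7) ** 2)))

def evaluate_length(coeffs: np.ndarray) -> float:
    # Placeholder: relative output length grows with the reasoning model's weight.
    return float(np.mean(coeffs))

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and strictly better on one
    # (objective tuple: maximize accuracy a[0], minimize length a[1]).
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(fits):
    # Indices of candidates not dominated by any other candidate.
    return [i for i, f in enumerate(fits)
            if not any(dominates(g, f) for j, g in enumerate(fits) if j != i)]

pop = rng.uniform(0.0, 1.0, size=(POP, N_LAYERS))
for _ in range(GENS):
    fits = [(evaluate_accuracy(c), evaluate_length(c)) for c in pop]
    elite = pop[pareto_front(fits)]
    # Refill the population by mutating Pareto-optimal parents.
    parents = elite[rng.integers(0, len(elite), size=POP)]
    pop = np.clip(parents + rng.normal(0.0, 0.05, size=parents.shape), 0.0, 1.0)

fits = [(evaluate_accuracy(c), evaluate_length(c)) for c in pop]
for i in pareto_front(fits):
    print(f"accuracy={fits[i][0]:.3f}  relative_length={fits[i][1]:.3f}")
```

The output of the search is not a single compromise model but the whole final Pareto set, from which a deployment point can be picked per accuracy/cost budget.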
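The abstract also mentions entropy-based subset sampling to cut the cost of fitness estimation. The exact procedure is not spelled out here, so the following is only one plausible reading: score each benchmark item by the predictive entropy of the model's answer distribution and evaluate merged candidates only on the most uncertain items, on the assumption that these discriminate best between candidates. The function name and the probability input are hypothetical.

```python
# Hedged sketch of entropy-based subset sampling (an assumed reading, not the
# paper's definition).
import numpy as np

def entropy_subset(answer_probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k benchmark items with the highest predictive entropy.

    answer_probs: (n_items, n_options) distribution over candidate answers per
    item, a stand-in for model confidence. Fitness is then estimated on these
    k items alone instead of the full benchmark.
    """
    ent = -np.sum(answer_probs * np.log(answer_probs + 1e-12), axis=1)
    return np.argsort(ent)[-k:]

# Toy usage: 6 items with 4 answer options each.
rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(4), size=6)
print(entropy_subset(probs, k=3))  # indices of the 3 most uncertain items
```

Restricting evaluation to such a subset reduces decoding cost roughly in proportion to the subset size, which is presumably what makes the evolutionary search tractable at the 7B and 14B scales reported above.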

Publication
arXiv preprint
Mario Iacobelli
Research Intern
Adrian R. Minut
PhD Student
Tommaso Mencattini
Research Intern
Donato Crisostomi
PhD Student
Andrea Santilli
Senior Research Engineer, NVIDIA
Emanuele Rodolà
Full Professor