Interoperable Machine Learning with Universal Representations
Humans would give anything to seamlessly transfer their thoughts and knowledge. Entire civilizations have been built on the slow, painstaking process of passing down information through language, writing, and education. But machine learning (ML) models don’t have this limitation—they encode knowledge in structured representations that, if properly aligned, can be shared, transferred, and repurposed instantly. Yet, today’s AI landscape ignores this potential—each model operates in isolation, requiring costly retraining and vast amounts of data to adapt to new tasks.
NeXuS challenges this paradigm. Instead of treating AI models as independent silos, we introduce a new era of interoperability—one where models collaborate, build on each other’s knowledge, and become reusable building blocks for future innovation.
With the explosion of pretrained models and fine-tuned variants, most ML applications today no longer require training from scratch. Despite this, the dominant practice remains retraining and fine-tuning models over and over, wasting compute, energy, and resources.
NeXuS disrupts this inefficient cycle. By enabling models to be reused, merged, and repurposed without additional training, we promote a sustainable and scalable AI ecosystem. Instead of discarding existing knowledge, we embrace model composability, allowing AI systems to evolve and adapt without redundant training.
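To illustrate what training-free composability can look like, here is a minimal sketch of one well-known recipe: averaging the parameters of fine-tuned variants that share an architecture, in the spirit of "model soups". This is an illustrative example under simplifying assumptions (identical architectures and state-dict keys), not NeXuS's own merging method, and the function name is hypothetical.

```python
import torch
import torch.nn as nn

def merge_state_dicts(state_dicts, weights=None):
    """Merge models sharing an architecture by (weighted) parameter averaging.

    A minimal, illustrative example of training-free model merging; it
    assumes every state dict has identical keys and tensor shapes.
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    return {
        key: sum(w * sd[key] for w, sd in zip(weights, state_dicts))
        for key in state_dicts[0]
    }

# Two fine-tuned "variants" of the same tiny architecture (stand-ins).
model_a, model_b = nn.Linear(4, 2), nn.Linear(4, 2)
merged = merge_state_dicts([model_a.state_dict(), model_b.state_dict()])
model_a.load_state_dict(merged)  # reused and combined, no additional training
```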
Traditional approaches to model transfer focus on parameters, but parameters are task-specific, modality-dependent, and even tied to a particular random initialization. They do not generalize well across architectures, datasets, or domains.
NeXuS takes a different approach. We shift the focus from parameters to representations, treating the internal feature spaces of neural networks as first-class citizens. By aligning these learned representations across models, NeXuS enables them to be reused, merged, and repurposed across architectures, datasets, and domains, without retraining.
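To make "aligning representations" concrete, here is a small, illustrative sketch (under simplifying assumptions, and not necessarily how NeXuS does it): given the features two models produce for the same set of anchor inputs, an orthogonal Procrustes map can rotate one model's feature space onto the other's.

```python
import numpy as np

def procrustes_align(Z_src, Z_tgt):
    """Find the orthogonal map R that best aligns Z_src with Z_tgt.

    Solves min_R ||Z_src @ R - Z_tgt||_F with R orthogonal, via the SVD
    of Z_src.T @ Z_tgt (the classic orthogonal Procrustes solution).
    """
    U, _, Vt = np.linalg.svd(Z_src.T @ Z_tgt)
    return U @ Vt

# Toy check: model B's latent space is a rotated copy of model A's.
rng = np.random.default_rng(0)
Z_a = rng.normal(size=(100, 16))                # model A's features for 100 anchors
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # a random orthogonal map
Z_b = Z_a @ Q                                   # model B: same geometry, rotated
R = procrustes_align(Z_a, Z_b)
print(np.allclose(Z_a @ R, Z_b))                # True: the two spaces line up
```

In this toy setting the two feature spaces differ only by an orthogonal transformation, so the alignment is exact; real models require weaker assumptions and more robust alignment techniques.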
AI research has long been driven by isolated, monolithic models. NeXuS changes this by unlocking a collaborative ecosystem in which models build on one another's knowledge instead of existing in isolation.
By aligning representations and enabling model interoperability without retraining, NeXuS turns existing models into reusable building blocks for future innovation.
The future of AI is not in training ever-larger, disconnected models—it is in making existing models interoperable, modular, and adaptive. NeXuS paves the way for a next-generation AI that truly builds on collective knowledge, just like human science does.
We are actively looking for PhD students and postdoctoral researchers eager to work on training-free model editing techniques, model merging, representation alignment, and more. If you are passionate about breaking the silos in AI and contributing to a future where machine learning models collaborate rather than compete, we encourage you to apply.
Our work spans multiple disciplines, including geometric deep learning, neuroscience-inspired AI, modular deep learning, and multi-modal learning—but we are also open to new perspectives and interdisciplinary contributions. If you are excited by these challenges and want to help redefine how AI models communicate and evolve, get in touch!