Explanatory learning: Beyond empiricism in neural networks

Abstract

At the crossroads of Program Synthesis (PS) and Meta-Learning (ML), we introduce Explanatory Learning (EL) as the task of automatically discovering the symbolic explanation that enables sensible few-shot predictions on a novel environment, given experience on other environments. Unlike PS, the program (explanation) interpreter in EL is not given but must be learned from a limited collection of explanation-observation associations. Unlike ML, EL prescribes no adaptation at test time, seeking generalization in the broad meanings the learned interpreter attributes to symbols. To exemplify the challenges of EL, we present the Odeen benchmark, which can also serve the PS and ML paradigms. Finally, we introduce Critical Rationalist Networks (CRNs), a deep learning approach to EL aligned with the Popperian view of knowledge acquisition. CRNs express several desired properties by construction: they are truly explainable, can adjust their processing at test time for harder inferences, and can offer strong confidence guarantees on their predictions. Using Odeen as a testbed, we show how CRNs outperform empiricist end-to-end approaches of similar size and architecture (Transformers) in discovering explanations for unseen environments.

Publication
arXiv preprint arXiv:2201.10222
Antonio Norelli
Alumni

PhD student in AI @ Sapienza University of Rome, CS department. I love teaching, especially to machines.

Giorgio Mariani
PhD Student

Refactor specialist

Luca Moschella
PhD Student

PhD Student @SapienzaRoma CS | Intern @NVIDIA Toronto Lab | @NNAISENSE

Andrea Santilli
PhD Student

PhD student passionate about natural language processing, representation learning, and machine intelligence.

Simone Melzi
Assistant Professor

Assistant Professor at the University of Milano-Bicocca

Emanuele Rodolà
Full Professor