Uncovering a Universal Abstract Algorithm for Modular Addition in Neural Networks

Citations: 0
Rank: #1830 of 5858 papers in NeurIPS 2025
Authors: 5
Data points: 4

Abstract

We propose a testable universality hypothesis, asserting that seemingly disparate neural network solutions observed in the simple task of modular addition actually reflect a common abstract algorithm. While prior work interpreted variations in neuron-level representations as evidence for distinct algorithms, we demonstrate, through multi-level analyses spanning neurons, neuron clusters, and entire networks, that multilayer perceptrons and transformers universally implement the abstract algorithm we call the approximate Chinese Remainder Theorem. Crucially, we introduce approximate cosets and show that neurons activate exclusively on them. Furthermore, our theory extends to deep neural networks (DNNs): it predicts that universally learned solutions in DNNs with trainable embeddings or more than one hidden layer require only $\mathcal{O}(\log n)$ features, a result we empirically confirm. This work thus provides the first theory-backed interpretation of \textit{multilayer} networks solving modular addition. It advances generalizable interpretability and extends the testable universality hypothesis to group multiplication beyond modular addition.
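To make the abstract's terminology concrete, the sketch below (our illustration, not code from the paper) shows the exact Chinese Remainder Theorem after which the paper's approximate CRT algorithm is named: for coprime p and q, addition mod n = pq splits into independent additions mod p and mod q. The residue classes mod p are precisely the cosets of the subgroup pZ_n, the exact analogue of the paper's approximate cosets. The function name crt_combine and the choice n = 35 are ours.

```python
# Illustrative sketch (ours, not the paper's code): the *exact* Chinese
# Remainder Theorem (CRT) that the "approximate CRT" is named after.
# For coprime p and q, addition mod n = p*q decomposes into independent
# additions mod p and mod q; crt_combine recovers the sum mod n.
from math import gcd


def crt_combine(r_p: int, r_q: int, p: int, q: int) -> int:
    """Recover x mod (p*q) from the residues x mod p and x mod q."""
    assert gcd(p, q) == 1
    inv_q = pow(q, -1, p)  # modular inverse of q mod p (Python 3.8+)
    inv_p = pow(p, -1, q)  # modular inverse of p mod q
    return (r_p * q * inv_q + r_q * p * inv_p) % (p * q)


p, q = 5, 7              # coprime factors; n = 35 is our arbitrary choice
n = p * q
a, b = 23, 31

# Add componentwise in Z_p x Z_q ...
s_p = (a % p + b % p) % p
s_q = (a % q + b % q) % q

# ... and recombine: the result matches ordinary addition mod n.
assert crt_combine(s_p, s_q, p, q) == (a + b) % n
print(crt_combine(s_p, s_q, p, q))  # 19 == (23 + 31) % 35
```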

Citation History

Jan 25, 2026: 0
Jan 26, 2026: 0
Jan 26, 2026: 0
Jan 28, 2026: 0