Interpretable Global Minima of Deep ReLU Neural Networks on Sequentially Separable Data
arXiv:2405.07098 · 4 citations · 2 authors · #474 of 5858 papers in NeurIPS 2025
Abstract
We explicitly construct zero-loss neural network classifiers. We write the weight matrices and bias vectors in terms of cumulative parameters, which determine truncation maps acting recursively on input space. The training data configurations considered are (i) sufficiently small, well-separated clusters corresponding to each class, and (ii) equivalence classes which are sequentially linearly separable. In the best case, for $Q$ classes of data in $\mathbb{R}^M$, global minimizers can be described with $Q(M+2)$ parameters.
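As a rough illustration of the parameter count, the sketch below classifies $Q$ sequentially linearly separable classes in $\mathbb{R}^M$ with one hyperplane normal ($M$ numbers), one bias, and one margin per class, i.e. $Q(M+2)$ parameters in total. The names `classify`, `W`, `b`, `c` and the first-fired decision rule are assumptions for illustration only, not the authors' truncation-map construction, which is more structured.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def classify(x, W, b, c):
    """Assign x to the first class whose ReLU unit fires.

    For sequentially linearly separable data, hyperplane q separates
    class q from classes q+1, ..., Q-1 among the points not yet peeled
    off, so the first positive score identifies the class.

    W : (Q, M) hyperplane normals -- M parameters per class
    b : (Q,)   biases             -- 1 parameter per class
    c : (Q,)   margins            -- 1 parameter per class
    Total: Q * (M + 2) parameters, matching the count in the abstract.
    """
    scores = relu(W @ x + b - c)        # one ReLU unit per class
    fired = np.flatnonzero(scores > 0)
    return int(fired[0]) if fired.size else W.shape[0] - 1

# Toy usage: Q = 3 classes in R^2, separated along the first coordinate.
W = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
b = np.array([-2.0, -1.0, 0.0])   # class 0: x1 > 2, class 1: x1 > 1, ...
c = np.zeros(3)
print(classify(np.array([2.5, 0.3]), W, b, c))  # -> 0
print(classify(np.array([1.5, 0.3]), W, b, c))  # -> 1
print(classify(np.array([0.5, 0.3]), W, b, c))  # -> 2
```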