From Kolmogorov to Cauchy: Shallow XNet Surpasses KANs


Abstract

We study a shallow variant of XNet, a neural architecture whose activation functions are derived from the Cauchy integral formula. While prior work focused on deep variants, we show that even a single-layer XNet exhibits near-exponential approximation rates—exceeding the polynomial bounds of MLPs and spline-based networks such as Kolmogorov–Arnold Networks (KANs). Empirically, XNet reduces approximation error by over 600× on discontinuous functions, achieves up to 20,000× lower residuals in physics-informed PDEs, and improves policy accuracy and sample efficiency in PPO-based reinforcement learning—while maintaining comparable or better computational efficiency than KAN baselines. These results demonstrate that expressive approximation can stem from principled activation design rather than depth alone, offering a compact, theoretically grounded alternative for function approximation, scientific computing, and control.
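For context, the activation design referenced in the abstract rests on the Cauchy integral formula. The sketch below states the formula (a standard result) and indicates how discretizing it motivates rational activation units; the specific parametric form shown is an assumption based on how Cauchy-kernel activations are described in prior XNet work, not this paper's exact definition.

For a function $f$ holomorphic on a domain $D$ with boundary contour $\partial D$ and a point $z$ in the interior,
$$
f(z) \;=\; \frac{1}{2\pi i} \oint_{\partial D} \frac{f(\xi)}{\xi - z}\, d\xi
\;\approx\; \sum_{k=1}^{n} \frac{c_k}{\xi_k - z},
$$
where the right-hand side is a quadrature discretization of the contour integral with weights $c_k$ and nodes $\xi_k$. Each summand is a Cauchy kernel, which suggests learnable rational units of the (assumed) form $\phi(x) = \dfrac{\lambda_1 x + \lambda_2}{x^2 + d^2}$ with trainable parameters $\lambda_1, \lambda_2, d$; a single layer of such units is the kind of shallow XNet the abstract refers to.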
