LEVIS: Large Exact Verifiable Input Spaces for Neural Networks

ICML 2025

Abstract

The robustness of neural networks is crucial in safety-critical applications, where identifying a reliable input space is essential for effective model selection, robustness evaluation, and the development of reliable control strategies. Most existing robustness verification methods assess the worst-case output under the assumption that the input space is known. However, precisely identifying a verifiable input space $\mathcal{C}$, where no adversarial examples exist, is challenging due to the possible high dimensionality, discontinuity, and non-convex nature of the input space. To address this challenge, we propose a novel framework, **LEVIS**, comprising **LEVIS-$\alpha$** and **LEVIS-$\beta$**. **LEVIS-$\alpha$** identifies a single, large verifiable ball that intersects at least two boundaries of a bounded region $\mathcal{C}$, while **LEVIS-$\beta$** systematically captures the entirety of the verifiable space by integrating multiple verifiable balls. Our contributions are fourfold: we introduce a verification framework, **LEVIS**, incorporating two optimization techniques for computing nearest and directional adversarial points based on mixed-integer programming (MIP); to enhance scalability, we integrate complementary constrained (CC) optimization with a reduced MIP formulation, achieving up to a 17-fold reduction in runtime by approximating the verifiable region in a principled way; we provide a theoretical analysis characterizing the properties of the verifiable balls obtained through **LEVIS-$\alpha$**; and we validate our approach across diverse applications, including electrical power flow regression and image classification, demonstrating performance improvements and visualizing the geometric properties of the verifiable region.
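The nearest-adversarial-point subproblem mentioned in the abstract can be illustrated with a small mixed-integer program. The sketch below is not the authors' formulation: it encodes a toy two-layer ReLU classifier with standard big-M constraints and minimizes the $L_\infty$ distance from a center point $x_0$ to a misclassified input, so the optimal distance certifies a ball around $x_0$ that contains no adversarial example. The network weights, box bounds, big-M constant, norm choice, and solver (python-mip with CBC) are all illustrative assumptions.

```python
# Hedged sketch: find the nearest (L-infinity) adversarial point of a tiny
# ReLU classifier via MIP.  The optimal eps* certifies that the L-infinity
# ball of radius eps* around x0 contains no adversarial example.  Not the
# paper's exact formulation; weights and solver are illustrative assumptions.
from mip import Model, xsum, minimize, BINARY, OptimizationStatus

# Toy 2-3-2 ReLU network with hand-picked weights (assumption).
W1 = [[1.0, -0.5], [0.3, 0.8], [-0.7, 0.2]]
b1 = [0.1, -0.2, 0.05]
W2 = [[0.6, -0.4, 0.9], [-0.3, 0.7, 0.1]]
b2 = [0.0, 0.1]

def forward(x):
    h = [max(0.0, sum(W1[i][j] * x[j] for j in range(2)) + b1[i]) for i in range(3)]
    return [sum(W2[k][i] * h[i] for i in range(3)) + b2[k] for k in range(2)]

x0 = [0.2, -0.1]                    # center of the candidate verifiable ball
y0 = forward(x0)
c0 = 0 if y0[0] >= y0[1] else 1     # class predicted at the center
M = 100.0                           # big-M; must upper-bound |pre-activations| on the box

m = Model()
eps = m.add_var(lb=0.0, ub=2.0)                     # L-infinity perturbation radius
x = [m.add_var(lb=-2.0, ub=2.0) for _ in range(2)]  # perturbed input (bounded box)
h = [m.add_var(lb=0.0, ub=M) for _ in range(3)]     # ReLU outputs
d = [m.add_var(var_type=BINARY) for _ in range(3)]  # ReLU on/off indicators

for j in range(2):                                  # |x - x0| <= eps, coordinate-wise
    m += x[j] - x0[j] <= eps
    m += x0[j] - x[j] <= eps

for i in range(3):                                  # big-M encoding of h = max(0, z)
    z = xsum(W1[i][j] * x[j] for j in range(2)) + b1[i]
    m += h[i] >= z
    m += h[i] <= z + M * (1 - d[i])
    m += h[i] <= M * d[i]

y = [xsum(W2[k][i] * h[i] for i in range(3)) + b2[k] for k in range(2)]
m += y[1 - c0] >= y[c0]                             # the perturbed point is misclassified

m.objective = minimize(eps)
if m.optimize() == OptimizationStatus.OPTIMAL:
    print("nearest adversarial point:", [v.x for v in x])
    print("certified L-infinity radius around x0:", eps.x)
```

In the same spirit, LEVIS-$\beta$ would combine balls obtained from multiple centers to approximate the full verifiable region, as the abstract describes.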
