Fine-grained Analysis and Faster Algorithms for Iteratively Solving Linear Systems

0 citations · ranked #3347 of 5858 papers in NeurIPS 2025

Abstract

Despite being a key bottleneck in many machine learning tasks, the cost of solving large linear systems has proven challenging to quantify due to problem-dependent quantities such as condition numbers. To tackle this, we consider a fine-grained notion of complexity for solving linear systems, motivated by applications where the data exhibits low-dimensional structure, including spiked covariance models and kernel machines, and where the linear system is explicitly regularized, as in ridge regression. Concretely, let $\kappa_\ell$ be the ratio between the $\ell$th largest and the smallest singular value of an $n\times n$ matrix $A$. We give a stochastic algorithm, based on the Sketch-and-Project paradigm, that solves the linear system $Ax=b$ in time $\tilde O(\kappa_\ell\cdot n^2\log(1/\epsilon))$ for any $\ell = O(n^{0.729})$. This is a direct improvement over preconditioned conjugate gradient, and it provides a stronger separation between stochastic linear solvers and algorithms that access $A$ only through matrix-vector products. Our main technical contribution is a new analysis of the first and second moments of the random projection matrix that arises in Sketch-and-Project.
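
To make the Sketch-and-Project paradigm concrete, below is a minimal sketch of its generic iteration for a consistent system $Ax=b$: at each step a random sketching matrix $S$ is drawn and the iterate is projected onto the solution set of the sketched system $S^\top A x = S^\top b$. The function name, the Gaussian sketch, and the parameters (sketch_size, num_iters) are illustrative assumptions, not the paper's specific algorithm, whose sketch distribution and size are chosen to exploit the spectral structure measured by $\kappa_\ell$.

```python
import numpy as np

def sketch_and_project(A, b, sketch_size=50, num_iters=2000, seed=0):
    """Generic Sketch-and-Project iteration for a consistent system Ax = b.

    Each step draws a random sketch S and projects the current iterate
    onto {x : S^T A x = S^T b}. This illustrates the general paradigm only;
    the paper's algorithm uses a tailored sketch to achieve the
    O~(kappa_ell * n^2 * log(1/eps)) bound.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(num_iters):
        # Gaussian sketch (one of many admissible choices).
        S = rng.standard_normal((A.shape[0], sketch_size))
        SA = S.T @ A                  # sketched matrix, shape (s, n)
        r = SA @ x - S.T @ b          # sketched residual
        # Projection step: x <- x - SA^T (SA SA^T)^+ r
        x -= SA.T @ np.linalg.lstsq(SA @ SA.T, r, rcond=None)[0]
    return x

if __name__ == "__main__":
    # Tiny usage example on a random consistent system.
    rng = np.random.default_rng(1)
    n = 200
    A = rng.standard_normal((n, n))
    x_true = rng.standard_normal(n)
    b = A @ x_true
    x_hat = sketch_and_project(A, b, sketch_size=40, num_iters=3000)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```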

Citation History

0 citations recorded between Jan 25 and Jan 28, 2026.