Leave-One-Out Stable Conformal Prediction

ICLR 2025
Abstract

Conformal prediction (CP) is an important tool for distribution-free predictive uncertainty quantification. Yet, a major challenge is to balance computational efficiency and prediction accuracy, particularly when handling multiple prediction requests. We propose Leave-One-Out Stable Conformal Prediction (LOO-StabCP), a novel method to speed up full conformal prediction using algorithmic stability without sample splitting. By leveraging leave-one-out stability, our method is much faster at handling a large number of prediction requests than the existing method RO-StabCP, which is based on replace-one stability. We derived stability bounds for several popular machine learning tools: regularized loss minimization (RLM) and stochastic gradient descent (SGD), as well as kernel methods, neural networks, and bagging. Our method is theoretically justified and demonstrates superior numerical performance on synthetic and real-world data. We applied our method to a screening problem, where its effective exploitation of training data led to improved test power compared to a state-of-the-art method based on split conformal prediction.
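To make the abstract's idea concrete, the sketch below illustrates the generic stability-corrected conformal prediction recipe: fit the model once on the training data, then inflate the conformal quantile by an algorithmic-stability bound so that no refitting is needed per candidate label or per test point. This is a minimal, hedged sketch of the general idea, not the paper's exact LOO-StabCP algorithm; the ridge model, the stability bound `tau`, and the interval form are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge


def stability_conformal_interval(X_train, y_train, x_test, alpha=0.1, tau=0.05):
    """Stability-corrected conformal interval (illustrative sketch).

    `tau` stands in for a leave-one-out stability bound of the fitted
    predictor; the paper derives such bounds for RLM and SGD.  Here it is
    simply a user-supplied constant.
    """
    # Fit once on the training data; no refitting per test point or candidate label.
    model = Ridge(alpha=1.0).fit(X_train, y_train)

    # Absolute residuals as nonconformity scores on the training data.
    resid = np.abs(y_train - model.predict(X_train))
    n = len(y_train)

    # Conformal quantile at level 1 - alpha, inflated by the stability bound
    # so the full-conformal-style guarantee is (approximately) retained
    # without retraining.
    k = int(np.ceil((1 - alpha) * (n + 1)))
    q = np.sort(resid)[min(k, n) - 1] + 2 * tau

    pred = model.predict(np.asarray(x_test).reshape(1, -1))[0]
    return pred - q, pred + q


# Example usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
lo, hi = stability_conformal_interval(X, y, X[0], alpha=0.1, tau=0.05)
print(f"90% prediction interval: [{lo:.3f}, {hi:.3f}]")
```

The point of the stability correction is the single model fit: plain full conformal would refit for every candidate response value, whereas the stability bound absorbs how much the fitted predictor could change, trading a slightly wider interval for a large computational saving.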
