A Difference-of-Convex Functions Approach to Energy-Based Iterative Reasoning

NeurIPS 2025 · 3 authors

Abstract

While energy-based models have recently proven to be a powerful framework for learning to reason with neural networks, their practical application is still limited by computational cost: existing methods for energy-based iterative reasoning rely on expensive optimization routines during training and, especially, during inference. Moreover, these routines may not always converge to minimal energy states, which can lead to suboptimal reasoning. To address these limitations, we propose a novel and efficient algorithm for energy-based iterative reasoning based on a difference-of-convex (DC) functions approach. Our algorithm achieves a significant speedup over prior methods while offering theoretical guarantees of convergence to locally minimal energy states. In addition, it matches or exceeds state-of-the-art performance on continuous reasoning tasks, as demonstrated by experiments on multiple continuous algorithmic reasoning benchmarks. Our method thus offers a leap in computational efficiency, enabling faster inference with theoretical guarantees and unlocking the potential of energy-based models for iterative reasoning applications.
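
As a rough illustration of the general idea (not the paper's specific algorithm), the classic difference-of-convex algorithm (DCA) minimizes an energy E(y) = g(y) − h(y), with g and h convex, by repeatedly linearizing h at the current iterate and exactly minimizing the resulting convex surrogate; each step is guaranteed not to increase the energy. The sketch below applies this scheme to a toy quadratic energy. The matrices A, b and the regularizer mu are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Minimal sketch of the classic difference-of-convex algorithm (DCA),
# not the paper's specific method.
#
# Toy energy (assumed for illustration only):
#   g(y) = 0.5 * ||A y - b||^2   (convex quadratic)
#   h(y) = 0.5 * mu * ||y||^2    (convex quadratic)
# so E(y) = g(y) - h(y) is a difference of convex functions.

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
mu = 0.1  # chosen small so A^T A - mu*I stays positive definite and E is bounded below

def grad_h(y):
    """Gradient of the convex part h that is subtracted in E = g - h."""
    return mu * y

def argmin_surrogate(y_t):
    """Minimize the convex surrogate g(y) - <grad_h(y_t), y>.
    Setting the gradient to zero gives (A^T A) y = A^T b + grad_h(y_t)."""
    rhs = A.T @ b + grad_h(y_t)
    return np.linalg.solve(A.T @ A, rhs)

def energy(y):
    return 0.5 * np.sum((A @ y - b) ** 2) - 0.5 * mu * np.sum(y ** 2)

y = np.zeros(4)
for t in range(50):
    y_next = argmin_surrogate(y)
    # DCA is monotone: energy(y_next) <= energy(y); stop once the decrease stalls.
    if abs(energy(y) - energy(y_next)) < 1e-10:
        y = y_next
        break
    y = y_next

print("final energy:", energy(y))
```

In this sketch the surrogate has a closed-form minimizer because both g and h are quadratic; in an energy-based reasoning setting the surrogate would typically be minimized numerically, but the monotone-descent property of the DCA iteration is what provides the kind of convergence guarantee toward locally minimal energy states that the abstract refers to.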
