OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling

Citations: 0 · Rank: #2033 of 3827 ICLR 2025 papers · Authors: 10

Abstract

Large language models (LLMs) have exhibited strong problem-solving abilities in mathematical reasoning. Solving realistic optimization (OPT) problems in application scenarios requires advanced applied-mathematics ability. However, current OPT benchmarks, which cover only linear programming, are far from complex realistic situations. In this work, we propose OptiBench, a benchmark for end-to-end optimization problem solving with human-readable inputs and outputs. OptiBench contains rich optimization problems, including linear and nonlinear programming with or without tabular data, which can comprehensively evaluate LLMs' solving ability. In our benchmark, LLMs are required to call a code solver to provide precise numerical answers. Furthermore, to alleviate the data scarcity for optimization problems and to bridge the gap between small-scale open-source LLMs (e.g., Llama-3-8b) and closed-source LLMs (e.g., GPT-4), we further propose a data synthesis method named ReSocratic. Unlike general data synthesis methods that proceed from questions to answers, ReSocratic first incrementally synthesizes formatted optimization demonstrations with mathematical formulations step by step, and then back-translates the generated demonstrations into questions. Based on this, we synthesize the ReSocratic-29k dataset. We further conduct supervised fine-tuning with ReSocratic-29k on multiple open-source models. Experimental results show that ReSocratic-29k significantly improves the performance of open-source models.
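
The solver-calling protocol can be pictured with a toy example. Below is a minimal sketch, assuming a SciPy-based solver; the profit-maximization problem and all names in it are illustrative, not taken from OptiBench itself. The model is expected to emit code like this and report the numerical optimum.

```python
# Toy nonlinear program of the kind OptiBench targets:
# maximize profit p(x, y) = 40x + 30y - 0.5x^2
# subject to 2x + y <= 100 and x, y >= 0.
from scipy.optimize import minimize

def neg_profit(v):
    x, y = v
    return -(40 * x + 30 * y - 0.5 * x ** 2)  # negate: SciPy minimizes

constraints = [
    {"type": "ineq", "fun": lambda v: 100 - (2 * v[0] + v[1])},  # 2x + y <= 100
]
bounds = [(0, None), (0, None)]  # x >= 0, y >= 0

result = minimize(neg_profit, x0=[1.0, 1.0], bounds=bounds, constraints=constraints)
x, y = result.x
print(f"x = {x:.2f}, y = {y:.2f}, max profit = {-result.fun:.2f}")
```

Here the optimum is x = 0, y = 100 with profit 3000: the precise numerical answer the benchmark scores, rather than free-form text.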
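The reverse synthesis direction can be sketched the same way. The outline below is a hypothetical reading of the answer-to-question flow the abstract describes, assuming a generic llm(prompt) completion function; the step names and prompts are assumptions, not the paper's actual pipeline.

```python
def resocratic_sample(llm, seed_scenario: str) -> dict:
    """Hypothetical ReSocratic-style sample: demonstration first, question last."""
    demo = seed_scenario
    # 1) Incrementally grow a formatted demonstration, one step at a time:
    #    variables, then constraints, then the objective, then solver code.
    for step in ["variables", "constraints", "objective", "solver code"]:
        demo += "\n" + llm(f"Extend this optimization demonstration with {step}:\n{demo}")
    # 2) Back-translate the finished demonstration into a natural-language question.
    question = llm(f"Write the word problem that this demonstration solves:\n{demo}")
    return {"question": question, "demonstration": demo}  # one SFT training pair
```

Proceeding demonstration-first keeps every synthesized question grounded in a formulation that is already known to be well-posed, which is the stated contrast with question-to-answer synthesis.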
