Evaluating Program Semantics Reasoning with Type Inference in System $F$

1 Citation · Ranked #1153 of 5858 papers in NeurIPS 2025 · 4 Authors · 3 Data Points

Abstract

Large Language Models (LLMs) are increasingly integrated into the software engineering ecosystem. Their test-time compute reasoning capabilities hold significant promise for understanding program logic and semantics beyond mere token recognition. However, current benchmarks for evaluating reasoning LLMs on code lack a formal, program-centric deductive framework to ensure sound evaluation, and thus cannot assess whether models genuinely reason about program semantics or merely exploit superficial associations between natural language and code tokens. To bridge this gap, we introduce TF-Bench, a benchmark designed to evaluate LLM reasoning based on type inference in System F, a task we refer to as *program semantics reasoning*. By employing verified transformations to remove semantically irrelevant natural language, we construct TF-Bench_pure, a purely semantics-driven variant of TF-Bench. Our analysis reveals substantial limitations in state-of-the-art LLMs, with the best-performing LLM (Claude-3.7-sonnet) achieving only $55.85\%$ accuracy on TF-Bench_pure. Additionally, we propose two novel metrics to assess the robustness and effectiveness of extended reasoning, underscoring critical limitations in current LLM capabilities and highlighting essential directions for future research.
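To make the task concrete, the following Haskell snippet is a minimal sketch of what a type-inference item in this style might look like. The function, identifier names, and answer format are illustrative assumptions, not items taken from TF-Bench; the "pure" variant only illustrates the idea of stripping semantically irrelevant natural language from identifiers.

```haskell
-- Hypothetical TF-Bench-style item (illustrative sketch, not benchmark content).
-- Task: given the function body, infer its most general polymorphic type.
module Example where

-- Original item: descriptive identifiers leak intent through natural language.
pairWith transform items = map (\x -> (x, transform x)) items
-- Expected answer: pairWith :: (a -> b) -> [a] -> [(a, b)]

-- TF-Bench_pure-style variant: identifiers are renamed to semantically
-- irrelevant names, so the type must be recovered from program semantics alone.
f1 g xs = map (\x -> (x, g x)) xs
-- Expected answer: f1 :: (a -> b) -> [a] -> [(a, b)]
```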

Citation History

Jan 26, 2026: 1
Jan 27, 2026: 1