TypyBench: Evaluating LLM Type Inference for Untyped Python Repositories

2 Citations
#504 of 3340 papers in ICML 2025
7 Authors

Abstract

Type inference for dynamic languages like Python is a persistent challenge in software engineering. While large language models (LLMs) have shown promise in code understanding, their type inference capabilities remain underexplored. We introduce TypyBench, a benchmark designed to evaluate LLMs' type inference across entire Python repositories. TypyBench features two novel metrics: TypeSim, which captures nuanced semantic relationships between predicted and ground truth types, and TypeCheck, which assesses type consistency across codebases. Our evaluation of various LLMs on a curated dataset of 50 high-quality Python repositories reveals that, although LLMs achieve decent TypeSim scores, they struggle with complex nested types and exhibit significant type consistency errors. These findings suggest that future research should shift focus from improving type similarity to addressing repository-level consistency. TypyBench provides a foundation for this new direction, offering insights into model performance across different type complexities and usage contexts. Our code and data are available at https://github.com/typybench/typybench.
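To make the evaluation setting concrete, the sketch below shows the kind of task the benchmark poses: an LLM predicts annotations for an originally untyped function, and the predictions are scored against ground-truth types. The example snippet, the toy_type_similarity helper, and its scoring rule are illustrative assumptions only; they are not the paper's TypeSim implementation, which captures richer semantic relationships between types.

# Illustrative sketch (not the paper's metric): compare predicted vs.
# ground-truth annotations for a function that is untyped in the repository.

def toy_type_similarity(predicted: str, ground_truth: str) -> float:
    """Return 1.0 for an exact match, 0.5 when only the outer type
    constructor matches (e.g. dict[str, int] vs. dict[str, float]),
    and 0.0 otherwise."""
    if predicted == ground_truth:
        return 1.0
    outer = lambda t: t.split("[", 1)[0]
    if outer(predicted) == outer(ground_truth):
        return 0.5
    return 0.0

# Untyped source as it might appear in the repository:
#   def count_words(lines):
#       return {w: lines.count(w) for w in set(lines)}

ground_truth = {"lines": "list[str]", "return": "dict[str, int]"}
predicted    = {"lines": "list[str]", "return": "dict[str, float]"}  # hypothetical LLM output

score = sum(
    toy_type_similarity(predicted[slot], ground_truth[slot]) for slot in ground_truth
) / len(ground_truth)
print(f"toy similarity: {score:.2f}")  # 0.75: one exact match, one partial match

TypeCheck, by contrast, targets repository-level consistency of the predicted annotations rather than per-annotation similarity, which is where the abstract reports LLMs still fall short.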
