Measuring what Matters: Construct Validity in Large Language Model Benchmarks

Abstract

Evaluating large language models (LLMs) is crucial for both assessing their capabilities and identifying safety or robustness issues prior to deployment. Reliably measuring abstract and complex phenomena such as 'safety' and 'robustness' requires strong construct validity, that is, having measures that represent what matters to the phenomenon. With a team of 29 expert reviewers, we conduct a systematic review of 445 LLM benchmarks from leading conferences in natural language processing and machine learning. Across the reviewed articles, we find patterns related to the measured phenomena, tasks, and scoring metrics which undermine the validity of the resulting claims. To address these shortcomings, we provide eight key recommendations and detailed actionable guidance to researchers and practitioners in developing LLM benchmarks.
