Distinguishing the Knowable from the Unknowable with Language Models

ICML 2024

Abstract

We study the feasibility of identifying epistemic uncertainty (reflecting a lack of knowledge), as opposed to aleatoric uncertainty (reflecting entropy in the underlying distribution), in the outputs of large language models (LLMs) over free-form text. In the absence of ground-truth probabilities, we explore a setting where, in order to (approximately) disentangle a given LLM's uncertainty, a significantly larger model stands in as a proxy for the ground truth. We show that small linear probes trained on the embeddings of frozen, pretrained models accurately predict when larger models will be more confident at the token level and that probes trained on one text domain generalize to others. Going further, we propose a fully unsupervised method that achieves non-trivial accuracy on the same task. Taken together, we interpret these results as evidence that LLMs naturally contain internal representations of different types of uncertainty that could potentially be leveraged to devise more informative indicators of model confidence in diverse practical settings. Code can be found at: https://github.com/KempnerInstitute/llm_uncertainty
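To make the probing setup concrete, here is a minimal sketch of the general idea: a linear probe trained on a frozen small model's hidden states to predict, per token, whether a much larger model is substantially more confident. This is not the authors' exact pipeline; the model names, the entropy-gap threshold, and the training loop are illustrative assumptions (see the repository above for the real implementation).

```python
# Sketch only: linear probe on frozen small-model embeddings, labels derived from
# the entropy gap between a small and a large model at each token position.
# SMALL/LARGE and GAP_THRESHOLD are placeholder choices, not the paper's settings.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

SMALL, LARGE = "gpt2", "gpt2-xl"   # stand-ins; the paper uses much larger LLMs
GAP_THRESHOLD = 1.0                # nats; hypothetical cutoff for "more confident"

tok = AutoTokenizer.from_pretrained(SMALL)
small = AutoModelForCausalLM.from_pretrained(SMALL).eval()
large = AutoModelForCausalLM.from_pretrained(LARGE).eval()

def token_features_and_labels(text: str):
    """Per-token features (small model's last hidden state) and binary labels
    (1 if the large model's next-token entropy is lower by more than GAP_THRESHOLD)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        s_out = small(ids, output_hidden_states=True)
        l_out = large(ids)

    def entropy(logits):  # per-position predictive entropy in nats
        logp = F.log_softmax(logits, dim=-1)
        return -(logp.exp() * logp).sum(-1)

    h_small = entropy(s_out.logits[0])   # (seq_len,)
    h_large = entropy(l_out.logits[0])   # (seq_len,)
    feats = s_out.hidden_states[-1][0]   # (seq_len, d_model), frozen embeddings
    labels = (h_small - h_large > GAP_THRESHOLD).float()
    return feats, labels

# The probe itself is a single affine map from the frozen embedding to a logit.
probe = torch.nn.Linear(small.config.hidden_size, 1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

def train_step(text: str):
    feats, labels = token_features_and_labels(text)
    logits = probe(feats).squeeze(-1)
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Because only the tiny linear layer is trained while both language models stay frozen, any predictive power the probe achieves must come from information already present in the small model's embeddings, which is the sense in which the abstract speaks of internal representations of uncertainty.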
