2025 "model calibration" Papers

13 papers found

Calibrating LLMs with Information-Theoretic Evidential Deep Learning

Yawei Li, David Rügamer, Bernd Bischl et al.

ICLR 2025 (poster) · arXiv:2502.06351
3 citations

Confidence Elicitation: A New Attack Vector for Large Language Models

Brian Formento, Chuan Sheng Foo, See-Kiong Ng

ICLR 2025 (poster) · arXiv:2502.04643
2 citations

HaDeMiF: Hallucination Detection and Mitigation in Large Language Models

Xiaoling Zhou, Mingjie Zhang, Zhemg Lee et al.

ICLR 2025 (poster)
9 citations

Mind the Uncertainty in Human Disagreement: Evaluating Discrepancies Between Model Predictions and Human Responses in VQA

Jian Lan, Diego Frassinelli, Barbara Plank

AAAI 2025 (paper) · arXiv:2410.02773
3 citations

NeuralSurv: Deep Survival Analysis with Bayesian Uncertainty Quantification

Mélodie Monod, Alessandro Micheli, Samir Bhatt

NEURIPS 2025 (poster) · arXiv:2505.11054

Performative Risk Control: Calibrating Models for Reliable Deployment under Performativity

Victor Li, Baiting Chen, Yuzhen Mao et al.

NEURIPS 2025 (poster) · arXiv:2505.24097

Predictive Uncertainty Quantification for Bird's Eye View Segmentation: A Benchmark and Novel Loss Function

Linlin Yu, Bowen Yang, Tianhao Wang et al.

ICLR 2025 (poster) · arXiv:2405.20986
3 citations

SteerConf: Steering LLMs for Confidence Elicitation

Ziang Zhou, Tianyuan Jin, Jieming Shi et al.

NEURIPS 2025 (poster) · arXiv:2503.02863
6 citations

The Illusion of Progress? A Critical Look at Test-Time Adaptation for Vision-Language Models

Lijun Sheng, Jian Liang, Ran He et al.

NEURIPS 2025 (poster) · arXiv:2506.24000
1 citation

Towards Certification of Uncertainty Calibration under Adversarial Attacks

Cornelius Emde, Francesco Pinto, Thomas Lukasiewicz et al.

ICLR 2025 (poster) · arXiv:2405.13922
2 citations

Towards Understanding Why Label Smoothing Degrades Selective Classification and How to Fix It

Guoxuan Xia, Olivier Laurent, Gianni Franchi et al.

ICLR 2025 (poster) · arXiv:2403.14715
7 citations

Uncertainty Weighted Gradients for Model Calibration

Jinxu Lin, Linwei Tao, Minjing Dong et al.

CVPR 2025 (poster) · arXiv:2503.22725
3 citations

Varying Shades of Wrong: Aligning LLMs with Wrong Answers Only

Jihan Yao, Wenxuan Ding, Shangbin Feng et al.

ICLR 2025 (poster) · arXiv:2410.11055
4 citations