NEURIPS 2025 "object hallucinations" Papers
2 papers found
Enhancing Vision-Language Model Reliability with Uncertainty-Guided Dropout Decoding
Yixiong Fang, Ziran Yang, Zhaorun Chen et al.
NEURIPS 2025 · poster · arXiv:2412.06474
13 citations
The Mirage of Performance Gains: Why Contrastive Decoding Fails to Mitigate Object Hallucinations in MLLMs?
Hao Yin, Guangzong Si, Zilei Wang
NEURIPS 2025 · poster · arXiv:2504.10020