ICLR 2025 "hallucination mitigation" Papers
7 papers found
Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent
Yangning Li, Yinghui Li, Xinyu Wang et al.
ICLR 2025 poster · arXiv:2411.02937 · 54 citations
DAMO: Decoding by Accumulating Activations Momentum for Mitigating Hallucinations in Vision-Language Models
Kaishen Wang, Hengrui Gu, Meijun Gao et al.
ICLR 2025 poster · 7 citations
Differential Transformer
Tianzhu Ye, Li Dong, Yuqing Xia et al.
ICLR 2025 poster · arXiv:2410.05258
Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval
Sheryl Hsu, Omar Khattab, Chelsea Finn et al.
ICLR 2025 poster · arXiv:2410.23214 · 15 citations
Self-Correcting Decoding with Generative Feedback for Mitigating Hallucinations in Large Vision-Language Models
Ce Zhang, Zifu Wan, Zhehan Kan et al.
ICLR 2025 poster · arXiv:2502.06130 · 21 citations
Self-Introspective Decoding: Alleviating Hallucinations for Large Vision-Language Models
Fushuo Huo, Wenchao Xu, Zhong Zhang et al.
ICLR 2025 poster · arXiv:2408.02032 · 61 citations
Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs
Sreyan Ghosh, Chandra Kiran Evuru, Sonal Kumar et al.
ICLR 2025 poster · arXiv:2405.15683 · 15 citations