Poster Papers on "hallucination mitigation"
11 papers found
Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent
Yangning Li, Yinghui Li, Xinyu Wang et al.
ICLR 2025 poster · arXiv:2411.02937
54 citations
DAMO: Decoding by Accumulating Activations Momentum for Mitigating Hallucinations in Vision-Language Models
Kaishen Wang, Hengrui Gu, Meijun Gao et al.
ICLR 2025 poster
7 citations
Differential Transformer
Tianzhu Ye, Li Dong, Yuqing Xia et al.
ICLR 2025 poster · arXiv:2410.05258
Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs
Hao Fang, Changle Zhou, Jiawei Kong et al.
NeurIPS 2025 poster · arXiv:2505.19678
6 citations
INTER: Mitigating Hallucination in Large Vision-Language Models by Interaction Guidance Sampling
Xin Dong, Shichao Dong, Jin Wang et al.
ICCV 2025 poster · arXiv:2507.05056
3 citations
Seeing Far and Clearly: Mitigating Hallucinations in MLLMs with Attention Causal Decoding
Feilong Tang, Chengzhi Liu, Zhongxing Xu et al.
CVPR 2025 poster · arXiv:2505.16652
22 citations
Self-Introspective Decoding: Alleviating Hallucinations for Large Vision-Language Models
Fushuo Huo, Wenchao Xu, Zhong Zhang et al.
ICLR 2025 poster · arXiv:2408.02032
61 citations
A Closer Look at the Limitations of Instruction Tuning
Sreyan Ghosh, Chandra Kiran Evuru, Sonal Kumar et al.
ICML 2024 poster
In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
Shiqi Chen, Miao Xiong, Junteng Liu et al.
ICML 2024 poster
Sparse Model Inversion: Efficient Inversion of Vision Transformers for Data-Free Applications
Zixuan Hu, Yongxian Wei, Li Shen et al.
ICML 2024 poster
Toward Adaptive Reasoning in Large Language Models with Thought Rollback
Sijia Chen, Baochun Li
ICML 2024 poster