2024 "hallucination mitigation" Papers
9 papers found
A Closer Look at the Limitations of Instruction Tuning
Sreyan Ghosh, Chandra Kiran Evuru, Sonal Kumar et al.
ICML 2024 (poster) · arXiv:2402.05119
Benchmarking Large Language Models in Retrieval-Augmented Generation
Jiawei Chen, Hongyu Lin, Xianpei Han et al.
AAAI 2024 (paper) · arXiv:2309.01431
458 citations
Editing Language Model-Based Knowledge Graph Embeddings
AAAI 2024 (paper) · arXiv:2305.14908
57 citations
Exploiting Semantic Reconstruction to Mitigate Hallucinations in Vision-Language Models
Minchan Kim, Minyeong Kim, Junik Bae et al.
ECCV 2024 (poster) · arXiv:2403.16167
10 citations
In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
Shiqi Chen, Miao Xiong, Junteng Liu et al.
ICML 2024 (poster) · arXiv:2403.01548
PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine
Chenrui Zhang, Lin Liu, Chuyuan Wang et al.
AAAI 2024 (paper) · arXiv:2308.12033
41 citations
Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models
Jinrui Zhang, Teng Wang, Haigang Zhang et al.
ECCV 2024 (poster) · arXiv:2407.11422
10 citations
Sparse Model Inversion: Efficient Inversion of Vision Transformers for Data-Free Applications
Zixuan Hu, Yongxian Wei, Li Shen et al.
ICML 2024 (poster) · arXiv:2510.27186
Toward Adaptive Reasoning in Large Language Models with Thought Rollback
Sijia Chen, Baochun Li
ICML 2024 (poster) · arXiv:2412.19707