"retrieval augmented generation" Papers
13 papers found
Boosting Knowledge Utilization in Multimodal Large Language Models via Adaptive Logits Fusion and Attention Reallocation
Wenbin An, Jiahao Nie, Feng Tian et al.
NEURIPS 2025 (oral)
Chain-of-Retrieval Augmented Generation
Liang Wang, Haonan Chen, Nan Yang et al.
NEURIPS 2025 · arXiv:2501.14342
28 citations
Collab-RAG: Boosting Retrieval-Augmented Generation for Complex Question Answering via White-Box and Black-Box LLM Collaboration
Ran Xu, Wenqi Shi, Yuchen Zhuang et al.
COLM 2025 · arXiv:2504.04915
17 citations
ColPali: Efficient Document Retrieval with Vision Language Models
Manuel Faysse, Hugues Sibille, Tony Wu et al.
ICLR 2025 · arXiv:2407.01449
94 citations
EAReranker: Efficient Embedding Adequacy Assessment for Retrieval Augmented Generation
Dongyang Zeng, Yaping Liu, Wei Zhang et al.
NEURIPS 2025
Inference Scaling for Long-Context Retrieval Augmented Generation
Zhenrui Yue, Honglei Zhuang, Aijun Bai et al.
ICLR 2025 · arXiv:2410.04343
54 citations
MIR-Bench: Can Your LLM Recognize Complicated Patterns via Many-Shot In-Context Reasoning?
Kai Yan, Zhan Ling, Kang Liu et al.
NEURIPS 2025 · arXiv:2502.09933
1 citation
MMAT-1M: A Large Reasoning Dataset for Multimodal Agent Tuning
Tianhong Gao, Yannian Fu, Weiqun Wu et al.
ICCV 2025 · arXiv:2507.21924
1 citation
Retrieving Semantics from the Deep: an RAG Solution for Gesture Synthesis
M. Hamza Mughal, Rishabh Dabral, Merel CJ Scholman et al.
CVPR 2025 · arXiv:2412.06786
14 citations
Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting
Zilong (Ryan) Wang, Zifeng Wang, Long Le et al.
ICLR 2025 · arXiv:2407.08223
78 citations
Grounding Language Models for Visual Entity Recognition
Zilin Xiao, Ming Gong, Paola Cascante-Bonilla et al.
ECCV 2024 · arXiv:2402.18695
13 citations
Improving Medical Multi-modal Contrastive Learning with Expert Annotations
Yogesh Kumar, Pekka Marttinen
ECCV 2024 · arXiv:2403.10153
23 citations
PinNet: Pinpoint Instructive Information for Retrieval Augmented Code-to-Text Generation
Han Fu, Jian Tan, Pinhan Zhang et al.
ICML 2024