"cross-modal retrieval" Papers
13 papers found
CSA: Data-efficient Mapping of Unimodal Features to Multimodal Features
Po-han Li, Sandeep Chinchali, Ufuk Topcu
ICLR 2025 poster · arXiv:2410.07610
5 citations
Dynamic Masking and Auxiliary Hash Learning for Enhanced Cross-Modal Retrieval
Shuang Zhang, Yue Wu, Lei Shi et al.
NeurIPS 2025 poster
MM-Embed: Universal Multimodal Retrieval with Multimodal LLMs
Sheng-Chieh Lin, Chankyu Lee, Mohammad Shoeybi et al.
ICLR 2025 poster · arXiv:2411.02571
78 citations
NeighborRetr: Balancing Hub Centrality in Cross-Modal Retrieval
Zengrong Lin, Zheng Wang, Tianwen Qian et al.
CVPR 2025 poster · arXiv:2503.10526
2 citations
SEGA: Shaping Semantic Geometry for Robust Hashing under Noisy Supervision
Yiyang Gu, Bohan Wu, Qinghua Ran et al.
NeurIPS 2025 poster
SensorLM: Learning the Language of Wearable Sensors
Yuwei Zhang, Kumar Ayush, Siyuan Qiao et al.
NeurIPS 2025 poster · arXiv:2506.09108
16 citations
SIM: Surface-based fMRI Analysis for Inter-Subject Multimodal Decoding from Movie-Watching Experiments
Simon Dahan, Gabriel Bénédict, Logan Williams et al.
ICLR 2025 poster · arXiv:2501.16471
3 citations
Test-time Adaptation for Cross-modal Retrieval with Query Shift
Haobin Li, Peng Hu, Qianjun Zhang et al.
ICLR 2025 poster · arXiv:2410.15624
9 citations
Towards Cross-modal Backward-compatible Representation Learning for Vision-Language Models
Young Kyun Jang, Ser-Nam Lim
ICCV 2025 poster · arXiv:2405.14715
2 citations
An Empirical Study of CLIP for Text-Based Person Search
Min Cao, Yang Bai, Ziyin Zeng et al.
AAAI 2024 paper · arXiv:2308.10045
94 citations
Embracing Language Inclusivity and Diversity in CLIP through Continual Language Learning
Bang Yang, Yong Dai, Xuxin Cheng et al.
AAAI 2024 paper · arXiv:2401.17186
9 citations
Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation
Zhuohang Dang, Minnan Luo, Chengyou Jia et al.
AAAI 2024 paper · arXiv:2312.16478
11 citations
Understanding Retrieval-Augmented Task Adaptation for Vision-Language Models
Yifei Ming, Sharon Li
ICML 2024 poster