Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models

ICML 2024

Abstract

It has recently been discovered that using a pre-trained vision-language model (VLM), e.g., CLIP, to align a whole query image with several finer text descriptions generated by a large language model can significantly enhance zero-shot performance. However, in this paper, we empirically find that the finer descriptions tend to align more effectively with local areas of the query image rather than the whole image, and we then theoretically validate this finding. Thus, we present a method called weighted visual-text cross alignment (WCA). This method begins with a localized visual prompting technique, designed to identify local visual areas within the query image. The local visual areas are then cross-aligned with the finer descriptions by creating a similarity matrix using the pre-trained VLM. To determine how well a query image aligns with each category, we develop a score function based on the weighted similarities in this matrix. Extensive experiments demonstrate that our method significantly improves zero-shot performance across various datasets, achieving results that are even comparable to few-shot learning methods.
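The abstract describes the pipeline at a high level; the sketch below shows one way it could be wired together with an off-the-shelf CLIP model. The random-crop prompting, the softmax patch weights, the uniform description weights, and the names `wca_score` and `descriptions_per_class` are illustrative assumptions, not the paper's exact design.

```python
# A minimal sketch of weighted visual-text cross alignment (WCA) using a
# pre-trained CLIP model. Cropping strategy and weighting scheme are
# assumptions for illustration, not the authors' exact recipe.
import torch
from torchvision import transforms
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()


def encode_images(images):
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)


def encode_texts(texts):
    inputs = processor(text=texts, return_tensors="pt", padding=True, truncation=True)
    feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)


def local_crops(image, n_crops=16, scale=(0.4, 0.8)):
    # Localized visual prompting, sketched here as random resized crops
    # of a PIL image; the paper's exact sampling strategy may differ.
    crop = transforms.RandomResizedCrop(224, scale=scale)
    return [crop(image) for _ in range(n_crops)]


@torch.no_grad()
def wca_score(image, descriptions, n_crops=16):
    """Score one query image against the finer descriptions of one category."""
    crops = local_crops(image, n_crops)
    v = encode_images(crops)        # (n_crops, d) local visual features
    t = encode_texts(descriptions)  # (n_desc, d) description features
    g = encode_images([image])      # (1, d) whole-image feature

    sim = v @ t.T                   # (n_crops, n_desc) cross-alignment matrix

    # Assumed weighting: crops that resemble the whole image count more,
    # while descriptions are weighted uniformly.
    w_v = torch.softmax((v @ g.T).squeeze(-1), dim=0)  # (n_crops,)
    w_t = torch.full((t.shape[0],), 1.0 / t.shape[0])  # (n_desc,)

    return (w_v @ sim @ w_t).item()  # weighted sum over the matrix


def predict(image, descriptions_per_class):
    # Zero-shot prediction: pick the category whose LLM-generated
    # descriptions achieve the highest weighted cross-alignment score.
    scores = {c: wca_score(image, d) for c, d in descriptions_per_class.items()}
    return max(scores, key=scores.get)
```

Keeping the patch and description weights as explicit vectors makes it easy to swap in other weighting schemes (e.g., weighting each description by its similarity to the class name) without touching the rest of the pipeline.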
