Text-Guided Video Masked Autoencoder

7 citations · #702 of 2387 papers in ECCV 2024 · 6 authors
Abstract

Recent video masked autoencoder (MAE) works have designed improved masking algorithms focused on saliency. These works leverage visual cues such as motion to mask the most salient regions. However, the robustness of visual cues depends on how often input videos match the underlying statistical assumptions. On the other hand, natural language description is an information-dense representation of video that implicitly captures saliency without requiring modality-specific assumptions, and it has not yet been explored for video MAE. To this end, we introduce a novel text-guided masking strategy (TGM) that masks the video regions with the highest correspondence to paired captions. Without leveraging any explicit visual cues for saliency, our text-guided masking is competitive with state-of-the-art masking algorithms such as motion-guided masking. To further benefit from the semantics of natural language for masked reconstruction, we next introduce a unified framework for joint MAE and masked video-text contrastive learning. We show that across existing masking algorithms, unifying MAE and masked video-text contrastive learning improves downstream performance compared to pure MAE on a variety of video recognition tasks, especially for linear probing. When our TGM is combined with this unified framework, we achieve the best relative performance on five action recognition datasets and one egocentric dataset, highlighting the complementary nature of natural language captions for masked video modeling.
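The core of the masking strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes we already have per-patch video embeddings and a pooled caption embedding from some video-text model, scores each patch by cosine similarity to the caption, and masks the highest-scoring fraction.

```python
import numpy as np

def text_guided_mask(patch_emb, text_emb, mask_ratio=0.75):
    """Sketch of text-guided masking: mask the patches most similar to the caption.

    patch_emb: (num_patches, dim) video patch token embeddings (assumed given)
    text_emb:  (dim,) pooled caption embedding (assumed given)
    Returns a boolean array of shape (num_patches,); True = masked.
    """
    # Cosine similarity between each patch and the caption embedding.
    p = patch_emb / (np.linalg.norm(patch_emb, axis=1, keepdims=True) + 1e-8)
    t = text_emb / (np.linalg.norm(text_emb) + 1e-8)
    sim = p @ t

    # Mask the top fraction of patches with the highest text correspondence.
    num_masked = int(round(mask_ratio * len(sim)))
    masked_idx = np.argsort(-sim)[:num_masked]
    mask = np.zeros(len(sim), dtype=bool)
    mask[masked_idx] = True
    return mask
```

The masked patches would then be reconstructed by the MAE decoder, while the visible patches and caption tokens could feed the masked video-text contrastive objective described in the abstract.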

Citation history: 7 citations as of Jan 26, 2026.