InternVideo2: Scaling Foundation Models for Multimodal Video Understanding

ECCV 2024

Abstract

We introduce InternVideo2, a new video foundation model (ViFM) that achieves state-of-the-art results in action recognition, video-text tasks, and video-centric dialogue. Our core design is a progressive training approach that unifies masked video token reconstruction, cross-modal contrastive learning, and next token prediction, scaling the video encoder up to 6B parameters. At the data level, we prioritize spatiotemporal consistency by semantically segmenting videos and generating video-audio-speech captions, which improves the alignment between video and text. Through extensive experiments, we validate our designs and demonstrate state-of-the-art performance on over 60 of 74 video and audio tasks. Notably, our model outperforms others on various video-related dialogue and long video understanding benchmarks, highlighting its ability to reason over and comprehend longer contexts. Code and models will be released.
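To make the unified objective concrete, below is a minimal PyTorch sketch of the three losses named in the abstract: masked video token reconstruction, cross-modal contrastive learning, and next token prediction. The paper trains these progressively in stages; this sketch simply combines them in one step for illustration. All names (video_encoder, lm_head), shapes, the teacher/student reconstruction target, and the loss weights are assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def unified_loss(video_encoder, lm_head,
                 video, masked_video, mask, text_emb, caption_ids,
                 w_rec=1.0, w_con=1.0, w_ntp=1.0, temperature=0.07):
    """Illustrative combination of the three objectives; all names,
    shapes, and weights here are assumptions, not the paper's code."""
    # 1) Masked video token reconstruction: regress features of the
    #    masked patches toward features of the unmasked video.
    target = video_encoder(video).detach()      # (B, N, D) teacher features
    pred = video_encoder(masked_video)          # (B, N, D) student features
    rec_loss = F.mse_loss(pred[mask], target[mask])  # mask: (B, N) bool

    # 2) Cross-modal contrastive learning: symmetric InfoNCE between
    #    pooled video embeddings and text embeddings.
    v = F.normalize(pred.mean(dim=1), dim=-1)   # (B, D)
    t = F.normalize(text_emb, dim=-1)           # (B, D)
    logits = v @ t.T / temperature              # (B, B) similarity matrix
    labels = torch.arange(v.size(0), device=v.device)
    con_loss = (F.cross_entropy(logits, labels)
                + F.cross_entropy(logits.T, labels)) / 2

    # 3) Next token prediction: predict each caption token from the
    #    previous tokens, conditioned on the video representation.
    token_logits = lm_head(pred, caption_ids[:, :-1])  # (B, L-1, V); assumed API
    ntp_loss = F.cross_entropy(
        token_logits.reshape(-1, token_logits.size(-1)),
        caption_ids[:, 1:].reshape(-1),
    )

    return w_rec * rec_loss + w_con * con_loss + w_ntp * ntp_loss
```

In a sketch like this, the weighted sum lets each training stage emphasize a different objective by adjusting w_rec, w_con, and w_ntp rather than changing the model.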
