One Token per Highly Selective Frame: Towards Extreme Compression for Long Video Understanding


Abstract

Long video understanding is inherently challenging for vision-language models (VLMs) because of the extensive number of frames. With each video frame typically expanding into tens or hundreds of tokens, the limited context length of large language models (LLMs) forces VLMs to perceive the frames sparsely and lose temporal information. To address this, we explore extreme video token compression towards one token per frame at the final LLM layer. Our key insight is that heuristic-based compression, widely adopted by previous methods, is prone to information loss, which necessitates supervising LLM layers into learnable and progressive modules for token-level compression (LP-Comp). Such compression enables our VLM to digest 2x-4x more frames with improved performance. To further increase token efficiency, we investigate frame-level compression, which selects the frames most relevant to the queries via the internal attention scores of the LLM layers, named question-conditioned compression (QC-Comp). As a notable distinction from previous studies, we mitigate the position bias of LLM attention in long contexts, i.e., the over-concentration on the beginning and end of a sequence, by splitting long videos into short segments and employing local attention. Collectively, our combined token-level and frame-level compression leads to an extreme compression model for long video understanding, named XComp, achieving a significantly larger compression ratio and enabling denser frame sampling. XComp is finetuned from VideoChat-Flash with a data-efficient supervised compression tuning stage that requires only 2.5% of the supervised fine-tuning data, yet boosts accuracy from 42.9% to 46.2% on LVBench and improves multiple other long video benchmarks.
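
The sketch below illustrates the segment-wise, question-conditioned frame selection idea described in the abstract: frames (already compressed to one token each) are split into short segments, query-to-frame attention scores are computed locally within each segment, and the top-scoring frames per segment are kept. This is not the authors' released code; function and parameter names (select_frames, seg_len, top_k) and the dot-product relevance proxy are illustrative assumptions.

# Minimal sketch (assumed interface, not the paper's implementation) of
# question-conditioned frame selection with segment-wise local attention.
import torch

def select_frames(frame_tokens: torch.Tensor,   # [num_frames, d], one token per frame
                  query_tokens: torch.Tensor,   # [num_query_tokens, d]
                  seg_len: int = 16,
                  top_k: int = 4) -> torch.Tensor:
    """Return indices of the frames most attended by the query, chosen per
    segment to avoid the position bias of global attention over long contexts."""
    num_frames, d = frame_tokens.shape
    keep = []
    for start in range(0, num_frames, seg_len):
        seg = frame_tokens[start:start + seg_len]                 # local segment
        # Attention-like relevance: scaled query-to-frame dot products,
        # softmax over frames, averaged across query tokens.
        scores = (query_tokens @ seg.T / d ** 0.5).softmax(dim=-1).mean(dim=0)
        k = min(top_k, seg.shape[0])
        keep.append(start + scores.topk(k).indices)
    return torch.cat(keep).sort().values

# Usage: 256 frames, keep the 4 most query-relevant frames in every
# 16-frame segment (64 frames retained in total).
frames = torch.randn(256, 1024)
query = torch.randn(8, 1024)
print(select_frames(frames, query).shape)   # torch.Size([64])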
