HIPPO-VIDEO: Simulating Watch Histories with Large Language Models for History-Driven Video Highlighting

COLM 2025

Abstract

The exponential growth of video content has made personalized video highlighting an essential task, as user preferences are highly variable and complex. Existing video datasets, however, often lack personalization, relying on isolated videos or simple text queries that fail to capture the intricacies of user behavior. In this work, we introduce HIPPO-VIDEO, a novel dataset for personalized video highlighting, created using an LLM-based user simulator to generate realistic watch histories reflecting diverse user preferences. The dataset includes 2,040 (watch history, saliency score) pairs, covering 20,400 videos across 170 semantic categories. To validate our dataset, we propose HiPHer, a method that leverages these personalized watch histories to predict preference-conditioned segment-wise saliency scores. Through extensive experiments, we demonstrate that our method outperforms existing generic and query-based approaches, showcasing its potential for highly user-centric video highlighting in real-world scenarios. The code is publicly available at https://anonymous.4open.science/r/HIPPO-4EEE/README.md.
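To make the dataset's structure concrete, the sketch below shows one plausible way a (watch history, saliency score) pair could be represented and used to extract highlight segments. All class and field names here are illustrative assumptions, not the dataset's actual schema, and the selection step simply reads labeled scores rather than predicting them as HiPHer does.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WatchedVideo:
    """One previously watched video in a simulated user's history (hypothetical fields)."""
    video_id: str
    category: str  # one of the ~170 semantic categories


@dataclass
class DataPoint:
    """A single (watch history, saliency score) pair, as described in the abstract."""
    watch_history: List[WatchedVideo]   # reflects the simulated user's preferences
    target_video_id: str
    saliency_scores: List[float]        # one preference-conditioned score per segment


def top_k_highlight_segments(point: DataPoint, k: int = 3) -> List[int]:
    """Return indices of the k most salient segments of the target video.

    Note: a model like HiPHer would predict these scores from the watch
    history; here we only rank the ground-truth labels of a data point.
    """
    ranked = sorted(
        range(len(point.saliency_scores)),
        key=lambda i: point.saliency_scores[i],
        reverse=True,
    )
    return ranked[:k]


if __name__ == "__main__":
    history = [WatchedVideo("v101", "cooking"), WatchedVideo("v205", "baking")]
    point = DataPoint(history, "v999", [0.1, 0.8, 0.3, 0.9, 0.2])
    print(top_k_highlight_segments(point, k=2))  # -> [3, 1]
```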
