2025 Oral "temporal consistency" Papers
9 papers found
Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model
Han Lin, Jaemin Cho, Abhay Zala et al.
ICLR 2025 (oral) · arXiv:2404.09967
48 citations
Depth Any Video with Scalable Synthetic Data
Honghui Yang, Di Huang, Wei Yin et al.
ICLR 2025 (oral) · arXiv:2410.10815
44 citations
Diffusion$^2$: Dynamic 3D Content Generation via Score Composition of Video and Multi-view Diffusion Models
Zeyu Yang, Zijie Pan, Chun Gu et al.
ICLR 2025 (oral) · arXiv:2404.02148
18 citations
EG4D: Explicit Generation of 4D Object without Score Distillation
Qi Sun, Zhiyang Guo, Ziyu Wan et al.
ICLR 2025 (oral) · arXiv:2405.18132
39 citations
FlowMo: Variance-Based Flow Guidance for Coherent Motion in Video Generation
Ariel Shaulov, Itay Hazan, Lior Wolf et al.
NeurIPS 2025 (oral) · arXiv:2506.01144
7 citations
Image as a World: Generating Interactive World from Single Image via Panoramic Video Generation
Dongnan Gui, Xun Guo, Wengang Zhou et al.
NeurIPS 2025 (oral)
1 citation
Incremental Sequence Classification with Temporal Consistency
Lucas Maystre, Gabriel Barello, Tudor Berariu et al.
NeurIPS 2025 (oral) · arXiv:2505.16548
ReCon-GS: Continuum-Preserved Gaussian Streaming for Fast and Compact Reconstruction of Dynamic Scenes
Jiaye Fu, Qiankun Gao, Chengxiang Wen et al.
NeurIPS 2025 (oral)
WorldWeaver: Generating Long-Horizon Video Worlds via Rich Perception
Zhiheng Liu, Xueqing Deng, Shoufa Chen et al.
NeurIPS 2025 (oral) · arXiv:2508.15720
5 citations