ICLR "temporal consistency" Papers
9 papers found
3D StreetUnveiler with Semantic-aware 2DGS - a simple baseline
Jingwei Xu, Yikai Wang, Yiqun Zhao et al.
ICLR 2025 oral · arXiv:2405.18416
4 citations
ARLON: Boosting Diffusion Transformers with Autoregressive Models for Long Video Generation
Zongyi Li, Shujie Hu, Shujie Liu et al.
ICLR 2025 oral · arXiv:2410.20502
27 citations
Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model
Han Lin, Jaemin Cho, Abhay Zala et al.
ICLR 2025 oral · arXiv:2404.09967
48 citations
Depth Any Video with Scalable Synthetic Data
Honghui Yang, Di Huang, Wei Yin et al.
ICLR 2025 oral · arXiv:2410.10815
44 citations
Diffusion$^2$: Dynamic 3D Content Generation via Score Composition of Video and Multi-view Diffusion Models
Zeyu Yang, Zijie Pan, Chun Gu et al.
ICLR 2025 oral · arXiv:2404.02148
18 citations
EG4D: Explicit Generation of 4D Object without Score Distillation
Qi Sun, Zhiyang Guo, Ziyu Wan et al.
ICLR 2025 oral · arXiv:2405.18132
39 citations
Glad: A Streaming Scene Generator for Autonomous Driving
Bin Xie, Yingfei Liu, Tiancai Wang et al.
ICLR 2025 oral · arXiv:2503.00045
11 citations
Infinite-Resolution Integral Noise Warping for Diffusion Models
Yitong Deng, Winnie Lin, Lingxiao Li et al.
ICLR 2025 oral · arXiv:2411.01212
4 citations
Rationalizing and Augmenting Dynamic Graph Neural Networks
Guibin Zhang, Yiyan Qi, Ziyang Cheng et al.
ICLR 2025 oral