Xiaofeng Wang
16 Papers · 196 Total Citations

Papers (16)
DriveDreamer4D: World Models Are Effective Data Machines for 4D Driving Scene Representation
CVPR 2025 · 83 citations
ReconDreamer: Crafting World Models for Driving Scene Reconstruction via Online Restoration
CVPR 2025 · 54 citations
EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation
NeurIPS 2025 (arXiv) · 25 citations
ReconDreamer++: Harmonizing Generative and Reconstructive Models for Driving Scene Representation
ICCV 2025 · 22 citations
Do Large Language Models Truly Understand Geometric Structures?
ICLR 2025 · 9 citations
Rethinking Lanes and Points in Complex Scenarios for Monocular 3D Lane Detection
CVPR 2025 · 2 citations
DynImg: Key Frames with Visual Prompts are Good Representation for Multi-Modal Video Understanding
ICCV 2025 · 1 citation
OpenOccupancy: A Large Scale Benchmark for Surrounding Semantic Occupancy Perception
ICCV 2023 (arXiv) · 0 citations
HumanDreamer: Generating Controllable Human-Motion Videos via Decoupled Generation
CVPR 2025 · 0 citations
MVSTER: Epipolar Transformer for Efficient Multi-View Stereo
ECCV 2022 · 0 citations
Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model
CVPR 2025 · 0 citations
WonderTurbo: Generating Interactive 3D World in 0.72 Seconds
ICCV 2025 · 0 citations
DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation
AAAI 2025 · 0 citations
Optimizing Filter Size in Convolutional Neural Networks for Facial Action Unit Recognition
CVPR 2018 (arXiv) · 0 citations
Are We Ready for Vision-Centric Driving Streaming Perception? The ASAP Benchmark
CVPR 2023 (arXiv) · 0 citations
CDUL: CLIP-Driven Unsupervised Learning for Multi-Label Image Classification
ICCV 2023 (arXiv) · 0 citations