Xingang Wang

19 Papers · 275 Total Citations

Papers (19)

DriveDreamer4D: World Models Are Effective Data Machines for 4D Driving Scene Representation
CVPR 2025 · 83 citations

ReconDreamer: Crafting World Models for Driving Scene Reconstruction via Online Restoration
CVPR 2025 · 54 citations

DiffBEV: Conditional Diffusion Model for Bird’s Eye View Perception
AAAI 2024 · arXiv · 36 citations

Relevant Intrinsic Feature Enhancement Network for Few-Shot Semantic Segmentation
AAAI 2024 · arXiv · 30 citations

EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation
NeurIPS 2025 · arXiv · 25 citations

ReconDreamer++: Harmonizing Generative and Reconstructive Models for Driving Scene Representation
ICCV 2025 · 22 citations

Bayesian Prompt Flow Learning for Zero-Shot Anomaly Detection
CVPR 2025 · 22 citations

Rethinking Lanes and Points in Complex Scenarios for Monocular 3D Lane Detection
CVPR 2025 · 2 citations

DynImg: Key Frames with Visual Prompts are Good Representation for Multi-Modal Video Understanding
ICCV 2025 · 1 citation

Multi-Granularity Distillation Scheme towards Lightweight Semi-Supervised Semantic Segmentation
ECCV 2022 · arXiv · 0 citations

MVSTER: Epipolar Transformer for Efficient Multi-View Stereo
ECCV 2022 · 0 citations

HumanDreamer: Generating Controllable Human-Motion Videos via Decoupled Generation
CVPR 2025 · 0 citations

DictAS: A Framework for Class-Generalizable Few-Shot Anomaly Segmentation via Dictionary Lookup
ICCV 2025 · 0 citations

DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation
AAAI 2025 · 0 citations

Attention-Guided Unified Network for Panoptic Segmentation
CVPR 2019 · 0 citations

Learning Dynamic Routing for Semantic Segmentation
CVPR 2020 · arXiv · 0 citations

Are We Ready for Vision-Centric Driving Streaming Perception? The ASAP Benchmark
CVPR 2023 · arXiv · 0 citations

FreeSeg: Unified, Universal and Open-Vocabulary Image Segmentation
CVPR 2023 · arXiv · 0 citations

OpenOccupancy: A Large Scale Benchmark for Surrounding Semantic Occupancy Perception
ICCV 2023 · arXiv · 0 citations