2025 "novel view synthesis" Papers

28 papers found

4D3R: Motion-Aware Neural Reconstruction and Rendering of Dynamic Scenes from Monocular Videos

Mengqi Guo, Bo Xu, Yanyan Li et al.

NeurIPS 2025 · poster · arXiv:2511.05229

CATSplat: Context-Aware Transformer with Spatial Guidance for Generalizable 3D Gaussian Splatting from A Single-View Image

Wonseok Roh, Hwanhee Jung, JongWook Kim et al.

ICCV 2025 · poster · arXiv:2412.12906 · 6 citations

Contact-Aware Amodal Completion for Human-Object Interaction via Multi-Regional Inpainting

Seunggeun Chi, Pin-Hao Huang, Enna Sachdeva et al.

ICCV 2025 · highlight · arXiv:2508.00427 · 2 citations

Deep Gaussian from Motion: Exploring 3D Geometric Foundation Models for Gaussian Splatting

Yu Chen, Rolandos Alexandros Potamias, Evangelos Ververas et al.

NeurIPS 2025 · poster

Depth-Guided Bundle Sampling for Efficient Generalizable Neural Radiance Field Reconstruction

Li Fang, Hao Zhu, Longlong Chen et al.

CVPR 2025 · poster · arXiv:2505.19793 · 1 citation

DiST-4D: Disentangled Spatiotemporal Diffusion with Metric Depth for 4D Driving Scene Generation

Jiazhe Guo, Yikang Ding, Xiwu Chen et al.

ICCV 2025 · poster · arXiv:2503.15208 · 21 citations

Dynamic Gaussian Splatting from Defocused and Motion-blurred Monocular Videos

Xuankai Zhang, Junjin Xiao, Qing Zhang

NeurIPS 2025 · poster · arXiv:2510.10691

EGGS: Exchangeable 2D/3D Gaussian Splatting for Geometry-Appearance Balanced Novel View Synthesis

Yancheng Zhang, Guangyu Sun, Chen Chen

NeurIPS 2025 · spotlight · arXiv:2512.02932

Faster and Better 3D Splatting via Group Training

Chengbo Wang, Guozheng Ma, Yizhen Lao et al.

ICCV 2025 · poster · arXiv:2412.07608 · 3 citations

FlowR: Flowing from Sparse to Dense 3D Reconstructions

Tobias Fischer, Samuel Rota Bulò, Yung-Hsu Yang et al.

ICCV 2025 · highlight · arXiv:2504.01647 · 7 citations

GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control

Xuanchi Ren, Tianchang Shen, Jiahui Huang et al.

CVPR 2025 · highlight · arXiv:2503.03751 · 138 citations

GI-GS: Global Illumination Decomposition on Gaussian Splatting for Inverse Rendering

Hongze CHEN, Zehong Lin, Jun Zhang

ICLR 2025 · poster · arXiv:2410.02619 · 21 citations

Harnessing Frequency Spectrum Insights for Image Copyright Protection Against Diffusion Models

Zhenguang Liu, Chao Shuai, Shaojing Fan et al.

CVPR 2025 · poster · arXiv:2503.11071

Holistic Large-Scale Scene Reconstruction via Mixed Gaussian Splatting

Chuandong Liu, Huijiao Wang, Lei YU et al.

NeurIPS 2025 · poster · arXiv:2505.23280 · 1 citation

HyRF: Hybrid Radiance Fields for Memory-efficient and High-quality Novel View Synthesis

Zipeng Wang, Dan Xu

NeurIPS 2025 · poster · arXiv:2509.17083

IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera

Jian Huang, Chengrui Dong, Xuanhua Chen et al.

CVPR 2025 · highlight · arXiv:2410.08107 · 15 citations

Learning 4D Embodied World Models

Haoyu Zhen, Qiao Sun, Hongxin Zhang et al.

ICCV 2025 · poster · arXiv:2504.20995 · 43 citations

LITA-GS: Illumination-Agnostic Novel View Synthesis via Reference-Free 3D Gaussian Splatting and Physical Priors

Han Zhou, Wei Dong, Jun Chen

CVPR 2025 · poster · arXiv:2504.00219 · 9 citations

Mani-GS: Gaussian Splatting Manipulation with Triangular Mesh

Xiangjun Gao, Xiaoyu Li, Yiyu Zhuang et al.

CVPR 2025 · poster · arXiv:2405.17811 · 23 citations

MET3R: Measuring Multi-View Consistency in Generated Images

Mohammad Asim, Christopher Wewer, Thomas Wimmer et al.

CVPR 2025 · poster · arXiv:2501.06336 · 43 citations

MetaGS: A Meta-Learned Gaussian-Phong Model for Out-of-Distribution 3D Scene Relighting

Yumeng He, Yunbo Wang

NeurIPS 2025 · spotlight · arXiv:2405.20791 · 1 citation

MS-GS: Multi-Appearance Sparse-View 3D Gaussian Splatting in the Wild

Deming Li, Kaiwen Jiang, Yutao Tang et al.

NeurIPS 2025 · poster · arXiv:2509.15548 · 1 citation

Multimodal LiDAR-Camera Novel View Synthesis with Unified Pose-free Neural Fields

Weiyi Xue, Fan Lu, Yunwei Zhu et al.

NeurIPS 2025 · poster

NoPo-Avatar: Generalizable and Animatable Avatars from Sparse Inputs without Human Poses

Jing Wen, Alex Schwing, Shenlong Wang

NeurIPS 2025 · poster · arXiv:2511.16673

ResGS: Residual Densification of 3D Gaussian for Efficient Detail Recovery

Yanzhe Lyu, Kai Cheng, Kang Xin et al.

ICCV 2025 · poster · arXiv:2412.07494 · 4 citations

Self-Ensembling Gaussian Splatting for Few-Shot Novel View Synthesis

Chen Zhao, Xuan Wang, Tong Zhang et al.

ICCV 2025 · poster · arXiv:2411.00144 · 3 citations

SfM-Free 3D Gaussian Splatting via Hierarchical Training

Bo Ji, Angela Yao

CVPR 2025 · poster · arXiv:2412.01553 · 8 citations

Where Am I and What Will I See: An Auto-Regressive Model for Spatial Localization and View Prediction

Junyi Chen, Di Huang, Weicai Ye et al.

ICLR 2025 · poster · arXiv:2410.18962 · 4 citations