"novel view synthesis" Papers
42 papers found
4D3R: Motion-Aware Neural Reconstruction and Rendering of Dynamic Scenes from Monocular Videos
Mengqi Guo, Bo Xu, Yanyan Li et al.
CATSplat: Context-Aware Transformer with Spatial Guidance for Generalizable 3D Gaussian Splatting from a Single-View Image
Wonseok Roh, Hwanhee Jung, JongWook Kim et al.
Contact-Aware Amodal Completion for Human-Object Interaction via Multi-Regional Inpainting
Seunggeun Chi, Pin-Hao Huang, Enna Sachdeva et al.
Deep Gaussian from Motion: Exploring 3D Geometric Foundation Models for Gaussian Splatting
Yu Chen, Rolandos Alexandros Potamias, Evangelos Ververas et al.
Dynamic Gaussian Splatting from Defocused and Motion-blurred Monocular Videos
Xuankai Zhang, Junjin Xiao, Qing Zhang
FlowR: Flowing from Sparse to Dense 3D Reconstructions
Tobias Fischer, Samuel Rota Bulò, Yung-Hsu Yang et al.
GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control
Xuanchi Ren, Tianchang Shen, Jiahui Huang et al.
GI-GS: Global Illumination Decomposition on Gaussian Splatting for Inverse Rendering
Hongze Chen, Zehong Lin, Jun Zhang
Harnessing Frequency Spectrum Insights for Image Copyright Protection Against Diffusion Models
Zhenguang Liu, Chao Shuai, Shaojing Fan et al.
Holistic Large-Scale Scene Reconstruction via Mixed Gaussian Splatting
Chuandong Liu, Huijiao Wang, Lei Yu et al.
HyRF: Hybrid Radiance Fields for Memory-efficient and High-quality Novel View Synthesis
Zipeng Wang, Dan Xu
IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera
Jian Huang, Chengrui Dong, Xuanhua Chen et al.
Mani-GS: Gaussian Splatting Manipulation with Triangular Mesh
Xiangjun Gao, Xiaoyu Li, Yiyu Zhuang et al.
MET3R: Measuring Multi-View Consistency in Generated Images
Mohammad Asim, Christopher Wewer, Thomas Wimmer et al.
MetaGS: A Meta-Learned Gaussian-Phong Model for Out-of-Distribution 3D Scene Relighting
Yumeng He, Yunbo Wang
Multimodal LiDAR-Camera Novel View Synthesis with Unified Pose-free Neural Fields
Weiyi Xue, Fan Lu, Yunwei Zhu et al.
ResGS: Residual Densification of 3D Gaussian for Efficient Detail Recovery
Yanzhe Lyu, Kai Cheng, Kang Xin et al.
Where Am I and What Will I See: An Auto-Regressive Model for Spatial Localization and View Prediction
Junyi Chen, Di Huang, Weicai Ye et al.
AltNeRF: Learning Robust Neural Radiance Field via Alternating Depth-Pose Optimization
Kun Wang, Zhiqiang Yan, Huang Tian et al.
BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting
Lingzhe Zhao, Peng Wang, Peidong Liu
BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling
Sameera Ramasinghe, Violetta Shevchenko, Gil Avraham et al.
CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning
Qingsong Yan, Qiang Wang, Kaiyong Zhao et al.
CityGaussian: Real-time High-quality Large-Scale Scene Rendering with Gaussians
Yang Liu, Chuanchen Luo, Lue Fan et al.
Coarse-To-Fine Tensor Trains for Compact Visual Representations
Sebastian Loeschcke, Dan Wang, Christian Leth-Espensen et al.
ColNeRF: Collaboration for Generalizable Sparse Input Neural Radiance Field
Zhangkai Ni, Peiqi Yang, Wenhan Yang et al.
DGD: Dynamic 3D Gaussians Distillation
Isaac Labe, Noam Issachar, Itai Lang et al.
Distractor-Free Novel View Synthesis via Exploiting Memorization Effect in Optimization
Yukun Wang, Kunhong Li, Minglin Chen et al.
Few-shot NeRF by Adaptive Rendering Loss Regularization
Qingshan Xu, Xuanyu Yi, Jianyao Xu et al.
Few-Shot Neural Radiance Fields under Unconstrained Illumination
SeokYeong Lee, JunYong Choi, Seungryong Kim et al.
Forecasting Future Videos from Novel Views via Disentangled 3D Scene Representation
Sudhir Kumar Reddy Yarram, Junsong Yuan
GAURA: Generalizable Approach for Unified Restoration and Rendering of Arbitrary Views
Vinayak Gupta, Rongali Simhachala Venkata Girish, Mukund Varma T et al.
GaussianPro: 3D Gaussian Splatting with Progressive Propagation
Kai Cheng, Xiaoxiao Long, Kaizhi Yang et al.
GS2Mesh: Surface Reconstruction from Gaussian Splatting via Novel Stereo Views
Yaniv Wolf, Amit Bracha, Ron Kimmel
Leveraging Thermal Modality to Enhance Reconstruction in Low-Light Conditions
Jiacong Xu, Mingqian Liao, Ram Prabhakar Kathirvel et al.
MaRINeR: Enhancing Novel Views by Matching Rendered Images with Nearby References
Lukas Bösiger, Mihai Dusmanu, Marc Pollefeys et al.
NeRF-LiDAR: Generating Realistic LiDAR Point Clouds with Neural Radiance Fields
Junge Zhang, Feihu Zhang, Shaochen Kuang et al.
OSN: Infinite Representations of Dynamic 3D Scenes from Monocular Videos
Ziyang Song, Jinxi Li, Bo Yang
Revising Densification in Gaussian Splatting
Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder
SpectralNeRF: Physically Based Spectral Rendering with Neural Radiance Field
Ru Li, Jia Liu, Guanghui Liu et al.
Superpoint Gaussian Splatting for Real-Time High-Fidelity Dynamic Scene Reconstruction
Diwen Wan, Ruijie Lu, Gang Zeng
VEGS: View Extrapolation of Urban Scenes in 3D Gaussian Splatting using Learned Priors
Sungwon Hwang, Min-Jung Kim, Taewoong Kang et al.
VersatileGaussian: Real-time Neural Rendering for Versatile Tasks using Gaussian Splatting
Renjie Li, Zhiwen Fan, Bohua Wang et al.