Neural Rendering Intelligence Workshop 2024
In conjunction with CVPR 2024
Monday, June 17, 2024
Seattle, USA
Overview
Neural rendering has demonstrated significant success across various fields, including computer vision, computer graphics, and robotics. The scope and definition of neural rendering have considerably widened, finding applications in numerous downstream tasks. These applications extend beyond merely demonstrating the capability to fit a specific scene; they also uncover the intelligence that arises from neural rendering techniques. As a case in point, several studies have attempted to reconstruct 3D models or render novel views from a single image using generative models, showcasing remarkable generalization abilities. This workshop is designed to promote discussions on the latest developments in neural rendering and the emergent rendering intelligence. We have gathered a diverse group of researchers who will present their most recent findings and perspectives on neural rendering. By organizing this workshop, we aim to lay a strong foundation for the future evolution of neural rendering and recognize its unique contribution to the scientific understanding and advancement of 3D intelligence.
Invited Speakers
Ben Mildenhall
Gordon Wetzstein (Stanford University)
Vincent Sitzmann (MIT)
Yiyi Liao (Zhejiang University)
Matthias Nießner (TUM)
Hao Su (UCSD)
Schedule
Date: June 17, 2024 (UTC-7)
Address: Summit 332, Seattle Convention Center

Time | Event
---|---
13:30 | Opening
13:40 | Invited Talk 1: Ben Mildenhall
14:10 | Invited Talk 2: Vincent Sitzmann
14:40 | Invited Talk 3: Yiyi Liao
15:10 | Invited Talk 4: Hao Su
15:40 | Coffee Break + Poster Session (Arch Building, Exhibition Hall, #46-67)
16:10 | Invited Talk 5: Gordon Wetzstein
16:40 | Invited Talk 6: Matthias Nießner
17:10 | Closing
Important Dates
Event | Date (Anywhere on Earth) |
---|---|
Workshop paper submission deadline | March 27, 2024 |
Supplementary material submission deadline | March 27, 2024 |
Decisions | April 8, 2024 |
Camera ready | April 12, 2024 |
Call for Papers
We invite submissions of both short and long papers (4 and 8 pages, respectively, excluding references).
Author kit: CVPR Author Kit.
Long papers will be included in the CVPR proceedings.
Potential topics include but are not limited to:
- Rendering models, e.g., NeRF, 3D Gaussian Splatting, etc.
- Novel view synthesis (NVS), generalizable NVS, NVS from a single image, and generative NVS.
- Relighting, e.g., light stage, intrinsic decomposition.
- Animation, e.g., facial & body reenactment.
- Rendering with diffusion models.
- SLAM and analysis-by-synthesis via rendering.
- Neural rendering for visual understanding.
- 3D foundation models.
- Neural rendering in robotics, autonomous driving, physics, etc.
- Ethical considerations in neural rendering.
Paper submission and review site: Submission Site
Accepted Papers
Long Papers: To appear in the proceedings of CVPR 2024.
Short Papers:
- Differentiable Point-based Inverse Rendering [Paper] [Supplementary]
  Hoon-Gyu Chung, Seokjun Choi, Seung-Hwan Baek
- SCNeRF: Feature-Guided Neural Radiance Field from Sparse Inputs [Paper]
  Junting Li, Yanghong Zhou, Tracy Mok
- DiVa-360: The Dynamic Visual Dataset for Immersive Neural Fields [Paper] [Supplementary]
  Cheng-You Lu, Peisen Zhou, Angela Xing, Chandradeep Pokhariya, Arnab Dey, Ishaan N Shah, Rugved Mavidipalli, Dylan Hu, Andrew I Comport, Kefan Chen, Srinath Sridhar
- Plug-and-Play Acceleration of Occupancy Grid-based NeRF Rendering using VDB Grid and Hierarchical Ray Traversal [Paper]
  Yoshio Kato, Shuhei Tarashima
- GL-NeRF: Gauss-Laguerre Quadrature for Volume Rendering [Paper]
  Silong Yong, Yaqi Xie, Simon B Stepputtis, Katia Sycara
- High-fidelity Endoscopic Image Synthesis by Understanding Depth-guided Neural Surfaces [Paper]
  Baoru Huang, Yida Wang, Anh Nguyen, Daniel Elson, Francisco Vasconcelos, Danail Stoyanov
- Mitigating Motion Blur in Neural Radiance Fields with Events and Frames [Paper] [Supplementary]
  Marco Cannici, Davide Scaramuzza
- Learning Relighting and Intrinsic Decomposition in Neural Radiance Fields [Paper] [Supplementary]
  Yixiong Yang, Shilin Hu, Haoyu Wu, Ramon Baldrich, Dimitris Samaras, Maria Vanrell
- InterNeRF: Scaling Radiance Fields via Parameter Interpolation [Paper]
  Clinton J Wang, Peter Hedman, Polina Golland, Jonathan T Barron, Daniel Duckworth
- NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields [Paper]
  Muhammad Zubair Irshad, Sergey Zakharov, Vitor Guizilini, Adrien Gaidon, Zsolt Kira, Rareș A Ambruș
- MC^2: Multi-view Consistent Depth Estimation via Coordinated Image-based Neural Rendering [Paper]
  Subin Kim, Seong Hyeon Park, Sihyun Yu, Kihyuk Sohn, Jinwoo Shin
- InstantSplat: Unbounded Sparse-view Pose-free Gaussian Splatting in 40 Seconds [Paper]
  Zhiwen Fan, Wenyan Cong, Kairun Wen, Kevin Wang, Jian Zhang, Xinghao Ding, Danfei Xu, Boris Ivanovic, Marco Pavone, Georgios Pavlakos, Zhangyang Wang, Yue Wang
- TurboSL: Dense, Accurate and Fast 3D by Neural Inverse Structured Light
  Parsa Mirdehghan, Maxx Wu, Wenzheng Chen, David B. Lindell, Kiriakos N. Kutulakos
Organizers
Fangneng Zhan (MPI-INF)
Anpei Chen (ETH Zürich & University of Tübingen)
Adam Kortylewski (MPI-INF & University of Freiburg)
Program Committee
Guoxing Sun (Max Planck Institute for Informatics), Pramod Rao (Max Planck Institute for Informatics), Junaid Wahid (Saarland University), Chi Yu (Technical University of Munich), Kunhao Liu (Nanyang Technological University), Jiahui Zhang (Nanyang Technological University), Muyu Xu (Nanyang Technological University), Zuhao Yang (Nanyang Technological University), Haimin Luo (Shanghaitech University), Shaofeng Wang (ETH Zürich and University of Tübingen), Bozidar Antic (University of Tübingen), Gongjie Zhang (Black Sesame Technologies), Songyou Peng (ETH Zürich), Binbin Huang (Shanghaitech University), Qianyi Wu (Monash University), Taorui Wang (Nanyang Technological University), Yu Wei (Nanyang Technological University)