Search for: All records
Total Resources: 3
Filter by Author / Creator:
- Snavely, Noah (3)
- Zhang, Kai (3)
- Bi, Sai (2)
- Jin, Haian (2)
- Luan, Fujun (2)
- Xu, Zexiang (2)
- Cai, Ruojin (1)
- Chou, Gene (1)
- Hariharan, Bharath (1)
- Jiang, Hanwen (1)
- Li, Yuan (1)
- Sun, Jin (1)
- Tan, Hao (1)
- Tung, Joseph (1)
- Wetzstein, Gordon (1)
- Xiangli, Yuanbo (1)
- Yang, Guandao (1)
- Zhang, Tianyuan (1)
We propose the Large View Synthesis Model (LVSM), a novel transformer-based approach for scalable and generalizable novel view synthesis from sparse-view inputs. We introduce two architectures: (1) an encoder-decoder LVSM, which encodes input image tokens into a fixed number of 1D latent tokens, functioning as a fully learned scene representation, and decodes novel-view images from them; and (2) a decoder-only LVSM, which directly maps input images to novel-view outputs, completely eliminating intermediate scene representations. Both models bypass the 3D inductive biases used in previous methods—from 3D representations (e.g., NeRF, 3DGS) to network designs (e.g., epipolar projections, plane sweeps)—addressing novel view synthesis with a fully data-driven approach. While the encoder-decoder model offers faster inference due to its independent latent representation, the decoder-only LVSM achieves superior quality, scalability, and zero-shot generalization, outperforming previous state-of-the-art methods by 1.5 to 3.5 dB PSNR. Comprehensive evaluations across multiple datasets demonstrate that both LVSM variants achieve state-of-the-art novel view synthesis quality. Notably, our models surpass all previous methods even with reduced computational resources (1-2 GPUs).
Free, publicly-accessible full text available April 24, 2026.
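The decoder-only variant described above treats view synthesis as a pure token-to-token mapping: input-view patches and target-camera rays go in, target-view patches come out, with no 3D scene representation in between. The toy numpy sketch below illustrates that token flow only; the shapes, the ray-map encoding, the single-head attention, and all names (`patchify`, `self_attention`, etc.) are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of the decoder-only LVSM token flow (illustrative shapes only).
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, p):
    """Split an HxWxC image into (H/p * W/p) flattened p*p*C patch tokens."""
    H, W, C = img.shape
    patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single-head full self-attention over all tokens (input + target)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    att = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return att @ v

# Toy sizes: two 8x8 RGB input views, one target view, 4x4 patches.
p, d = 4, 32
imgs = [rng.standard_normal((8, 8, 3)) for _ in range(2)]
rays = [rng.standard_normal((8, 8, 6)) for _ in range(3)]  # per-pixel ray maps

# Input tokens: RGB patches concatenated with their ray patches, projected to d.
in_feats = np.concatenate(
    [np.concatenate([patchify(i, p), patchify(r, p)], axis=-1)
     for i, r in zip(imgs, rays[:2])])
W_in = rng.standard_normal((in_feats.shape[-1], d)) * 0.1
tokens_in = in_feats @ W_in

# Target tokens carry only the target camera's rays (the RGB is what we predict).
tgt_feats = patchify(rays[2], p)
W_tgt = rng.standard_normal((tgt_feats.shape[-1], d)) * 0.1
tokens_tgt = tgt_feats @ W_tgt

# Decoder-only: one token stream, no intermediate 3D scene representation.
x = np.concatenate([tokens_in, tokens_tgt])
for _ in range(2):  # two toy transformer blocks (attention only, no MLP)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    x = x + self_attention(x, Wq, Wk, Wv)

# Read RGB patches back off the target-token positions.
W_out = rng.standard_normal((d, p * p * 3)) * 0.1
out_patches = x[len(tokens_in):] @ W_out
print(out_patches.shape)  # → (4, 48): one p*p*3 patch per target token
```

The encoder-decoder variant would differ only in the middle: input tokens would first be compressed into a fixed number of learned latent tokens, and the decoder would attend to those latents instead of to the raw input tokens.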
-
Jin, Haian; Li, Yuan; Luan, Fujun; Xiangli, Yuanbo; Bi, Sai; Zhang, Kai; Xu, Zexiang; Sun, Jin; Snavely, Noah (Conference on Neural Information Processing Systems (NeurIPS)).
Free, publicly-accessible full text available December 10, 2025.
-
Tung, Joseph; Chou, Gene; Cai, Ruojin; Yang, Guandao; Zhang, Kai; Wetzstein, Gordon; Hariharan, Bharath; Snavely, Noah (Springer Nature Switzerland).
Free, publicly-accessible full text available November 3, 2025.