State-of-the-art 3D-aware generative models rely on coordinate-based MLPs to parameterize 3D radiance fields. While demonstrating impressive results, querying an MLP for every sample along each ray leads to slow rendering. Motivated by recent results in voxel-based novel view synthesis, we consider a sparse voxel grid representation for fast and 3D-consistent generative modeling. To obtain a compact representation of the scene and allow for scaling to higher voxel resolutions, our model disentangles the foreground object (modeled in 3D) from the background (modeled in 2D).
In contrast to existing approaches, our method requires only a single forward pass to generate a full 3D scene. It therefore allows for efficient rendering from arbitrary viewpoints while yielding 3D-consistent results with high visual fidelity.
@InProceedings{Schwarz2022NEURIPS,
author = {Schwarz, Katja and Sauer, Axel and Niemeyer, Michael and Liao, Yiyi and Geiger, Andreas},
title = {VoxGRAF: Fast 3D-Aware Image Synthesis with Sparse Voxel Grids},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2022}
}