WildFusion

Learning 3D-Aware Latent Diffusion Models in View Space

Katja Schwarz¹, Seung Wook Kim²,³,⁴, Jun Gao²,³,⁴, Sanja Fidler²,³,⁴, Andreas Geiger¹, Karsten Kreis²
¹University of Tübingen, ²NVIDIA, ³Vector Institute, ⁴University of Toronto
ICLR 2024
WildFusion is a two-stage approach to 3D-aware image synthesis, trained only on unposed single-view images. Left: Input images, novel views, and geometry from our first-stage autoencoder. Right: Novel samples and geometry from our second-stage latent diffusion model and 3DGP for the ImageNet classes "macaw" (top), "king penguin" (middle), and "kimono" (bottom).


Abstract

Modern learning-based approaches to 3D-aware image synthesis achieve high photorealism and 3D-consistent viewpoint changes for the generated images. Existing approaches represent instances in a shared canonical space. However, for in-the-wild datasets a shared canonical system can be difficult to define or might not even exist. In this work, we instead model instances in view space, alleviating the need for posed images and learned camera distributions. We find that in this setting, existing GAN-based methods are prone to generating flat geometry and struggle with distribution coverage. We hence propose WildFusion, a new approach to 3D-aware image synthesis based on latent diffusion models (LDMs). We first train an autoencoder that infers a compressed latent representation, which additionally captures the images’ underlying 3D structure and enables not only reconstruction but also novel view synthesis. To learn a faithful 3D representation, we leverage cues from monocular depth prediction. Then, we train a diffusion model in the 3D-aware latent space, thereby enabling synthesis of high-quality 3D-consistent image samples, outperforming recent state-of-the-art GAN-based methods. Importantly, our 3D-aware LDM is trained without any direct supervision from multiview images or 3D geometry and does not require posed images or learned pose or camera distributions. It directly learns a 3D representation without relying on canonical camera coordinates. This opens up promising research avenues for scalable 3D-aware image synthesis and 3D content creation from in-the-wild image data.

Novel samples from ImageNet classes generated by WildFusion.

3D-Aware Image Synthesis with Latent Diffusion Models in View Space

While existing 3D-aware generative models achieve high photorealism and 3D-consistent viewpoint control, the vast majority of approaches only consider single-class, aligned data such as human faces or cat faces. We identify two main causes for this:

  1. Most methods model instances in a shared canonical space and therefore require posed images or a priori camera pose distributions, which are hard to obtain, or even to define, for unaligned in-the-wild data.
  2. GAN-based training struggles with distribution coverage on diverse, multi-modal datasets and is prone to mode collapse.

We hence propose WildFusion, a new approach to 3D-aware image synthesis based on latent diffusion models (LDMs) that addresses these limitations. We train our approach in view space, which removes the need for camera poses and a priori camera pose distributions and unlocks 3D-aware image synthesis on unaligned, diverse datasets. To ensure distribution coverage on such diverse datasets, we build our approach upon LDMs instead of GANs.

Our 3D-aware LDM, called WildFusion, follows the two-stage approach of LDMs: First, we train a powerful 3D-aware autoencoder from large collections of unposed images that simultaneously performs compression and enables novel-view synthesis. The autoencoder is trained with pixel-space reconstruction losses on the input views and uses adversarial training to supervise novel views; because the novel views are supervised only adversarially, no multiview data is required. Adding monocular depth cues helps the model learn a faithful 3D representation and further improves novel-view synthesis. In the second stage, we train a diffusion model in the compressed, 3D-aware latent space, which enables us to synthesize novel samples and turns the novel-view synthesis system, i.e., our autoencoder, into a 3D-aware generative model.
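
To make the first stage more concrete, the PyTorch sketch below combines the three supervision signals described above: pixel-space reconstruction of the input view, adversarial supervision of a rendered novel view, and a monocular depth cue. The module names (encoder, renderer, discriminator, depth_estimator), their interfaces, and the specific loss forms and weights are assumptions made for illustration, not the exact objectives from the paper.

    import torch
    import torch.nn.functional as F

    def autoencoder_losses(x, encoder, renderer, discriminator, depth_estimator,
                           input_cam, novel_cam, w_adv=0.1, w_depth=1.0):
        """First-stage training sketch: reconstruct the input view, supervise a
        rendered novel view adversarially, and add a monocular depth cue.
        All modules, interfaces, and loss weights are illustrative assumptions."""
        z = encoder(x)                                   # compressed, 3D-aware latent

        # Pixel-space reconstruction loss on the input view
        recon, depth_pred = renderer(z, input_cam)       # assumed to return (image, depth)
        loss_rec = F.l1_loss(recon, x)

        # The novel view has no ground truth, so it is supervised adversarially
        novel_view, _ = renderer(z, novel_cam)
        loss_adv = F.softplus(-discriminator(novel_view)).mean()  # non-saturating G loss

        # Monocular depth cue from a frozen, off-the-shelf depth predictor
        with torch.no_grad():
            depth_pseudo = depth_estimator(x)
        loss_depth = F.l1_loss(depth_pred, depth_pseudo)  # simplified; a robust/scale-invariant loss is a common choice

        return loss_rec + w_adv * loss_adv + w_depth * loss_depth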

Technical Contributions

  1. We model instances in view space rather than a shared canonical space, removing the need for posed images and learned camera or pose distributions.
  2. We train a 3D-aware autoencoder from unposed single-view images without multiview supervision that combines compression with novel-view synthesis, aided by adversarial supervision of novel views and monocular depth cues.
  3. We train a latent diffusion model in the resulting compressed 3D-aware latent space, yielding high-quality, 3D-consistent samples with strong distribution coverage.

Baseline Comparison

We find that existing GAN-based approaches suffer from very low sample diversity (mode collapse) on multi-modal datasets with complex camera distributions such as ImageNet. The results below show samples from WildFusion and 3DGP, the strongest baseline, where each row corresponds to samples of one class. While 3DGP collapses and produces almost identical samples within a class, WildFusion produces diverse, high-quality samples because it builds upon latent diffusion models.

Autoencoder for Compression and Novel View Synthesis

Our 3D-aware autoencoder performs compression and enables novel-view synthesis. Notably, it is trained from large collections of unposed images without any direct multiview supervision. The learned compressed 3D-aware latent space can then be used to train a latent diffusion model. In addition, our autoencoder performs novel-view synthesis for a single given image more efficiently than common GAN-based methods, which rely on GAN inversion. We show pairs of input images and synthesized novel views from our autoencoder below.
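
Below is a minimal sketch of single-image novel-view synthesis with such an autoencoder, using the same hypothetical encoder and renderer interfaces as in the earlier sketch; the key point is that a single feed-forward pass replaces per-image GAN inversion.

    import torch

    @torch.no_grad()
    def synthesize_novel_view(image, encoder, renderer, target_cam):
        """Encode once, then render from any target camera in view space.
        No per-image optimization (GAN inversion) is needed.
        encoder and renderer are assumed interfaces, not the released API."""
        z = encoder(image.unsqueeze(0))             # [1, ...] compressed 3D-aware latent
        novel_view, depth = renderer(z, target_cam) # render image and depth from the target camera
        return novel_view.squeeze(0), depth.squeeze(0)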

Generated Samples

We train a latent diffusion model on the compressed 3D-aware latent space of the 3D-aware autoencoder. Our 3D-aware LDM enables high-quality 3D-aware image synthesis with reasonable geometry and strong distribution coverage / high sample diversity.
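
A possible sampling loop is sketched below: the second-stage diffusion model draws latents from Gaussian noise, and the first-stage decoder renders each latent from one or more viewpoints. The diffusion.sample and renderer interfaces are assumptions for illustration, not the released API.

    import torch

    @torch.no_grad()
    def sample_3d_aware(diffusion, renderer, cameras, num_samples=4):
        """Unconditional sampling sketch: denoise Gaussian noise into 3D-aware
        latents, then render each latent from several cameras to obtain
        3D-consistent views. All interfaces are illustrative assumptions."""
        z = diffusion.sample(num_samples)                 # [N, ...] latents from the LDM
        views = [renderer(z, cam)[0] for cam in cameras]  # render every latent from each camera
        return torch.stack(views, dim=1)                  # [N, num_views, C, H, W]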

Interpolation

Using WildFusion, we can interpolate in a semantically meaningful way between two given single images while simultaneously changing the viewpoint; note that the geometry changes accordingly. Specifically, we encode both images into the latent space, further encode them into the diffusion model’s Gaussian prior space (inverse DDIM), interpolate the resulting encodings, and generate the corresponding 3D images along the interpolation path.
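
The procedure can be sketched as follows, assuming hypothetical ddim.invert and ddim.generate helpers for deterministic DDIM inversion and sampling; linear blending in the prior space is used here for simplicity, and spherical interpolation is a common alternative.

    import torch

    @torch.no_grad()
    def interpolate(img_a, img_b, encoder, ddim, renderer, camera, num_frames=8):
        """Interpolation sketch: encode both images, invert the latents into the
        diffusion model's Gaussian prior with DDIM, blend the encodings, and
        generate/render each blend. All interfaces are illustrative assumptions."""
        z_a, z_b = encoder(img_a), encoder(img_b)       # 3D-aware latents
        n_a, n_b = ddim.invert(z_a), ddim.invert(z_b)   # encodings in the Gaussian prior
        frames = []
        for t in torch.linspace(0.0, 1.0, num_frames):
            n_t = (1.0 - t) * n_a + t * n_b             # blend in prior space
            z_t = ddim.generate(n_t)                    # deterministic DDIM sampling back to latent space
            frames.append(renderer(z_t, camera)[0])     # render from a chosen viewpoint
        return torch.stack(frames)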

Generative Resampling with Different Noise Levels

We can further use WildFusion to perform 3D-aware generative image resampling. Given an image, we forward diffuse its latent encoding for varying numbers of steps and re-generate from the partially noised encodings. Depending on how far we diffuse, we control how strongly the sample adheres to the input image. For the samples below, we gradually increase the number of diffusion steps from left to right.
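
A sketch of this resampling procedure is shown below; diffusion.q_sample and diffusion.denoise_from are assumed interfaces for the forward and reverse processes, and the strength parameter is introduced here only for illustration.

    import torch

    @torch.no_grad()
    def resample(image, encoder, diffusion, renderer, camera, strength=0.5):
        """Generative resampling sketch: partially noise the latent of a given
        image and re-generate from that intermediate step. Higher strength
        diffuses further and deviates more from the input; lower values stay
        closer to it. All interfaces are illustrative assumptions."""
        z = encoder(image)
        t = int(strength * diffusion.num_timesteps)   # how many forward diffusion steps to apply
        z_t = diffusion.q_sample(z, t)                # forward diffusion up to step t
        z_new = diffusion.denoise_from(z_t, t)        # reverse process starting from step t
        return renderer(z_new, camera)[0]             # render the resampled latent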

Citation

      @InProceedings{Schwarz2024ICLR,
        author = {Schwarz, Katja and Wook Kim, Seung and Gao, Jun and Fidler, Sanja and Geiger, Andreas and Kreis, Karsten},
        title = {WildFusion: Learning 3D-Aware Latent Diffusion Models in View Space},
        booktitle = {International Conference on Learning Representations (ICLR)},
        year = {2024}
      }