BlockFusion: Expandable 3D Scene Generation
using Latent Tri-plane Extrapolation
ACM Transactions on Graphics (SIGGRAPH 2024)
  Selected as SIGGRAPH 2024 Trailer Video

† Corresponding Author
1The University of Tokyo    2Shanghai Jiao Tong University    3Australian National University    4Tencent XR Vision Labs

Abstract


We present BlockFusion, a diffusion-based model that generates 3D scenes as unit blocks and seamlessly incorporates new blocks to extend the scene. BlockFusion is trained on datasets of 3D blocks that are randomly cropped from complete 3D scene meshes. Through per-block fitting, all training blocks are converted into hybrid neural fields: a tri-plane containing the geometry features, followed by a Multi-layer Perceptron (MLP) for decoding the signed distance values. A variational auto-encoder is employed to compress the tri-planes into a latent tri-plane space, on which the denoising diffusion process is performed. Diffusion applied to the latent representations allows for high-quality and diverse 3D scene generation. To expand a scene during generation, one needs only to append empty blocks that overlap with the current scene and extrapolate the existing latent tri-planes to populate the new blocks. The extrapolation is done by conditioning the generation process on feature samples from the overlapping tri-planes during the denoising iterations. Latent tri-plane extrapolation produces semantically and geometrically meaningful transitions that harmoniously blend with the existing scene. A 2D layout conditioning mechanism is used to control the placement and arrangement of scene elements. Experimental results indicate that BlockFusion is capable of generating diverse, geometrically consistent and unbounded large 3D scenes with unprecedented high-quality shapes in both indoor and outdoor scenarios.
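For concreteness, the following is a minimal sketch of such a hybrid neural field for a single block, assuming PyTorch; the class name TriplaneSDF, the resolution, and the channel counts are illustrative placeholders rather than the released implementation. Three axis-aligned feature planes are bilinearly sampled at a query point's 2D projections, and a small MLP decodes the concatenated features into a signed distance value.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneSDF(nn.Module):
    """Hybrid neural field: three axis-aligned feature planes plus a small
    MLP that decodes interpolated features into a signed distance value.
    Resolution and channel counts are illustrative, not the paper's."""

    def __init__(self, res=128, channels=32, hidden=128):
        super().__init__()
        # Feature planes for the XY, XZ and YZ cross-sections of the block.
        self.planes = nn.Parameter(torch.zeros(3, channels, res, res))
        self.mlp = nn.Sequential(
            nn.Linear(3 * channels, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance
        )

    def forward(self, xyz):
        # xyz: (N, 3) query points normalized to [-1, 1] within the block.
        coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
        feats = []
        for plane, uv in zip(self.planes, coords):
            # Bilinear sampling of the plane at the projected 2D coordinates.
            sampled = F.grid_sample(
                plane[None], uv[None, :, None, :],
                mode="bilinear", align_corners=True)  # (1, C, N, 1)
            feats.append(sampled[0, :, :, 0].t())      # (N, C)
        return self.mlp(torch.cat(feats, dim=-1))       # (N, 1) SDF values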

BlockFusion Training Pipeline

The training contains three steps: First, the training 3D blocks are converted to raw tri-planes via per-block shape fitting. Then, an auto-encoder compresses the raw tri-planes into a more compact latent tri-plane space. Lastly, a DDPM is trained to approximate the distribution of latent tri-planes; during this process, layout control can also be integrated. A sketch of the overall data flow is given below.
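A hypothetical driver for these three stages might look as follows. Every function name here (crop_random_blocks, fit_triplane, train_triplane_vae, train_latent_ddpm) is a placeholder used only to show the data flow, not the authors' API.

def train_blockfusion(scene_meshes):
    # 1) Per-block fitting: crop cubic blocks and fit a raw tri-plane to each.
    blocks = crop_random_blocks(scene_meshes)
    raw_triplanes = [fit_triplane(b) for b in blocks]   # e.g. via SDF regression

    # 2) Compression: train a VAE mapping raw tri-planes to compact latent
    #    tri-planes (and back), then encode the whole dataset.
    vae = train_triplane_vae(raw_triplanes)
    latents = [vae.encode(tp) for tp in raw_triplanes]

    # 3) Diffusion: train a DDPM on the latent tri-planes; an optional 2D
    #    layout map can be supplied as conditioning at this stage.
    ddpm = train_latent_ddpm(latents, layout_maps=None)
    return vae, ddpm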

We convert scene meshes to watertight meshes and then randomly crop them into cubic blocks. Per-block fitting converts all training blocks into raw tri-planes. The raw tri-planes are compressed into a latent tri-plane space for efficient 3D representation, and we train the diffusion model on this latent tri-plane space. A rough sketch of the fitting step follows.
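As one way to realize the per-block fitting, the TriplaneSDF field above could be regressed against signed distances sampled from the watertight block mesh (here using trimesh). The sampling mix, step count, and learning rate are assumptions, and the block mesh is assumed to be pre-normalized to span [-1, 1]^3.

import numpy as np
import torch
import trimesh

def fit_triplane(mesh: trimesh.Trimesh, steps=2000, n_pts=4096, lr=1e-2):
    """Fit one TriplaneSDF to a single cubic block by regressing signed
    distances sampled from the watertight mesh (hyperparameters illustrative)."""
    field = TriplaneSDF()
    opt = torch.optim.Adam(field.parameters(), lr=lr)
    for _ in range(steps):
        # Mix of near-surface and uniform samples inside the unit block.
        surf = mesh.sample(n_pts // 2) + np.random.normal(0, 0.02, (n_pts // 2, 3))
        unif = np.random.uniform(-1, 1, (n_pts // 2, 3))
        pts = np.concatenate([surf, unif], axis=0)
        # trimesh returns positive values inside the mesh; negate so the
        # SDF is negative inside, following the usual convention.
        sdf = -trimesh.proximity.signed_distance(mesh, pts)
        pts_t = torch.as_tensor(pts, dtype=torch.float32)
        sdf_t = torch.as_tensor(sdf, dtype=torch.float32)[:, None]
        loss = torch.nn.functional.mse_loss(field(pts_t), sdf_t)
        opt.zero_grad(); loss.backward(); opt.step()
    return field.planes.detach()  # the fitted raw tri-plane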

To control the generation process, we add floor-layout control by conditioning the model on 2D bounding-box projections of the objects.
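One simple way to realize such a layout condition is to rasterize the 2D bounding boxes into a per-class occupancy map and feed it to the denoiser as extra input channels; the encoding below is an assumed example and not necessarily the exact scheme used in the paper.

import numpy as np

def rasterize_layout(boxes, res=128):
    """Rasterize 2D object bounding boxes (floor-plane projections) into a
    per-class layout map. `boxes` is a list of (x0, y0, x1, y1, class_id)
    in normalized [0, 1] coordinates; the channel layout is an assumption."""
    n_classes = 1 + max((b[4] for b in boxes), default=0)
    layout = np.zeros((n_classes, res, res), dtype=np.float32)
    for x0, y0, x1, y1, cls in boxes:
        c0, c1 = int(x0 * res), int(np.ceil(x1 * res))
        r0, r1 = int(y0 * res), int(np.ceil(y1 * res))
        layout[cls, r0:r1, c0:c1] = 1.0
    # The resulting map can be concatenated to the U-Net input channels
    # (or injected through cross-attention) as the layout condition.
    return layout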

Latent Tri-plane Extrapolation

We propose a 3D-aware denoising U-Net to facilitate diffusion training on tri-planes. We leverage the pre-trained latent tri-plane diffusion model to expand a scene: given a known block P and an unknown block Q, the goal is to extrapolate the known latent tri-plane z_P to obtain the unknown tri-plane z_Q (top row). This tri-plane extrapolation is factored into the extrapolation of the three 2D planes separately (bottom row).
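A RePaint-style sketch of this conditioned denoising is shown below: at every reverse step, the overlapping region of the new block's latent is overwritten with a correspondingly-noised copy of the known latents, so the non-overlapping region is denoised consistently with them. The ddpm.q_sample / ddpm.p_sample methods and the single-tensor treatment of the tri-plane (rather than three separate planes) are simplifications for illustration, not the paper's exact procedure.

import torch

@torch.no_grad()
def extrapolate_block(ddpm, z_known, overlap_mask, steps=1000):
    """z_known: known latents z_P aligned to block Q's frame (zero outside
    the overlap); overlap_mask: 1 where block Q overlaps the known scene."""
    z = torch.randn_like(z_known)                 # start block Q from pure noise
    for t in reversed(range(steps)):
        # Noise the known latents to the current timestep and paste them
        # into the overlapping region of the sample.
        z_known_t = ddpm.q_sample(z_known, t)     # forward-diffused z_P
        z = overlap_mask * z_known_t + (1 - overlap_mask) * z
        # Standard reverse diffusion step on the composed latent.
        z = ddpm.p_sample(z, t)
    return z                                      # latent tri-plane for block Q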

Results

We consider Text2Room [Höllein et al. 2023] as the baseline for indoor scene generation. Text2Room takes a text prompt as input, whereas ours is based on a 2D layout map. For a fair comparison, we describe our input room layout in natural language and concatenate it into the text prompt for Text2Room.

Citation


This website is based on mip-NeRF.