Neural Voxel Renderer

Learning an Accurate and Controllable Rendering Tool

Konstantinos Rematas   Vittorio Ferrari
Google Research



Abstract

We present a neural rendering framework that maps a voxelized scene into a high-quality image. Highly textured objects and scene element interactions are realistically rendered by our method, despite the input being only a rough voxelized representation. Moreover, our approach allows controllable rendering: geometric and appearance modifications in the input are accurately propagated to the output. The user can move, rotate, and scale an object, change its appearance and texture, or modify the position of the light, and all these edits are reflected in the final rendering. We demonstrate the effectiveness of our approach by rendering scenes with varying appearance, from a single color per object to complex, high-frequency textures. We show that our rerendering network can generate very detailed images that precisely represent the appearance of the input scene. Our experiments illustrate that our approach achieves more accurate image synthesis results than alternatives and can also handle low voxel grid resolutions. Finally, we show how our neural rendering framework can capture and faithfully render objects from real images and from a diverse set of classes.
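To make the input/output contract concrete, below is a minimal sketch of a network that maps a colored voxel grid to an RGB image. This is not the paper's Neural Voxel Renderer architecture; the class name, the 64^3 grid with four channels (RGB plus occupancy), and all layer choices here are illustrative assumptions about what "voxelized scene in, rendered image out" could look like.

```python
import torch
import torch.nn as nn

class VoxelsToImage(nn.Module):
    """Illustrative sketch only: coarse voxel grid -> RGB image.

    Not the architecture from the paper; a minimal stand-in for the
    input/output contract described in the abstract. Grid size,
    channel counts, and layer choices are assumptions.
    """

    def __init__(self, grid=64, vox_ch=4, img_size=256):
        super().__init__()
        # 3D encoder over the colored occupancy grid.
        self.enc3d = nn.Sequential(
            nn.Conv3d(vox_ch, 16, 4, stride=2, padding=1),  # grid/2
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 4, stride=2, padding=1),      # grid/4
            nn.ReLU(inplace=True),
        )
        d = grid // 4
        # Fold the depth axis into channels to obtain a 2D feature map.
        self.to2d = nn.Conv2d(32 * d, 128, 1)
        # 2D decoder upsampling to the target image resolution.
        ups, ch, size = [], 128, d
        while size < img_size:
            ups += [nn.Upsample(scale_factor=2, mode="nearest"),
                    nn.Conv2d(ch, max(ch // 2, 16), 3, padding=1),
                    nn.ReLU(inplace=True)]
            ch, size = max(ch // 2, 16), size * 2
        ups += [nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid()]
        self.dec2d = nn.Sequential(*ups)

    def forward(self, vox):              # vox: (B, vox_ch, D, H, W)
        f = self.enc3d(vox)              # (B, 32, D/4, H/4, W/4)
        b, c, d, h, w = f.shape
        f = f.reshape(b, c * d, h, w)    # depth folded into channels
        return self.dec2d(self.to2d(f))  # (B, 3, img_size, img_size)

# Example: one 64^3 grid with RGB + occupancy channels.
img = VoxelsToImage()(torch.rand(1, 4, 64, 64, 64))
print(img.shape)  # torch.Size([1, 3, 256, 256])
```

In a setup like this, the controllable edits described above amount to modifying the voxel input (e.g., rotating or translating the occupancy grid, recoloring voxels) and running the forward pass again, so that changes in the input scene propagate directly to the rendered output.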

[Result video galleries: ShapeNet Chairs (default setting) · ShapeNet Cars (default setting) · ShapeNet (textured setting) · Appearance Edits · Geometric Edits · Realistic Lighting · Pix3D]