Neural directional distance field object representation for uni-directional path-traced rendering
Faster rendering of synthetic images is a core problem in the field of computer graphics. Rendering algorithms, such as path tracing, depend on parameters such as image size, number of light bounces, and number of samples per pixel, all of which are fixed once a desired image quality is chosen. Render time also depends on the size and complexity of the scene being rendered. One of the largest bottlenecks, particularly for very large scenes, is querying for the objects intersected by a given ray. Changing the data structure that represents the objects in the scene can reduce render time; however, a different scene representation requires modifying the rendering algorithm. In this paper, we (a) introduce the directional distance field as a functional representation of an object; (b) show how directional distance functions, when stored as a neural network, can be optimized; and (c) show how such an object can be rendered with a modified path-tracing algorithm.
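To make the idea concrete, the sketch below (not the authors' code) illustrates the query interface a neural directional distance field could expose to a path tracer: a network maps a point and a ray direction to the distance of the first surface hit, so a single network evaluation stands in for acceleration-structure traversal. The layer sizes, the untrained random weights, and the "miss" handling are assumptions made for illustration only.

```python
# Minimal sketch of a neural directional distance field (DDF) query,
# assuming the field maps (position, direction) -> distance to first hit.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random MLP weights; in practice these would be trained so that
    ddf(x, d) approximates the distance from x to the surface along d."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# input: 3D position + 3D unit direction -> scalar distance (sizes assumed)
params = init_mlp([6, 64, 64, 1])

def ddf(params, x, d):
    """Evaluate the directional distance network at point x, direction d."""
    h = np.concatenate([x, d])
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)      # ReLU hidden layers
    W, b = params[-1]
    return float(h @ W + b)                 # predicted hit distance t

def intersect(params, origin, direction, t_max=1e3):
    """Replace BVH/primitive intersection in the path tracer with a
    single network query; non-positive or distant t is treated as a miss."""
    t = ddf(params, origin, direction)
    if t <= 0.0 or t >= t_max:
        return None
    return origin + t * direction           # hit point passed on to shading

# one intersection query for a ray, as a path tracer would issue per bounce
hit = intersect(params, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```

The appeal of such an interface is that the cost of an intersection query becomes a fixed number of network evaluations, independent of scene complexity, which is what motivates replacing geometric queries in the modified path tracer.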