X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation
Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, Tobias Ritschel
10/1/2020
Keywords: Dynamic/Temporal, 2D Image Neural Fields, Editable, Material/Lighting Estimation
Venue: SIGGRAPH Asia 2020
Bibtex:
@article{bemana2020xfields,
  author    = {Mojtaba Bemana and Karol Myszkowski and Hans-Peter Seidel and Tobias Ritschel},
  title     = {X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation},
  journal   = {ACM Transactions on Graphics (TOG)},
  publisher = {Association for Computing Machinery},
  year      = {2020},
  url       = {http://arxiv.org/abs/2010.00450v1},
  entrytype = {article},
  id        = {bemana2020xfields}
}
Abstract
We suggest representing an X-Field (a set of 2D images taken across different view, time, or illumination conditions, i.e., video, light field, reflectance fields, or combinations thereof) by learning a neural network (NN) that maps view, time, or light coordinates to 2D images. Executing this NN at new coordinates results in joint view, time, and light interpolation. The key idea that makes this workable is an NN that already knows the "basic tricks" of graphics (lighting, 3D projection, occlusion) in a hard-coded and differentiable form. The NN represents the input to that rendering as an implicit map that, for any view, time, or light coordinate and for any pixel, quantifies how the pixel will move if the view, time, or light coordinates change (the Jacobian of pixel position with respect to view, time, illumination, etc.). Our X-Field representation is trained for one scene within minutes, leading to a compact set of trainable parameters and hence real-time navigation in view, time, and illumination.
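To make the mechanism concrete, here is a minimal PyTorch sketch of the idea, not the authors' implementation: a coordinate network predicts a per-pixel Jacobian of pixel position with respect to the X-Field coordinate, and a differentiable bilinear warp plays the role of the hard-coded graphics step. All names (JacobianNet, warp, render), the plain-MLP architecture, and the single-neighbor warping are illustrative assumptions; the paper's actual decoder and its occlusion/blending handling are more elaborate.

import torch
import torch.nn.functional as F


class JacobianNet(torch.nn.Module):
    """Hypothetical stand-in for the paper's decoder: maps an X-Field
    coordinate (view, time, light, ...) of dimension D to a per-pixel
    Jacobian of pixel position w.r.t. that coordinate, shape (H, W, 2, D)."""

    def __init__(self, coord_dim: int, height: int, width: int, hidden: int = 128):
        super().__init__()
        self.shape = (height, width, 2, coord_dim)
        self.net = torch.nn.Sequential(
            torch.nn.Linear(coord_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, height * width * 2 * coord_dim),
        )

    def forward(self, coord: torch.Tensor) -> torch.Tensor:
        # coord: (B, D) -> Jacobian: (B, H, W, 2, D)
        return self.net(coord).view(coord.shape[0], *self.shape)


def warp(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Differentiable bilinear warp: the hard-coded 'graphics' part.
    image: (B, C, H, W); flow: (B, H, W, 2) in normalized [-1, 1] units."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
    return F.grid_sample(image, base + flow, align_corners=True)


def render(model: JacobianNet, img_a: torch.Tensor,
           coord_a: torch.Tensor, coord_q: torch.Tensor) -> torch.Tensor:
    """First-order interpolation: predict the Jacobian at the query
    coordinate, turn the offset to a captured neighbor into a per-pixel
    flow, and warp that neighbor to the query coordinate."""
    jac = model(coord_q)                                    # (B, H, W, 2, D)
    delta = (coord_a - coord_q).view(-1, 1, 1, jac.shape[-1], 1)
    flow = (jac @ delta).squeeze(-1)                        # (B, H, W, 2)
    return warp(img_a, flow)

Training would then, for each captured image I_q at coordinate c_q, pick a neighboring capture I_a at c_a and minimize a reconstruction loss such as ||render(model, I_a, c_a, c_q) - I_q||_1 with Adam; at test time the same render call is evaluated at unseen coordinates.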