SIGNET: Efficient Neural Representation for Light Fields
Brandon Yushan Feng, Amitabh Varshney
10/21/2021
Keywords: Graphics, 2D Image Neural Fields, Compression, Image-Based Rendering, Positional Encoding
Venue: ICCV 2021
Bibtex:
@inproceedings{feng2021signet,
  title = {{SIGNET: Efficient Neural Representation for Light Fields}},
  author = {Brandon Yushan Feng and Amitabh Varshney},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages = {14224--14233},
  year = {2021}
}
Abstract
We present a novel neural representation for light field content that enables compact storage and easy local reconstruction with high fidelity. We use a fully-connected neural network to learn the mapping function between each light field pixel's coordinates and its corresponding color values. Since neural networks that simply take in raw coordinates are unable to accurately learn data containing fine details, we present an input transformation strategy based on the Gegenbauer polynomials, which previously showed theoretical advantages over the Fourier basis. We conduct experiments showing that our Gegenbauer-based design, combined with sinusoidal activation functions, leads to better light field reconstruction quality than a variety of network designs, including those with Fourier-inspired techniques introduced by prior works. Moreover, our SInusoidal Gegenbauer NETwork, or SIGNET, can represent light field scenes more compactly than state-of-the-art compression methods while maintaining comparable reconstruction quality. SIGNET also innately allows random access to encoded light field pixels due to its functional design. We further demonstrate SIGNET's super-resolution capability without any additional training.
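The design described above, a coordinate-based MLP whose inputs are first expanded in a Gegenbauer polynomial basis and whose hidden layers use sinusoidal activations, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the layer widths, weight initialization, and polynomial degree below are illustrative assumptions, and the training loop is omitted.

```python
import numpy as np

def gegenbauer_features(x, degree, alpha=0.5):
    """Evaluate Gegenbauer polynomials C_n^alpha(x) for n = 0..degree
    via the standard three-term recurrence. x: array with values in [-1, 1]."""
    feats = [np.ones_like(x)]                # C_0^alpha(x) = 1
    if degree >= 1:
        feats.append(2.0 * alpha * x)        # C_1^alpha(x) = 2*alpha*x
    for n in range(2, degree + 1):
        # n*C_n = 2x(n+alpha-1)*C_{n-1} - (n+2*alpha-2)*C_{n-2}
        c = (2.0 * x * (n + alpha - 1) * feats[-1]
             - (n + 2.0 * alpha - 2) * feats[-2]) / n
        feats.append(c)
    return np.stack(feats, axis=-1)          # shape (..., degree + 1)

class SinusoidalMLP:
    """Fully-connected network with sinusoidal activations.
    Forward pass only, with random (untrained) weights."""
    def __init__(self, in_dim, hidden=64, out_dim=3, layers=3, seed=0):
        rng = np.random.default_rng(seed)
        dims = [in_dim] + [hidden] * layers + [out_dim]
        self.W = [rng.normal(0.0, np.sqrt(1.0 / d_in), size=(d_in, d_out))
                  for d_in, d_out in zip(dims[:-1], dims[1:])]
        self.b = [np.zeros(d) for d in dims[1:]]

    def __call__(self, z):
        for W, b in zip(self.W[:-1], self.b[:-1]):
            z = np.sin(z @ W + b)            # sinusoidal activation
        return z @ self.W[-1] + self.b[-1]   # linear output layer (RGB)

# 4D light-field coordinates (u, v, s, t) in [-1, 1], encoded per dimension
coords = np.random.default_rng(1).uniform(-1.0, 1.0, size=(8, 4))
enc = gegenbauer_features(coords, degree=5).reshape(8, -1)  # (8, 4*6)
net = SinusoidalMLP(in_dim=enc.shape[1])
rgb = net(enc)                               # (8, 3) predicted colors
```

Because the representation is a continuous function of the coordinates, any pixel can be queried independently, which is what enables the random access and training-free super-resolution mentioned in the abstract.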