A Structured Dictionary Perspective on Implicit Neural Representations

Gizem Yüce, Guillermo Ortiz-Jiménez, Beril Besbinar, Pascal Frossard

December 3, 2021

Keywords: Fundamentals

Venue: CVPR 2022

Bibtex:

@article{yuce2022a,
  author = {Gizem Yuce and Guillermo Ortiz-Jimenez and Beril Besbinar and Pascal Frossard},
  title = {A Structured Dictionary Perspective on Implicit Neural Representations},
  year = {2021},
  month = {Dec},
  url = {http://arxiv.org/abs/2112.01917v2}
}

Abstract

Implicit neural representations (INRs) have recently emerged as a promising alternative to classical discretized representations of signals. Nevertheless, despite their practical success, we still do not understand how INRs represent signals. We propose a novel unified perspective to theoretically analyse INRs. Leveraging results from harmonic analysis and deep learning theory, we show that most INR families are analogous to structured signal dictionaries whose atoms are integer harmonics of the set of initial mapping frequencies. This structure allows INRs to express signals with an exponentially increasing frequency support using a number of parameters that only grows linearly with depth. We also explore the inductive bias of INRs, exploiting recent results about the empirical neural tangent kernel (NTK). Specifically, we show that the eigenfunctions of the NTK can be seen as dictionary atoms whose inner product with the target signal determines the final reconstruction performance. In this regard, we reveal that meta-learning has a reshaping effect on the NTK analogous to dictionary learning, building dictionary atoms as a combination of the examples seen during meta-training. Our results make it possible to design and tune novel INR architectures, and can also be of interest to the wider deep learning theory community.
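The integer-harmonic structure described in the abstract can be illustrated with a minimal numerical sketch (not the authors' code; the network size and weights below are hypothetical choices for illustration). Composing two sine layers, f(x) = sin(a · sin(ωx)), produces a signal whose spectrum is supported only on odd integer harmonics of the initial mapping frequency ω, by the Jacobi-Anger expansion:

```python
import numpy as np

# Toy two-layer sine network f(x) = sin(a * sin(omega * x)).
# omega plays the role of an initial mapping frequency; a is a
# second-layer weight. Both values are chosen for illustration.
omega = 4
a = 2.0
n = 1024
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(a * np.sin(omega * x))

# The Jacobi-Anger expansion gives
#   sin(a * sin(t)) = 2 * sum_{k odd} J_k(a) * sin(k * t),
# so the spectrum of f is supported on odd multiples of omega
# (4, 12, 20, ...), with coefficients decaying like Bessel functions.
spectrum = np.abs(np.fft.rfft(f)) / n
peaks = np.where(spectrum > 1e-3)[0]
print(peaks.tolist())  # [4, 12, 20]
```

Stacking more layers compounds this effect, which is the mechanism behind the exponential growth of the frequency support with depth mentioned in the abstract.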
