Abstract
Neural networks are known to be a class of highly expressive functions able
to fit even random input-output mappings with $100\%$ accuracy. In this work,
we present properties of neural networks that complement this aspect of
expressivity. By using tools from Fourier analysis, we show that deep ReLU
networks are biased towards low frequency functions, meaning that they cannot
have local fluctuations without affecting their global behavior. Intuitively,
this property is in line with the observation that over-parameterized networks
find simple patterns that generalize across data samples. We also investigate
how the shape of the data manifold affects expressivity by showing evidence
that learning high frequencies gets easier with increasing manifold
complexity, and present a theoretical understanding of this behavior. Finally,
we study the robustness of a network's frequency components to parameter
perturbation, and find that the parameters must be finely tuned to express
high frequency functions.
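
The low frequency bias described above can be illustrated with a small experiment. The following is a minimal sketch (not the paper's actual setup): it trains a small ReLU network on a target mixing a low and a high frequency sine and tracks the Fourier magnitude of the network's fit at each of the two frequencies during training. The frequencies, network widths, learning rate, and step counts are illustrative assumptions.

# Minimal sketch (illustrative, not the paper's experimental setup):
# train a small ReLU MLP on a mix of a low- and a high-frequency sine and
# track how much of each frequency the network has fit over training.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Uniform grid on [0, 1): the k-th rfft bin then corresponds to frequency k.
n = 256
x = torch.linspace(0.0, 1.0, n + 1)[:-1].unsqueeze(1)
low_k, high_k = 2, 20                      # assumed target frequencies
y = torch.sin(2 * np.pi * low_k * x) + torch.sin(2 * np.pi * high_k * x)

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def spectrum(f):
    # Magnitude of the discrete Fourier transform of the network's fit.
    return np.abs(np.fft.rfft(f.detach().numpy().ravel())) / n

for step in range(5001):
    opt.zero_grad()
    pred = net(x)
    loss = ((pred - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        mag = spectrum(pred)
        print(f"step {step:5d}  loss {loss.item():.4f}  "
              f"|F[{low_k}]| = {mag[low_k]:.3f}  |F[{high_k}]| = {mag[high_k]:.3f}")

# In this normalization a fully fit unit-amplitude sine gives a bin magnitude
# of ~0.5; the low-frequency bin typically reaches this long before the
# high-frequency bin does, which is the spectral bias the abstract describes.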