TY - GEN
T1 - Depth-width tradeoffs in approximating natural functions with neural networks
AU - Safran, Itay
AU - Shamir, Ohad
N1 - Publisher Copyright:
Copyright © 2017 by the author(s).
PY - 2017/1/1
Y1 - 2017/1/1
N2 - We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the L1 norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.
AB - We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the L1 norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.
UR - http://www.scopus.com/inward/record.url?scp=85048588620&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85048588620
T3 - 34th International Conference on Machine Learning, ICML 2017
SP - 4550
EP - 4572
BT - 34th International Conference on Machine Learning, ICML 2017
PB - International Machine Learning Society (IMLS)
T2 - 34th International Conference on Machine Learning, ICML 2017
Y2 - 6 August 2017 through 11 August 2017
ER -