Abstract
The remarkable development of deep learning over the past decade has relied heavily on sophisticated heuristics and tricks. To better exploit its potential in the coming decade, perhaps a rigorous framework for reasoning about deep learning is needed, which, however, is not easy to build due to the intricate details of modern neural networks. For near-term purposes, a practical alternative is to develop a mathematically tractable surrogate model that nonetheless retains many characteristics of deep learning models.
This talk introduces a model of this kind as a tool toward understanding deep learning. The effectiveness of this model, which we term the Layer-Peeled Model, is evidenced by two use cases. First, we use this model to explain an empirical pattern of deep learning recently discovered by David Donoho and his students. Second, this model predicts a hitherto unknown phenomenon in deep learning training that we term Minority Collapse. This talk is based on joint work with Cong Fang, Hangfeng He, and Qi Long.