Organizers: Italia De Feis and Flavio Lombardi (CNR-IAC)
https://www.aim.iac.cnr.it/home
Overparametrization in machine learning: insights from linear models
Andrea Montanari
Department of Electrical Engineering, Department of Statistics,
and (by courtesy) Department of Mathematics, Stanford University
IAC YouTube Channel: https://www.youtube.com/watch?v=fKN1HseAXXQ
Abstract:
Deep learning models are often trained in a regime that is forbidden by classical statistical learning theory.
The model complexity is often larger than the sample size, and the training error does not concentrate around the test error. In fact, the model complexity can be so large that the network interpolates noisy training data. Despite this, it performs well on fresh test data, a phenomenon that has been dubbed 'benign overfitting'.
I will review recent progress towards understanding and characterizing this phenomenon in linear models.
[Based on joint work with Chen Cheng]
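
As a rough illustration of the phenomenon described in the abstract (not taken from the talk itself; the sample sizes, covariance spectrum, and noise level below are assumptions chosen purely for the example), the following Python sketch fits the minimum-norm interpolator in an overparametrized linear model whose features have a few strong signal directions and many weak ones, a setting in which benign overfitting is known to occur:

import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration: n samples, p >> n features,
# k high-variance directions that carry the signal.
n, p, k = 100, 2000, 5
sigma = 0.5  # label noise level (assumed)

# Covariance spectrum in the spirit of benign-overfitting analyses:
# k strong directions (std 1) plus many weak ones (std 0.05) that
# absorb the noise component of the interpolator.
scales = np.concatenate([np.ones(k), 0.05 * np.ones(p - k)])
w_star = np.zeros(p)
w_star[:k] = 1.0 / np.sqrt(k)  # unit-norm signal on the strong coordinates

def sample(m):
    X = rng.standard_normal((m, p)) * scales
    y = X @ w_star + sigma * rng.standard_normal(m)
    return X, y

X, y = sample(n)

# Minimum-norm interpolating solution w = X^T (X X^T)^{-1} y,
# computed via the pseudoinverse.
w_hat = np.linalg.pinv(X) @ y

X_test, y_test = sample(5000)
print("train MSE:", np.mean((X @ w_hat - y) ** 2))            # ~ 0: interpolation
print("test MSE: ", np.mean((X_test @ w_hat - y_test) ** 2))  # ~ sigma^2
print("null MSE: ", np.mean(y_test ** 2))                     # predict-zero baseline

On typical runs the training error is numerically zero (the model interpolates the noisy labels), while the test error should stay close to the noise floor sigma^2 = 0.25, well below the null baseline of roughly 1.25: the signature of benign overfitting in a linear model.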