Less is more: regularization perspectives on large scale machine learning
Main author: | |
---|---|
Language: | eng |
Published: | 2017 |
Subjects: | |
Online access: | http://cds.cern.ch/record/2269969 |
Summary: | Modern data sets are often huge, possibly high-dimensional, and require complex non-linear parameterization to be modeled accurately. Examples include image and audio classification, but also data analysis problems in the natural sciences, e.g. high energy physics or biology. Deep-learning-based techniques provide a possible solution, at the expense of theoretical guidance and, especially, of heavy computational requirements. It is then a key challenge for large scale machine learning to devise approaches that are guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches. |
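
The record contains only the abstract, so the talk's actual algorithm is not specified here. As a purely illustrative sketch of the general idea the abstract alludes to, namely treating subsampling as a form of computational regularization for kernel methods, the snippet below fits an approximate kernel ridge regressor using a small random set of Nyström centers. The choice of Nyström subsampling, the Gaussian kernel, and all function names and parameters are assumptions for illustration, not the speaker's stated method.

```python
import numpy as np

# Illustrative sketch (assumption, not the talk's algorithm): approximate
# kernel ridge regression with m << n random Nystrom centers. The subsample
# size m acts, together with the ridge parameter lam, as a regularization knob.

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def nystrom_krr_fit(X, y, m=100, lam=1e-3, sigma=1.0, seed=None):
    """Fit approximate kernel ridge regression on m random centers.

    Solves (K_nm^T K_nm + n * lam * K_mm) alpha = K_nm^T y,
    where K_nm is the kernel between the n training points and the m centers.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = X[rng.choice(n, size=min(m, n), replace=False)]
    K_nm = gaussian_kernel(X, centers, sigma)
    K_mm = gaussian_kernel(centers, centers, sigma)
    A = K_nm.T @ K_nm + n * lam * K_mm
    b = K_nm.T @ y
    # Small diagonal jitter for numerical stability.
    alpha = np.linalg.solve(A + 1e-10 * np.eye(A.shape[0]), b)
    return centers, alpha

def nystrom_krr_predict(X_test, centers, alpha, sigma=1.0):
    """Predict with the fitted approximate kernel regressor."""
    return gaussian_kernel(X_test, centers, sigma) @ alpha

# Toy usage: n = 5000 training points, only m = 100 centers ("less is more").
X = np.random.randn(5000, 3)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(5000)
centers, alpha = nystrom_krr_fit(X, y, m=100, lam=1e-4, seed=0)
y_hat = nystrom_krr_predict(X[:10], centers, alpha)
```

In this kind of sketch, the cost of fitting drops from roughly O(n^3) time and O(n^2) memory for exact kernel ridge regression to O(n m^2) time and O(n m) memory, with m and lam jointly governing the trade-off between statistical accuracy and computation.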