This course covers basic concepts in machine learning in high dimensions and the importance of regularization. We study in detail high-dimensional linear models regularized by the Euclidean norm, including ridge regression, ridge logistic regression, and support vector machines. We then show how positive definite kernels allow us to transform these linear models into rich nonlinear models, usable even for non-vectorial data such as strings and graphs, and convenient for integrating heterogeneous data.
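As a taste of what the course covers, here is a minimal sketch (not part of the official course material) of kernel ridge regression with NumPy: ridge regression is solved entirely through the kernel matrix, so swapping the linear kernel for a Gaussian one turns the linear model into a nonlinear one. The kernel choice and parameter values below are illustrative only.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x1_i - x2_j||^2)
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_ridge_fit(X, y, lam=1.0, gamma=1.0):
    # Solve (K + lam * n * I) alpha = y; the model only sees the data
    # through the kernel matrix K, which is the essence of the kernel trick.
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    # Prediction is a kernel-weighted combination of training points
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Fit a nonlinear function (a noisy sine) with a "linear" model in feature space
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)
alpha = kernel_ridge_fit(X, y, lam=1e-3, gamma=0.5)
pred = kernel_ridge_predict(X, alpha, X, gamma=0.5)
```

Replacing `rbf_kernel` with a kernel defined on strings or graphs gives the same algorithm on non-vectorial data, which is the point made above.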
Slides for the course can be found here: Slides
For practical sessions, a working Jupyter notebook setup is required. Course material will be in Python.
See the dedicated Kaggle
For practice exercises and quizzes, please check out last year's course material.
- Jean-Philippe Vert (Prof.)
- Julien Mairal (Prof.)
- Romain Menegaux (T.A.)