This course will be mostly blackboard-based, focusing on the statistical foundations of Machine Learning. Many numerical examples will be given in Python notebooks, for the sake of illustration and further use by the attendees, and several hands-on sessions are planned. The required background to make the most of these lectures is, on the one hand, a good level of linear algebra and, on the other, familiarity with basic concepts of statistics (probability distributions, likelihood, Bayes' theorem, etc.).
Preliminary outline (it may change to some extent):
lecture 1: review of ML + summary of statistics -- May 21st (Wed.), 15:00-17:00
lecture 2: regression and overfitting control -- May 22nd (Thu.), 09:30-11:30
lecture 3: hands-on session on regression -- May 22nd (Thu.), 15:00-17:00
lecture 4: Bayesian learning -- May 23rd (Fri.), 11:00-13:00
lecture 5: classification -- May 26th (Mon.), 10:00-12:00
lecture 6: hands-on session on classification -- May 26th (Mon.), 15:00-17:00
lecture 7: neural networks -- May 27th (Tue.), 15:00-17:00
lecture 8: hands-on session on neural nets -- May 28th (Wed.), 11:00-13:00
lecture 9: Gaussian processes -- May 30th (Fri.), 09:30-11:00
lecture 10: hands-on session on Gaussian processes -- May 30th (Fri.), 11:00-13:00
Bryan Zaldívar