Description

As the World Wide Web keeps growing, computer science keeps evolving from its traditional form, gradually becoming the art of creating intelligent software and hardware systems that extract relevant information from the enormous amount of available data.

Why? Let's look at the facts: billions of web pages are at our disposal, 20 hours of video are uploaded to YouTube every minute, and the supermarket chain Walmart alone handles more than one million transactions per hour, feeding a database of more than 2.5 petabytes. John Naisbitt stated the problem very clearly:

"We are drowning in information and starving for knowledge."

In the future of computer science, machine learning will therefore be a core technology. Not only that: machine learning is already the technology behind some of the most sought-after jobs in computer science. Hal Varian, Google's Chief Economist, put it like this in 2009:

"I keep saying the sexy job in the next ten years will be statisticians and machine learners. People think I am joking, but who would have guessed that computer engineers would have been the sexy job of the 1990s? The ability to take data, to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it, that is going to be a hugely important skill in the next decades. "

Accordingly, this lecture serves as an introduction to machine learning. Special emphasis is placed on a clear presentation of the lecture's contents, supplemented by small sample problems for each of the topics. The lecturer pays particular attention to interaction with the participants, asking many questions and appreciating enthusiastic students.

Contents

The course gives an introduction to statistical machine learning methods. The following topics are expected to be covered throughout the semester (a small illustrative code sketch of one of them follows the list):

  • Probability Distributions
  • Linear Models for Regression and Classification
  • Kernel Methods, Graphical Models
  • Mixture Models and EM
  • Approximate Inference
  • Continuous Latent Variables
  • Neural Networks
  • Hidden Markov Models
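
To give a taste of the material, here is a minimal sketch of the simplest of these topics, a linear model for regression fitted by least squares. It is illustrative only (synthetic data, arbitrary parameter choices) and not part of the official course material:

    # Minimal sketch: linear regression by least squares on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: y = 2x + 1 plus Gaussian noise.
    x = rng.uniform(-1.0, 1.0, size=100)
    y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(100)

    # Design matrix with a bias column; solve the least-squares problem.
    Phi = np.column_stack([x, np.ones_like(x)])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    print("estimated slope and intercept:", w)  # close to [2.0, 1.0]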

Requirements

Mathematics classes from the bachelor's degree, basic programming skills, and introductory computer science classes.

Literature

The most important books for this class are:

  1. C. M. Bishop. Pattern Recognition and Machine Learning, Springer (free online copy)
  2. K. P. Murphy. Machine Learning: A Probabilistic Perspective, MIT Press

Additionally, the following books might be useful for specific topics:

  1. D. Barber. Bayesian Reasoning and Machine Learning, Cambridge University Press (free online copy)
  2. T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, Springer (free online copy)
  3. D. MacKay. Information Theory, Inference, and Learning Algorithms, Cambridge University Press (free online copy)
  4. R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, Wiley-Interscience
  5. T. M. Mitchell. Machine Learning, McGraw-Hill
  6. R. Sutton and A. Barto. Reinforcement Learning: An Introduction, MIT Press (free online copy)
  7. M. Jordan. An Introduction to Probabilistic Graphical Models (free online copy)

Additional Material

Here are some tutorials on topics relevant to the lecture.

  1. An overview of gradient descent optimization algorithms (Sebastian Ruder); a minimal gradient-descent sketch follows this list
  2. Covariant/natural gradient notes (Marc Toussaint)
  3. Bayesian Linear Regression, video (Jeff Miller)
  4. Gaussian Processes (GP), video (John Cunningham)
  5. GP Regression, video (Jeff Miller)
  6. Support Vector Machines (SVM), pdf (Andrew Ng)
  7. SVM, video (Patrick Winston)
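
As a companion to the first tutorial above, the following is a minimal, illustrative sketch of plain (batch) gradient descent on a simple convex objective; the objective and step size are arbitrary choices for demonstration, not taken from the tutorial:

    # Minimal sketch: batch gradient descent on f(w) = ||w - 3||^2.
    import numpy as np

    def grad_f(w):
        # Gradient of f(w) = sum((w - 3)^2).
        return 2.0 * (w - 3.0)

    w = np.zeros(2)   # starting point
    eta = 0.1         # step size (learning rate)
    for _ in range(100):
        w = w - eta * grad_f(w)

    print("minimizer estimate:", w)  # converges towards [3.0, 3.0]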

Homework

  • Homework will be assigned during the course.
  • Homework is not compulsory, but we strongly suggest that you complete it, as it provides hands-on experience with the topics.
  • Credit for the final exam may be earned by regularly handing in homework.
  • Details will be announced during the class.


Exam date (if written exam): Thursday, 23 July 2020, 09:00