Lately, I’ve felt the need for a better understanding of the theoretical foundations of machine learning algorithms. Yesterday, I finally had the opportunity to grab Vladimir N. Vapnik’s book, “Statistical Learning Theory”.

Here are some interesting excerpts. If I find the time, I will do my best to write a full blog post later on what I learned from the book. Please note that I am no expert. Let’s explore!

If you wish to discuss any of the following excerpts, please get in touch.

- “We show that if uniform two-sided convergence does not take place, then the method of empirical risk minimization is nonfalsifiable.” (p. 108.)

It is interesting to note that there are theoretical results pertaining to the notion of falsifiability.
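To make the excerpt a bit more concrete, here is a sketch (from memory, in notation close to Vapnik’s; not a quotation from the book) of the two concepts involved. The empirical risk minimization (ERM) method picks the parameter that minimizes the average loss on the sample, and uniform two-sided convergence asks that empirical risks converge to true risks uniformly over the whole parameter set:

```latex
% Empirical risk over a sample z_1, ..., z_l, with loss Q(z, \alpha):
R_{\mathrm{emp}}(\alpha) = \frac{1}{l} \sum_{i=1}^{l} Q(z_i, \alpha)

% ERM chooses the minimizer of the empirical risk:
\alpha_l = \arg\min_{\alpha \in \Lambda} R_{\mathrm{emp}}(\alpha)

% Uniform two-sided convergence (to the true risk R(\alpha)):
\lim_{l \to \infty} P\left\{ \sup_{\alpha \in \Lambda}
  \left| R(\alpha) - R_{\mathrm{emp}}(\alpha) \right| > \varepsilon \right\} = 0
\quad \forall \varepsilon > 0
```

The excerpt’s claim, as I read it, is that when this uniform convergence fails, no sample can refute the ERM method’s output, which is what makes it “nonfalsifiable” in the sense discussed in the book.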

(more to come)