Tag: inductive biases

The mathematical result behind this is the so-called “no-free-lunch theorem.” It tells us that, averaged over all possible problems, no learning algorithm outperforms any other: if an algorithm works well on one kind of data, it must necessarily work poorly on some other kinds.

In a way, a machine learning algorithm projects its own knowledge onto data.

In machine learning, overfitting occurs when a model performs well on the training data but poorly on held-out test data.
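A minimal sketch of overfitting, assuming a toy setup: we fit polynomials of two different degrees to a handful of noisy samples of a sine curve. The high-degree polynomial drives the training error to nearly zero while its test error balloons; the data, function, and degrees are illustrative choices, not from the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

# A few noisy training samples of an underlying sine function,
# plus a dense noise-free test set from the same function.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

def mse(degree):
    # Least-squares polynomial fit of the given degree to the training points.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

for degree in (3, 9):
    train_err, test_err = mse(degree)
    print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```

With 10 training points, the degree-9 polynomial can interpolate them almost exactly, so its training error collapses while its test error reflects the noise it has memorized.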

Any learning algorithm must also be a good model of the data; if it learns one type of data effectively, it will necessarily be a poor model, and a poor student, of some other types of data.

The good regulator theorem also tells us that whether an inductive bias will be beneficial or detrimental for modeling certain data depends on whether the equations defining that bias constitute a good or poor model of the data.
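This match-or-mismatch can be sketched with a toy example, assuming a linear inductive bias: a straight-line model fit to data that really is linear versus data generated by a sine wave. All names and the datasets are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 50)

# Data that matches a linear inductive bias...
y_linear = 2 * x + rng.normal(0, 0.1, x.size)
# ...and data that does not.
y_sine = np.sin(3 * np.pi * x) + rng.normal(0, 0.1, x.size)

def linear_fit_mse(y):
    # A straight-line model encodes the bias "the data is linear".
    slope, intercept = np.polyfit(x, y, 1)
    return np.mean((slope * x + intercept - y) ** 2)

print(f"linear data:     residual MSE {linear_fit_mse(y_linear):.3f}")
print(f"sinusoidal data: residual MSE {linear_fit_mse(y_sine):.3f}")
```

The same bias leaves a small residual when its equations are a good model of the data and a large one when they are not.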