Reconstructing pictures with machine learning [demonstration]

In this post I demonstrate how different machine learning techniques work.

The idea is very simple:

- each black & white image can be treated as a function of two variables, x1 and x2 - the position of a pixel;
- the intensity of a pixel is the output;
- this two-dimensional function is very complex;
- we can keep only a small fraction of the pixels, treating the others as 'lost';
- by looking at how different regression algorithms reconstruct the picture, we can get some understanding of how these algorithms operate.
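The setup above can be sketched in a few lines of scikit-learn. This is a minimal illustration with hypothetical toy data (a synthetic gradient image and a k-nearest-neighbors regressor stand in for the real pictures and models used in the post):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Toy grayscale 'image': a smooth gradient with a brighter square (synthetic stand-in)
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
image = (xs + ys).astype(float)
image[20:40, 20:40] += 50.0

# Treat the image as a function (x1, x2) -> intensity
X = np.column_stack([ys.ravel(), xs.ravel()])   # pixel positions
y = image.ravel()                               # pixel intensities

# Keep only a small fraction of pixels, treating the rest as 'lost'
rng = np.random.RandomState(42)
keep = rng.rand(len(y)) < 0.05
model = KNeighborsRegressor(n_neighbors=3).fit(X[keep], y[keep])

# Reconstruct the full picture by predicting the intensity of every pixel
reconstruction = model.predict(X).reshape(h, w)
```

Any other scikit-learn regressor can be dropped in place of `KNeighborsRegressor`; comparing the resulting reconstructions is the whole point of the demonstration.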

Don't treat this demonstration as a 'comparison of approaches', because this problem (reconstructing a picture)
is very specific and has very little in common with typical ML datasets and problems.
And of course, this approach is not meant to be used in practice to reconstruct pictures :)

I am using scikit-learn and making use of its API, which lets the user construct new models via meta-ensembling and pipelines.
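As one possible sketch of what that composability looks like (the particular choice of scaler, bagging ensemble, and tree base model here is my own assumption, not necessarily what the post uses), a meta-ensemble can be wrapped in a pipeline and used like any plain regressor:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# Meta-ensembling: bagging over decision trees, composed with scaling via a pipeline
model = make_pipeline(
    StandardScaler(),
    BaggingRegressor(DecisionTreeRegressor(max_depth=8),
                     n_estimators=20, random_state=0),
)

# The composite model exposes the same fit/predict interface as any estimator
X = np.random.RandomState(0).rand(200, 2)   # toy (x1, x2) positions
y = np.sin(X[:, 0] * 6) + X[:, 1]           # toy intensities
model.fit(X, y)
pred = model.predict(X)
```

Because every composite estimator follows the same `fit`/`predict` contract, swapping the regression algorithm in the picture-reconstruction experiment is a one-line change.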