Machine Learning from Art Data Is Harder Than You Think But About to Become Easier

Jason Bailey is fascinated by the application of machine learning to art. He explores the possibilities at his site Artnome.com. As part of his research, he’s been speaking to accomplished machine learning specialists, especially those who have worked with art data.

Here he interviews Ahmed Hosny, a polymath who now works in cancer research but once applied machine learning to art data, publishing the results as The Green Canvas project. Hosny ran the project several years ago, before the big breakthroughs in deep learning and processing power that exist today.

Here are some cautionary comments he makes in his interview with Bailey:

Instead of fitting a model to the data, deep learning learns feature representations from example data automatically and can hence learn very complex non-linear relationships. With both the sheer amount of data and the massive processing power we have at our disposal today, deep learning has become the de facto method for many applications.
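To make Hosny's point concrete, here is a minimal sketch (not from the interview) of a tiny neural network learning the XOR function purely from example data. XOR is the classic case where no linear model works, yet a small network with one hidden layer recovers the relationship on its own, with no hand-engineered features:

```python
# Hypothetical illustration: a two-layer network learns the non-linear
# XOR mapping from four examples, using plain gradient descent in NumPy.
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: no straight line separates the two output classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 tanh units, then a sigmoid output unit.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass: compute hidden activations and output.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    # Gradient descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Threshold the trained network's outputs into class predictions.
preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())
```

Nothing here was told about XOR's structure; the network found the non-linear decision boundary from the four examples alone. That is the mechanism that, at much larger scale, lets deep learning extract features from images rather than requiring them to be designed by hand.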

I am sure you have been following the recent media craze over artificial intelligence and deep learning. What they don't tell you is how difficult these networks are to train. I have been using them to predict disease prognosis from medical images for a couple of years now. There is very little theory as to how and why these networks work; in the healthcare space, this is called "black box medicine". As a result, training deep learning networks is more of an art, one that relies on empirical knowledge. Back to art: if (and only if) there is some sort of connection between a feature of an artwork and its price, then these networks should be able to identify it.