Detecting offensive/adult images is an important problem that researchers have tackled for decades. With the evolution of computer vision and deep learning, the algorithms have matured, and we are now able to classify an image as not suitable for work (NSFW) with greater precision.

Defining NSFW material is subjective, and the task of identifying these images is non-trivial. Moreover, what may be objectionable in one context can be suitable in another. For this reason, the model we describe below focuses only on one type of NSFW content: pornographic images. The identification of NSFW sketches, cartoons, text, images of graphic violence, or other types of unsuitable content is not addressed with this model.

Since images and user-generated content dominate the internet today, filtering nudity and other not-suitable-for-work images becomes an important problem. In this repository we open-source a Caffe deep neural network for preliminary filtering of NSFW images.

jaclaz
- In theory there is no difference between theory and practice, but in practice there is. -

How interesting, given we were discussing using machine learning to ID images (over hash matching) just the other week. Nice find.

From what I can gather, it seems like you need to set up Caffe first? And once this is done, you run Yahoo's detection software?

Edit: ah, I see - Yahoo have already trained the model to detect porn. The Python script in the repo is a demonstration of how to use it. Very cool. Could potentially be far better than hash sets, although performance / hardware requirements remain to be seen, I guess.
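For what it's worth, the model itself just outputs a single NSFW probability per image, so the integration work is mostly deciding what to do with that score. A minimal sketch of a threshold policy in plain Python - note the cutoffs (0.2 / 0.8) are suggestions from the repo's README, and the `classify_nsfw_score` helper is hypothetical, not part of the repo:

```python
# Hedged sketch: turning open_nsfw's single NSFW score into a filtering
# decision. Scores below ~0.2 are described as fairly safe and above
# ~0.8 as highly likely NSFW; the exact thresholds are left to the
# integrator, and this helper is illustrative only.

def classify_nsfw_score(score: float,
                        safe_below: float = 0.2,
                        nsfw_above: float = 0.8) -> str:
    """Map a probability in [0, 1] to a coarse triage decision."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score < safe_below:
        return "safe"
    if score > nsfw_above:
        return "nsfw"
    # Uncertain band: route to a human reviewer or a secondary check
    # (e.g. the hash matching discussed earlier in the thread).
    return "review"

print(classify_nsfw_score(0.05))  # safe
print(classify_nsfw_score(0.95))  # nsfw
print(classify_nsfw_score(0.50))  # review
```

The middle "review" band is where a hybrid approach could shine: let the cheap hash sets and the model each handle the cases they are confident about, and only escalate the ambiguous remainder.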