ImageNet Roulette uses a neural network trained on the “people” categories from the ImageNet dataset to classify pictures of people. It’s meant to be a peek into the politics of classifying humans in machine learning systems and the data they’re trained on.

ImageNet Roulette isn't designed to handle heavy traffic, so if it's not working for you, please be a little patient.

or upload an image:

NOTES AND WARNINGS:
ImageNet Roulette regularly classifies people in dubious and cruel ways. This is because the underlying training data contains those categories (and pictures of people that have been labelled with those categories). We did not create the underlying training data responsible for these classifications. We imported the categories and training images from a popular dataset called ImageNet, which was created at Princeton and Stanford Universities and which is a standard benchmark used in image classification and object detection.

ImageNet Roulette is meant in part to demonstrate how various kinds of politics propagate through technical systems, often without the creators of those systems even being aware of them.

TECHNICAL: ImageNet Roulette uses a Caffe model trained on the “people” categories from the popular ImageNet dataset (in ImageNet these nodes are all found under the "person, individual, someone, somebody, mortal, soul" class in the WordNet hierarchy; proper nouns have been removed). When you upload a photo, we first run a face detector to find any faces. If we find faces, we send up to 10 of them to the Caffe model. The image you get back shows the face bounding box with the Caffe label. If no faces are detected, we send the entire scene to the Caffe model.
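The branching logic described above can be sketched in Python. This is a minimal illustration, not the actual ImageNet Roulette code: `detect_faces` and `classify` are hypothetical stand-ins for the real face detector and the Caffe model, and `MAX_FACES` reflects the 10-face limit stated in the text.

```python
# Sketch of the classification flow described above.
# Assumptions (not from the real codebase): detect_faces returns a list of
# (bounding_box, face_crop) pairs; classify maps an image region to a label.

MAX_FACES = 10  # the text says at most 10 detected faces are classified


def classify_image(image, detect_faces, classify):
    """Return a list of (bounding_box, label) pairs.

    If the detector finds faces, classify up to MAX_FACES face crops and
    pair each label with its bounding box. If no faces are found, send the
    entire scene to the model (bounding box is None in that case).
    """
    faces = detect_faces(image)
    if faces:
        return [(box, classify(crop)) for box, crop in faces[:MAX_FACES]]
    # No faces detected: classify the whole scene instead.
    return [(None, classify(image))]
```

In the real app, `detect_faces` would wrap a face-detection library and `classify` would run a forward pass through the Caffe model; the stubs here only demonstrate the face-crop-or-whole-scene branching.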

ImageNet Roulette does not store the photos people upload or any other data.