The ‘Not Safe For Work’ (NSFW) model analyzes images and videos and returns probability scores on the likelihood that they contain pornography. In plain terms, it flags R-rated content: the sort of material that would probably get you fired if you were looking at it at work.

When images or videos are run through the tag endpoint, they are tagged using a model. A model is a trained classifier that recognizes what’s inside an image or video according to what it ‘knows’. Different models are trained to ‘know’ different things, so running the same image or video through different models can produce drastically different results.

If you’d like to get tags for an image or video using a different model, you can do so by passing in a model parameter. If you omit this parameter, the API will use the default model for your application.
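As a rough sketch, a request to the tag endpoint might look like the following. The endpoint URL, the `model` parameter name, and the model identifier `nsfw-v1.0` are assumptions for illustration; consult your API reference for the exact names your application should use.

```python
import requests

# Hypothetical endpoint URL, for illustration only.
TAG_URL = "https://api.example.com/v1/tag"

def tag_image(image_url, api_key, model=None):
    """Tag an image, optionally overriding the application's default model."""
    params = {"url": image_url}
    if model is not None:
        # Omitting 'model' falls back to the default model for your application.
        params["model"] = model
    resp = requests.get(
        TAG_URL,
        params=params,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Same image, tagged first with the default model, then with the NSFW model.
default_tags = tag_image("https://example.com/photo.jpg", "MY_API_KEY")
nsfw_scores = tag_image("https://example.com/photo.jpg", "MY_API_KEY", model="nsfw-v1.0")
```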

NSFW

As described above, the NSFW model analyzes images and videos and returns probability scores on the likelihood that they contain pornography.

The response for NSFW returns probabilities for nsfw (Not Safe For Work) and sfw (Safe For Work) that sum to 1.0. Generally, if the nsfw probability is less than 0.15, the content is most likely Safe For Work; if the nsfw probability is greater than 0.85, it is most likely Not Safe For Work. Scores between the two thresholds are ambiguous.
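Because the two probabilities sum to 1.0, the nsfw score alone is enough to make a decision. Here is a minimal sketch of applying these thresholds, assuming the response has been parsed into a dict with "nsfw" and "sfw" keys:

```python
def interpret_nsfw(scores, low=0.15, high=0.85):
    """Map an nsfw probability to a coarse label.

    'scores' is assumed to look like {"nsfw": 0.04, "sfw": 0.96},
    with the two probabilities summing to 1.0.
    """
    p = scores["nsfw"]
    if p < low:
        return "likely SFW"
    if p > high:
        return "likely NSFW"
    # Scores between the thresholds are ambiguous; in practice these are
    # good candidates for human review rather than automatic filtering.
    return "uncertain"

print(interpret_nsfw({"nsfw": 0.04, "sfw": 0.96}))  # likely SFW
print(interpret_nsfw({"nsfw": 0.50, "sfw": 0.50}))  # uncertain
print(interpret_nsfw({"nsfw": 0.97, "sfw": 0.03}))  # likely NSFW
```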