We introduce algorithms to visualize the feature spaces used by object detectors. The tools in this paper allow a human to put on "HOG goggles" and perceive the visual world as a HOG-based object detector sees it.

Check out this page for a few of our experiments, and read our paper for full details. Code is available to make your own visualizations.

Overview

This project introduces tools to visualize feature spaces. Since most feature spaces are too high-dimensional for humans to inspect directly, we present algorithms that invert feature descriptors back to a natural image. We find that these inversions provide an accurate and intuitive visualization of the feature descriptors commonly used in object detection. Below, we show an example of the visualization for HOG:

[Figure: HOG [1] · Inverse (Us) · Original]
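If you want to poke at descriptors yourself, here is a minimal sketch, assuming a recent scikit-image and one of its bundled sample images, of the standard glyph rendering of HOG, the kind of visualization shown in the HOG panel above:

```python
# A minimal sketch (assuming scikit-image >= 0.19 for channel_axis) of the
# standard HOG "glyph" rendering: dominant gradient orientations per cell.
import matplotlib.pyplot as plt
from skimage import data, exposure
from skimage.feature import hog

image = data.astronaut()  # sample image standing in for your own

# Compute the HOG descriptor and, for display, the oriented-gradient glyphs.
features, hog_image = hog(image, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2), visualize=True,
                          channel_axis=-1)

# Rescale intensities so the faint glyphs are visible.
hog_display = exposure.rescale_intensity(hog_image, in_range=(0, 10))

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].imshow(image); axes[0].set_title("Original")
axes[1].imshow(hog_display, cmap="gray"); axes[1].set_title("HOG glyphs")
plt.show()
```

Glyphs only show the dominant gradient orientation in each cell; the inversions in this project aim for something far more interpretable, a natural-looking image.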

Why did my detector fail?

Below we show a high-scoring detection from an object detector with HOG features and a linear SVM classifier trained on PASCAL. Why does our detector think that sea water looks like a car?
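For concreteness, here is a hedged sketch of such a pipeline: HOG descriptors of sliding windows scored by a linear SVM. This is not our released code; the training data below is a synthetic stand-in, whereas the detector above is trained on PASCAL.

```python
# A sketch of a HOG + linear SVM sliding-window detector.
# Window size, stride, and the synthetic training set are all illustrative.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def window_features(img, win=(64, 64), step=16):
    """Yield (y, x, HOG descriptor) for every sliding window in a 2D image."""
    H, W = img.shape
    for y in range(0, H - win[0] + 1, step):
        for x in range(0, W - win[1] + 1, step):
            patch = img[y:y + win[0], x:x + win[1]]
            yield y, x, hog(patch, orientations=9,
                            pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Stand-in training set: random descriptors and labels, just so the sketch
# runs end to end. A real detector trains on labeled PASCAL windows.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 1764))  # 1764 = HOG length of a 64x64 window
y_train = rng.integers(0, 2, size=200)
clf = LinearSVC(C=0.01).fit(X_train, y_train)

# Score every window. A false positive is simply a high-scoring window whose
# pixels contain no object, like the sea-water example above.
test_img = rng.random((128, 128))
scores = [(clf.decision_function([f])[0], (y, x))
          for y, x, f in window_features(test_img)]
print("top window:", max(scores))
```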

Our visualizations offer an explanation. Below we show the output of our inversion on the HOG features of the false car detection. This visualization reveals that, while there are clearly no cars in the original image, there is a car hiding in the HOG descriptor.

HOG features see a slightly different visual world than what humans see, and by visualizing this space,
we can gain a more intuitive understanding of our object detectors.
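The full inversion algorithm is described in the paper. As a rough illustration only, the flavor of the problem can be previewed with about the simplest possible baseline: match each patch of the unknown descriptor against a database of natural image patches by HOG distance, and paste in the winning pixels. All names and sizes below are illustrative assumptions, not our method.

```python
# A nearest-neighbor baseline for HOG inversion, as a sketch of the idea.
import numpy as np
from skimage import data
from skimage.feature import hog
from skimage.util import view_as_windows

PATCH = 40   # pixels per square patch (5x5 HOG cells of 8 pixels)
STRIDE = 8

def hog_desc(patch):
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Small database of (HOG, pixels) pairs cut from one natural image.
# A real system would use a far larger corpus of patches.
source = data.camera().astype(float) / 255.0
db_patches = view_as_windows(source, (PATCH, PATCH), step=16).reshape(-1, PATCH, PATCH)
db_feats = np.array([hog_desc(p) for p in db_patches])

def invert(target_img):
    """Reconstruct target_img patch-by-patch via nearest neighbor in HOG space.

    We compute HOG from target_img's pixels only to obtain the descriptors;
    conceptually the input is the descriptors alone, never the pixels.
    """
    recon = np.zeros_like(target_img)
    counts = np.zeros_like(target_img)
    H, W = target_img.shape
    for y in range(0, H - PATCH + 1, STRIDE):
        for x in range(0, W - PATCH + 1, STRIDE):
            f = hog_desc(target_img[y:y + PATCH, x:x + PATCH])
            idx = np.argmin(((db_feats - f) ** 2).sum(axis=1))
            recon[y:y + PATCH, x:x + PATCH] += db_patches[idx]
            counts[y:y + PATCH, x:x + PATCH] += 1
    return recon / np.maximum(counts, 1)  # average the overlapping pastes

target = data.coins().astype(float) / 255.0
approx = invert(target)  # a crude glimpse of "what HOG sees" in target
```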

Visualizing Top Detections

We have visualized some high-scoring detections from the deformable parts model. Can you guess which are false alarms? Click on the images below to reveal the corresponding RGB patch. You might be surprised!

What does HOG see?

HOG inversion reveals the world that object detectors see. The left image shows a man standing in a dark room. If we compute HOG on this image and invert it, the previously dark scene behind the man emerges. Notice the wall structure, the lamp post, and the chair in the bottom right-hand corner.
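Why does the dark scene survive? HOG normalizes gradient energy within local blocks, so a uniformly darker image produces nearly the same descriptor as a bright one. A minimal sketch, assuming scikit-image and one of its sample images, demonstrates this:

```python
# Why dark scenes survive in HOG: block normalization makes the descriptor
# (nearly) invariant to a global scaling of image intensity.
import numpy as np
from skimage import data
from skimage.feature import hog

bright = data.camera().astype(float) / 255.0
dark = 0.1 * bright  # simulate the same scene at one tenth the exposure

f_bright = hog(bright, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')
f_dark = hog(dark, orientations=9, pixels_per_cell=(8, 8),
             cells_per_block=(2, 2), block_norm='L2-Hys')

# The descriptors are nearly identical (up to the small epsilon used in the
# normalization): structure hidden in the dark image is just as visible to
# HOG as in the bright one, which is exactly what the inversion shows.
rel_diff = np.linalg.norm(f_bright - f_dark) / np.linalg.norm(f_bright)
print(f"relative difference between bright/dark descriptors: {rel_diff:.4f}")
```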