Does HOG capture color?

HOG is designed to be a color-invariant feature. But is it possible that HOG nonetheless carries color information?

This page is part of a feature visualization project at MIT. To learn more, check out our main project page.


To answer this question, we hacked our HOG visualization algorithms to predict RGB images instead of grayscale images. We trained a paired dictionary over the PASCAL 2011 training set. We then computed HOG features on the validation set (using the same feature implementation as voc-release5) and inverted them.
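The paired-dictionary idea can be sketched in a few lines of numpy. The sizes below (patch dimensions, number of atoms) are made up for illustration, and we use dense least squares as a stand-in for the sparse coding step; the point is only the structure: a code fit against the HOG-space dictionary is decoded with the matching RGB-space dictionary.

```python
import numpy as np

rng = np.random.default_rng(0)

hog_dim = 36          # flattened HOG cells for a patch (illustrative size)
rgb_dim = 8 * 8 * 3   # flattened 8x8 color patch (illustrative size)
n_atoms = 64          # dictionary size (illustrative)

# Paired dictionaries: columns of U live in HOG space, and the matching
# columns of V live in RGB space. In the real system both are learned
# jointly (here they are random, purely to show the shapes).
U = rng.standard_normal((hog_dim, n_atoms))
V = rng.standard_normal((rgb_dim, n_atoms))

def invert_hog_patch(hog_patch):
    """Code the HOG patch against U, then decode that code with V.
    (Dense least squares here; the actual method uses sparse coding.)"""
    alpha, *_ = np.linalg.lstsq(U, hog_patch, rcond=None)
    return (V @ alpha).reshape(8, 8, 3)

patch = invert_hog_patch(rng.standard_normal(hog_dim))
print(patch.shape)  # (8, 8, 3)
```

Because each code indexes both dictionaries at once, whatever color statistics co-occur with a HOG pattern in the training set get baked into the RGB atoms, which is what makes color prediction possible at all.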

Results are below. Throughout this page, the left image is the recovered image, and the right is the original image.


We are able to recover a surprising amount of color in outdoor scenes:

What about indoor scenes?

We still make interesting color predictions indoors, but they are often wrong:

Flipped Images

How well can we color flipped images? Below, the left image is the visualization in the original orientation, and the right image is the flipped visualization. Notice how the sky is blue when viewed in the original orientation, but its color often changes when the image is upside down:
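The reason flipping can change the predicted colors is that HOG itself is not flip-invariant: turning an image upside down permutes and mirrors the orientation histogram, so the inverted descriptor lands on different dictionary atoms. A toy one-cell gradient histogram (illustrative only; real HOG uses many cells, block grouping, and normalization) is enough to see this:

```python
import numpy as np

def tiny_hog(img, n_bins=9):
    """A toy single-cell gradient-orientation histogram with unsigned
    bins. Illustrative stand-in for a real HOG descriptor."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)

rng = np.random.default_rng(0)
img = rng.random((32, 32))   # stand-in grayscale image

h_orig = tiny_hog(img)
h_flip = tiny_hog(img[::-1])  # upside-down image

# The two descriptors differ, so an inversion of each can differ too.
print(np.allclose(h_orig, h_flip))
```

Since the descriptors differ, a color model trained on upright scenes (blue above, green below) has no way to carry its predictions over to the flipped version.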

Learned Dictionary Elements

Below are visualizations of the learned dictionaries. Click one to see it larger:


We note that our inversion algorithm has no location information. When inverting a sky patch, it is not aware that the patch comes from the top of the image.


The code that generated all of these figures is available online on GitHub. To produce color inversions, use the "color" branch. The inversion takes about a second on a modern computer.