You may have heard about the possibility of cloaking devices based on metamaterials, but perhaps there is a simpler, cleverer way of merging with the background - computational camouflage.

Researchers Andrew Owens (MIT), William Freeman (MIT), Connelly Barnes (University of Virginia), Alex Flint (Flyby Media) and Hanumant Singh (Woods Hole Oceanographic Institution) have turned the usual image-processing task of object detection on its head and created an algorithm that can hide objects in real 3D.

They assumed that the object to be hidden was box-shaped and that any image could be placed onto its surfaces in an effort to hide it. The suggestion is that being able to hide a box might be useful if modern control equipment or plant had to be placed in a site of beauty or antiquity - although how a technician or engineer is expected to find the device again is an interesting question.

The problem isn't easy: you can't simply place an image of what the box obscures on each surface, because the box can be viewed from many different angles. Viewed from one position the camouflage might be a perfect match, but from another the mismatch would make the cube stand out very clearly.

Ideally you would vary the image on each surface according to the viewing angle, which isn't a practical solution. A first attempt at an algorithm would be to simply average the images needed over a range of angles. A better algorithm is to pick a viewpoint and fill the visible parts of the surfaces with the correct image for it, then move to another position and fill the parts of the surfaces that newly become visible with an image suitable for the new angle. You could simply pick a few random positions, but it turns out to be better to restrict image placement to surfaces at less than 70 degrees to the viewpoint, as this minimizes the distortion of the viewed image. You can also optimize the method by picking viewpoints that see the most surface area at such angles.
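The greedy fill just described can be sketched in a few lines of Python. Everything here - the face and viewpoint data layout, the function names, the scoring - is illustrative invention; only the 70-degree limit and the most-surface-area-first ordering come from the description above.

```python
import math

ANGLE_LIMIT_DEG = 70.0  # faces seen more obliquely than this are skipped

def is_visible(normal, view_dir, limit_deg=ANGLE_LIMIT_DEG):
    """True if the face normal is within limit_deg of the direction
    back toward the viewer (both given as unit 3-vectors)."""
    cos_a = -(normal[0] * view_dir[0] +
              normal[1] * view_dir[1] +
              normal[2] * view_dir[2])
    return cos_a > math.cos(math.radians(limit_deg))

def greedy_fill(faces, viewpoints):
    """faces: list of {'normal': (x, y, z), 'area': float}
    viewpoints: list of {'name': str, 'dir': (x, y, z)} unit view directions.
    Returns {face_index: viewpoint_name} - which view's image covers each face."""
    unfilled = set(range(len(faces)))
    assignment = {}
    remaining = list(viewpoints)
    while unfilled and remaining:
        # Score each viewpoint by how much still-unfilled area it sees
        # at an acceptable angle, and take the best one first.
        def coverage(vp):
            return sum(faces[i]['area'] for i in unfilled
                       if is_visible(faces[i]['normal'], vp['dir']))
        best = max(remaining, key=coverage)
        remaining.remove(best)
        if coverage(best) == 0:
            break  # no viewpoint can cover anything more
        # Fill every newly visible face with this viewpoint's image.
        for i in list(unfilled):
            if is_visible(faces[i]['normal'], best['dir']):
                assignment[i] = best['name']
                unfilled.discard(i)
    return assignment
```

For a cube, a single head-on viewpoint covers only the face it squarely looks at, while a diagonal viewpoint covers two faces at once, so the greedy ordering prefers the diagonal view first.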

This approach ends up with a patchwork of images on the object, and to minimize sudden transitions the images are smoothed, but even so edge effects occur. It seems that we are quick to spot the cube if we can see an edge between faces, because of the differing perspective effects. A possible approach is to create a set of images that are optimized both for different viewing angles and for hiding the edges of the cube.
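The kind of smoothing involved can be illustrated, much simplified, as a cross-fade across the seam between two patches. The real method works on 2D images and is considerably more sophisticated; this sketch, its names and its 1D reduction are purely hypothetical:

```python
def blend_seam(patch_a, patch_b, overlap):
    """Linearly cross-fade the last `overlap` samples of patch_a into the
    first `overlap` samples of patch_b (1-D grayscale values for simplicity),
    so there is no sudden jump where the two patches meet."""
    blended = []
    for k in range(overlap):
        w = (k + 1) / (overlap + 1)  # weight ramps from patch_a toward patch_b
        blended.append((1 - w) * patch_a[-overlap + k] + w * patch_b[k])
    return patch_a[:-overlap] + blended + patch_b[overlap:]
```

Fading a dark patch into a bright one, for example, replaces the hard step at the seam with a gradual ramp - which is why the seams are less jarring, even though, as the article notes, the perspective mismatch at the cube's physical edges can still give the game away.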

To find out what worked best, the team turned to Amazon's Mechanical Turk and paid people to look at photos and identify the camouflaged object. The best-performing algorithm turned out to be one that produces a single image for each face that looks right from as many angles as possible while hiding the cube's edges.

If you want to help with the research you can try your hand at spotting hidden boxes in a camouflage game. You can also see some of the attempts at hiding boxes from different angles in the following video:

While the research is clearly successful, it doesn't quite go far enough to make the technique a practical way to hide machinery. In real life the illumination changes, and this would have to be taken into account. Ultimately what is required is a box with active displays and a computer inside working out the best computational camouflage image for any given illumination.

Perhaps one day soon all sorts of things will vanish from view, but not quite yet.