Pixy is a video camera that you can train to recognize objects. Instead of outputting a large, difficult-to-process image, it simply provides information like "purple dinosaur detected at x=54, y=103". Is this the revolution we have been waiting for?

The answer, as it is with so many new ideas, is both yes and no.

Pixy is a low-cost (about $59) video camera that provides higher-level semantic output to another microcontroller. Each Pixy comes with a suitable cable to connect to an Arduino, but you can use it with other devices.

By doing the processing itself, Pixy relieves the associated microcontroller of having to accept and process a large video stream. This opens up the possibility of creating low-cost robotic and automation devices that can navigate and work in a natural or home environment without the need to implement computer vision algorithms.
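To see why this matters, compare what the microcontroller has to handle: instead of megabytes of pixels per second, it receives a handful of small detection messages. Here is a minimal sketch of that idea in Python; the message format `sig=<n> x=<px> y=<px>` is an invented illustration, not Pixy's actual wire protocol.

```python
# Hypothetical sketch of a host consuming Pixy-style semantic output.
# The "sig=1 x=54 y=103" format is made up for illustration.

def parse_block(line):
    """Parse one detection message into a dict of ints."""
    fields = dict(part.split("=") for part in line.split())
    return {key: int(value) for key, value in fields.items()}

# Instead of a full video frame, the host only sees lines like this:
message = "sig=1 x=54 y=103"
block = parse_block(message)
print(block)  # {'sig': 1, 'x': 54, 'y': 103}
```

A few bytes per detected object is trivial for even the smallest microcontroller to act on, which is the whole point of moving the vision processing onto the camera.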

This is something of a revolution, and a very welcome one judging by the response to the project on Kickstarter. With 17 days to go, it has already exceeded its target of $25,000 with pledges of $117,937. The project is based on designs by Carnegie Mellon University, and a small Austin-based company, Charmed Labs, is working on getting it to the end user.

The only downside is the way that Pixy actually does its object recognition. It uses a color hue detection algorithm to spot objects of a specified color. Hue is a good choice because the ratio of RGB components in a color doesn't change much with changes in brightness. However, it does mean that if the objects you are trying to detect aren't of a very specific color it isn't going to work - forget detecting white dinosaurs against a white wall. You can detect up to seven color signatures, which you establish by training - i.e. showing the color to the camera. It is fast enough to track a hundred objects or more.
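The brightness invariance of hue is easy to demonstrate. In the sketch below (using Python's standard `colorsys` module; the RGB values are arbitrary examples), scaling a color's brightness by half leaves its hue essentially unchanged:

```python
import colorsys

def hue(r, g, b):
    """Return the hue in [0, 1) for 8-bit RGB component values."""
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]

bright = hue(200, 60, 160)  # a purple patch in good light
dim = hue(100, 30, 80)      # the same patch at half brightness
print(abs(bright - dim) < 0.01)  # True: the hue barely moves
```

This is why Pixy can keep tracking a colored object as lighting conditions vary, while a raw RGB match would fail as soon as the object moved into shadow.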

If there isn't sufficient color contrast between the objects you are trying to detect and the environment, you can always label them with a color code. The idea is that you stick a sort of color barcode onto the object, which lets Pixy detect and locate it. The example given in the Kickstarter blurb is labeling a charging station so that a robot can find it and plug in. The color code idea also means you get fewer false positives, because an exact color combination is required - and it goes beyond the seven color signatures. The relative positions of the colors also provide orientation estimates for the labeled object.
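The orientation estimate follows from simple geometry: once you know the centroids of two patches in a color code, the angle of the line between them gives the code's orientation. A minimal sketch, with made-up coordinates standing in for what Pixy might report:

```python
import math

def orientation_deg(x1, y1, x2, y2):
    """Angle of the line from patch 1 to patch 2, in degrees."""
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

# Two patches of a hypothetical color code, as detected in the image:
print(orientation_deg(54, 103, 84, 103))  # 0.0  -> code lies horizontal
print(orientation_deg(54, 103, 54, 133))  # 90.0 -> code stands vertical
```

For a robot docking with a charging station, that single angle is enough to tell it which way to turn to line up with the label.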

Check out the promo video from the Kickstarter - stick with it, as the demos later on (about two minutes in) are worth seeing:

So Pixy isn't a full object-recognition camera, but you can see that it is incredibly useful if you can prepare the environment for a robot with color labels. However, in the future the on-board processor might manage to run a face detection algorithm - and if it can do this, then perhaps a more general recognition algorithm is possible for a future model.

If you can't wait, the good news is that Pixy is an open source project and you can modify it or start from scratch. The hardware is based on an NXP LPC4330, a 204 MHz dual-core processor, coupled to a 1280x800 video camera, so there probably is enough power to do more than just hue detection.