Well, I’m pretty sure the guys at 13th Lab will get mad at me for comparing them to Layar. Most importantly, they don’t consider themselves an augmented reality company. They view themselves as a computer vision company, and AR only serves as a cool proof of concept for their technology. And what exactly is their tech? For now, it’s implementing the SLAM algorithm on the iPad 2, as can be seen in the video below. Next, they plan to bring more computer vision algorithms to mobile platforms.

SLAM, if you are too lazy to read the Wikipedia article and prefer to learn this kind of stuff from a blogger, enables a device to locate its position in a pre-scanned room while continuously updating its stored map of that room, all without using markers. Here’s a cool demo from Oxford showing SLAM-assisted augmentation of a museum, which suggests one way this technology can be used. Another scenario might be an IKEA store where, using an iPad, you could change the color of the sofa right in front of you (or locate the exit).
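For the curious, the localize-while-mapping loop can be sketched in a few lines. This is a deliberately toy version (a simple average stands in for the probabilistic filtering real SLAM uses, and observations are hand-fed), not 13th Lab’s implementation:

```python
# Toy 2D "SLAM-like" loop: the device tracks its own pose while building
# and refining a map of landmarks seen by the camera. Everything here is
# illustrative; real SLAM uses Kalman/particle filtering, not averaging.

def slam_step(pose, control, observations, landmark_map):
    """One predict/correct/map cycle.

    pose         -- (x, y) current estimate of the device position
    control      -- (dx, dy) odometry: how far we believe we moved
    observations -- {landmark_id: (rel_x, rel_y)} offsets seen this frame
    landmark_map -- {landmark_id: (x, y)} the map built so far (mutated)
    """
    # 1. Predict: dead-reckon the new pose from odometry.
    x, y = pose[0] + control[0], pose[1] + control[1]

    # 2. Correct: each re-observed landmark implies a pose; blend the
    #    prediction with the average of those implied poses.
    implied = [(landmark_map[lid][0] - rx, landmark_map[lid][1] - ry)
               for lid, (rx, ry) in observations.items()
               if lid in landmark_map]
    if implied:
        ix = sum(p[0] for p in implied) / len(implied)
        iy = sum(p[1] for p in implied) / len(implied)
        x, y = (x + ix) / 2, (y + iy) / 2

    # 3. Map: record newly seen landmarks relative to the corrected pose.
    for lid, (rx, ry) in observations.items():
        landmark_map.setdefault(lid, (x + rx, y + ry))

    return (x, y)

# Walk toward a "door" landmark: odometry overshoots, the map pulls it back.
m = {}
p = slam_step((0.0, 0.0), (0.0, 0.0), {"door": (2.0, 0.0)}, m)
p = slam_step(p, (1.2, 0.0), {"door": (1.0, 0.0)}, m)
print(p, m["door"])
```

The interesting property, and the one the Oxford demo relies on, is that pose estimation and map building feed each other: a better map corrects the pose, and a better pose places new landmarks more accurately.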

This leads me to believe that, with some luck, 13th Lab may become a force to be reckoned with in indoor AR. Moreover, 13th Lab aims to be a platform provider, like, well, Layar (and admittedly, many other companies in the AR space).

Writes Petter Ivmark, one of the founders:

The ambition of this company is not just to make a game, though, but rather to take this pretty complicated technology (one that requires a lot of specific math and low-level programming skills, which is why very few developers work with it today) and make it available to developers as a platform that doesn’t require these skills at all. Hopefully, this will spur a lot more innovation in computer vision. We strongly believe that, as computer vision and artificial intelligence evolve, the camera will take over from the GPS as the device’s most important sensor for understanding, interpreting and navigating the world.
We have long had the idea that the camera has the potential to be the most important sensor.

A few years ago when we started talking about doing something in this area, the devices were not powerful enough to do SLAM and other advanced computer vision work. When we started looking at this, the iPhone 3GS had not yet been released (let alone a dual core device like the iPad 2 or some of the newer Android devices). iOS didn’t even have a public camera API. But we made a bet on the exponential growth in computing power on devices: that if we started working on this, the devices would catch up quickly. This turned out to be true. Apple released the camera APIs for iOS, they put gyros in their devices, and finally released the iPad 2, which had a camera, gyro and a fast dual core processor. This was around the time we had a first working prototype of our platform, so the timing was great.

I was delighted to see that Patched Reality’s Patrick O’Shaughnessey answered my call and shared his augmented reality predictions for 2010 on his company blog. It’s Patrick’s first prediction that I find most interesting (though all of them are very good). While many of our prior columns in this series had predictions about how AR will change the way we see the outside world, Patrick reminds us there’s a use for indoor AR:

While AR browsers like Layar and Wikitude will continue to focus their attention on discovering information that is in the world at large, another class of AR applications will emerge that helps people see what could be in the comfort of their own home. We’ll see a lot more applications released by manufacturers that sell products that go in people’s homes. These applications will be more sophisticated than the recent IKEA campaign in Germany, as they will make use of the actual smartphone video stream to make sense of the user’s environment, and also allow people to purchase the products they’ve previewed right within the app.
Products that people will be able to “try before they buy” will run the gamut from furniture, artwork, electronics, window treatments, clothing, and maybe even paint colors. This type of application will be to 2010 what the “hold a marker up to your webcam to see a marketing message” was in 2009. And there will likely be both good and bad executions of the basic concept.

Ironically, accurate registration and image recognition may not be the main issues preventing AR from coming indoors. After a conversation with a friend, it became apparent to me that scanning items in order to create 3D representations is a real roadblock for retailers on the route to selling via AR.

I find the next piece of research so amazingly cool that I can’t understand how I’ve missed it for so long (a whole three days!). Submitted to next month’s SIGGRAPH, MIT Media Lab’s Bokode is a new way to visually encode information.
I’m not going to try to explain the technology behind it (that’s what the paper is for), but in a nutshell, it uses a small light source to create an image consisting of thousands of pixels. The pixels are only discernible when a camera is looking at the Bokode with its focus set to infinity. I hope the next video explains it better:

As the video above shows, there are very nice implications for augmented reality. Aside from encoding the identity of the object, it can also encode how the object is positioned relative to your camera. Though, if I understood correctly, the demonstration above uses two cameras: one shooting the object in focus, while the other looks at the Bokode.
Another obstacle in the way of wide adoption is that the Bokode currently requires an energy source to operate. Nevertheless, it has already taken a step in the right direction, and currently has a short page on Wikipedia.
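To give a feel for what “visually encoding information” means, here is a toy sketch of packing an identifier into a small bit grid with a parity check. The 6×6 layout and the parity scheme are my own assumptions for illustration; the actual Bokode encoding is described in the paper:

```python
# Toy encoder/decoder: pack an integer ID into a 6x6 grid of bits, the
# way an optical tag packs data into tiny dots. NOT the real Bokode
# scheme, just an illustration of "identity as a bit pattern".

SIZE = 6  # 6x6 grid -> 35 data bits + 1 parity bit

def encode(ident):
    """Spread the ID's bits over the grid, with an even-parity check bit."""
    bits = [(ident >> i) & 1 for i in range(SIZE * SIZE - 1)]
    bits.append(sum(bits) % 2)  # last cell: parity over the data bits
    return [bits[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def decode(grid):
    """Flatten the grid back to bits, verify parity, rebuild the ID."""
    bits = [b for row in grid for b in row]
    data, parity = bits[:-1], bits[-1]
    if sum(data) % 2 != parity:
        raise ValueError("parity check failed: corrupted code")
    return sum(b << i for i, b in enumerate(data))

grid = encode(12345)
print(decode(grid))
```

A real tag would of course need far more redundancy than one parity bit, plus the optics that make the pattern readable; the sketch only shows why the camera side gets both an identity and, from the pattern’s apparent distortion, a hint about relative pose.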
More information here and here. Via Augmented.org.

Barbie maker Mattel and augmented reality provider Total Immersion have joined forces to bring the public the first AR-enhanced retail toys (or so says their press release). Unveiled today at Comic-Con 2009, each product in Mattel’s line of action figures and vehicles based on James Cameron’s upcoming film Avatar will come with a –

3-D web tag, called an i-TAG, which consumers can “scan” using a home computer’s webcam. Scanning the i-TAG will reveal special content onscreen unique to the corresponding product. Exact content varies for each item, but could include biographical information, additional images and animated models of the figures. When the i-TAG for deluxe figures, vehicles or creatures are placed under a webcam, animated 3-D models will “come alive” through engaging, evading or defending moves. Place two i-TAGs from the “Battle Pack” together and the 3-D images will interact with each other.

A 3-D web tag? Sounds impressive, but thanks to this next video clip, we can all see it’s nothing more than a marker card:

Still, it looks cool and I’m quite sure it’s gonna be a hit this Christmas season (unless the film itself bombs). Pity they used such a convoluted term for it.

While the whole web is gushing over James Alliban‘s augmented business card, I find the next implementation even more exciting. Don’t get me wrong, Alliban’s card is cool, but this one is a bit more useful:

It was created by Jonas Jäger, and more importantly, he doesn’t plan to keep the technology to himself. Jäger plans to release a front-end application that will let you create your own “presentation” to be displayed when your business card is flashed in front of a web camera. It uses a QR code to distinguish your card from others, and an AR marker to give FLARToolKit something to get a fix on. All in all, it answers Thomas Carpenter’s call to create a service for this kind of augmented business card, and it really looks good.
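The division of labor described above (QR code for identity, marker for registration) can be sketched like this. The registry, payload format, and marker stub are hypothetical stand-ins for illustration, not Jäger’s actual service:

```python
# Sketch of the two-part card idea: the QR payload names the card's
# owner, while a (simulated) marker detection supplies the position the
# overlay should be drawn at. All names here are made up for the sketch.

PRESENTATIONS = {  # what the service would resolve a QR payload to
    "card:jonas": {"name": "Jonas Jäger", "model": "intro.dae"},
}

def detect_marker(frame):
    """Stub for FLARToolKit-style tracking: returns a 2D screen offset."""
    return frame.get("marker_at", (0, 0))

def augment(frame):
    ident = frame["qr_payload"]      # identity: which card is this?
    pres = PRESENTATIONS[ident]      # look up that card's presentation
    x, y = detect_marker(frame)      # registration: where to draw it
    return f"render {pres['model']} for {pres['name']} at ({x}, {y})"

print(augment({"qr_payload": "card:jonas", "marker_at": (120, 80)}))
```

The nice part of the split is that the marker never has to change per person: one generic marker handles tracking, and the QR code alone carries the identity.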

Dutch website (what’s with all those Dutch companies lately?) YouTellMe.com, which specializes in social shopping online (e.g. recommendation engines), has just launched a new augmented reality application, letting you see what your favorite electronic products look like in the palm of your hand (or in your living room).
By harnessing the power of your webcam, Flash, and probably FLARToolKit (though I failed to prove it), you can now try out the new iPhone, or that Canon camera you’ve always coveted:

Actually, since style and appearance play a big part these days when we are out to buy a new gadget, I can imagine such an application would have a market (much like those magic mirrors that let you try on jewelry and accessories). Though, IMHO, it could be much improved if, instead of simply printing a marker, you were able to print a simple paper-craft box with markers on its sides; it would require some folding, but it would give you a more hands-on experience.

Sein has just posted a new video on his blog (in Japanese, though an English version is apparently in the works). I think it’s really amazing what one man can do on his own:

I’ve covered SREngine before, and so did Ori, and from video to video you can really see how this application is taking shape.

Though using image recognition makes it a bit slow (for now) in comparison to systems based purely on GPS and compass positioning, it allows it to identify smaller things, at shorter distances and within close quarters. I really can’t wait to see it available on the App Store.