Here are two game prototypes the team created, called AR Gardener and Sketch-Chaser. Both are played on a regular whiteboard.

AR Gardener

Draw symbols on the whiteboard, and 3D content is pulled from a database of objects to appear in an Augmented Reality (AR) scene.

The sketch determines what object to create, its location, scale, and rotation.
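As a rough illustration of that mapping, here is a minimal sketch (my own, not the team's code; the catalog, names, and units are assumptions) of how a recognized symbol's position, size, and stroke orientation could drive object placement:

```python
from dataclasses import dataclass
import math

# Hypothetical catalog mapping sketch symbols to 3D assets.
ASSET_CATALOG = {"bench": "models/bench.obj", "cabin": "models/cabin.obj"}

@dataclass
class DetectedSymbol:
    label: str        # what the recognizer thinks was drawn
    cx: float         # stroke centroid on the board, in board units
    cy: float
    width: float      # bounding-box width of the stroke
    angle_deg: float  # principal-axis orientation of the stroke

def spawn_params(sym: DetectedSymbol, reference_width: float = 1.0):
    """Turn a recognized sketch symbol into placement parameters
    for the corresponding 3D object."""
    if sym.label not in ASSET_CATALOG:
        return None  # unrecognized doodle: ignore it
    return {
        "asset": ASSET_CATALOG[sym.label],
        "position": (sym.cx, 0.0, sym.cy),       # y-up world coordinates
        "scale": sym.width / reference_width,    # bigger sketch, bigger object
        "yaw_rad": math.radians(sym.angle_deg),  # rotate to match the stroke
    }
```

For example, `spawn_params(DetectedSymbol("bench", 2.0, 3.0, 0.5, 90.0))` would place a bench at board position (2, 3), at half the reference size, rotated a quarter turn.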

The outer line sketched here defines the game anchor and serves as the tracking reference; in this game it becomes a brown ridge.

Simple drawn symbols generate a couple of benches, a cabin, and, in the spirit of the playground theme, rockers and swings.

Virtual elements can also be created based on a real-life object such as a leaf; here it is used to create a patch of grass using the color and shape of the leaf (and no, the system can’t recognize that it’s a leaf, or any 3D object for that matter).

The color of the marker can define the type of virtual object created: for example, blue represents water, and other objects placed in it will sink.

Sketch-Chaser

In the second game you basically create an obstacle course for a car chase.

It’s a “capture the flag” or tag game. The winner is whoever holds the flag for the most time.

First you draw, then play.

Once again, the continuous brown line represents a ridge and bounds the game.

A small circle with a dot in it represents the starting point for the cars.

A flag becomes the flag to capture. A simple square creates a building, etc.

The player adds more ridges to make the course more challenging, and adds blue to generate a little pond (which also gives this area a different physical trait).

Then – graphics are generated, the players grab their beloved controllers and the battle begins!

This research represents an opportunity for a whole new kind of game experience that could make kids play more in the real world.

Many questions still remain, such as: how do you recognize what the player really means in a sketch without requiring her to be an artist or an architect? And where does sketching fit in the gameplay: before, after, or during?

Now, it’s up to game designers to figure out what sketching techniques work best, what’s fun, what’s interesting, and what’s just a doodle.

If you are still contemplating whether to go – check out what you might be missing on our preview post.

The folks from the Visual Media Lab at Ben Gurion University in collaboration with HIT Lab NZ are preparing a real treat for ISMAR 2009 participants.

Sketch recognition (already covered in our previous post) is a major break away from “ugly” markers or NFT (tracking of natural 2D images). It is the dawn of user-generated content for Augmented Reality, and an intuitive new interaction approach for changing the CONTENT overlaid on a marker. Big wow.

In-Place 3D Sketching

But the team led by Nate Hagbi and Oriel Bergig (with support from Jihad El-Sana and Mark Billinghurst) is just warming up… In the next video Nate shows how any sketch you draw on paper (or even on your hand!) can be tracked.

So are you telling me I won’t need to print stuff every time I want to play with augmented reality?
-That’s right! Hug a tree and save some ink!

Shape Recognition and Pose Estimation

But wait, there is more!

Nate says this demo already runs on an iPhone.

And to prove it, he is willing to share the code used to access the live video on iPhone 3.0. (Note: this code accesses a private API in the iPhone SDK.)

Ready for the BIG NEWS?

For the first time ever, the core code necessary for real augmented reality (“real” here means precise alignment of graphics overlaid on real-life objects) on iPhone 3.0 is available to the public.

Top 10 AR milestones in 2008 was one of the most popular posts this year. What came out of it was even more gratifying: a multitude of reflections, impressions, and thoughts I received about your own AR moments, including some last minute finds.

Here is an anecdotal collection of your greatest AR moments in 2008:

1) The most fundamental AR milestone in 2008

Oriel Bergig: During 2008 we saw some major advances in the field of Augmented Reality. Porting AR technology to mobile devices, and especially cellular phones, creates an opportunity to reach millions of users. For several years, the biggest AR labs and companies have made huge steps in this direction. In 2008 these efforts started to show results. Pose estimation was upgraded with the StbTracker release at the end of 2007. Research focusing on better user experience, and in particular on making mobile AR technology accessible to people with no special training, is being conducted by the best minds of the HIT Lab NZ. During one of the most covered events of the year, CES 2008, Intel’s CEO Paul Otellini demonstrated Total Immersion’s technology enabling mobile AR experiences such as urban guidance. To wrap up, 2008’s most fundamental milestone would be: AR technology is closing in fast on the mass user market.

Charles Woodward: The greatest milestone? Commercial breakthroughs by Metaio and Total Immersion.

Thomas Wrobel: Wikitude, I think. It seems to be the first released, useful AR software. Runner-up: the AR Geisha doll…

2) The best AR device of the year

Oriel: Since 2008 will mostly be remembered for its advances in mobile AR technology, the AR device of the year is the mobile phone. Nokia released the Navigator phone, which includes a GPS and an accelerometer, a valuable addition. The N95 has also been demonstrated in many more contexts as a good choice for AR applications. The next AR device of the year would be the Nokia N97, and of course the iPhone with its huge global success. The iPhone fits AR applications very well, and a successful attempt to port ARToolKit to the iPhone has already been made by ARToolWorks. Appealing applications are next to come, but only after the iPhone OS has better support for real-time video acquisition.

Charles: Best device? iPhone, and/or Nokia 6210…

Thomas: hmz…tricky. I personally think hardware is still rather lackluster, and I have had little experience with some of the most recently released stuff.
I guess probably the iPhone + G1 devices…while far from ideal, they are at least getting location-aware services, and “barcode scanning” style product information into public hands.

Eric Rice shares what gets him excited about a video comparing the PS2 EyeToy and the PS3 Eye.


3) Best AR Demo

Oriel: The best demo of 2008 is the demo that will be remembered by most people a decade from now. The demo that reached the most people in the world is most likely Intel CEO Paul Otellini’s keynote talk during CES 2008.

Charles: Haunted Book, Scherrer et al. at ISMAR 2008 – just beautiful!
(Click Interaction on the left menu bar and then Haunted House.)

Thomas: LevelHead [by Julian Oliver], I think. Although this pet demo [ARf] is also nice;(that may be because I want my own desuke though :p)

4) Person of the AR year

Thomas: There’s been so much development by so many individuals and companies I don’t know one specific person.

5) The most significant AR deal of 2008

Charles: Beijing Olympics fake fireworks. About the viewers of the Olympics opening ceremony: “What they did not realise was that what they were watching was in fact computer graphics, meticulously created over a period of months and inserted into the coverage electronically at exactly the right moment.”

Thomas: Not sure about AR deals as such, but Total Immersion getting offices in the US is a good sign for the company and AR in general.

6) A [Predictable?] disappointment

Gizmondo won’t be coming out this year after all…The Nordik Link has the scoop.

7) Last-minute find: A Surprising Simplicity in AR

Anyone can build 3D models with Google’s SketchUp. With the AR-media plugin from Inglobe, anyone can bring them into an augmented reality scene. ArchDaily tried it here.

Take the example of AR advertising for the Wellington Zoo created by Saatchi & Saatchi (2007).

This is a pretty complex approach, which requires publishing printed material, creating a database for the additional AR info, and querying the database before presenting it.

In-Place Augmented Reality is a vision-based method in which all the content is encapsulated in, and extracted from, the image itself.

The process includes using a visual language to encode the content in the image; the visualization is then done as in a normal AR application.
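To make the idea concrete, here is a toy sketch of such a visual language (entirely illustrative; the glyph names and meanings are my own assumptions, not the actual encoding). The point is that the scene content comes straight from glyphs detected in the image, with no external content database or network lookup:

```python
# Toy "visual language": each glyph type carries a fixed meaning, so the
# scene is decoded locally from the image, with no AR content database.
GLYPH_MEANING = {
    "triangle": "tree",
    "square": "building",
    "wavy": "water",
}

def decode_scene(detected_glyphs):
    """Build scene content directly from (glyph, x, y) detections.

    Unknown glyphs are simply skipped, so stray marks do not break
    the decoding of the rest of the image.
    """
    scene = []
    for glyph, x, y in detected_glyphs:
        kind = GLYPH_MEANING.get(glyph)
        if kind:
            scene.append({"kind": kind, "pos": (x, y)})
    return scene
```

In a real system the detections would come from the tracking pipeline; here they are passed in as plain tuples to keep the sketch self-contained.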

The secret sauce of this method is the visual language used to encode the AR information.

There are multiple benefits to this approach: the content is human-readable, it avoids the need for an AR database and for any user maintenance of the system, and it works with no network communication.

A disadvantage is that there is a limit on the amount of info that can be encoded in an image. Nate describes this as a trade-off.

I am also asking myself, as a distributor of AR applications, what if I want to change AR data on the fly? Nate suggests that in such a case a hybrid approach could be used: some of the info is extracted from the encoded image. Additional image coding could point to dynamic material from the network (e.g. updated weather or episodic content).

~~~

The second presenter is Kohei Tanaka, who unveils An Information Layout Method for an Optical See-through Head Mounted Display Focusing on the Viewability.

The idea in short is to place virtual information on the AR screen in a way that always maintains a viewable contrast.

The amusing example demonstrates a case where this approach can help dramatically: I am having tea with a friend, wearing my favorite see-through AR HMD. An alert generated by the AR system tries to warn me about a train I need to catch, but due to the bright alert sitting on top of a bright background, I miss the alert, and as a consequence miss the train…

Kohei’s approach makes sure that the alert is displayed in a part of the image where the contrast is good enough to make me aware of it. Next time, I will not miss the train…
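The core idea can be sketched in a few lines (my own simplification, assuming grayscale frames and rectangular candidate regions, not Kohei's actual algorithm): measure the background luminance behind each candidate position and put the label where it contrasts most with the background.

```python
# Minimal contrast-aware placement sketch: a frame is a 2D list of
# grayscale pixel values (0-255), a region is an (x, y, w, h) rectangle.

def mean_luminance(frame, region):
    """Average pixel value of the background inside a candidate region."""
    x, y, w, h = region
    pixels = [frame[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    return sum(pixels) / len(pixels)

def best_region(frame, regions, label_luminance=255):
    """Pick the candidate region whose background differs most from the
    label's own luminance, i.e. where the label will be most viewable."""
    return max(regions,
               key=lambda r: abs(label_luminance - mean_luminance(frame, r)))
```

For a white alert over a frame whose left half is bright and right half is dark, this picks a region on the dark side, which is exactly the behavior that would have saved me from missing the train.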

Question: Isn’t it annoying for users that the images on screen constantly change position…?

Kohei responds that it requires further research…

~~~

Last in this session is Stephen Peterson from Linköping University with a talk about Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality.

The domain: air traffic control, a profession that requires maintaining multiple sources of information and cognitively combining them into a single context.

Can Augmented Reality help?

The main challenge is labeling: how do you avoid a clutter of labels that could quickly confuse the air traffic controller?

The conclusion: remapping the stereoscopic depth of overlapping labels in far-field AR improves performance. In other words, when you need to display numerous labels on a screen that might overlap with each other, use the depth of the view and display the labels in different 3D layers.
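A minimal sketch of that layering idea (illustrative only, not Peterson's implementation): labels whose 2D screen boxes overlap are pushed to distinct depth layers, in the style of greedy graph coloring.

```python
# Each label is an (x, y, w, h) screen-space box; overlapping labels
# must not share a stereoscopic depth layer.

def overlaps(a, b):
    """True if two axis-aligned boxes intersect on screen."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def assign_depth_layers(boxes):
    """Greedy layering: each label takes the nearest depth layer not
    already used by any earlier label it overlaps with."""
    layers = []
    for i, box in enumerate(boxes):
        taken = {layers[j] for j in range(i) if overlaps(box, boxes[j])}
        layer = 0
        while layer in taken:
            layer += 1
        layers.append(layer)
    return layers
```

Two overlapping aircraft labels end up on layers 0 and 1, while a label elsewhere on the screen stays on layer 0, so depth is only spent where clutter actually occurs.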