Categories

ISMAR ’08 is the epicenter of the world’s best augmented reality demos. Here are the audience’s favorite picks:

The most beautiful demo

Markerless Magic Books

Created by the only artist at ISMAR ’08…

Our demonstration shows two artworks that rely on recent Computer Vision and Augmented Reality techniques to animate the illustrations of poetry books. Because we don’t need markers, we can achieve seamless integration of real and virtual elements to create the desired atmosphere. The visualization is done on a computer screen to avoid cumbersome Head-Mounted Displays, and the camera is hidden in a desk lamp to further ease the spectator’s immersion. Our work is the result of a collaboration between an artist and Computer Vision researchers, and it shows beautiful and poetic augmented reality. It is further described in our paper ‘The Haunted Book’.

Camille Scherrer, Julien Pilet, Vincent Lepetit (EPFL)

The most invisible demo

Sensor-Fusion Based Augmented Reality with an Off-the-Shelf Mobile Phone

OK, you see a Scandinavian guy standing in the middle of the yard with a cell phone held high in his hand. What’s the big deal? Exactly!

We demonstrate mobile augmented reality applications running on the newly released Nokia 6210 Navigator mobile phone. The device features an embedded 3D compass, 3D accelerometer, and assisted GPS unit – the fundamental ingredients for sensor-based pose estimation – in addition to smartphone standards: a forward-pointing camera, a high-resolution display, and an internet connection. In our applications, sensor-based pose estimation is enhanced with computer vision methods and positioning-error minimization techniques. The user interface is also designed to convey the relative uncertainty of the pose estimate to the user in intuitive ways.

Markus Kähäri, David J. Murphy (Nokia Research Center)
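For the technically curious: the core of sensor-based pose estimation is a few lines of vector math. Here is a minimal sketch (my illustration, not Nokia’s code; sign conventions vary by device) of recovering a device orientation from a 3D accelerometer and a 3D compass:

    import numpy as np

    def orientation_from_sensors(accel, mag):
        # Gravity gives the vertical reference; normalize the accelerometer.
        down = accel / np.linalg.norm(accel)
        # The magnetic field component perpendicular to gravity gives east.
        east = np.cross(mag, down)
        east /= np.linalg.norm(east)
        north = np.cross(down, east)
        # Rows are the world axes expressed in device coordinates.
        return np.vstack([east, north, down])

    # Example: device lying flat, magnetic field pointing north and down.
    R = orientation_from_sensors(np.array([0.0, 0.0, 9.81]),
                                 np.array([0.0, 20.0, -40.0]))
    print(R)  # device axes aligned with east/north/down

GPS then supplies the position, completing the pose; the vision and error-minimization methods mentioned above refine this raw estimate.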

The most 90’s demo

See-Through Vision for Mobile Outdoor Augmented Reality

(compare to the previous demo)

We have developed a system built on our mobile Augmented Reality platform that provides users with see-through vision, allowing visualization of occluded objects textured with real-time video information. The demo participants will be able to wear our lightweight, belt-mounted wearable computer and head-mounted display. The display will render hidden locations captured at the University of South Australia. These locations consist of 3D models of buildings and courtyard areas that are textured with pre-recorded video images. The system includes a collection of visualizations and tools that assist with viewing these occluded real-world locations; e.g. digital zoom and texture highlighting.

The most playful mixed-reality game demo

Mobile Phone Augmented Reality

In our demo booth we will show a compilation of recent developments created by the Handheld AR group at Graz University of Technology and Imagination Computer Services. None of these demos has been shown before at a scientific conference, making it a unique experience for every ISMAR attendee. All our demos are hands-on: we will hand out devices and let people experience our applications.

The most crowded demo

Superimposing Dynamic Range

In a dark corner room the size of a closet, about 150 people are gathering around an artifact from the future….

We present a simple and low-cost method of superimposing high dynamic range visualizations on arbitrary reflective media, such as photographs, radiological paper prints, electronic paper, or even reflective three-dimensional items. Our technique is based on a secondary modulation of projected light as it is surface reflected. This allows boosting contrast, perceivable tonal resolution, and color saturation beyond the possibilities of projectors alone, or of spatially uniform environment light, when illuminating such media. It holds application potential for a variety of domains, such as radiology, astronomy, optical microscopy, conservation and restoration of historic art, modern art and entertainment installations.
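The principle is simple enough to sketch. Assuming the light reaching the eye is roughly the projector output multiplied by the print’s reflectance (my simplification of the paper’s secondary modulation), the compensation image is just the target divided by the reflectance:

    import numpy as np

    def compensation_image(target_hdr, reflectance, projector_max=1.0):
        eps = 1e-4  # avoid division by zero on dark ink
        comp = target_hdr / np.maximum(reflectance, eps)
        # The projector cannot emit more than its maximum brightness.
        return np.clip(comp, 0.0, projector_max)

    reflectance = np.random.uniform(0.05, 0.9, (480, 640, 3))  # scanned print
    target = reflectance ** 0.5          # a target with boosted contrast
    projected = compensation_image(target, reflectance)
    perceived = projected * reflectance  # matches target where not clipped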

The most iTouchy demo

Multimodal Mobile Augmented Reality on the iPhone

How do you spell ARToolkit in iPhonese? (Hype is a beautiful thing)

In this demonstration we show how the Apple iPhone can be used as a platform for interesting mobile phone based AR applications, especially because of its support for multimodal input. We have ported a version of the ARToolKit library to the iPhone and customized it for the unique input capabilities of this platform. The demo shows multimarker-based tracking, virtual object rendering and AR overlay, gesture-based interaction with shared virtual content, and accelerometer input. This demonstration shows some of the possibilities of AR when there is no hardware to configure, no interface to learn, and the interaction is natural and intuitive.

Philip Lamb (ARToolworks)

The most down-under demo

An Augmented Reality Weather System

You have to live down-under to conceive a machine that simulates bad weather…brilliant!

This demo presents ARWeather, a simulation application which can simulate three types of precipitation: rain, snow, and hail. Our goal is to fully immerse the user in the simulated weather by multimodal rendering of audio and graphics, while preserving autonomous and free movement of the user. Therefore, ARWeather was developed and deployed on the Tinmith wearable computer system. Software highlights of this demo include: GPU-accelerated particle systems and video processing, spatial audio with OpenAL, and physics-based interaction of particles with the environment (e.g., hail bounces off the ground).
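A toy version of the physics step is easy to write down (illustrative only; the real demo runs its particle systems on the GPU): hail falls under gravity and bounces off the ground plane, losing energy on each bounce.

    import numpy as np

    def step(pos, vel, dt=1/60.0, g=9.81, restitution=0.4):
        vel[:, 1] -= g * dt                # gravity pulls particles down
        pos += vel * dt
        below = pos[:, 1] < 0.0            # particles that passed the ground
        pos[below, 1] *= -1.0              # reflect the penetration depth
        vel[below, 1] *= -restitution      # bounce with energy loss
        return pos, vel

    pos = np.random.uniform([0, 5, 0], [1, 10, 1], (1000, 3))
    vel = np.zeros((1000, 3))
    for _ in range(600):                   # ten seconds of simulated hail
        pos, vel = step(pos, vel)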

The most highbrow demo

AR Museum Presentation Room

I never would have learned about this ancient plate’s history – had AR not been invented. A Classic.

The artwork to which the augmented reality technology is applied is a plate produced by a technique called metallic lustre. Around the exhibited real artwork, information is provided by multimedia tools, offering the visitor various approaches to the artwork. Adding information with augmented reality is intuitive and offers an illustration of something that cannot be seen by the naked eye, without drawing the visitor’s eyes away from the real artwork. The system is currently in use at the Louvre – DNP Museum Lab (LDML) in Tokyo, Japan.

The “I am waaaay ahead of you” demo

Mapping large environments using multiple maps for wearable augmented reality

video of last year's demo

A demonstration of a wearable robotic system that uses an extended version of the parallel tracking and mapping system by Klein and Murray from ISMAR 2007. This extended version allows multiple independent cameras to be used to build a map in unison, and to also create multiple independent maps around an environment. The user can explore an environment in a natural way, acquiring local maps in real-time. When revisiting those areas the system will select the correct local map and continue tracking and structural acquisition, while the user views relevant AR constructs registered to that map.

Robert Castle, Georg Klein & David W. Murray (University of Oxford)
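The control flow, as I understood it from the demo, is a simple track-or-relocalize loop over the stored maps (the stub class below is mine, not the authors’ API):

    class LocalMap:
        """Stub standing in for a PTAM-style local map; tracking omitted."""
        def __init__(self, name):
            self.name = name
        def track(self, frame):
            return None  # pretend frame-to-frame tracking failed
        def relocalize(self, frame):
            return "pose" if frame.get("visible_map") == self.name else None

    def process_frame(frame, maps, active):
        pose = active.track(frame)
        if pose is not None:
            return active, pose            # keep extending the active map
        for m in maps:                     # lost: try every stored map
            pose = m.relocalize(frame)
            if pose is not None:
                return m, pose             # switch to the map that matched
        return active, None                # still lost

    maps = [LocalMap("office"), LocalMap("courtyard")]
    active, pose = process_frame({"visible_map": "courtyard"}, maps, maps[0])
    print(active.name, pose)               # -> courtyard pose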

Additional great demos weren’t included, due to lack of space in this post and the author’s lack of sleep…

They say every good thing has an end…and this event is no exception; ISMAR ’08, the world’s most important augmented reality event, is coming to a close on a high note and with fireworks (augmented, of course).

That’s the part where the event chairs recognize the organizers who made it possible, and thank the keynote speakers, paper submitters, demo exhibitors, poster presenters, competition contenders, and all participants for making it such a memorable event.

Cut to…flashback. It’s last night at King’s College; Ron Azuma is the MC for the best paper award ceremony…

King's College "Cafeteria"

And the honorable mention goes to Georg Klein and David Murray for “Compositing for Small Cameras”…winners of last year’s best paper…“this is excellent work…many other practitioners can use these results.”

Cut to…flash forward. It’s the present back in the Cambridge Engineering Department. The winners of the Tracking Competition are about to be announced by the competition team:

Tracking Competition setup

We defined the setup in a large room in the department, with reference points and coordinates, and installed 8 different stations with many different objects in them. We made it really hard on the competitors. We gave them time to prepare; they got coordinates of 16 items, which they had to pick using their AR tracking technology.

We started with 5 contenders: Metaio (Tobias Eble), Fraunhofer (Harald Wuest), University of Bristol (Sudeep Sundaram), Millennium 3 Engineering (Mark Fiala), and University of Oxford (Georg Klein). Mark Fiala unfortunately had to drop out due to lack of sufficient preparation time. Bristol thought the room was missing some features…

And here are the results: in second place came Metaio, with 15 items picked in a little more than 10 minutes…and in first place [the audience favorite] Georg Klein, who picked all 16 items in a record time of 8:48!

Hallelujah!

Georg (Le magnifique) will return to Oxford with an extra 1000 pounds in his pocket. And he’s humble and gracious:

And for those who wonder what kind of bug drove me to write more than 10,000 words in 17 posts within 4 days – I have one word for you: passion…plus the amazing support I got from ISMAR attendees and chairs, and mostly – you guys: the avid AR fans out there who weren’t as fortunate and couldn’t attend the event this year. THANK YOU!

They also say it’s never over ’till the fat lady sings…and in this case Christopher Stapleton plays the role: he’s the last to come up on stage and his deep voice vibrates across the walls of the auditorium as he shouts into the mic:

ISMAR 2009 Experience starts right now!

If you want to be part of it, help or support it – just send a note to christopher@stapleton.net

The last session of ISMAR ’08 is about to begin, and it concentrates on perfecting Rendering and Scene Acquisition in augmented reality and making it even more realistic.

First on stage is Yusaku Nishin with a challenging talk: Photometric registration by adaptive high dynamic range image generation for augmented reality.

His goal: photorealistic augmented reality using High Dynamic Range (HDR) images.

Estimating the lighting environment for virtual objects is difficult with low dynamic range cameras. To overcome this problem, they propose a method that estimates the lighting environment from an HDR image, generated from multiple images captured with various exposure times, and renders virtual objects using an HDR environment map. Virtual objects are overlaid in real time by adjusting the dynamic range of the rendered image with tone mapping according to the exposure time of the camera.
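The classic multi-exposure merge behind this can be sketched in a few lines (a Debevec-style weighted average; my illustration, not the authors’ exact pipeline):

    import numpy as np

    def merge_hdr(images, exposure_times):
        images = [im.astype(np.float64) / 255.0 for im in images]
        num = np.zeros_like(images[0])
        den = np.zeros_like(images[0])
        for im, t in zip(images, exposure_times):
            w = 1.0 - np.abs(2.0 * im - 1.0)  # trust mid-range pixels most
            num += w * im / t                 # radiance estimate per shot
            den += w
        return num / np.maximum(den, 1e-6)

    def tone_map(radiance, exposure_time):
        # Re-expose the rendered HDR frame to match the live camera's
        # shutter, as the talk describes, then gamma-map for display.
        return np.clip(radiance * exposure_time, 0.0, 1.0) ** (1.0 / 2.2)

    shots = [np.random.randint(0, 256, (480, 640), np.uint8) for _ in range(3)]
    hdr = merge_hdr(shots, [1/500, 1/60, 1/8])
    ldr = tone_map(hdr, 1/60)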

Now you are ready to watch the resulting effect. Incredible.

[youtube=http://www.youtube.com/v/M53Tqqdk9w0]

~~~

Next on stage is the soon-to-be-hero-of-the-show Georg Klein (more on that later…) with Compositing for Small Cameras.

Blending virtual items into real scenes, even with small cameras. Video from such cameras tends to be imperfect (blurring, over-saturation, radial distortion, etc.), so when you impose a virtual item it tends to stick out in a bad way. Since we can’t improve the live video, the trick is to adapt the virtual item to match the video at hand. Simply put, Georg samples the background and applies matching blur, radial distortion, rotation, and color saturation to the virtual item – and he does it in 5 milliseconds on a desktop… For details check the pdf paper; take a look for yourself and tell me if it works on Kartman:
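For a feel of the trick, here is a crude sketch (mine, not Georg’s implementation): degrade the rendered virtual layer until it matches the camera, then alpha-blend.

    import cv2
    import numpy as np

    def composite(live, virtual, alpha, blur_sigma, gain, bias):
        v = virtual.astype(np.float32)
        v = cv2.GaussianBlur(v, (0, 0), blur_sigma)  # match defocus/motion blur
        v = np.clip(v * gain + bias, 0, 255)         # match exposure and tone
        a = cv2.GaussianBlur(alpha, (0, 0), blur_sigma)[..., None]
        out = a * v + (1.0 - a) * live.astype(np.float32)
        return out.astype(np.uint8)

    live = np.full((480, 640, 3), 90, np.uint8)      # stand-in camera frame
    virtual = np.zeros_like(live)
    virtual[200:280, 280:360] = (0, 0, 255)          # a red virtual square
    alpha = (virtual.sum(2) > 0).astype(np.float32)
    frame = composite(live, virtual, alpha, blur_sigma=2.5, gain=0.8, bias=10)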

Done! Georg is already working on the next challenge.

~~~

Next, Pished Bunnun introduces his work: OutlinAR: an assisted interactive model building system with reduced computational effort.

Building 3D models interactively and in place (in situ), using a single camera and low computational effort – with a makeshift joystick (buttons and wheels).

In this case the video does a better job at explaining the concept than any number of words would…

In the very last talk of the event, Jason Wither courageously takes on another challenge in perfecting augmented reality, with his talk: Fast Annotation and Automatic Model Construction with a Single-Point Laser Range Finder.

Jason is using a laser range finder of the kind typically used by hunters (though he will not be shooting anything or anybody), head-mounted or handheld, in conjunction with a parallel camera. First he wants to create an annotation. That part is trivial. But you can then orient the annotation according to a building, for example.
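Placing the annotation really is a one-liner once the laser is boresighted with the camera. A minimal sketch (names are mine):

    import numpy as np

    def annotation_point(cam_position, cam_rotation, laser_range):
        # One range reading along the camera's forward axis pins the
        # annotation to a 3D point in world coordinates.
        view_dir = cam_rotation @ np.array([0.0, 0.0, 1.0])
        return cam_position + laser_range * view_dir

    p = annotation_point(np.zeros(3), np.eye(3), laser_range=42.7)
    print(p)  # -> [0, 0, 42.7]: the label sits 42.7 m straight ahead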

Next, he is going to correct occlusion of virtual objects by real objects for improved augmented realism. Just click before and after the object and pronto:

Finally he will create a 3D model of an urban environment semi-automatically, by creating a depth map courtesy of the laser. To achieve that he’s using a fusion process. You’ve got to see that video; the laser’s red line advancing on buildings reminds me of the blob swallowing the city in that quirky Steve McQueen movie.

In conclusion, this is a really low-cost and fast approach for modeling and annotating urban environments and objects. That capability will become extremely handy once Augmented Reality 2.0 picks up and anyone wants to annotate the environment (aka draw graffiti without breaking the law).

Next is the event wrap up and the results of the Tracking Competition. Stay tuned.

First is Johannes Tuemler, who will talk about Mobile Augmented Reality in Industrial Applications: Approaches for Solution of User-Related Issues.

The study looks at psychological and ergonomic factors in augmented reality usage and creates a requirements catalog for mobile AR assistance systems in diverse scenarios. This was a collaboration with Volkswagen, the Ergonomics department at Ott-von-Wolfsburg, Perception Psychology from Weimar University, and information technology from the Fraunhofer Institute.

The reference scenario chosen was “AR picking”, where subjects would spend a couple of hours picking items from shelves using a mobile AR device. The users reported no rise in stress level with the AR system compared with no AR (except for some visual discomfort). Since the AR system was less than optimal, the research may point to the fact that with a better AR system the stress level of workers – compared with no AR system – could actually be reduced!

~~~

As a direct follow-up to the first study, Bjoern Schwerdtfeger comes on stage to describe the results of his work on order picking with AR.

Traditionally, the process relies on a printout with instructions for which items to pick from bins on shelving.

How can an AR system help improve the performance of such an activity?

The results showed that the frame visualization was more efficient than the arrow. It’s not clear whether the rings visualization is superior.

~~~

Final speaker for this session is Gerhard Schall from Graz University to discuss Virtual Redlining for Civil Engineering in Real Environments.

What is virtual redlining? Virtually annotating paper maps or 2D digital information systems (mostly for the utility sector). This process helps significantly in the workflows associated with network planning and inspection.

The process involves mapping 2D geographical data onto 3D models of buildings and underground infrastructure. The tool developed allows for collaboration, inspection, and annotation.

Results of the usage study confirm that the AR system has a significant advantage in civil engineering – at least in this redlining scenario. The color coding was important, as was the digital terrain model.

Question from the audience: where do you get the 3D modeling of the piping?

Answer: Some utility companies have started to map the underground infrastructure. But in most cases we create it from 2D maps, which yields only an approximation.

And that concludes the Industrial user studies session. See you next at the last session of the event: Rendering and Scene Acquisition, leading to the grand finale with the award ceremony for the winner of the Tracking Competition.

The last day of ISMAR ’08 is upon us, and the day opens by stimulating our senses with a session about sensors.

Gabriele Bleser starts this session with a talk about Using the marginalised particle filter for real-time visual-inertial sensor fusion.

She starts by showing a short clip with erratic camera motion that makes everyone dizzy…it actually demonstrates an important capability of her approach: less jitter, and fewer requirements imposed on the camera motion.

She explains the basics of the particle filter and the use of inertial measurements. In the past, researchers studied the standard particle filter; this is the first study using a marginalised particle filter.

Testing the new technique (a non-linear state space model with a linear Gaussian substructure for real-time visual-inertial pose estimation) with 100 particles resulted in increased robustness against rapid motions.
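For readers new to particle filters, here is a greatly simplified bootstrap filter for a single orientation angle, just to show the predict/weight/resample cycle. The paper’s marginalised filter additionally handles the linear part of the state with a Kalman filter per particle, which this sketch omits:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100                                  # particle count, as in the talk
    particles = rng.normal(0.0, 0.1, N)      # angle hypotheses (radians)
    weights = np.full(N, 1.0 / N)

    def pf_step(particles, weights, gyro_rate, vision_angle, dt=0.01):
        # Predict: propagate each particle with the inertial measurement.
        particles = particles + gyro_rate * dt + rng.normal(0, 0.01, N)
        # Weight: score particles against the vision-based measurement.
        weights = np.exp(-0.5 * ((vision_angle - particles) / 0.05) ** 2)
        weights /= weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < N / 2:
            idx = rng.choice(N, N, p=weights)
            particles, weights = particles[idx], np.full(N, 1.0 / N)
        return particles, weights

    particles, weights = pf_step(particles, weights, 1.2, 0.02)
    print(np.sum(particles * weights))       # fused orientation estimate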

To prove it, Gabriele shows the rapid camera movements once again…

Well, we have to suffer now so that in the future users won’t have to. Kudos Gabriele.

~~~

Next up is Daniel, talking about gyroscope fusion for ubiquitous tracking. What you need for ubiquitous tracking is automatic discovery of the tracking infrastructure, while shielding applications from the tracking details.

Gyroscopes are very attractive to use (low latency, high update rate, always available), but they have drawbacks (drift, rotation only) and are only usable when fused with other sensors.

Daniel and team have proved that the ubiquitous tracking tool set consisting of spatial relationship graphs and patterns is very useful to analyze tracking setups including gyroscopes. It allows a Ubitrack system to automatically infer occasions for gyroscope fusion in dynamically changing tracking situations.

~~~

Jeroen is next, presenting a calibration procedure for combined camera and inertial sensors. The study builds on the idea that by combining vision and inertial sensors you get accurate, real-time position and orientation even under fast motion, which is very suitable for AR applications. However, calibration is the essential point for this to work.

An easy-to-use calibration algorithm has been developed and yields good results with real data.

Ron Azuma asks: When the image is captured in high motion does it create blur?

Jeroen answers that it can be addressed by changing some parameters.

~~~

Last for this session is Wee Teck Fong from NUS to discuss A Differential GPS Carrier Phase Technique for Precision Outdoor AR Tracking.

The solution that Fong presents provides good accuracy with low jitter, low drift and low computational load – and no resolution ambiguities. It works well for outdoor AR apps. With just one GPS receiver you get an accuracy of about 10 meters, plus high jitter in the tracking. Differential GPS using 2 GPS receivers (low-cost, 25mm-sized) improves the accuracy of tracking. Fong and team have taken it a step further with an advanced computational model that delivers higher precision for outdoor AR tracking. Fong claims that with a more expensive receiver he can achieve better than 1mm accuracy, but you can’t use this technique just anywhere. An infrastructure of stationary GPS stations transmitting wirelessly could provide wide, constant coverage for this technique.
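The geometry behind the carrier-phase trick fits in a few lines. Assuming the integer ambiguities are already resolved (the hard part, glossed over here), each double-differenced phase observation constrains the baseline between the two receivers:

    import numpy as np

    L1_WAVELENGTH = 0.1903                   # metres, GPS L1 carrier

    def solve_baseline(sat_dirs, dd_phase_cycles):
        # Each row: (e_sat_i - e_sat_ref) . b = wavelength * double difference
        A = sat_dirs[1:] - sat_dirs[0]
        y = L1_WAVELENGTH * dd_phase_cycles
        b, *_ = np.linalg.lstsq(A, y, rcond=None)
        return b                             # baseline vector in metres

    sat_dirs = np.array([[0.0, 0.0, 1.0],    # unit vectors to satellites
                         [0.8, 0.0, 0.6],
                         [0.0, 0.8, 0.6],
                         [-0.6, -0.6, 0.53]])
    true_b = np.array([1.5, -0.3, 0.2])
    dd = (sat_dirs[1:] - sat_dirs[0]) @ true_b / L1_WAVELENGTH
    print(solve_baseline(sat_dirs, dd))      # recovers ~[1.5, -0.3, 0.2]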

Fong concludes on a positive note regarding the upcoming European counterpart to GPS, dubbed Galileo (due in about 5 years), with which things will get significantly better.

Welcome to the climax of ISMAR ’08. On stage are the 9 “gods” of the augmented reality community, sitting on a panel to muse about the next 10 years of augmented reality.

Dieter Schmalstieg took on the unenviable job of moderating this crowd of bigwigs. Let’s see if he can hold them to 3 minutes each.

Here is a blow-by-blow coverage of their thoughts.

Ron Azuma (HRL)

The only way for AR to succeed is to insert AR into our daily lives – it has to be available all the time (like Thad Starner from GA Tech, who always wears his computer).
Ron asks: what if we succeed? What are the social ramifications? Those who have thought about it are science fiction writers, such as Vernor Vinge (have you read Rainbows End or Synthetic Serendipity?).

Reinhold Behringer (Leeds)

AR is at the threshold of broad applications.
Cameras, GPS, and bandwidth have improved immensely. The field is splitting into low-fi AR (approximate registration, low-end hardware) and high-end AR (live see-through displays, etc.).
What’s missing: APIs, common frameworks, an ARML descriptor (standardization).

Mark Billinghurst (HitLab NZ)

Mobility (now) – it took 10 years to go from backpack to palm.
Ubiquity (5+ years) – how will AR devices work with other devices (TV, home theater, …)?
Sociability – it took us 10 years to go from 2 to 4 to 8 users. When will we have massive scale?
Next is AR 2.0, with massive user-generated content and a major shift from technology to user interaction.

Steve Feiner – Columbia

AR means “The world = your user interface”
What will it take to make this possible?
Backpacks are ridiculous; handheld devices will look ridiculous 5 years from now – so don’t write off eyewear.
A big one is dynamic global databases for identification/tracking of real-world objects. Tracking could be viewed as “just” search (granted, a new kind of search).
There is more to AR than registration; AR presentations need to be designed (AR layouts).

Gudrun Klinker – TU München

Integrating AR with ubiquitous computing. We are interfacing with reality: some interfaces address our senses, others are mental. We need lenses to connect to our “senses” (not just visually – it could also be sound, etc.). Combining the virtual with the real – where is the information, and can we see it? How do we communicate with the stationary world? We need to connect with the room we are in and hear its “story”. The devices at least need to talk to each other.
We also need to think about “augmented” buildings; they do not evolve as fast as cell phones. Another aspect is how we are going to survive “this thing”: we need many more usability studies, connected with real-world applications. The ultimate test (I challenge you to show it in next year’s competition) is a navigation system for runners. It’s easy to do for cars – but may be harder for people.

Nassir Navab – TU München

The simplest idea is getting into the operating room – combining X-ray and optics as part of the common operating workflow.
Next is the fusion of pre- and intra-operative functional and anatomical imaging; patient motion tracking and deformable registration; adaptive, intuitive and interactive visualization; and integration into the surgical workflow.
Finally we need to focus on changing the culture of surgeons (e.g. training with AR simulation).

Haruo Takemura – Osaka University

Showing a table comparing the pros and cons of hardware platforms: e.g. mobile devices have potential benefits vs. HMDs (but also drawbacks, such as processing power); desktops are cheap and powerful but not mobile (tethered).
Cell phones have another issue – they are tied to the carriers, which is problematic for developers.

Bruce Thomas – UniSA

We are extremely interdisciplinary – and should keep it up.
However, with so many disciplines involved, it’s hard to develop and evaluate. And by the way, innovation is difficult to articulate.
We are in a “Neat vs. Scruffy” situation – the bottom line is that smaller, self-contained pieces of research are easier to get in front of the community – and get results.

Questions floated:
Is high-end or low-end AR the goal?
Is ubiquity in AR realistic or wishful thinking?
Are we innovative?
Does augmented reality need to make more money to survive?
Platforms: don’t write off eyewear?
Social: what if we succeed with AR?
What is the position of ISMAR in the scientific community?

A controversial question from the audience to the panel: How many of you have a subject matter expert working in your office on a daily basis? (A few hands.) How many of you have artists working with you on a daily basis? (Even fewer hands.) How much of your research has reached the real world? (Once again – few hands.)

A question from the audience about the future of HMD. Mark takes the mic and asks the audience:

How many of you would wear a head mounted display? (5 hands)

How many of you would wear a head mounted display that looks like normal glasses? (75% of the audience raises hands)

Dieter asks the panel members to conclude with one sentence each (no semicolons…)

Ron: I want to refer to the comment that the cell phone is too seductive. We should make it indispensable so users won’t want to give it up – just like a cell phone.

Mark: We need to make sure that children and grandparents, in Africa and everywhere, can use AR.

Steve: You ain’t seen nothing yet; look at the progress we have made in the last 10 years! No one can predict what will happen.

Gudrun: We have to be visionary, but on the other hand we need to be realistic and make sure AR doesn’t end up like AI…don’t build hopes in areas where people shouldn’t have them…don’t let AR get burned…

Nassir: Next event we should include designers and experts from other disciplines; and create solutions that go beyond the fashion

Haruo: Maybe combining information like Googles with devices

Bruce: I want you to have fun and be passionate about what you do! We can change the world!

After a quick liquid adjustment, and a coffee fix – we are back with the next session of ISMAR ’08, tackling a major topic in augmented reality: Tracking.

Youngmin Park is first on stage with Multiple 3D Object Tracking. His first demonstration is mind-blowing: an application that tracks multiple 3D objects, which has never been done before – and is quite essential for AR applications.

The approach combines the benefits of multiple approaches while avoiding their drawbacks:

Match input image against only a subset of keyframes

Track features lying on the visible objects over consecutive frames

The two sets of matches are combined to estimate the objects’ 3D poses by propagating errors

Conclusion: multiple objects are tracked at interactive frame rates, and performance is not affected by the number of objects.
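To make the combination step concrete, here is a hedged sketch: the paper propagates match errors, while this simplification just feeds both match sets to a robust PnP solver.

    import cv2
    import numpy as np

    def estimate_pose(keyframe_matches, frame_matches, K):
        # Each match set is a (3D model points, 2D image points) pair.
        pts3d = np.vstack([m[0] for m in (keyframe_matches, frame_matches)])
        pts2d = np.vstack([m[1] for m in (keyframe_matches, frame_matches)])
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d.astype(np.float32), pts2d.astype(np.float32), K, None)
        return (rvec, tvec) if ok else None

    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float64)
    obj = np.random.uniform(-1, 1, (30, 3))  # synthetic object points
    img = cv2.projectPoints(obj, np.zeros(3), np.array([0.0, 0.0, 5.0]),
                            K, None)[0].reshape(-1, 2)
    pose = estimate_pose((obj[:15], img[:15]), (obj[15:], img[15:]), K)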

Don’t miss the demo.

~~~

Next are two talks by Daniel Wagner from Graz University, starting with his favorite topic: Robust and Unobtrusive Marker Tracking on Mobile Phones.

Why AR on cell phones? There are more than a billion phones out there, and everyone knows how to use them (which is unusual for new hardware).

A key argument Daniel makes: marker tracking and natural feature tracking are complementary. But we need more robust tracking for phones, and less obtrusive markers.

The goal: less obtrusive markers. Here are 3 new marker designs:

The frame marker: the frame provides the marker code, while the inner area is used to present human-readable information.

The split marker (somewhat inspired by Sony’s The Eye of Judgment): the barcode is split, with similar thinking to the frame marker.

The third is a dot marker. It covers only 1% of the overall area (assuming that area is uniquely textured – such as a map).

On top of the marker designs comes incremental tracking using optical flow, sketched generically below.
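The optical-flow step pushes marker corners found in the previous frame into the current one instead of re-detecting the marker from scratch (a sketch with OpenCV’s pyramidal Lucas-Kanade, not TU Graz’s phone implementation):

    import cv2
    import numpy as np

    def track_corners(prev_gray, curr_gray, prev_corners):
        curr, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, curr_gray, prev_corners, None,
            winSize=(21, 21), maxLevel=3)
        return curr[status.ravel() == 1]     # keep successfully tracked points

    prev = np.random.randint(0, 256, (240, 320), np.uint8)
    curr = np.roll(prev, 2, axis=1)          # fake a 2-pixel camera pan
    corners = np.array([[[50.0, 60.0]], [[200.0, 60.0]],
                        [[200.0, 180.0]], [[50.0, 180.0]]], np.float32)
    print(track_corners(prev, curr, corners))  # corners shifted right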

These requirements are driven by industrial needs: “more beautiful markers” and, of course, making them more robust.

~~~

Daniel continues with the next discussion, about Natural feature tracking on mobile phones.

Compared with marker tracking, natural feature tracking is less robust and needs more knowledge about the scene, more memory, better cameras, and more computational power…

To make things worse, mobile phones have less memory, less processing power (and no floating-point unit), and low camera resolution…

The result is that a high-end cell phone runs about 10x slower than a PC, and it’s not going to improve soon, because battery power is limiting the advancement of these capabilities.

So what to do?

We looked at two approaches:

SIFT (one of the best object recognition engines – though slow), and

Ferns (state of the art for fast pose tracking – but very memory intensive).

So neither approach, as-is, works for cell phones…

The solution: combine the best of both worlds into what they call PhonySIFT (a modified SIFT for phones), complemented with PhonyFerns – detecting the dominant orientation and predicting where a feature will be in the next frame.

Conclusion: both approaches did eventually work on mobile phones in an acceptable fashion. The combined strengths made it work, and now both Ferns and SIFT run at similar speed and memory usage.
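The detect-then-track split is easy to illustrate (OpenCV’s ORB stands in for the PhonySIFT descriptors here; the loop structure is the point, not the features): run the expensive full-frame detection only when too few features survive, and let cheap frame-to-frame tracking carry the pose in between.

    import cv2
    import numpy as np

    orb = cv2.ORB_create(400)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def detect_features(ref_desc, frame):
        kp, desc = orb.detectAndCompute(frame, None)
        return [] if desc is None else matcher.match(ref_desc, desc)

    ref = np.random.randint(0, 256, (240, 320), np.uint8)  # stand-in target
    _, ref_desc = orb.detectAndCompute(ref, None)

    matches = []
    for frame in (ref, np.roll(ref, 2, axis=1)):   # tiny two-frame "video"
        if len(matches) < 20:                      # lost, or cold start
            matches = detect_features(ref_desc, frame)  # slow detection path
        # else: cheap frame-to-frame tracking would update matches here
    print(len(matches), "matched features")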