The leaders of this workshop will discuss how they are currently using Mixed and Augmented Reality for education and entertainment and the challenges they face or most wish to tackle in the future.

First Post of Three

This is the first of three posts covering this workshop. Here, I will summarize the first three of six presentations given in the morning by those already using augmented reality for their particular purposes. In the next post, I will cover the remaining talks. Finally, the third post will cover the afternoon's discussions that sought to answer three main questions about augmented reality's place in education.

What Is Augmented Reality?

If you really want to know, check out the Wikipedia article. Points raised before the six presentations began include:

AR gives context to the situation; it's not an out-of-body experience or something separate from the world we know.

It blends the real and the synthetic.

When the technology disappears, the imagination is enhanced.

It involves multiple senses.

It can record experiences in detail (high scores, learner stress, and so on).

What's Happening at UCF

Eileen Smith, director at the Institute for Simulation and Training at the University of Central Florida, spoke first, telling us about some of UCF's projects in experiential learning. Examples include informal learning at museums, teacher training, a recreation of the World's Fair, and military training.

One of the most interesting and unique uses of AR, for me, was the green kitchen. This is a reconfigurable set of cabinetry that can be arranged to match anyone's kitchen. Someone requiring cognitive rehabilitation can then wear a head-mounted display, see what looks a lot like their own home, and practice performing simple tasks like making cereal.

Another neat project was Journey With the Sea Creatures. A magic window into a fossil exhibit that would otherwise never change made the museum worth visiting more than once. The program filled the room with virtual water and brought back the amazing creatures that lived there long ago. Apparently, once children discovered this feature, they would go back into the main exhibit area and start "swimming" around for their friends and family to see on the magic window.

Eileen closed with a suggestion on when to use augmented reality. Don't use it when the real world will do just fine (in other words, if you can just do what you are trying to simulate, why bother with the simulation?). Instead, employ AR when you want to explore space, time, and scale, or to collect data you can then use or display to others later.

Museum Exploration, DNP Digitalcom

Next up was Tsutomu Miyashita from DNP Digitalcom [Japanese]. He discussed AR projects intended for the Louvre: encouraging visitors to better appreciate the art, and providing route guidance.

His group initially wanted to use markerless tracking, feeling that 2D barcodes, not being terribly attractive, would detract from the art itself. Visitors using this technology were surprised and gleeful, but because they were unfamiliar with the concept of AR, they did not use it as expected. The weight and battery life of the devices were also a problem (something that may not matter much in research, but is crucial in the real world!).

The next iteration used cell phones and markers instead. In this interface, a computer-animated character taught users how to view and properly appreciate the art, in addition to showing them where to go next. Visitors understood the marker-based system much better, and it also performed better in terms of recognition accuracy.

The key takeaway was that users feel surprised when they see augmented reality for the first time, which earns strong attention. But if they don't know how to use it, engaging them so that they actually want to figure it out becomes essential. Once their attention is obtained, the aims become retention, understanding, and satisfaction.

EyePet

Istvan Siklossy spoke next, mainly showing us the new EyePet game for PlayStation 3. He explained that in camera-based games, you typically see yourself and use motions and gestures to interact. Player actions map directly to game actions, making these games accessible to everyone.

(Here Istvan is showing the shower game for EyePet; the screen is all foggy like a shower door!)

In EyePet, an adorable creature comes to life on your living room floor. Your interaction with it, through gestures as well as a special marker, is robust and responsive. It's quite impressive! To achieve robust tracking even in low lighting (which produces noisy images), the group took the usual tracking algorithms and made improvements such as rapid multiple thresholding to find many contours and locate the marker. In the skill-based games, it's crucial that tracking accuracy be nothing less than excellent.
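To give a feel for the multiple-thresholding idea, here is a toy sketch in Python. Everything in it is my own assumption, not the EyePet team's actual code: the threshold levels, the use of simple connected-component labeling in place of full contour tracing, and the synthetic test frame are all made up for illustration. The point it demonstrates is that binarizing the same frame at several levels, rather than one fixed level, gives the marker more chances to show up cleanly when lighting is uneven.

```python
# Illustrative sketch of multiple thresholding for marker detection.
# All names, thresholds, and the test image are hypothetical.

from collections import deque

def connected_components(binary):
    """Label 4-connected foreground regions; return a list of pixel sets."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                region, queue = set(), deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    region.add((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions

def find_marker(gray, thresholds=(64, 128, 192), min_area=4):
    """Binarize at several levels and pool the candidate regions.

    A single fixed threshold fails when lighting varies across the frame;
    trying several levels makes it more likely the marker separates
    cleanly from the background at one of them.
    """
    candidates = []
    for t in thresholds:
        # Assume a dark marker on a lighter floor.
        binary = [[1 if px < t else 0 for px in row] for row in gray]
        for region in connected_components(binary):
            if len(region) >= min_area:
                candidates.append((t, region))
    return candidates

# Tiny synthetic frame: a dark 3x3 "marker" on a bright background.
gray = [[200] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(2, 5):
        gray[y][x] = 30

candidates = find_marker(gray)
# Here every threshold recovers the same 9-pixel region.
```

A real implementation would of course trace contours, fit quadrilaterals, and verify the marker's interior pattern; this sketch only shows why pooling several binarizations helps in noisy, low-light frames.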

As a learning environment, EyePet allows for experimentation: basic sketches drawn by players are interpreted and transformed into toys for the pet. Players learn how the pet reacts, get a personalized experience, and have the opportunity to record and share videos of their experience.