Seeing this little animation was one of those serendipitous moments, as I had that very day experienced something eerily similar.

I’ve written previously about how I’ve been toying around with the augmented reality app Aurasma. In A way with the fairies I described how I used this app to replicate Disney’s fairy trail in my local botanic garden.

Impressed with what the app can do, I turned my attention to using it in the workplace. I decided to start small by using it to promote a new online course that my team was launching. I took a screenshot of the main characters in the course’s scenario and provided 3-step instructions for the target audience outlining how to: (1) Install the app onto their mobile device; (2) Visit the relevant URL in their browser; and (3) Point their device at the picture. When they did so, the names of the characters would magically appear above their heads.

This wasn’t just a gimmick; it was a proof of concept. By starting small, I wanted to test it cheaply and fail quickly. And fail I did.

When I asked several of my tech-savvy colleagues to test it, every one of them reported back saying it didn’t work. Huh? It worked for me! So what could be the problem?

After much tinkering and re-testing in vain, I decided to ask a friend of mine to test it. Bang, it worked on her first go. As it turns out, my colleagues simply weren’t following the second instruction to go to the URL. In their excitement to scan the image, they did so immediately after installing the app – but of course, without the link, the app had nothing to connect the image to my augmentation. So when I pointed out their skipping of Step 2 and they re-tried it, voilà, it worked.

Despite this rough start, another colleague of mine cottoned on to my trial and was keen to use the idea to jazz up a desk-drop he was creating. Upon scanning the trigger image, he wanted a video to play. Aurasma can indeed do this, but I was trepidatious because my experiment had failed with tech-savvy colleagues – let alone regular folks. But I decided to look on the bright side and consider this an opportunity to expand my sample size.

Learning from my mistakes, I re-worded the 3-step instructions to make them clearer, and this time I asked a colleague to test it in front of me. But again we ran into trouble. This fellow did follow Step 2, but when the URL opened the app, it immediately required him to scroll through a tutorial. Then it asked him to sign up. Argh… these steps were confusing… and I was oblivious to it because I had installed Aurasma ages ago and had long since done the tutorial and signed up.

But that wasn’t all. After I shepherded my colleague to Step 3, he held out his smartphone and pointed it at the image like a lightsaber. WTF? He had taken the instruction to “point” his device literally.

Another lesson learned.

Steve Jobs famously obsessed over making his products insanely simple. Apple goodies don’t come with a user manual because they don’t need one.

My experience is certainly a testament to that philosophy.

Three steps were evidently too many for my target audience to handle. The first step appeared simple enough: millions of people go to the App Store or Google Play to install millions of apps. And indeed, no one in my test balked at that. (Although convincing IT to tolerate a third-party app would have been my next challenge.)

Similarly, the third step was easy enough when re-worded to point your device’s camera at the image.

The second step was the logjam. Not only is it unintuitive to open your browser after you have just installed a new app, but dutifully following this instruction mires you in yet more complexity. Sure, there is an alternative: search for the specific channel within the Aurasma app and then follow it – but that too is problematic, as the user has to click a tab to filter the channel-specific results, which is academic anyway if you don’t want the channel to be public.

I understand why Aurasma links images to augmentations via specific channels. Imagine how the public would augment certain corporate logos, for example; those corporations wouldn’t want anything derogatory propagated across the general Aurasmasphere. Yet they hold the rights over their IP, so I would’ve thought that cutting off Joe Public’s inappropriate augmentation would be a matter of sending a simple email request to the Aurasma folks. Not to mention it would be in the corporation’s best interest to augment its own logo.

Anyway, that’s all a bit over my head. All I know is that requiring the user to follow a particular channel complicates the UX.

So that has caused me to wind down my plans for augmented domination. I am still thinking of using Aurasma: we might use it in our corporate museum to bring our old photos and artefacts to life. But if we go down this road, I’ll recommend that we provide a loan device with everything already set up on it and ready to go – like MONA does.

One of the greatest hoaxes to be perpetrated last century was that of the Cottingley Fairies.

In 1917, 9-year-old Frances Griffiths and her 16-year-old cousin, Elsie Wright, borrowed Elsie’s father’s camera to take a photograph of the fairies they claimed lived down by the creek. Sure enough, when he developed the plate, Elsie’s father saw several fairies frolicking in front of Frances’s face.

A couple of months later the girls took another photograph, this time showing Elsie playing with a gnome. While her father immediately suspected a prank, her mother wasn’t so sure and she took the photos to a local spiritualist meet-up. From there the photos, and three others taken subsequently by the girls, eventually hit the press and became a worldwide sensation.

Although the photos were quite real, the fairies of course were fake. In 1983, the cousins (by now old ladies) finally confessed they were cardboard cut-outs.

Not everyone – it must be said – had been fooled. In fact, most probably weren’t. However, one who took the bait hook, line and sinker was none other than Sir Arthur Conan Doyle, whose involvement helped catapult the photographs to public attention in the first place.

It must also be said that as a devout spiritualist, Sir Arthur wanted to believe.

Developed in association with the Botanic Gardens Trust, the app promised to bring to life Tinker Bell and her flighty friends using our beautiful public spaces as the backdrop.

Despite not being a member of the app’s target audience, I am an augmented reality advocate, so I downloaded the app and installed it on my iPad.

Its instructions were simple:

1. Choose a botanic garden.

2. Start the trail.

3. Use the map to help find all the fairy locations.

4. Your device will vibrate when there is a fairy nearby.

5. Use your device to find the fairy.

6. Tap the fairy to reveal their [sic] secrets.

I wanted to believe, so I hotfooted my way to the Royal Botanic Garden in Sydney to give it a go. Unfortunately the experience was less than optimal.

From the get-go, the map was so high-level it was effectively useless.

After wandering semi-randomly around the park, I stumbled upon a tiny sign with an arrow promising that fairies live over there. Yet after crisscrossing my way all over the vicinity, my device never vibrated.

So I moved on. After a while I stumbled upon another tiny sign promising that the fairy trail continued this way, but after a short distance the path split out into multiple alternatives, none of which were sign posted.

After some more rambling, I finally stumbled upon a nice big sign declaring that fairies live here. Alas, still no vibrating – but I had a thought… perhaps the app doesn’t like my iPad? So I whipped out my Android smartphone and downloaded the app to it. It wouldn’t be as fun on the smaller screen, but I was determined to see a friggin fairy.

But it didn’t work on my smartphone either.

I should have read the customer reviews on the App Store before going to so much trouble. I clearly wasn’t the only one who had trouble with the app. And this bewilders me.

Unfortunately it was no surprise in retrospect when the real-life aspect of the experience didn’t work. To put my opinion into context, the Botanic Gardens Trust is the organisation that allows hordes of boot campers to bully tourists and locals alike off the park’s footpaths; and whose own staff choose the peak CBD lunch hour to drive their trucks and trailers along said paths.

But Disney! How could Disney fail to bring Tinker Bell & Co to life? The makers of masterpieces such as Toy Story and The Lion King evidently couldn’t engineer a dinky little augmented reality app that would work on my device of choice – or even on my device of second choice.

It just goes to show, if you want something done right, you have to do it yourself. So I did.

I discovered the Aurasma app a while ago, and I’ve been toying around with it to get a sense of how it works.

The app allows you to upload an image or a video, which appears or plays when you scan a real-world trigger with your device’s camera, hence augmenting reality.

I haven’t yet encountered a burning need to use it for an educational purpose at my workplace, but when I had trouble with the Fairies Trail app, I decided to see if I could replicate the intended experience.

So I downloaded Green Fairy 3 by TexelGirl, uploaded her to Aurasma, and associated her with a rose in the garden. When I scanned my iPad’s camera over the rose… Voila! She appeared.

As you can see via my screenshot, Aurasma supports the binary transparency of PNG files; with the background of the fairy image invisible, the real background shines through. The app also supports the partial transparency of PNG files; if I were to make the fairy 50% transparent, the real background would be partially visible through her.
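For the technically curious, the two behaviours above are just the standard “over” compositing rule applied per pixel. The following is a minimal sketch in Python with made-up pixel values – none of this comes from Aurasma itself – showing how a fully transparent pixel lets the background shine through, an opaque pixel replaces it, and a 50%-transparent pixel blends the two:

```python
# A minimal sketch of straight-alpha "over" compositing.
# Pixels are (R, G, B, A) tuples with each channel in 0..255.
# All values here are illustrative, not taken from any real image.

def over(fg, bg):
    """Composite a foreground pixel over an opaque background pixel."""
    a = fg[3] / 255  # foreground opacity as a fraction
    return tuple(round(fg[i] * a + bg[i] * (1 - a)) for i in range(3)) + (255,)

garden = (34, 139, 34, 255)        # the real background (opaque green)
fairy_opaque = (255, 215, 0, 255)  # an opaque fairy pixel: replaces the background
fairy_clear = (255, 215, 0, 0)     # a fully transparent pixel: background untouched
fairy_half = (255, 215, 0, 128)    # a ~50%-transparent pixel: the two blend

print(over(fairy_opaque, garden))  # (255, 215, 0, 255)
print(over(fairy_clear, garden))   # (34, 139, 34, 255)
print(over(fairy_half, garden))    # a gold/green blend
```

Binary transparency is just this rule with alpha pinned to 0 or 255, which is why the background either shines through completely or not at all.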

My fairy was a static image, so she wasn’t moving. While Aurasma doesn’t seem to support animated GIFs, it does support video. However, there appears to be a conspicuous problem: my understanding is that the MP4 format does not support transparency, and while FLV does, it won’t run on the iPad. I tweeted the Aurasma folks asking them to clarify this, but they are yet to respond.

Nevertheless, I discovered a workaround, which is to duplicate the real background in the background of your video clip. That way, when the video launches, it appears that its background is the real background. (I suspect this is how Dewar’s brought Robert Burns to life.)

Of course, this means the experience is no longer augmented reality, but rather an illusion of it. Though I wouldn’t go so far as to call it a hoax. Fauxmented reality, perhaps?

This volume comprises my latest collation of articles from this blog. As in the earlier volumes, my intent is to provoke deeper thinking across a range of e-learning related themes in the workplace.

I love history, I love augmented reality, and I own an iPhone – so a combination of all three proved irresistible.

Unfortunately, though, I was a little bit disappointed.

Here’s why…

1. The title is meh

Exciting initiatives should have a catchy yet self-evident title to attract users like bears to a honey pot. However, Augmented Reality browsing of Powerhouse Museum around Sydney is boring and clunky.

I’d prefer something like Pocket Time Machine: An augmented reality tour of Old Sydney. A bit cheesy, I know, but a lot more interesting.

2. The app focuses on south CBD and the inner west

As the first European settlement on the continent – with a rich indigenous history – Sydney is teeming with sites of historical significance. However the app conspicuously misses the most obvious ones (eg Sydney Harbour Bridge, Sydney Opera House and the AMP Building).

Of course you have to start somewhere and the PhM website does promise a new version, but it refers to contemporary photography and gamification. I’d rather they expand their range into The Rocks and Circular Quay.

3. The app barely augments reality

Since the app is built on the Layar platform, it connects to Google Maps. Select the “i” icon at the relevant location and a photo pops up from the museum’s collection showing you what it looked like 100 years ago. This functionality is excellent, and frankly it could stand alone.

I suggest PhM follows the lead of the Museum of London and leverages the technology more fully. How? By laying the old photos over the real background.

This is what edtech is all about: transforming the educational experience.

Put a map on a smartphone? A crumpled tourist map is just as good. Plug in some photos? Nice touch, but those can be printed too. Lay century-old photos over the modern world in real time? Now that’s novel.

Even better, why not complement the visual with narration to provide a richer multimedia experience?

Who dares wins

As you would have gathered earlier, it is not my intention to pick on PhM. On the contrary, I salute them for having a red-hot go at something new.

Having taken the first step, they have earned the right to sit back and evaluate their app, with a view to making it even better the next time around.

I tried a similar thing at home when my local newspaper promoted Night At The Museum 2. I put the paper up to my webcam, and like magic a dinosaur skeleton came to life, a giant squid flailed its tentacles, and an aeroplane buzzed around my head.

But are these two latter examples really augmented reality?

By projecting both the digital imagery and the real background onto a computer screen, I would argue they are not actually augmenting reality. Instead, they are augmenting a representation of reality.

It’s just like adding cartoons to a movie set like they did in Who Framed Roger Rabbit, using CGI like they did in Star Wars, or even scribbling a moustache and devil horns onto someone’s photo.

In all these examples, the background isn’t real. It’s film, or light, or paper. In other words, a copy of reality.

Rewind

This insight was genius – at least in my own mind – until I realised that a smartphone doesn’t actually show reality on the other side of itself as do goggles or the viewfinder of an old camera. Instead, the device digitises the image and represents it as pixels on the screen, like a modern camera.

With that in mind, the Layar example is closer to the GE example than it is to the BMW example. Damn!

This was bugging me, and after a period of reflection I think I’ve identified why.

New criteria

To me, the exciting emergent form of augmented reality has the following characteristics…

1. It adopts the user’s personal POV.

When a webcam captures reality and projects it onto a computer screen, it’s not real in the sense that you don’t look at the background in that way (unless you constantly carry a mirror around with you).

A smartphone similarly projects the background onto its screen, but because you are mobile and pointing the device in front of you, it is for all intents and purposes real.

2. It is live.

We don’t live our lives by watching a recording of it. We live it here and now.

Reality is in real-time.

The two types

In light of the above criteria, I recognise two types of augmented reality:

Type I Augmented Reality (AR1), whereby the artificial imagery is layered over the background from the personal POV in real-time;

and

Type II Augmented Reality (AR2), whereby the artificial imagery is layered over the background from an impersonal POV or not in real-time.