You already know what augmented reality is. You just might not know it's called that and, when you've seen it at its best, you probably haven't even noticed it at all.

Put at its simplest, augmented reality, or AR to its friends, is the art of superimposing computer-generated content over a live view of the world. It is quite literally the practice of enhancing what's already around us. The most often used example is the one the world is most familiar with: television sports analysis. The reality is the footage of the game of football, rugby, cricket or what have you, and the augmentations are the arrows showing the players' movements and the zonal areas marked out that don't exist on the actual playing surface. For example, the first down line in American football doesn't exist on the real pitch, but the viewer can see it nonetheless on the picture beamed over the airwaves and onto television screens thanks to the addition of a graphical overlay.

One definition of AR, as laid down by Professor Ronald Azuma in his 1997 paper A Survey of Augmented Reality, is that it combines the real and the virtual, is interactive in real time and is registered in 3D. Using the example of the first down line, you've got the combination of the computer-generated line - the virtual - on top of the real American football footage; it's present in the real-time motion of the game - whether recorded or not - and the graphical line obeys all the physical rules of depth as if it existed in the real world. In other words, it appears underneath the players' feet and behind their legs when they cross it, rather than spatially out of sync. It's as if it really is painted on the pitch.
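To make that occlusion idea concrete, here's a minimal, purely illustrative Python sketch: the virtual line is painted only where the camera can see turf, so a player standing on the line covers it up. The pixel format, the green-dominance test and every name here are assumptions invented for the example, not a description of how any real broadcast system works.

```python
def composite_first_down_line(frame, line_mask, line_color, is_grass):
    """Overlay a virtual line only where the pitch shows through.

    frame: rows of (r, g, b) pixel tuples
    line_mask: same shape, True where the virtual line lies
    is_grass: predicate deciding whether a pixel looks like turf
    """
    out = []
    for row, mask_row in zip(frame, line_mask):
        out_row = []
        for pixel, on_line in zip(row, mask_row):
            # Paint the line only on grass-coloured pixels, so a
            # player's leg crossing the line occludes it.
            if on_line and is_grass(pixel):
                out_row.append(line_color)
            else:
                out_row.append(pixel)
        out.append(out_row)
    return out

# Toy 1x3 frame: grass, a player's leg (near-white), grass.
grass, leg = (30, 140, 40), (230, 230, 230)
frame = [[grass, leg, grass]]
mask = [[True, True, True]]  # the line crosses all three pixels
is_grass = lambda p: p[1] > p[0] and p[1] > p[2]  # green-dominant
yellow = (255, 220, 0)

result = composite_first_down_line(frame, mask, yellow, is_grass)
# The leg pixel is left untouched; the grass pixels take the line colour.
```

The point of the sketch is only the per-pixel decision: the overlay yields to anything that isn't pitch-coloured, which is what makes the line look painted on the grass rather than floating over the players.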

The first down line, of course, represents a very mild level of AR. It's a small, simple, largely static piece of augmentation and makes up a tiny percentage of the total view. It's much more reality than virtuality. Nevertheless, according to a second definition by Paul Milgram and Fumio Kishino, it still sits on a continuum of augmentation, which they describe as a line running between the real world and a totally virtual environment. At one end would be what you see through your eyes with no device in the way, and at the other a completely computer-generated world, a virtual reality such as Second Life.

The example of the first down line lies a long way to the left of this scale - just millimetres from the end, really - but it's equally possible to have something you might call augmented virtuality at the corresponding point on the right. By today's standards, anything that sits anywhere on this continuum, other than right at one of the extremes, would be classed as AR.

Now that we're fully versed in what it is we're talking about, let's get practical with a few examples of what AR can do for us and how it works. The key part of AR is that you need to place a layer of virtual information over your view of the real world and, in order to do that, there must be a device in between on which to display that information. There are three main ways of doing that, and they all relate to the position these devices occupy.

The first instance is where that display is right up against the eyes of the beholder. These are often referred to as Head-Mounted Displays (HMDs) and take the form of a visor of some sort or a pair of connected glasses, such as those manufactured for consumers by Vuzix. The holy grail of HMDs is a contact lens solution, and indeed there's plenty of research and development here that will be the subject of other articles in AR Week on Pocket-lint. HMDs are generally a good solution. They leave the user's hands free and mean that their entire visual field can be overlaid with augmentation wherever they turn.

One step away from that are the handheld devices, most notably these days smartphones, though they will doubtless soon include camera-toting tablet computers after the flurry seen at CES and MWC 2011. Either way, there's an advantage here in that these things already exist in quite powerful and convenient forms. The issues, though, are that the user is limited to just a frame, that their hands are tied up and, possibly most problematic of all, that the more AR is used like this, the greater the chance that people will end up accidentally assaulting each other as they spin around with their arms stretched out in front of them, or simply getting their expensive phones whipped from their grip.

Finally, at the other end of the business, and closest to the real world itself, is the method of pasting that computer-generated overlay directly on top of your real environment instead. It's usually done with digital projectors or other devices known in this group as Spatial Displays. The advantages are that the user is required to hold or wear no computer equipment whatsoever and, as the second key difference, that everyone else can see the AR as well. The downside is that it'll only work in a very specific environment, but it's perfect for collaborations like building projects and might even be the future of construction sites. Who needs plans and blueprints when everyone can see where each timber is supposed to go?

There is another, slightly different group of devices out there which can be employed in activity-specific environments and, in fact, these are what's used for some of the most developed and mature examples of AR around at the moment. These are the Heads-Up Displays, or HUDs: powered, connected, transparent view screens with computer-generated graphical information displayed over the environment.

Essentially, we're talking about large, fixed, transparent computer screens sat somewhere not too far out in front of the user. The military have been using them for years, with the screens of fighter jets made just like this, and indeed it's where the name HUD comes from. It's a heads-up display because the pilots are able to keep their heads up, looking at the action in front of them, rather than having to constantly reference controls and meters on their cockpit panels. They can monitor the speed of other aircraft, compass headings, vectors, wind speed and anything else you could possibly want to know - all by just looking straight ahead. In the next few years, we'll begin to see consumer car windscreens used as HUDs but, again, there'll be more on that coming up in AR Week.

So, by now you should be thinking that you've pretty much got a handle on what this AR thing is all about, and there's even an excellent chance that you knew it all along anyway. Well done you. But just before we go, spare a little thought for this teaser - does AR have to be all about your eyes?

What about putting a layer of information between your other four senses and the rest of the world? For touch, you could have information sent back to you via haptic feedback; for taste, there could be a device mounted on your tongue - as uncomfortable as that might sound; having headphones in your ears is simple enough; and there's no reason why there couldn't be similar units for one's nostrils.

So long as the user still has contact with, and appreciation of, the natural world while plugged up with sensors, then there is still the mixing of the real and the virtual, and that is what AR is all about. A car's increasingly rapid beeps as its bumper gets closer and closer to a static object could be considered AR, the noises of a Geiger counter are a form of AR and, doubtless, someone could invent a small nasal-lining film for hay fever sufferers that might emit a strong smell when pollen passes across it.
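That parking-sensor behaviour is simple enough to sketch in a few lines: a hypothetical function mapping bumper-to-obstacle distance onto the gap between beeps, so the beeps speed up as the gap closes. Every name and threshold here is invented for illustration; real sensors will use their own ranges and curves.

```python
def beep_interval(distance_m, max_range_m=1.5,
                  min_interval_s=0.1, max_interval_s=1.0):
    """Map distance to the pause between beeps: nearer means faster.

    Returns None when the obstacle is out of range (silence) and
    0.0 at contact (a continuous tone), scaling linearly in between.
    """
    if distance_m >= max_range_m:
        return None
    if distance_m <= 0:
        return 0.0
    fraction = distance_m / max_range_m
    return min_interval_s + fraction * (max_interval_s - min_interval_s)

print(beep_interval(2.0))   # out of range -> None (silence)
print(beep_interval(0.75))  # halfway -> 0.55 seconds between beeps
print(beep_interval(0.0))   # contact -> 0.0 (continuous tone)
```

The virtual layer here is audio rather than graphics, but the structure is the same as the visual examples above: a sensor reading about the real world, converted into a synthetic signal the user perceives alongside it.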

All the same, for the time being, most of the development and much of the interest in AR lies with the visual mode, largely because it's an excellent medium for getting across richer information in any one moment, and it's here that much of AR Week will be based. Tune in as we take a closer look at some of the more exciting applications that the future holds.