Make the world your virtual target range in this Augmented Reality tutorial

This is the second part of a four-part series on implementing AR (Augmented Reality) in your games and apps. Check out the first part of the series here!

Welcome to the second part of this tutorial series! In the first part of this tutorial, you used the AVFoundation classes to create a live video feed for your game, showing the video from the rear-facing camera.

Your task in this stage of the tutorial is to add some HUD overlays to the live video, implement the basic game controls, and dress up the game with some explosion effects. I mean, what gamer doesn’t love cool explosions? :]

Adding Game Controls

Your first task is to get the game controls up and running.

There’s already a ViewController+GameControls category in your starter project; this category handles all the mundane details relating to general gameplay support. It’s been pre-implemented so you can stay focused on the topics in this tutorial directly related to AR gaming.

Open up ViewController.mm and add the following code to the very end of viewDidLoad:

// Activate Game Controls
[self loadGameControls];

Build and run your project; your screen should look something like the following:

Basic gameplay elements are now visible on top of the video feed you built in the last section.

Here’s a quick tour of the new game control elements:

The instruction panel is in the upper left portion of the screen.

A scoreboard is located in the upper right portion of the screen.

A trigger button to fire at the target can be found in the lower right portion of the screen.

The trigger button is already configured to use pressTrigger: as its target.

pressTrigger: is presently stubbed out; it simply logs a brief message to the console. Tap the trigger button a few times to test it; you should see messages like the following show up in the console:
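The stub itself might look something like the following sketch; the exact log text is an assumption:

```objc
// Stubbed-out trigger handler; for now it just logs a message.
- (IBAction)pressTrigger:(id)sender
{
    NSLog(@"Trigger pressed!");
}
```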

A set of red crosshairs is now visible at the center of the screen; these crosshairs mark the spot in the “real world” where the player will fire at the target.

The basic object of the game is to line up the crosshairs with a “real world” target image seen through the live camera feed and fire away. The closer you are to the center of the target at the moment you fire, the more points you’ll score!

Designing the Gameplay

Take a moment and consider how you want your gameplay to function.

Your game needs to scan the video feed from the camera and search for instances of the following target image:

Once you detect the target image, you then need to track its position on the screen.

That sounds straightforward enough, but there are a few challenges here. The onscreen position of the target will change, or the target may disappear entirely, as the user moves the device back and forth or up and down. Also, the apparent size of the target image on the screen will vary as the user moves the device towards or away from the real world target image.

Shooting things is great and all, but you’ll also need to provide a scoring mechanism for your game:

If the user aligns the crosshairs with one of the rings on the real world target image and taps the trigger, you’ll record a hit. The number of points awarded depends on how close the user was to the bull’s-eye when they pressed the trigger.

If the crosshairs are not aligned with any of the five rings on the real world target when the user taps the trigger button, you’ll record a miss.

Finally, you’ll “reset” the game whenever the app loses tracking of the target marker; this should happen when the user moves the device and the target no longer appears in the field-of-view of the camera. A “reset” in this context means setting the score back to 0.

That about covers it; you’ll become intimately familiar with the gameplay logic as you code it in the sections that follow.

Adding Gameplay Simulation

There’s a bit of simulation included in the project to let you exercise the game controls without implementing the AR tracking. Open ViewController+GameControls.m and take a look at selectRandomRing:

This method uses the test API selectRandomRing to choose a random ring. If a ring is selected, you record a "hit" along with the commensurate number of points; if no ring is selected, you record a "miss".
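A handler along these lines would match that description; selectRandomRing's return convention and the point values here are assumptions, not part of the starter project:

```objc
// Simulated trigger handler: pick a random ring and score it.
- (IBAction)pressTrigger:(id)sender
{
    NSInteger ring = [self selectRandomRing]; // assumed: 0 = miss, 1-5 = ring number
    if (ring > 0) {
        // Inner rings are worth more points; the multiplier is an assumption.
        [self hitTargetWithPoints:(6 - ring) * 100];
    } else {
        [self missTarget];
    }
}
```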

You're abstracting the target hit detection to a separate module so that when it comes time to do away with the simulation and use the real AR visualization layer, all you should need to do is replace the call to selectRandomRing with the call to your AR code.

Still in ViewController.mm, replace the stubbed-out implementation of hitTargetWithPoints: with the code below:

A companion method triggers when you record a "miss" and simply plays a "miss" sound effect.
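Based on that description, the pair of handlers likely resembles the following sketch; the helper and property names are assumptions:

```objc
// Record a hit: add the awarded points and play a "hit" sound.
- (void)hitTargetWithPoints:(NSInteger)points
{
    self.score += points;       // assumed score property
    [self updateScoreboard];    // assumed helper
    [self playHitSoundEffect];  // assumed helper
}

// Record a miss: simply play a "miss" sound effect.
- (void)missTarget
{
    [self playMissSoundEffect]; // assumed helper
}
```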

Build and run your project; tap the trigger button to simulate a few hits and misses. selectRandomRing returns a hit 50% of the time, and a miss the other 50% of the time.

At this stage in development, the points will just keep accumulating; if you want to reset the scoreboard you'll have to restart the app.

Adding Sprites to Your Display

Your crosshairs are in place, and your simulated target detection is working. Now all you need are some giant, fiery explosion sprites to appear whenever you hit the target! :]

Note: Sprites, the staple elements of game programming, can be implemented in many different ways. Several game engines, including Cocos2D, provide their own sprite toolkits, and the recent release of Sprite Kit provides native support for sprite animations in iOS 7 and above. Check out the Sprite Kit Tutorial for Beginners on this site to learn more.

The images you'll animate are shown below:

The above explosion consists of 11 separate images concatenated into a single image file explosion.png; each frame measures 128 x 128 pixels and the entire image is 1408 pixels wide. It's essentially a series of time lapse images of a giant, fiery explosion. The first and last frames in the sequence have intentionally been left blank. In the unlikely event that the animation layer isn't properly removed after it finishes, using blank frames at the sequence endpoints ensures that the view field will remain uncluttered.

A large composite image composed of many smaller sub-images is often referred to as an image atlas or a texture atlas. This image file has already been included as an art asset in the starter project you downloaded.

You'll be using Core Animation to animate this sequence of images. A Core Animation layer named SpriteLayer is included in your starter project to save you some time. SpriteLayer implements the animation functionality just described.

Once you cover the basic workings of SpriteLayer, you'll integrate it with your ViewController in the next section. This will give you the giant, fiery explosions that gamers crave.

SpriteLayer Constructors

This constructor sets the layer's contents property directly, using the __bridge operator to safely cast the pointer from the Core Foundation type CGImageRef to the Objective-C type id.

You then start the frame index of the animation at 1, and you keep track of the running value of this index using spriteIndex.

Note: A layer's content is essentially a bitmap that contains the visual information you want to display. When the layer is automatically created for you as the backing for a UIView, iOS will usually manage all the details of setting up and updating your layer's content as required. In this case, you're constructing the layer yourself, and must therefore provide your own content directly.

The image bitmap that you set as the layer's contents is 1408 pixels wide, but you only need to display one 128 pixel-wide "subframe" at a time. The spriteSize constructor argument lets you specify the size of this display "subframe"; in your case, it will be 128 x 128 pixels to match the dimensions of each subframe. You'll initialize the layer's bounds to this value as well.

contentsRect acts as this display "subframe" and specifies how much of the layer's content bitmap is actually visible.

By default, contentsRect covers the entire bitmap, like so:

Instead, you need to shrink contentsRect so it only covers a single frame and then animate it left-to-right as you run your layer through Core Animation, like so:

The trick with contentsRect is that its size is defined using a unit coordinate system, where the value of every coordinate is between 0.0 and 1.0 and is independent of the size of the frame itself. This is very different from the more common pixel-based coordinate system that you’re likely accustomed to from working with properties like bounds and frame.

Suppose you were to construct an instance of UIView that was 300 pixels wide and 50 pixels high. In the pixel-based coordinate system, the upper-left corner would be at (0,0) while the lower-right corner would be at (300,50).

However, the unit coordinate system puts the upper-left corner at (0.0, 0.0) while the lower-right corner is always at (1.0, 1.0), no matter how wide or high the frame is in pixels. Core Animation uses unit coordinates to represent those properties whose values should be independent of changes in the frame’s size.

If you step through the math in the constructor above, you can quickly convince yourself that you're initializing contentsRect so that it only covers the first frame of your sprite animation — which is exactly the result you're looking for.
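As a concrete sketch of that math, the constructor might read as follows; the initializer name and the spriteIndex property type are assumptions, while the __bridge cast and unit-coordinate setup follow the description above:

```objc
// SpriteLayer.m (sketch)
- (instancetype)initWithImage:(CGImageRef)image spriteSize:(CGSize)size
{
    self = [super init];
    if (self) {
        self.contents = (__bridge id)image; // set the layer's bitmap directly
        self.bounds = CGRectMake(0.0, 0.0, size.width, size.height);
        self.spriteIndex = 1;               // start at the first frame

        // The atlas is 1408 px wide with 128 px frames, so frameCount is 11.
        CGFloat frameCount = CGImageGetWidth(image) / size.width;

        // In unit coordinates, a single frame covers 1/frameCount
        // of the bitmap's width — exactly the first frame.
        self.contentsRect = CGRectMake(0.0, 0.0, 1.0 / frameCount, 1.0);
    }
    return self;
}
```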

SpriteLayer Animations

Animating a property means showing it changing over time. By this definition, you're not really animating an image at all: you're actually animating spriteIndex.

Fortunately, Core Animation allows you to animate not just familiar built-in properties, like a position or image bitmap, but also user-defined properties like spriteIndex. The Core Animation API treats the property as a "key" of the layer, much like the key of an NSDictionary.

Core Animation will animate spriteIndex when you instruct the layer to redraw its contents whenever the value associated with the spriteIndex key changes. The following method, defined in SpriteLayer.m, accomplishes just that:
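The standard mechanism for this is an override of CALayer's needsDisplayForKey:, which would look something like this:

```objc
// Redraw the layer whenever the spriteIndex key changes;
// defer to the superclass for every other key.
+ (BOOL)needsDisplayForKey:(NSString *)key
{
    if ([key isEqualToString:@"spriteIndex"]) {
        return YES;
    }
    return [super needsDisplayForKey:key];
}
```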

But what mechanism do you use to tell the layer how to display its contents based on the spriteIndex?

A clear understanding of the somewhat counterintuitive ways properties change — or how they don't change — is important here.

Core Animation supports both implicit and explicit animations:

Implicit Animation: Certain properties of a Core Animation layer — including its bounds, color, opacity, and the contentsRect you're working with — are known as animatable properties. If you change the value of those properties on the layer, then Core Animation automatically animates that value change.

Explicit Animation: Sometimes you must specify an animation by hand and explicitly request that the animation system display it. Creating a CABasicAnimation and adding it to the layer would result in an explicit animation.

Working with explicit animations exposes a subtle distinction between changing the property on the layer and seeing an animation that makes it look like the property is changing. When you request an explicit animation, Core Animation only shows you the visual result of the animation; that is, it shows what it looks like when the layer's property changes from one state to another.

However, Core Animation does not actually modify the property on the layer itself when running explicit animations. Once you perform an explicit animation, Core Animation simply removes the animation object from the layer and redraws the layer using its current property values, which are exactly the same as when the animation started — unless you changed them separately from the animation.

Animations of user-defined layer keys, like spriteIndex, are explicit animations. This means that if you request an animation of spriteIndex from 1 to another number, and at any point during the animation you query SpriteLayer to find the current value of spriteIndex, the answer you'll get back will still be 1!

So if animating spriteIndex doesn't actually change the value, then how do you retrieve its value to adjust the position of contentsRect to the correct location and show the animation?

The answer, dear reader, lies in the presentation layer: a shadowy counterpart to every Core Animation layer that represents how that layer appears on-screen, even while an animation is in progress.
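A typical accessor reads the value from the presentation layer; the method name currentSpriteIndex is an assumption:

```objc
// Read spriteIndex from the presentation layer so you get the
// in-flight animated value, not the unchanged model value.
- (unsigned int)currentSpriteIndex
{
    return ((SpriteLayer *)[self presentationLayer]).spriteIndex;
}
```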

This code returns the value of the spriteIndex attribute associated with object’s presentation layer, rather than the value of the spriteIndex attribute associated with the object itself. Calling this method will return the correct, in-progress value of spriteIndex while the animation is running.

So now you know how to get the visible, animated value of spriteIndex. But when you change contentsRect, the layer will automatically trigger an implicit animation, which you don't want to happen.

Since you’re going to be changing the value of contentsRect by hand as the animation runs, you need to deactivate this implicit animation by telling SpriteLayer not to produce an animation for the key contentsRect.

Scroll to the definition for defaultActionForKey:, also located in SpriteLayer.m:

The class method defaultActionForKey: is invoked by the layer before it initiates an implicit animation. This code overrides the default implementation of this method, and instructs Core Animation to suppress any implicit animations associated with the property key contentsRect.
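The override likely takes the following shape; returning NSNull for a key is Core Animation's documented way of suppressing the implicit action:

```objc
// Suppress the implicit animation for contentsRect;
// defer to the superclass for every other key.
+ (id<CAAction>)defaultActionForKey:(NSString *)key
{
    if ([key isEqualToString:@"contentsRect"]) {
        return (id<CAAction>)[NSNull null];
    }
    return [super defaultActionForKey:key];
}
```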

Finally take a look at display, which is also defined in SpriteLayer.m:

The layer automatically calls display as required to update its content.

Step through the math of the above code and you'll see that this is where you manually change the value of contentsRect and slide it along one frame at a time as the current value of spriteIndex advances as well.
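Sketched out, display reads the animated index from the presentation layer and slides contentsRect one frame to the right per index; the currentSpriteIndex accessor is an assumption:

```objc
// Slide the visible "subframe" to the frame indicated by the
// animated spriteIndex (1-based).
- (void)display
{
    unsigned int index = [self currentSpriteIndex];
    if (index == 0) {
        // presentationLayer is nil when no animation is running,
        // so the index reads as 0; leave the layer alone.
        return;
    }
    CGRect rect = self.contentsRect;
    rect.origin.x = (index - 1) * rect.size.width; // unit coordinates
    self.contentsRect = rect;
}
```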

Implementing Your Sprites

Now that you understand how to create sprites, using them should be a snap!

Open ViewController+GameControls.m and replace the stubbed-out showExplosion with the following code:
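A sketch consistent with the steps described below, assuming a SpriteLayer initializer of the form initWithImage:spriteSize:; the asset name and animation duration are guesses:

```objc
- (void)showExplosion
{
    // Create the sprite layer; copy the CGImage first so ARC
    // can't release the UIImage out from under you (pre-iOS 6 bug).
    UIImage *explosionImage = [UIImage imageNamed:@"explosion"]; // assumed asset name
    CGImageRef cgImage = CGImageCreateCopy(explosionImage.CGImage);
    SpriteLayer *sprite = [[SpriteLayer alloc] initWithImage:cgImage
                                                  spriteSize:CGSizeMake(128.0, 128.0)];
    CGImageRelease(cgImage);

    // Center the explosion on the crosshairs.
    sprite.position = CGPointMake(CGRectGetMidX(self.view.bounds),
                                  CGRectGetMidY(self.view.bounds));

    // Add the sprite as a sublayer of the current view...
    [self.view.layer addSublayer:sprite];

    // ...and run the animation. The toValue of 12 lets the 11th
    // and final frame display for its full duration.
    CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"spriteIndex"];
    animation.fromValue = @1;
    animation.toValue = @12;
    animation.duration = 0.45; // assumed duration
    animation.repeatCount = 1;
    [sprite addAnimation:animation forKey:nil];
}
```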

Create a new instance of SpriteLayer. Prior to iOS 6, there was a known ARC bug that would sometimes cause an instance of UIImage to be released immediately after the object's CGImage property was accessed. To avoid any untoward effects, you make a copy of the CGImage data before ARC has a chance to accidentally release it, and work with the copy instead.

You adjust the position of the sprite layer just slightly to align its center with the target crosshairs on the center of the screen. Even though CALayer declares a frame property, its value is derived from bounds and position. To adjust the location or size of a Core Animation layer, it’s best to work directly with bounds and position.

You then add the sprite layer as a sublayer of the current view.

You construct a new Core Animation object and add it to the sprite layer.

Sharp-eyed readers will note that the animation runs to index 12, even though there are only 11 frames in the texture atlas. Why would you do this?

Core Animation first converts integers to floats before interpolating them for animation. For example, in the fraction of a second that your animation is rendering frame 1, Core Animation is rapidly stepping through the succession of "float" values between 1.0 and 2.0. When it reaches 2.0, the animation switches to rendering frame 2, and so on. Therefore, if you want the eleventh and final frame to render for its full duration, you need to set the final value for the animation to be 12.

Finally, you need to trigger your new shiny explosions every time you successfully hit the target.

Add the following code to the end of hitTargetWithPoints: in ViewController.mm:

// (4) Run the explosion sprite
[self showExplosion];
}

Build and run your project; tap the trigger button and you should see some giant balls of fire light up the scene as below:

Giant fiery explosions! They're just what you need for an AR target blaster game!

Where To Go From Here?

So far you've created a "live" video stream using AVFoundation, and you've added some HUD overlays to that video as well as some basic game controls. Oh, yes, and explosions - lots of explosions. :]

Paul is a mobile developer based in Los Angeles. He has over 15 years of experience doing professional software development, and has been publishing apps on the iTunes stores since 2009. When he's not busy with work, he enjoys Stanley Kubrick movies, urban hiking and spending time with his wife and son. You can find and follow him on Twitter or Github.