Adding touch controls for Windows 8

For the last two weeks or so I’ve been working out bugs in Super Rawr-Type and integrating touch controls for the Windows 8 port. This is the first time I’ve ever created touch controls, so it took a while to understand how they work, specifically how to combine a toggleable joystick for movement with stationary buttons for shooting, switching weapons, and activating powerups.

What proved most difficult for me was not having a physical machine to test my controls on. I’ve had to rely on the simulator thus far, which essentially runs a virtual machine on your desktop, wrapped in a shell that looks like a Win8 tablet. The buttons on the right-hand side emulate finger gestures, so you can see and understand how your touches will affect the game.

Active touch inputs, using the Win8 simulator

You can launch it in Visual Studio 2012 by selecting “Simulator” from the debug ribbon; shortly after, the simulator will appear. From there I open another instance of Visual Studio from within the simulator, but this time I choose “Local Machine” from the debug ribbon so that Super Rawr-Type runs inside that instance of the simulator. This is where all of my testing for touch inputs occurs.

The Joystick and Hit-Area Classes

Jesse Freeman wrote a great Joystick class in his Windows 8 Bootstrap kit for ImpactJS. I highly suggest you look through everything offered in that package, as it greatly streamlines the process of porting your ImpactJS project and implements a number of great plugins that I continue to use throughout this project.

The GitHub repo for the joystick can be found here. It’s pretty straightforward, and what really matters is how you implement it in your game. The joystick draws itself beneath the point where your mouse (or finger) is pressed, and can then be rotated a fixed distance around the circumference of your digital joystick.

Inside our player’s draw loop we have functions for drawing the joystick and the buttons. The joystick is only visible if touch controls are active AND joystick is true, and joystick is only true while the player’s finger is against the screen. This prevents the joystick from cluttering up the screen at all times.
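The original screenshot of that draw loop isn’t reproduced here, so here is a hedged sketch of what a loop along these lines might look like. Every name (touchControlsActive, drawButtons, the 32x32 button size, the hit-area coordinates) is a hypothetical stand-in, and the ImpactJS plumbing (this.parent(), ig.game) is stubbed so the sketch runs on its own:

```javascript
// Minimal stub standing in for ImpactJS's ig.game so this sketch is self-contained.
// In the real project, registerHitArea comes from Jesse's hit-area plugin.
var ig = {
  game: {
    hitAreas: [],
    registerHitArea: function (x, y, width, height, callback) {
      this.hitAreas.push({ x: x, y: y, width: width, height: height, callback: callback });
    }
  }
};

var player = {
  touchControlsActive: true, // toggled by the on-screen button
  joystick: false,           // true only while a finger is on the screen
  drawn: [],
  drawJoystick: function () { this.drawn.push('joystick'); },
  drawButtons: function () {
    this.drawn.push('buttons');
    // Register a hit area the same size as the 32x32 button texture
    ig.game.registerHitArea(280, 200, 32, 32, function () { /* shoot */ });
  },
  draw: function () {
    // this.parent() would draw the ship itself in Impact
    if (this.touchControlsActive) {
      this.drawButtons();
      // Draw the joystick only while a finger is down, so it doesn't
      // clutter up the screen at all times
      if (this.joystick) this.drawJoystick();
    }
  }
};
```

The key point is the nested condition: buttons appear whenever touch controls are toggled on, but the joystick only appears while it is actually being held.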

The function ig.game.registerHitArea comes from Jesse’s hit-area plugin, which injects some functionality into the game class from within the plugin. You can read more about injection here; quite simply, it adds functionality to a class in instances where you do not have access to (or do not want to modify) its source code.

Rather than making changes within the game class itself, we can add functions and variables from another class and tie them into game. This is useful in certain situations, although I find it often leads to confusing code, as you may not always be aware of where the injected code is coming from.
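To make the idea concrete, here is a toy version of the injection pattern in plain JavaScript. This is a sketch of the pattern, not Impact’s actual inject implementation, and the class and method names are hypothetical:

```javascript
// A toy version of ImpactJS-style method injection: add or override methods on
// an existing "class" while keeping access to the original via this.parent().
function inject(proto, overrides) {
  for (var name in overrides) {
    var original = proto[name];
    proto[name] = (function (fn, parent) {
      return function () {
        var prevParent = this.parent;
        this.parent = parent || function () {};
        var result = fn.apply(this, arguments);
        this.parent = prevParent;
        return result;
      };
    })(overrides[name], original);
  }
}

// Pretend this is the game class, whose source we don't want to edit directly
function Game() { this.hitAreas = []; }
Game.prototype.update = function () { return 'base update'; };

// A plugin file injects hit-area support without touching Game's source,
// much as hit-area.js injects registerHitArea into ig.Game
inject(Game.prototype, {
  registerHitArea: function (x, y, width, height, callback) {
    this.hitAreas.push({ x: x, y: y, width: width, height: height, callback: callback });
  },
  update: function () {
    // this.parent() still reaches the original update
    return this.parent() + ' + touch buttons';
  }
});
```

This also illustrates the downside mentioned above: reading Game’s source alone, you would never know where registerHitArea came from.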

So in one function we are drawing the buttons on screen and registering the hit areas. Notice that the hit areas are the same size as the button textures?

Updating the joystick to respond to our touch

Touch instructions before the game starts

With that out of the way, we can focus on the final part, and that’s updating the joystick and hit areas to respond to our touch. I tried separating my update loop into smaller, manageable functions, but it’s turned out to be a mess because of how tightly coupled everything is. My animations are tied to the speed of my player, my inputs are tied directly to the speed, and my weapon firing and switching are also tied directly to my inputs.

Despite my lack of modularity in my current code, it should still give you a great idea of how touch controls work with Win8 and JavaScript. I probably should go back and refactor much of this, but that’s time consuming, and I’m simply using my brief time with this project as a learning experience.

My update loop looks like this (missing content is denoted with “….” for brevity):
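Since the original screenshot isn’t reproduced here, the following is a hedged reconstruction of the loop’s overall shape. Every name (joystickDown, joystickDelta, speed) is a hypothetical stand-in, the elided sections are marked with comments as in the original post, and the ImpactJS plumbing is stubbed so the sketch runs on its own:

```javascript
// Hedged sketch of the update loop's movement section
var player = {
  vel: { x: 0, y: 0 },
  speed: 100,                    // hypothetical ship speed
  touchControlsActive: true,
  joystickDown: false,           // true only while a finger is on the screen
  joystickDelta: { x: 0, y: 0 }, // finger position relative to the joystick centre

  update: function () {
    // .... weapon firing and switching logic elided ....
    this.vel.x = 0;
    this.vel.y = 0;
    // Only steer via the joystick while touch controls are on AND a finger
    // is down; otherwise every mouse move would drive the ship
    if (this.touchControlsActive && this.joystickDown) {
      // The +15 buffer is the dead zone described below
      if (this.joystickDelta.x > 15) this.vel.x = this.speed;
      else if (this.joystickDelta.x < -15) this.vel.x = -this.speed;
      if (this.joystickDelta.y > 15) this.vel.y = this.speed;
      else if (this.joystickDelta.y < -15) this.vel.y = -this.speed;
    }
    // .... animation selection based on velocity elided ....
  }
};
```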

I’ve gone ahead and commented all of the code, so it should be easy to read. I do this for myself, as I can quickly scan through the code and understand what each block or line is for. It also allows others to view the code and say, “Hey, your comment doesn’t match up with the task this block is performing,” which makes troubleshooting and collaboration far easier.

Without these checks, the joystick would be drawn on screen and updated every time the mouse moved, regardless of whether or not it was clicked. For ship movement with the joystick, you’ll see that I also have +15 tacked onto the end of many of my if statements.

This is a small buffer that creates a “dead zone” in the center of the joystick. If the player is touching the joystick but hasn’t moved 15 pixels from the center of the joystick’s radius in any direction, the ship remains stationary. Without this, the ship would begin to fly up the moment the joystick was pressed upward, regardless of where the player’s finger was in relation to the joystick.

That’s all there really is to the joystick. Take a look at Jesse’s code to get a better idea of how he uses it in his project, and you’ll spot some of our key differences. Additionally, he uses a slightly larger joystick than I do, so I had to compensate for that difference due to my lower screen resolution.

This can be seen in the draw loop for my joystick. Without the numbers added to my mouseDownPoint function, my joystick would appear up and to the left from my actual mouse down point:
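The screenshot of that draw loop isn’t reproduced here, but the idea can be sketched as follows. The 64x64 texture size and the offset direction are assumptions; the exact numbers depend on your texture size and on how the base Joystick class anchors its image:

```javascript
// Hedged sketch: offsetting the joystick's draw position relative to the
// mouse-down point, so the texture lines up with where the finger actually is
var JOYSTICK_SIZE = 64; // hypothetical texture size in pixels

function joystickDrawPos(mouseDownPoint) {
  // Shift by half the texture size so the texture's centre sits on the
  // touch point rather than its top-left corner
  return {
    x: mouseDownPoint.x - JOYSTICK_SIZE / 2,
    y: mouseDownPoint.y - JOYSTICK_SIZE / 2
  };
}
```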

Updating the buttons to respond to our touch

The first thing I needed to do here was create a toggle so that the touch buttons are not always drawn on screen. What if the user is playing on a machine that doesn’t have touch? Why draw the buttons then? There is no way that I’m aware of to detect whether the user has a touch-capable machine, so I’d rather give players the option to toggle it themselves. That is done with this block in my player’s update loop:
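The original block isn’t reproduced here, so this is a hedged sketch of the idea; the button position, size, and names are all hypothetical stand-ins:

```javascript
// Toggle button in the top-left corner, beneath the HUD (hypothetical coords)
var toggleButton = { x: 0, y: 40, width: 32, height: 32 };
var touchControlsActive = false;

function pointInRect(px, py, r) {
  return px >= r.x && px < r.x + r.width &&
         py >= r.y && py < r.y + r.height;
}

// Called from the update loop whenever a click or touch is released
function handlePress(px, py) {
  if (pointInRect(px, py, toggleButton)) {
    // Flip the flag: the draw loop only renders the touch buttons when true
    touchControlsActive = !touchControlsActive;
  }
}
```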

This button resides in the top left corner of my screen, just beneath the player’s HUD. When touched with either the left or right mouse button, it will trigger the buttons to be drawn on screen. Otherwise, all of the button logic resides in this block of code, also found in the player’s update loop:
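That block isn’t reproduced here either, so here is a hedged sketch of its shape. The button names and flags (bIsShooting, bSlowMo) are hypothetical stand-ins, and a plain pressedButton value stands in for Impact’s input handling:

```javascript
// Hedged sketch of the button block from the update loop
var player = {
  bIsShooting: false,
  bSlowMo: false,
  handleButtons: function (pressedButton) {
    if (pressedButton === 'shoot') {
      this.bIsShooting = true;
    } else if (pressedButton === 'slowmo') {
      this.bSlowMo = true;
    } else {
      // The crucial final branch: clear the flags once no button is held,
      // otherwise the ship would keep shooting (or stay in slow-mo)
      // after the finger lifts
      this.bIsShooting = false;
      this.bSlowMo = false;
    }
  }
};
```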

If the player clicks (or touches) the shoot button, the player will begin firing, as determined by the boolean this.bIsShooting = true;. This ties directly into my player’s firing function at the top of the update loop:
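A hedged sketch of that firing check; the input stub and names are assumptions, with a plain boolean standing in for Impact’s keyboard state:

```javascript
// Firing is triggered either by the keyboard ("C") or by the touch flag
// set in the button block above
var input = { cPressed: false }; // stand-in for Impact's keyboard input state
var player = {
  bIsShooting: false,
  shotsFired: 0,
  fire: function () { this.shotsFired++; },
  updateFiring: function () {
    if (input.cPressed || this.bIsShooting) {
      this.fire();
    }
  }
};
```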

Now I have the option of pressing the shoot button (tied to “C” on the keyboard) or the shoot button on the touch interface. The final key part of the button logic is the else if statement. Without it, the player would continue to perform the given task (i.e. shooting, switching weapons, activating slow-mo) even after releasing the button.

Conclusion

Support for Win8 Snap View

Well that’s all there is to it! It may seem far more complicated than it is, but if I can figure out touch controls and implement them in one day, then surely you can as well. Again, I urge you to take a look at that bootstrap starter kit, as it provides tons of functionality to get your game working on Windows 8 and other devices.

My game is complete now, and I’m handing it out to testers before submitting it to the Win8 store later this week. The web build is done as well, but I ran into some sort of build error; it seems a “;” statement terminator is missing somewhere. I’ll sift through it when I have more time.

Up next? WebGL support! If you have any questions about this project or need help integrating certain features into your own Impact project, feel free to get in touch with me!