
Interacting with the User

Packt Publishing

Whether you want to design 3D games with Java for love or for money, this is the primer you need to start using the free libraries of jMonkeyEngine 3.0. All hands on, all fun – it makes light work of learning.

The digital Dungeon Master

Have you ever played a role-playing game (RPG) such as Dungeons & Dragons? In a pen-and-paper RPG, one of the players takes the role of the storyteller or Dungeon Master (DM). The DM keeps track of the score and describes the scene in words. The DM controls all non-player characters (NPCs), enemies as well as extras. The DM is also the referee who interprets rules.

In a computer game, the listen-update-render loop takes the role of the Dungeon Master. Instead of players moving pieces on the table, Java input listeners trigger game actions. Instead of players rolling dice and looking up rules, the update loop manages game mechanics. Instead of the DM deciding on random encounters, the update loop controls NPC behavior. Instead of the DM describing the scene in words, the renderer draws the scene in 3D.

Time for action – from input to output in slow motion

Each game world has certain properties: the level, the score, players and enemies, position and speed of objects and characters, inventories and skills, maybe a countdown timer, and so on. These properties together are the game state.

Each game world also offers some actions: the player can decide to walk, jump, drive, fight, use equipment, take or drop items, change game options, and so on. There are also events that happen without user input: traps are reset, power-ups respawn, automatic doors open or close, and so on. We call these incidents game actions. Game actions are triggered either by the player (in response to input), or by NPCs and objects, as part of the main loop.

Let's zoom in on one day in the life of the average event-driven video game:

The player's input triggers a game action. For example, the user clicks to attack an enemy, the user presses the W, A, S, and D keys to walk, and so on.

The game action updates the game state. For example: a successful attack decreases the enemy's health, walking changes the player's location, pushing a ball increases its momentum, and so on.

The update loop polls the game state. For example: what are the current locations of characters relative to any traps? What is the location and momentum of the rolling ball?

The update loop performs tests and decides which game action it should trigger. For example: if the enemy's health is zero, it dies and the player gets points; if the player is close, the trap goes off; as long as there are no obstacles, the ball keeps rolling, and so on.

The game action updates the game state. For example: the dead character is removed from the scene and the player's score increases, the trap inflicts damage on the player, the location and momentum of the rolling ball have changed a bit, and so on.

The game outputs the new game state. The output includes audio and video that communicate the updated game state to the player. For example: the scene is rendered in 3D and displayed on the user's screen, sounds and animations play, the score display is updated, and so on.

The player watches the new game state.

The listen-update-render loop continues running even if there is no player input—this means steps 1 and 2 are optional. The driving force is the main event loop, not the player.
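The six steps above can be sketched as a loop. This is a conceptual sketch in Java syntax, not jMonkeyEngine's actual internal code; the method names are placeholders for the three phases:

```java
while (isRunning) {
    listen();             // steps 1-2 (optional): input listeners trigger game actions
    update(timePerFrame); // steps 3-5: poll game state, run tests, trigger game actions
    render();             // step 6: output the new game state as graphics and sound
}
```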

What just happened?

The listen-update-render loop is the heartbeat of your game: it brings action and interaction to an otherwise static scene. The key insight is that game actions update the game state.

Step | What happens? | What's that in Java?
Listen | Listeners receive input from the player and handle events. | Java event handling
Update | Game actions update the game state. | Java methods (game actions) change Java objects (game state)
Render | The renderer outputs graphics and sound for the player. | Java video and audio libraries (LWJGL)

The whole event loop is made up of familiar pieces: for the Listen step, you use Java event handling. You represent the game state as Java objects, and implement game actions as Java methods. In the update loop, you work with standard conditionals, timers, and randomizers, to advance the game by rules that you specify. The loop is the mastermind that pulls strings in the background for you.

As an example, let's implement a simple game action that changes the game state.

Time for action – pushing the right buttons

Remember our friend, the blue cube from the template? Let's write some code that changes the cube state: the cube has a color, a scale, a location, and a rotation. So just for fun, let's make the cube rotate when we left-click on it, and change its color every time we press the Space bar.

Make another copy of the BasicGame project's Main.java template and name the class UserInput.java. Remember to also refactor the first line of the main() method to the following:

UserInput app = new UserInput();

Define class constants that represent the Space bar and left-click of the mouse. Import the necessary classes from the com.jme3.input.* and com.jme3.input.controls.* packages.
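The constants might look like the following. The trigger and mapping names (TRIGGER_COLOR, TRIGGER_ROTATE, MAPPING_COLOR, MAPPING_ROTATE) are the ones used throughout this exercise; the String values of the mappings are free-form and chosen here for illustration:

```java
// Triggers represent physical inputs; mappings are names for game actions.
private static final Trigger TRIGGER_COLOR =
        new KeyTrigger(KeyInput.KEY_SPACE);             // the Space bar
private static final Trigger TRIGGER_ROTATE =
        new MouseButtonTrigger(MouseInput.BUTTON_LEFT); // left mouse click
private static final String MAPPING_COLOR  = "Toggle Color";
private static final String MAPPING_ROTATE = "Rotate";
```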

In the jMonkeyEngine SDK, place the insertion point behind the period after KeyInput. or MouseInput., and so on. Then press Ctrl + Space to select constants from the code-completion pop-up.

You now have two triggers, TRIGGER_COLOR and TRIGGER_ROTATE, and two mappings, MAPPING_COLOR and MAPPING_ROTATE.

What just happened?

Each physical input, such as the Space bar or left-click of the mouse, is represented by a Trigger object. You create a KeyTrigger object for a key. You create MouseButtonTrigger objects for mouse clicks, and MouseAxisTrigger objects for mouse movement. Similarly, you create JoyAxisTrigger objects and JoyButtonTrigger objects for joystick buttons and movements. Android devices also support TouchTrigger objects that act similarly to MouseButtonTrigger objects.

The two action names that you prepared here, MAPPING_COLOR and MAPPING_ROTATE, are mappings. The mappings are case-sensitive Strings, and must be unique, one for each action. For mappings, always choose meaningful names that reflect the action, not the trigger. This way they still make sense even if you change the assigned triggers later.

Using String constants instead of literal Strings has the advantage that the compiler warns you if you misspell the mapping (as opposed to silently ignoring your input). Using String constants also makes it possible to use IDE features, such as refactoring or finding usages.

Time for action – trigger meets mapping

The SimpleApplication class provides you with a handy inputManager object that you can configure.

Go to the simpleInitApp() method. Leave the template code that creates the blue cube as it is.

Register your mappings and triggers with the inputManager. At the beginning of the method, add the following two lines:
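Assuming the trigger and mapping constants defined earlier, the two registration lines could look like this (InputManager.addMapping() takes a mapping name followed by one or more triggers):

```java
inputManager.addMapping(MAPPING_COLOR,  TRIGGER_COLOR);
inputManager.addMapping(MAPPING_ROTATE, TRIGGER_ROTATE);
```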

But what if half of your users think the Space bar is unintuitive for toggling color, and prefer the C key instead? You can easily allow several variants in one mapping; define a trigger object for the C key on the class level, as you did for the Space bar key:
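One way to sketch this, using the TRIGGER_COLOR2 constant name referred to in the next section:

```java
// On the class level, next to the other trigger constants:
private static final Trigger TRIGGER_COLOR2 = new KeyTrigger(KeyInput.KEY_C);

// In simpleInitApp(): addMapping() accepts a vararg list of triggers,
// so both keys can be registered under the same mapping in one call.
inputManager.addMapping(MAPPING_COLOR, TRIGGER_COLOR, TRIGGER_COLOR2);
```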

What just happened?

The inputManager object uses mappings to associate action names and triggers. You use mappings because the implementation of the action may change, and the assignment of keys or mouse clicks may change—but the mapping always remains the same. You can map one unique action name, such as MAPPING_COLOR, to several triggers, such as TRIGGER_COLOR/Space bar and TRIGGER_COLOR2/C key. You can use each trigger only once (in one mapping) at the same time.

Now you have triggers, action names, and input mappings, but they don't do anything yet. Your mapping needs to be registered to the appropriate listener.

Time for action – mapping meets listeners

To activate the mappings, you have to register them to an InputListener object. The jMonkeyEngine offers several InputListener objects in the com.jme3.input.controls.* package. Let's create instances of the two most common InputListener objects and compare what they do.

On class level, below the closing curly braces of the simpleInitApp() method, add the following code snippet:
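A minimal version of the two listeners might look like this; each simply prints which mapping fired, which produces the console output mentioned a few steps further on:

```java
private final ActionListener actionListener = new ActionListener() {
    @Override
    public void onAction(String name, boolean isPressed, float tpf) {
        System.out.println("You triggered: " + name);
    }
};

private final AnalogListener analogListener = new AnalogListener() {
    @Override
    public void onAnalog(String name, float intensity, float tpf) {
        System.out.println("You triggered: " + name);
    }
};
```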

When you paste code in the jMonkeyEngine SDK's editor, unknown symbols are underlined in red. Whenever you see yellow warnings next to lines, click on the lightbulb icon and execute the hint to resolve the problem, in this case, Add import for... Alternatively, you can also press the Ctrl + Shift + I shortcut keys to fix all import statements in one go.

Register each mapping to one of the InputListener objects. Paste the following two lines of code below the addMapping() code lines in the simpleInitApp() method:

inputManager.addListener(actionListener,
new String[]{MAPPING_COLOR});
inputManager.addListener(analogListener,
new String[]{MAPPING_ROTATE});

If you run the code sample now, and then click, or press the C key or Space bar, you should see some output in the console.

What just happened?

jMonkeyEngine comes with three preconfigured InputListener objects: ActionListener, AnalogListener, and TouchListener. Each responds differently to com.jme3.input.event.InputEvents.

ActionListener is optimized for discrete either/or InputEvents. Use it if you want to detect whether a key is either pressed or released, or the mouse button is either up or down, and so on.

AnalogListener is optimized for continuous InputEvents with an intensity. This includes mouse or joystick movements (for example, when looking around with the camera), long-lasting key presses (for example, when walking by pressing the W, A, S, and D keys), or long mouse clicks (for example, when the player shoots an automatic weapon).

TouchListener is optimized for InputEvents on touchscreen devices. It supports events, such as tap, double-tap, long pressed tap, fling, and two-finger gestures.

In the previous example, you registered MAPPING_COLOR to the actionListener object, because toggling a color is a discrete either/or decision. You have registered the MAPPING_ROTATE mapping to the analogListener object, because rotation is a continuous action. Make this decision for every mapping that you register. It's common to use both InputListener objects in one game, and to register all analog actions to one and all discrete actions to the other listener.

Note that the second argument of the addListener() method is a String array. This means that you can add several comma-separated mappings to one InputListener object in one go. It is also perfectly fine to call the inputManager.addListener() method more than once in the same application, and to add a few mappings per line. This keeps the code more readable. Adding more mappings to an InputListener object does not overwrite existing mappings.

When you implement the actual actions, you write a series of conditionals in these inner methods. When you want to detect several triggers (the most common scenario), add several else if conditionals. You test for each mapping by name, and then execute the desired action. In our example, we want the action to affect the blue cube (geom).

Make geom accessible as a class field. Remember to adjust the geom object's constructor call in the simpleInitApp() method accordingly.

private Geometry geom;
...
geom = new Geometry("Box", mesh);

Now the InputListener objects in your class have access to the cube geometry.

Let's handle MAPPING_COLOR first. In the inner onAction() method of the actionListener object, test whether name equals MAPPING_COLOR. To execute the action when the trigger is released (that is, the key or mouse button is up again), test whether the Boolean is !isPressed:
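Inside the actionListener, the test might look like this:

```java
@Override
public void onAction(String name, boolean isPressed, float tpf) {
    // Act only when the Space bar (or C key) is released again.
    if (name.equals(MAPPING_COLOR) && !isPressed) {
        // implement action here
    }
}
```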

Implement the color toggle action for geom: get the cube's material, and set the Color property using the return value of the static randomColor() method. Replace the implement action here comment with the following line:

geom.getMaterial().setColor("Color", ColorRGBA.randomColor());

Let's handle MAPPING_ROTATE. In the inner onAnalog() method of the analogListener, test whether name equals MAPPING_ROTATE.
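Inside the analogListener, the test might look like this:

```java
@Override
public void onAnalog(String name, float intensity, float tpf) {
    // Runs repeatedly as long as the left mouse button stays pressed.
    if (name.equals(MAPPING_ROTATE)) {
        // implement action here
    }
}
```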

Implement the rotation action for geom. To execute the action continuously as long as the trigger is pressed, use the provided intensity value as a factor in your continuous rotation action. Replace the implement action here comment with the following line:

geom.rotate(0, intensity, 0); // rotate around Y axis

When you run UserInput.java now, you see the blue cube. Press the Space bar and the cube changes its color. Keep the left mouse button pressed and the cube rotates around its y axis. You can even do both at the same time. When you press the C key, however, you notice that the color changes, but the application also prints camera information to the console. Strange! Didn't you just declare the C key as an alternative to the Space bar? You did, but you did not consider the existing default mappings.

SimpleApplication internally maps the C key to a diagnostic output action. You will notice a similar issue if you map anything to the M (prints memory diagnostics) or Esc keys (stops the game), which are also internally registered by SimpleApplication. If necessary, you can remove any of the three existing mappings as follows:
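jME3's InputManager offers a deleteMapping() method for this, and SimpleApplication exposes the names of its default mappings as constants. A sketch of removing the three mappings mentioned above:

```java
inputManager.deleteMapping(SimpleApplication.INPUT_MAPPING_CAMERA_POS); // frees the C key
inputManager.deleteMapping(SimpleApplication.INPUT_MAPPING_MEMORY);     // frees the M key
inputManager.deleteMapping(SimpleApplication.INPUT_MAPPING_EXIT);       // frees the Esc key
```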

You can call the inputManager.clearMappings() method and define all mappings from scratch. But clearing also removes the preconfigured W, A, S, and D key navigation, which we would like to keep as long as we're looking at examples. So don't clear the mappings for now—until you develop your own game.

What just happened?

Congrats! You now know how to set up individual keys and mouse buttons, and so on to trigger custom game actions that change the game state.

Start by deciding on a list of actions and default triggers in your game.

Define triggers and give them unique names that describe the action, not the key.

Register each name—trigger pair as a mapping to the inputManager object.

Create InputListener instances for discrete and analog events, and register each mapping to one of them.

Finally, test for each mapping in its InputListener object's onAction(), onAnalog(), or onTouch() method, and implement its action.

Click me if you can

Now you know how to trigger simple game actions such as rotating a cube. Since the cube is the only object in the scene, the user does not have any choice which spatial to interact with. But as soon as you have several objects in a scene, you need to find out what the user was looking at when he or she pressed an action key.

There are several possible targets of game actions: the target of navigational actions is typically the player character or vehicle—no explicit target selection is necessary. The target of a take action is, in contrast, one of many items lying around. The target of an attack can be a subset of enemy characters, while the target of a magic spell can be pretty much anything, depending on the spell: the player, an ally, an enemy, an item, or even the floor.

These seemingly different game actions are very similar from an implementation point of view: the player selects a target in the 3D scene, and then triggers an action on it. We call the process of identifying the target picking.

If the hot spot for clicks is the center of the screen, some games mark it with crosshairs. Alternatively, a game can offer a visible mouse cursor. In either case, you need to determine what 3D object the player was aiming at.

Time for action – pick a brick (using crosshairs)

To learn about picking a target in the scene, let's add a second cube to the scene. Again we want to click to rotate a cube, but this time, we want to pick the cube that will be the target of the action.

Make a copy of the previous exercise, UserInput.java. Keep the mouse click and the key press actions for inspiration.

Rename the copy of the class to TargetPickCenter.java. Remember to also refactor the first line of the main() method to the following:

TargetPickCenter app = new TargetPickCenter();

Let's write a simple cube generator so that we can generate sample content more easily: move the code block that creates the blue cube from the simpleInitApp() method into a custom method called myBox(). Turn the Box mesh object into a static class field so that you can reuse it. Your method should take three arguments: String name, Vector3f loc, and ColorRGBA color. The method should return a new colored and named cube Geometry at the specified location.
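A sketch of such a convenience method, assuming the standard Unshaded material that the BasicGame template already uses:

```java
// Reusable mesh shared by all cubes (a 2x2x2 WU box centered on the origin).
private static final Box mesh = new Box(Vector3f.ZERO, 1, 1, 1);

public Geometry myBox(String name, Vector3f loc, ColorRGBA color) {
    Geometry geom = new Geometry(name, mesh);
    Material mat = new Material(assetManager,
            "Common/MatDefs/Misc/Unshaded.j3md");
    mat.setColor("Color", color);
    geom.setMaterial(mat);
    geom.setLocalTranslation(loc);
    return geom;
}
```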

Use your myBox() method to attach two cubes to the rootNode object, a red one and a blue one. Space them apart a little bit so that they don't overlap sitting at the origin: move the red one up 1.5f WU, and the blue one down 1.5f WU, along the y axis.
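For example (the geometry names "Red Cube" and "Blue Cube" are an assumption for this exercise; you can pick any unique names):

```java
rootNode.attachChild(myBox("Red Cube",  new Vector3f(0,  1.5f, 0), ColorRGBA.Red));
rootNode.attachChild(myBox("Blue Cube", new Vector3f(0, -1.5f, 0), ColorRGBA.Blue));
```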

To make aiming easier, let's mark the center of the screen with a little white cube. Since a mark is 2D, we attach it to the 2D user interface (guiNode), and not to the 3D scene (rootNode)! Call the attachCenterMark() method from the simpleInitApp() method.
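A minimal sketch of the attachCenterMark() method, reusing myBox() and the settings object inherited from SimpleApplication:

```java
private void attachCenterMark() {
    Geometry c = myBox("center mark", Vector3f.ZERO, ColorRGBA.White);
    c.scale(4); // tiny on-screen cube, measured in pixels in the GUI
    // Position the mark at the center of the screen.
    c.setLocalTranslation(settings.getWidth() / 2, settings.getHeight() / 2, 0);
    guiNode.attachChild(c); // attach to the 2D user interface, not the 3D scene
}
```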

Run the TargetPickCenter class to see the intermediate result: the red cube above the blue cube.

What just happened?

Whenever you need to mass produce geometries, consider writing a convenience method, such as the myBox() method with parameters to decrease clutter in the simpleInitApp() method.

Between the two cubes you see a tiny white cube—this is our center mark. When you look around by moving the mouse, or navigate to the sides by pressing the A or D keys, you notice that the center mark stays put. This mark is not attached to the rootNode object—and, therefore, is not part of the projected 3D scene. It is attached to a special guiNode object, and is therefore part of the 2D graphical user interface (GUI). Just as with the rootNode object, you inherited the guiNode object from the SimpleApplication class.

We added the center mark because you may have noticed that the mouse pointer is invisible in a running game. It would be hard to click and select a target without some visual feedback. For a sample application, our white mark is enough—in a real game, you would attach a crosshairs graphic of higher artistic value.

Time for action – pick a brick (crosshairs with ray casting)

You want the player to aim and click one of the cubes. You want to identify the selected cube, and make it rotate.

Start by implementing the analogListener object's onAnalog() method to test for our left-click action, MAPPING_ROTATE. Remember, we chose the AnalogListener object because rotation is a continuous motion.
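The picking part of onAnalog() might be sketched like this:

```java
if (name.equals(MAPPING_ROTATE)) {
    CollisionResults results = new CollisionResults();
    // Aim an invisible ray from the camera location in its view direction...
    Ray ray = new Ray(cam.getLocation(), cam.getDirection());
    // ...and collect everything in the scene that the ray intersects.
    rootNode.collideWith(ray, results);
}
```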

If the user has clicked anything, the results list is not empty. In this case we identify the selected geometry; the closest item must be the target that the player picked! If the results list is empty, we just print some feedback to the console.

Replace the implement action here comment with the actual code that rotates the cubes around their y axes. You can use the getName() method in a conditional statement to identify geometries, and respond differently to each, for example:
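A sketch, assuming the cubes were named "Red Cube" and "Blue Cube" when you created them with myBox():

```java
Geometry target = results.getClosestCollision().getGeometry();
if (target.getName().equals("Red Cube")) {
    target.rotate(0, -intensity, 0);  // red cube rotates to the left
} else if (target.getName().equals("Blue Cube")) {
    target.rotate(0, intensity, 0);   // blue cube rotates to the right
}
```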

Build and run the sample. When you point with the center mark and click either the red or the blue cube, it rotates. You notice that the red cube rotates to the left and the blue cube to the right.

What just happened?

Impressive: your application can now not only feel key presses and clicks coming from the user, but it can even see through the eyes of the user and know what he or she looked at in the scene! In our example, you can now tell the cubes apart, and respond with different actions accordingly: the red one rotates to the left because you used -intensity (negative), and the blue one to the right because you used intensity (positive) as rotation factors.

What does seeing through the eyes of the user mean? Mathematically, you aim an invisible line from the camera location forward, in its view direction. A line that has a fixed beginning and direction is called a ray. In the jMonkeyEngine, this corresponds to a com.jme3.math.Ray object. The ray starts at the 3D coordinates of the camera, which coincides with the center of the screen, the typical location of crosshairs. The ray travels in the view direction through the scene and intersects (collides) with scene objects. The com.jme3.collision package provides you with specialized methods that detect collisions between various mathematical objects, including between rays and the scene graph. This neat little trick for identifying click targets is called ray casting.

The code sample prints a bit of information so you can see what this picking algorithm is capable of. The CollisionResults object contains a list of all collisions between two scene elements, and accessors that let you pick the closest (or farthest) intersected object. In the jMonkeyEngine SDK, choose Window | Output | Output to open the console. When you run the application and click on the cubes, the output should look similar to the following:

Note that each cube is detected twice: the first impact point is on the front side, where the ray enters the geometry, and the second is the exit point on the backside of the geometry. In general, the clicked item is the closest geometry in the results list. Therefore, we use results.getClosestCollision().getGeometry() to identify the target.

In an actual game, you can use this crosshair-style picking method to implement an attack on an enemy; instead of simply rotating the picked geometry, you could play a gun sound and subtract health points from the identified target. If your conditional identifies the clicked geometry as a door, you could play a creaking sound, and trigger a closing or opening animation on the target, and so on. Your imagination is the limit!

Time for action – pick a brick (using the mouse pointer)

Aiming fixed crosshairs is one way to pick objects in the scene. Another option is to make the mouse pointer visible, and allow free clicks.

Make a copy of the previous exercise, TargetPickCenter.java. You can keep the code that handles the mouse click actions and the key press actions for inspiration.

Rename the copy of the class to TargetPickCursor.java. Remember to also refactor the first line of the main() method to the following:

TargetPickCursor app = new TargetPickCursor();

Keep the myBox() method, the constants, the analogListener object, and the two cubes. Remove the attachCenterMark() method, and the AnalogListener object implementation.

By default, the mouse pointer is hidden. To make it visible, add the following to the simpleInitApp() method:

flyCam.setDragToRotate(true);
inputManager.setCursorVisible(true);

Run TargetPickCursor to see the intermediate result.

What just happened?

Again, you see the red cube above the blue cube. When you move the mouse, you notice that the pointer is a visible arrow now. But you also notice something else: earlier, the view rotated when you moved the mouse to the sides. This feature is part of the default camera behavior and is called mouse look, or free look. Mouse look does not get along with a visible mouse pointer. You can use the mouse either for navigating, or for pointing and clicking. Both at the same time is quite confusing for the player.

You deactivated mouse look when you set flyCam.setDragToRotate(true);. To rotate the camera and look around now, keep the left mouse button pressed while moving.

Time for action – pick a brick (pointer with ray casting)

Now your player can click items, but how do you find out what he or she clicked? In principle, we can use the same ray casting algorithm as for the previous example with the crosshairs. Only instead of casting the ray from the camera forward, we now cast it forward from the 3D coordinates of the click location.

Implement the analogListener object's onAnalog() method to test for our left-click action, MAPPING_ROTATE.

Instead of aiming the ray forward from the camera location in the camera direction, we now aim the ray starting from the click location into the calculated forward direction.

Ray ray = new Ray(click3d, dir);

Now that we have the ray, the rest of the code is the same as for the crosshairs example. We calculate intersections between this line-of-sight ray and all geometries attached to the rootNode object, and collect them in the results list.

rootNode.collideWith(ray, results);

If the user has clicked anything, the results list is not empty. In this case we identify the selected geometry—the closest item must be the target that the player picked! If the results list is empty, we just print some feedback to the console.

Build and run the sample. When you point the mouse pointer at either the red or blue cube and click, the cube rotates as in the previous example. Congrats! Not even fancy-free mouse clicks can escape your notice now!

What just happened?

Previously, when aiming with crosshairs, we assumed that the crosshairs were located in the center of the screen, where the camera is located—we simply aimed the ray forward from the (known) 3D camera location, in the (known) 3D camera direction. A mouse click with a pointer, however, can only identify (x,y) coordinates on a 2D screen.

Vector2f click2d = inputManager.getCursorPosition();

Similarly, we can no longer simply use the known camera direction as the direction of the ray. But to be able to cast a ray, we need a 3D start location and a 3D direction. How did we obtain them?

The cam.getWorldCoordinates() method can convert a 2D screen coordinate, plus a depth z from the camera forward, into 3D world coordinates. In our case, the depth z is zero—we assume that a click is right on the lens of the camera.
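In code, this conversion might look like the following (getWorldCoordinates() takes the 2D screen position and a depth value; clone() keeps an independent copy of the returned vector):

```java
// The 2D cursor position from the inputManager, as shown earlier:
Vector2f click2d = inputManager.getCursorPosition();
// Convert it to 3D world coordinates at depth 0:
Vector3f click3d = cam.getWorldCoordinates(click2d, 0f).clone();
```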

To specify the forward direction of the ray, we first calculate the coordinates of a temporary point tmp. This point has the same (x,y) coordinates as the mouse click, but is 1 WU deeper into the scene (1 WU is an arbitrarily chosen depth value). We use the cam.getWorldCoordinates() method once more with the click's 2D screen coordinate, but now we specify a depth of 1 WU:
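For example:

```java
// Same 2D click coordinates, but 1 WU deeper into the scene:
Vector3f tmp = cam.getWorldCoordinates(click2d, 1f).clone();
```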

We need to identify the direction from the 3D click location towards the temporary point 1 WU in front of it. Mathematically, you get this direction vector by subtracting the click's vector from tmp point's vector as in the following code:

Vector3f dir = tmp.subtractLocal(click3d);

Subtracting one vector from another is a common operation if you need to identify the direction from one point to another. You can do it in one step:
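The one-step version might look like this; normalizing the result to unit length is a common extra step, since ray directions are conventionally unit vectors:

```java
Vector3f dir = cam.getWorldCoordinates(click2d, 1f)
                  .subtractLocal(click3d).normalizeLocal();
```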

In these small examples, we collide the ray with the whole scene attached to the rootNode object. This ensures that all nodes are tested. On the other hand, too many tests may slow performance in large scenes. If you know that only mutually exclusive sets of nodes are candidates for a certain action, then restrict the collision test to this subset of nodes. For example, attach all spatials that respond to an open/close action to an Openable node. Attach all spatials that respond to a take/drop action to a Takeable node, and so on.
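Under that assumption (a hypothetical Openable grouping node; the name is illustrative), the restricted test might be sketched as:

```java
// Set up once: a grouping node for all spatials that respond to open/close.
Node openables = new Node("Openable");
rootNode.attachChild(openables);
// ... attach doors, chests, and so on to openables ...

// When picking a target for the open/close action, collide the ray
// only with that subtree instead of the whole scene:
CollisionResults results = new CollisionResults();
openables.collideWith(ray, results); // ray built as in the earlier examples
```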

Experiment with either picking method, crosshairs or mouse pointer, and see which one suits your gameplay better.

Summary

Congratulations! By completing this article, you have made a big leap ahead in your game developer career.

You now know how to respond to user inputs, such as mouse clicks and motions, touch events, joystick clicks and motions, and key presses.
