Game Development & Programming


In this post I will go over the following aspects of RTS-style unit selection:

Draw a selection box using the mouse

Determine which units are within the selection box

Highlight selected units

Drawing rectangles

There are many ways to draw rectangles in Unity. In this example, I will use GUI.DrawTexture, as it is an easy and straightforward way to achieve our goal. We can draw a simple colored Rect using GUI.DrawTexture by setting GUI.color to the desired color, and then passing a white 1×1 texture to GUI.DrawTexture. I wrote a short utility class to make this easier:

using UnityEngine;

public static class Utils
{
    static Texture2D _whiteTexture;

    public static Texture2D WhiteTexture
    {
        get
        {
            if (_whiteTexture == null)
            {
                _whiteTexture = new Texture2D(1, 1);
                _whiteTexture.SetPixel(0, 0, Color.white);
                _whiteTexture.Apply();
            }
            return _whiteTexture;
        }
    }

    public static void DrawScreenRect(Rect rect, Color color)
    {
        GUI.color = color;
        GUI.DrawTexture(rect, WhiteTexture);
        GUI.color = Color.white;
    }
}

Keep in mind that GUI methods (and thus also our DrawScreenRect utility method) can only be called during OnGUI(). Also make sure the white texture is created only once, for performance reasons.

We can now draw screen rectangles from any component in OnGUI():

void OnGUI()
{
    Utils.DrawScreenRect(new Rect(32, 32, 256, 128), Color.green);
}

With the following utility method we can also draw borders for a rect:
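A straightforward way to build such a border is to compose it from four thin rects drawn with DrawScreenRect. The sketch below matches the DrawScreenRectBorder call used later in this post, but the exact implementation may differ from the original project:

```csharp
// Sketch: draw a border as four thin rects along the edges of the given
// rect. Lives in the Utils class alongside DrawScreenRect.
public static void DrawScreenRectBorder(Rect rect, float thickness, Color color)
{
    // Top
    DrawScreenRect(new Rect(rect.xMin, rect.yMin, rect.width, thickness), color);
    // Left
    DrawScreenRect(new Rect(rect.xMin, rect.yMin, thickness, rect.height), color);
    // Right
    DrawScreenRect(new Rect(rect.xMax - thickness, rect.yMin, thickness, rect.height), color);
    // Bottom
    DrawScreenRect(new Rect(rect.xMin, rect.yMax - thickness, rect.width, thickness), color);
}
```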

Using the mouse to draw a selection box

Screen space in Unity has its origin (0,0) at the bottom left of the screen. This is inconsistent with the Rect struct, which has its origin at the top left. Input.mousePosition gives us the position of the mouse in screen space, so we have to be careful when using mouse positions to create a Rect. Knowing this, it is fairly easy to create a Rect from 2 screen space (mouse) positions:
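A helper along these lines (a sketch matching the Utils.GetScreenRect call used by the selection component later in this post) flips the y coordinates to move from screen space to GUI space, then builds the Rect from the min/max corners:

```csharp
// Sketch: build a GUI-space Rect from two screen-space positions.
public static Rect GetScreenRect(Vector3 screenPosition1, Vector3 screenPosition2)
{
    // Move origin from bottom left (screen space) to top left (GUI space)
    screenPosition1.y = Screen.height - screenPosition1.y;
    screenPosition2.y = Screen.height - screenPosition2.y;
    // Calculate corners
    var topLeft = Vector3.Min(screenPosition1, screenPosition2);
    var bottomRight = Vector3.Max(screenPosition1, screenPosition2);
    // Create Rect
    return Rect.MinMaxRect(topLeft.x, topLeft.y, bottomRight.x, bottomRight.y);
}
```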

We can then write a simple script that allows us to draw a selection box with the mouse:

using UnityEngine;

public class UnitSelectionComponent : MonoBehaviour
{
    bool isSelecting = false;
    Vector3 mousePosition1;

    void Update()
    {
        // If we press the left mouse button, save mouse location and begin selection
        if (Input.GetMouseButtonDown(0))
        {
            isSelecting = true;
            mousePosition1 = Input.mousePosition;
        }
        // If we let go of the left mouse button, end selection
        if (Input.GetMouseButtonUp(0))
            isSelecting = false;
    }

    void OnGUI()
    {
        if (isSelecting)
        {
            // Create a rect from both mouse positions
            var rect = Utils.GetScreenRect(mousePosition1, Input.mousePosition);
            Utils.DrawScreenRect(rect, new Color(0.8f, 0.8f, 0.95f, 0.25f));
            Utils.DrawScreenRectBorder(rect, 2, new Color(0.8f, 0.8f, 0.95f));
        }
    }
}

Selecting Units

In order to determine which objects are within the bounds of the selection box, we need to bring both the selectable objects and the selection box into the same space.

Your first idea might be to run these tests in world space, but I personally prefer doing it in post-projection (viewport) space, because converting a selection box into world space can be tricky if your camera uses a perspective projection. With a perspective projection, the world-space shape of the selection box is a frustum: you would have to calculate the correct view-projection matrix, extract the frustum planes, and then test against all six planes.
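In viewport space the test reduces to a simple bounds check. Here is a sketch of that approach; the helper names are illustrative and may differ from the original project:

```csharp
// Sketch: build a Bounds in viewport space from the two mouse positions,
// spanning the full depth range of the camera.
public static Bounds GetViewportBounds(Camera camera, Vector3 screenPosition1, Vector3 screenPosition2)
{
    var v1 = camera.ScreenToViewportPoint(screenPosition1);
    var v2 = camera.ScreenToViewportPoint(screenPosition2);
    var min = Vector3.Min(v1, v2);
    var max = Vector3.Max(v1, v2);
    min.z = camera.nearClipPlane;
    max.z = camera.farClipPlane;

    var bounds = new Bounds();
    bounds.SetMinMax(min, max);
    return bounds;
}

// Usage: true if the unit's position falls within the selection box
public static bool IsWithinSelectionBounds(Camera camera, Bounds viewportBounds, GameObject unit)
{
    return viewportBounds.Contains(camera.WorldToViewportPoint(unit.transform.position));
}
```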

Highlighting selected units using projectors

There are many different ways to highlight units. In RTS games it seems to be pretty popular to place a small circle below selected units, so that’s what we are going to do.

In order for the circles to work well with sloped terrain we are going to use projectors (rather than just drawing a circle sprite below selected units). Projectors can project materials onto other geometry, but they need a special type of shader in order to do so. The Unity 5 standard assets contain both a Projector/Light shader and a Projector/Multiply shader, but unfortunately neither of those shaders is appropriate for what we want to do: add circles to the geometry below selected units. We'll have to write our own projector shader.

It will take 2 parameters: a cookie texture (alpha mask) that defines which pixels are going to be affected, and a tint color, which will define the color of the affected pixels:

Properties
{
    _Color ("Tint Color", Color) = (1,1,1,1)
    _ShadowTex ("Cookie", 2D) = "gray" {}
}

Set up shader states for projection, but use additive blending:

ZWrite Off
ColorMask RGB
Blend SrcAlpha One // Additive blending
Offset -1, -1

Combine the alpha mask and the tint color to determine the final color:

fixed4 frag (v2f i) : SV_Target
{
    // Sample cookie texture
    fixed4 texCookie = tex2Dproj(_ShadowTex, UNITY_PROJ_COORD(i.uvShadow));
    // Apply tint & alpha mask
    return _Color * texCookie.a;
}

Using this shader, we can set up a projector like we initially intended. In the following example I use a simple circular alpha mask to project a circle below a unit. Note the import settings I used for the alpha mask: it needs to be imported as a cookie texture with light type Spotlight (this sets the wrap mode to clamp), and "Alpha from Grayscale" needs to be checked.
While we are now capable of rendering our selection circles, there are still a few issues that we need to address.

Ignoring layers

By default projectors project onto everything. In our situation, we want projectors to ignore other units in order to avoid the behaviour seen in this image. This can be achieved using the Ignore Layers property of the projector. Personally, I like to have a ground layer which contains all of the terrain, and I simply ignore all other layers in the projector.
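The same setup can also be done from script. A sketch, assuming the terrain lives on a layer named "Ground" (the layer name and component setup are illustrative):

```csharp
using UnityEngine;

// Sketch: attached to the selection-circle projector object, this makes
// the projector ignore every layer except "Ground".
public class SelectionCircle : MonoBehaviour
{
    void Start()
    {
        var projector = GetComponent<Projector>();
        // ignoreLayers is a mask of layers to skip, so invert the ground mask
        projector.ignoreLayers = ~LayerMask.GetMask("Ground");
    }
}
```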

Attenuation

Projections can sometimes appear on objects outside of the projector's frustum. In this example, a projection appears on the terrain above the unit. This happens because the terrain's bounding box intersects the projector's frustum (even though its geometry does not). Granted, this scenario is fairly unlikely in an RTS game, but it is easily solved by introducing an attenuation factor. This will also fade out the projection when a unit stands close to a cliff.

fixed4 frag (v2f i) : SV_Target
{
    // Apply tint & alpha mask
    fixed4 texCookie = tex2Dproj(_ShadowTex, UNITY_PROJ_COORD(i.uvShadow));
    fixed4 outColor = _Color * texCookie.a;
    // Distance attenuation (_Attenuation = 1.0 works well)
    float depth = i.uvShadow.z; // [-1 (near), 1 (far)]
    return outColor * clamp(1.0 - abs(depth) + _Attenuation, 0.0, 1.0);
}

Example Project

I have created a Unity 5 project that implements everything discussed in this post. I also extended the unit selection component to preview unit selection and output all selected units. You can find the download link below.

Download Links

With its most recent update, Unity has received a massive overhaul to its lighting and rendering system. Unity 5 features both baked and real-time GI, using Geomerics Enlighten.

Baked GI

Baked GI uses a path tracer to simulate indirect lighting ahead of time. The information is then stored in lightmaps, which are used at run-time to light up the scene. The result is realistic and high quality lighting. Performance-wise, it is fairly inexpensive at run-time (but takes a long time to bake).

Since lighting is baked ahead of time, this requires both the meshes and the light sources to be static. Therefore baked GI will not work for scenes relying heavily on dynamic lighting, or scenes featuring procedural geometry and/or dynamic level design.

Precomputed Realtime GI

Precomputed Realtime GI tackles one of these issues. It is able to simulate indirect lighting for dynamic light sources, by precomputing how light can bounce throughout the scene.

To test it out I loaded up the Crytek Sponza, and added a dynamic directional light.

Here are the settings that I used:

Note that I have disabled ambient lighting completely. There is only 1 directional light source, and there are no reflection probes. All visible indirect lighting is a result of the precomputed realtime GI. There are some minor lighting artifacts, but overall I am rather pleased with both the quality and the performance. I’m probably going to make extensive use of this feature in future projects that use dynamic lighting.

The quality of the indirect light is directly related to the “Realtime Resolution” parameter, and so is the time required for baking. In a production environment you probably want to set the realtime resolution higher than I have. During development you can lower the resolution to improve bake times.

There are a few caveats with precomputed realtime GI though:

Realtime indirect bounce light shadowing is only supported for directional lights. You will get a warning in Unity if you try to do this with another type of light source. In the following example you can see that indirect bounce light from the point light spheres bleeds through the curtains.

If you make changes to the lighting within your scene at runtime, you may need to update some of your reflection probes to reflect these changes (which can be very expensive). This problem is not directly related to realtime GI, but rather to dynamic lighting in general. Unity offers some ways to deal with this through scripting, and I’ll probably make another blog post about using reflection probes in dynamic environments.
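For a realtime reflection probe with its refresh mode set to "Via scripting", the re-render can be triggered like this (a minimal sketch; the component and method names around the probe call are illustrative):

```csharp
using UnityEngine;

// Sketch: re-render a realtime reflection probe after a lighting change.
public class ProbeRefresher : MonoBehaviour
{
    public ReflectionProbe probe;

    public void OnLightingChanged()
    {
        // RenderProbe is expensive, so call it only when the
        // lighting has actually changed.
        probe.RenderProbe();
    }
}
```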

Combined GI

You can use both Baked GI and Precomputed Realtime GI simultaneously. Static geometry will use baked GI for static lights and realtime GI for dynamic lights. In most situations you’ll probably achieve the best result by using both.