I guess many developers, like me, use a dual-output graphics card to make their Windows workspace extend over two monitors. I’ve been experimenting with whether it’s possible to make a Unity game that spans two monitors (and, perhaps even more interestingly, to make a gameplay feature out of it). Here’s how:

a.) I have two monitors side-by-side both set to 1280×1024. So, first I opened up nVidia control panel and created a custom resolution that represented their combined area – 2560×1024.

b.) You will now be able to select the custom resolution when launching a standalone Unity game.

c.) However, there’s still a problem. If you launch the app as full-screen, it will fill only one monitor. If you choose windowed mode, you’ll get the title bar and window borders visible on the display (even after having set a resolution that exactly fills both displays).

The solution is to use windowed mode, but to launch the application using the -popupwindow command-line switch. This can easily be done by creating a batch file and launching from that instead of launching the .exe directly:

nameofyourgame.exe -popupwindow

The game will now run in a borderless window that fills both screens.

d.) Now, to create the level setup that makes a gameplay feature of this, I created two orthographic cameras – Left and Right – and positioned them so that there was a slight gap between their viewing frustums. This, in gameplay terms, represented the gap between the monitors. I then placed a large black cuboid in the gap. It would never be visible in-game, but it allowed me to, e.g., fire OnTriggerEnter() calls any time a collider crossed between monitors.
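In case it’s useful, here’s a minimal sketch (in Unityscript, with illustrative names) of the sort of script that could sit on the gap cube – it assumes the cube’s collider has “Is Trigger” enabled:

```javascript
// Attached to the invisible "void" cube sitting in the gap between monitors.
// Requires the cube's collider to have "Is Trigger" ticked.
function OnTriggerEnter(other : Collider) {
    Debug.Log(other.name + " is crossing between the monitors");
}

function OnTriggerExit(other : Collider) {
    Debug.Log(other.name + " has finished crossing");
}
```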

Since my monitors have exactly the same resolution as each other, I want the left camera to render to the left-hand side of the game display (the left monitor), and the right camera to render to the right-hand side (the right monitor). This is easy to achieve by setting the Viewport Rect on the left camera to X:0 Y:0 W:0.5 H:1, and on the right camera to X:0.5 Y:0 W:0.5 H:1. Then I added some trivial logic to determine which side of the central “void” cube the player was on, and changed the colour of the platforms accordingly:
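That side-detection logic might look something like the following Unityscript sketch (the names voidCube and platformMaterial are mine, purely illustrative):

```javascript
// Illustrative sketch: tint the platforms according to which side of the
// central "void" cube the player is currently on.
var voidCube : Transform;
var platformMaterial : Material;

function Update() {
    if (transform.position.x < voidCube.position.x) {
        platformMaterial.color = Color.blue;   // player is on the left monitor
    } else {
        platformMaterial.color = Color.green;  // player is on the right monitor
    }
}
```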

I’ve had a few questions recently about how to apply my hand-drawn shader pack on Unity terrain. So here’s a guide (written using Unity 5, but the instructions should remain almost identical in Unity 4 save for a few menu options being in different places).

To start, create a terrain just as you would do normally:

1.) Create a new terrain: GameObject -> 3D Object -> Terrain

2.) Use the terrain tools to sculpt it.

3.) Now paint on textures just as you would do normally. I’m using the terrain textures included with Unity’s standard assets package: Assets -> Import Package -> Environment

4.) Create a new material and assign the Hand-Drawn/Fill+Outline/Overdrawn Outline + Smoothed Greyscale Fill shader from the Hand-Drawn shader pack (you can use the same settings as shown here if you want, but you might prefer to tweak them later anyway)

5.) Select your terrain object in the scene hierarchy and, on the settings tab of the inspector pane (the right-most tab, with the cog icon), change Material to "Custom", then select the custom material you just created in step 4.

At this point, your screenshot probably looks like this – getting there but not quite right yet 😉

To give the effect that the ink terrain has been drawn onto paper, we need to replace that skybox with a background. There are a couple of ways of doing this – in the following steps I’ll put an image on a canvas. (If you’re using a version earlier than Unity 4.6, which introduced the UI system, you could simply create a quad in the background instead.)

6.) Add a new UI image component by going GameObject -> UI -> Image. Set the source image to be one of the background textures included with the shader pack (e.g. Japanese Paper, Ruled Paper, etc.). Then click the "Set Native Size" button to resize it to its native size.

7.) Select the Canvas element in the hierarchy and change the Render Mode to “Screen Space – Camera”. Then set the render camera to "Main Camera". Finally set the Plane Distance to 999 (just inside the default far clipping plane of the camera which is 1000). This will place the background texture behind everything else in the view.

8.) On the Canvas Scaler component, set the Scale Mode to "Scale with Screen Size", set the reference resolution to 640×480 (or whatever the size of the background texture you used is), and set the Screen Match Mode to "Shrink". This will make the background texture always fill the camera screen. Your settings on the Canvas should now be as follows:

And here’s what you should now see in the game view:

If you want the ink lines to have an animated effect, you can increase the scribbliness and redraw rate parameters of the shader, which gives results as follows:

This was a question asked on the Unity Forums recently, so I thought I’d just write up the answer here.

Unity provides its own unique brand of “surface shaders”, which make dealing with lighting and shadows relatively simple. But there are still plenty of occasions in which you find yourself writing more traditional vert/frag CG shaders, and needing to deal with shadows in those too.

Suppose you had written a custom vertex/fragment CG shader, such as the following simple example:
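The example would be something like the following minimal sketch – a plain vertex/fragment shader that outputs a single solid colour, with no lighting at all (it mirrors the structure of the shadow-enabled version shown later, minus the lighting macros):

```shaderlab
Shader "Custom/SolidColor" {
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
            };

            v2f vert(appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                return o;
            }

            // Every fragment is simply coloured solid red
            fixed4 frag(v2f i) : COLOR {
                return fixed4(1.0, 0.0, 0.0, 1.0);
            }
            ENDCG
        }
    }
}
```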

This shader simply outputs the colour red for all fragments, as shown on the plane in the following image:

Now what if you wanted to add shadows to that surface? Unity already creates a shadowmap for you from all objects set to cast shadows, and defines several macros that make it easier to sample that shadowmap at the appropriate point. So here are the changes you need to make to a shader to make use of those built-in shadows:

```shaderlab
Shader "Custom/SolidColor" {
    SubShader {
        Pass {
            // 1.) This will be the base forward rendering pass in which ambient, vertex, and
            // main directional light will be applied. Additional lights will need additional passes
            // using the "ForwardAdd" lightmode.
            // see: http://docs.unity3d.com/Manual/SL-PassTags.html
            Tags { "LightMode" = "ForwardBase" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // 2.) This matches the "ForwardBase" LightMode tag to ensure the shader compiles
            // properly for the forward base pass. As with the LightMode tag, for any additional lights
            // this would be changed from _fwdbase to _fwdadd.
            #pragma multi_compile_fwdbase

            // 3.) Reference the Unity library that includes all the lighting shadow macros
            #include "AutoLight.cginc"

            struct v2f
            {
                float4 pos : SV_POSITION;

                // 4.) The LIGHTING_COORDS macro (defined in AutoLight.cginc) defines the parameters needed to sample
                // the shadow map. The (0,1) specifies which unused TEXCOORD semantics to hold the sampled values -
                // as I'm not using any texcoords in this shader, I can use TEXCOORD0 and TEXCOORD1 for the shadow
                // sampling. If I was already using TEXCOORD0 for UV coordinates, say, I could specify
                // LIGHTING_COORDS(1,2) instead to use TEXCOORD1 and TEXCOORD2.
                LIGHTING_COORDS(0,1)
            };

            v2f vert(appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);

                // 5.) The TRANSFER_VERTEX_TO_FRAGMENT macro populates the chosen LIGHTING_COORDS in the v2f structure
                // with appropriate values to sample from the shadow/lighting map
                TRANSFER_VERTEX_TO_FRAGMENT(o);
                return o;
            }

            fixed4 frag(v2f i) : COLOR {
                // 6.) LIGHT_ATTENUATION samples the shadowmap (using the coordinates calculated by TRANSFER_VERTEX_TO_FRAGMENT
                // and stored in the structure defined by LIGHTING_COORDS), and returns the value as a float.
                float attenuation = LIGHT_ATTENUATION(i);
                return fixed4(1.0, 0.0, 0.0, 1.0) * attenuation;
            }
            ENDCG
        }
    }

    // 7.) To receive or cast a shadow, shaders must implement the appropriate "Shadow Collector" or "Shadow Caster" pass.
    // Although we haven't explicitly done so in this shader, if these passes are missing they will be read from a fallback
    // shader instead, so specify one here to import the collector/caster passes used in that fallback.
    Fallback "VertexLit"
}
```

News concerning the recent release of Unity 4.6 was largely dominated by its new UI system.

However, closer inspection of the 4.6 release notes reveals some other interesting changes and improvements, including the announcement that “Stencil buffer is now available in Unity Free”.

Stencil buffers are not exactly new technology – they’ve been around for at least 10 years, and available in Unity Pro since Version 4.2. They can be used in somewhat similar circumstances to RenderTextures (still a Pro-only feature) to create a range of nice effects, so I thought I’d have a play…

The stencil buffer is a general-purpose buffer that allows you to store an additional 8-bit integer (i.e. a value from 0–255) for each pixel drawn to the screen. Just as shaders calculate RGB values to determine the colour of pixels on the screen, and z values for the depth of those pixels drawn to the depth buffer, they can also write an arbitrary value for each of those pixels to the stencil buffer. Those stencil values can then be queried and compared by subsequent shader passes to determine how pixels should be composited on the screen.

For example, adding the following tags to a shader will cause it to write the value "1" to the stencil buffer for each pixel that would be drawn to the screen:
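Such a Stencil block, placed inside the shader’s Pass, might look like this minimal sketch:

```shaderlab
// Inside a Pass: write the value 1 to the stencil buffer for every
// pixel this shader draws.
Stencil {
    Ref 1          // the value to write
    Comp Always    // always pass the stencil test
    Pass Replace   // on pass, replace the stencil value with Ref
}
```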

With a few more tags, we can turn this shader into a mask whose only function is to write to the stencil buffer – i.e. that does not draw anything to the screen itself, but sets values in the stencil mask to control what pixels are drawn to the screen by other shaders:
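A minimal mask shader along those lines might look like the following sketch (the shader name and the queue choice are mine – rendering at Geometry-1 just ensures the mask is drawn before the objects it masks):

```shaderlab
Shader "Custom/StencilMask" {
    SubShader {
        // Render before regular geometry so the stencil values are in place first
        Tags { "Queue" = "Geometry-1" }
        ColorMask 0   // draw nothing to the screen...
        ZWrite Off    // ...and don't write to the depth buffer
        Pass {
            Stencil {
                Ref 1
                Comp Always
                Pass Replace   // the only output: stencil value 1
            }
        }
    }
}
```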

So, now this shader won’t have any visible output – all it does is write the value 1 to the stencil buffer for those pixels it would otherwise have rendered. We can now make use of this stencil buffer to mask the output of another shader, by adding the following lines:
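For instance, a block like this added to the Pass of the shader being masked (again, a sketch):

```shaderlab
// Only render pixels whose stencil value was previously set to 1
// by the mask shader; all other pixels fail the stencil test.
Stencil {
    Ref 1
    Comp Equal   // the test passes only where the buffer already holds 1
}
```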

The effect created by these two shaders is illustrated by the two models in the following screenshot:

The picture frame contains an (invisible) quad which writes the value 1 to the stencil buffer for all pixels contained within the frame.

The character standing behind the frame has a shader that tests the values in the stencil buffer and only renders those pixels that have a value of 1 – i.e. those pixels that are seen through the picture frame.

I’m currently considering a game design that makes use of various optical and logical tricks to fool the player as to how objects in the game are spatially-related – and thought I’d start by doing a bit of a review of what games are already out there that use similar techniques:

Sphinx Adventure

Released in 1982, this is one of the first computer games I can ever remember playing. Since the game was entirely text-based, there were no graphical tricks to deceive the player. However, that didn’t mean that you couldn’t be deceived and disoriented in other ways. The “trick” employed in this case was not a graphical one, but a logical one about the spatial connection between locations in the game.

This was back in the days before GameFAQs or games magazines, so players had to hand-draw maps of the game locations in order to find their way around. And, with only a sentence or two description for each room, it took a long time to figure out that rooms were sometimes connected to each other in impossible ways, creating infinite loops as demonstrated by the “iron passages” in this map extract…

The Legend of Zelda

The Lost Woods are a recurring theme that have appeared in numerous titles in the Legend of Zelda series. Here’s a screenshot of the original NES incarnation:

It’s a “room” with several exits. Successful traversal through the woods required you to know the correct pattern of exits to follow through the rooms. Taking a correct choice led you gradually through the rooms of the forest. But taking an incorrect choice led you immediately back to the start room. However, since each room looked identical, there were no visual cues to suggest this was what was happening, and players were often left disorientated by the fact that simple logical truths proved incorrect (i.e. starting from a room, going North once and then South once did not put you back where you started).

EchoChrome

Echochrome was released on the PlayStation over 6 years ago, although it feels more like a technical demo than a fully released game. The technology it demonstrates is the “Object locative environment coordinate system”, in which the relationships between objects in the virtual environment are determined not only by their 3D coordinates, but also by the position from which they are viewed. Essentially if, from a certain point of view, two objects are made to look like they touch, then they do.

The levels were sparse and clearly M.C. Escher-inspired, meaning the entire focus of the game was concentrated on the gameplay mechanic of rotating the camera to create, and destroy, links between apparently separate platforms.

The exact same gameplay mechanic appears to have recently been re-used (in slightly more colourful form) in the game “Miika”:

echochrome

Miika

Monument Valley

Monument Valley also appeared to take some inspiration from Echochrome, but developed the ideas further to include not only movement of the camera, but movement and rotation of parts of the level to create surfaces on which the player could walk (they also added a lovely art style, making this feel much more like a developed game than Echochrome was). It still makes heavy use of the basic gameplay mechanic of making things appear connected in order to make them actually connected.

Mystic Mine

Mystic Mine takes just one specific element of M.C. Escher’s iconic isometric style and turns it into a fun puzzle game. The mine cart will only ever go “down” slopes but, due to the optical illusion employed, it is always possible to reach every part of every level. It’s a neat, compact game, and you can even get the source code on GitHub.

Perspective

Most people are intuitively familiar with the concepts of a “2D game” and a “3D game”. “Perspective” challenges the boundaries between these genres by allowing the player to move around a 3D world in order to create a 2D world based on the view of world objects from the current camera position. Unlike echochrome, Monument Valley, Super Paper Mario etc., interaction is not dimensionally-restricted – you can translate and rotate the camera in all three axes of the “3d” world to create an infinite variety of corresponding “2d” interpretations. It’s hard to describe and it’s surprisingly disorientating to play, but technically it’s very impressive to consider the level design process that must have gone into this:

Super Paper Mario

Super Paper Mario is hard to categorise as either a 2D game or a 3D game – rather, it offers the player the opportunity to switch between two different, but consistent, 2D perspectives of a 3D world. Like either looking at a cube from exactly front-on, or exactly side-on. The “paper”-thin theme allowed level elements and characters to have a physical presence in only two dimensions, meaning that obstacles in one view could easily be navigated by switching to the other view.

Fez

Fez creator Phil Fish says that Super Paper Mario is a terrible game, and that Fez is nothing like it. However, it’s hard not to see the similarity – with Fez similarly offering players different 2D perspectives of a 3D world:

Crush

I have to confess I’d never heard of the game until I started writing this post, but apparently “Crush” on the PSP also uses a similar mechanic to Fez and Super Paper Mario – “crushing” 3D space into two dimensions:

Tale of Scale

“Tale of Scale” was created for the Ludum Dare 25 game jam. Despite not really fitting the theme of “You are the Villain”, it featured a novel gameplay mechanic (particularly impressive considering the 48 hours in which the game was developed) based around the apparent scale of in-game objects and forced perspective.

It’s somewhat reminiscent of the fantastic Father Ted episode in which Ted tries to teach Father Dougal about perspective:

Ted:[holding up a toy cow] All right, one more time. These… are small. The ones out there… are far away. Small. Far away.

Dougal:[shakes his head in bewilderment]

The trick is that Tale of Scale dynamically adjusts the scale of objects such that objects that are small because they are “far away” can become actually small, and therefore can be picked up between the fingers.

It’s a technique familiar to anyone who’s ever taken holiday snaps of “pushing the Tower of Pisa back straight”, or “squeezing someone’s head between their fingers”, but to my knowledge it’s the first time it’s been used in a game as a puzzle mechanic.

The required code is actually surprisingly simple – here’s an implementation in Unityscript that will keep an object the same apparent size, whatever its true distance from the camera:
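Something along these lines (a sketch – the variable names are mine): scale the object linearly with its distance from the camera, so its on-screen size never changes.

```javascript
// Keeps the object's apparent (on-screen) size constant by scaling it
// in proportion to its distance from the camera.
private var initialScale : Vector3;
private var initialDistance : float;

function Start() {
    initialScale = transform.localScale;
    initialDistance = Vector3.Distance(Camera.main.transform.position, transform.position);
}

function Update() {
    var distance : float = Vector3.Distance(Camera.main.transform.position, transform.position);
    transform.localScale = initialScale * (distance / initialDistance);
}
```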

Museum of Simulation Technology

MoST uses very similar game mechanics based around forced perspective as introduced in Tale of Scale, and also includes familiar world landmarks as objects in the game to accentuate the player’s preconception of what should be large or small.

BluePrint 3D

Not really a “trick” here, but I thought I’d include it anyway. An object has been smashed into pieces which, when viewed from a certain angle, will reconstruct the appearance of the original object. Rotate the camera to find the correct viewpoint and the object will be revealed. It’s a simple mechanic but nicely implemented:

The same mechanic is also used in “Starlight”, and, IIRC, some elements of “The Room” series.

IIRC, there’s also a game where, when the completed object is seen to be constructed correctly, it becomes real in the 3D world, but I can’t remember what the game is right now.

I’ve been building up a library of FBX animations from a combination of MoCap stuff I’ve found around the internet and my own lovingly hand-crafted (though rather stilted – I’m not a natural animator!) clips.

I now have a folder of, for example, about 10 different walk cycles, but I’ve yet to find a good way of browsing those animations – a kind of video thumbnail preview integrated into Windows Explorer would be ideal but, if that exists, I haven’t found it yet. In the meantime, one thing I have just found is this add-on that lets you preview FBX animations in QuickTime Player.

It’s a little clunky in places (including, for some reason, only allowing you to have a single FBX file loaded at a time, even after starting multiple instances of QuickTime Player?), but it’s certainly made it easier to browse through my animation library than importing each clip into Blender as I was doing previously. The range of features and shortcuts is shown in the screenshot below.

Everyone likes a good space game, right? Then you’ll need some way of creating spacey backdrops. You could go for an 8bit parallax pixel starfield, or, at the other extreme, you might use authentic NASA imagery of the night sky.

What’s more, it doesn’t just generate single images – you can create a set of 6 cubemaps ready to import straight into a Unity skybox and get a full 360 space panorama. Fantastic asset for any developers looking for resources for a space-themed game.