Although Mark and I manage to do a lot between the two of us, we are lucky enough to have made contact with the extremely talented Andy Guy (@Andy_Guuuy), who has created the music and all of the sound design for BeeBeeQ. We’ve worked with him in the past and have always been pleased with the audio he produces for us. We asked him to write this devlog to outline his creative process. It’s been great to see the evolution of BeeBeeQ’s audio, from our almost too obvious suggestions to the sound Andy created for us, which we feel really matches the style of the game and gives BeeBeeQ its own identity while sitting in line with many modern animated films. Now over to Andy for the important stuff!

The brief for BeeBeeQ was to create a soundtrack that was cartoonish and whimsical in style, reflecting the game’s bold and colourful graphics. In order to set the tone, it was important to create a series of soundtracks that really embraced the gameplay. After a few test scores that didn’t really fit, we struck gold, agreeing on a lighthearted collection of six pieces of music to accompany the game.

The music was inspired by an abundance of light pop songs, Pixar films and kids’ TV shows, each track being driven and built up around the strumming of a ukulele. Around a basic chord structure, I began to experiment with low brass, for a clumsy bumblebee feel, as well as high xylophones and whistles, giving a carefree, childlike feel.

Six tracks were made in total:

Lobby theme – reminiscent of elevator music mixed with Pixar.

Action theme – a fast-paced piece building to a crescendo to accompany the bees on their mission.

Sneaky theme – a sleuth-inspired, jazzy track.

Bedroom theme – a whimsical, lazy, flowing, dreamlike track.

Goofy theme – a playful kids’ show theme.

Goofy theme 2 – another playful kids’ show theme.

In terms of audio, we wanted to give BeeBeeQ a very diverse sound world. We were constantly finding the need for new sounds as more and more elements were added. I felt it was important that each sound had variety, so that the exact same sound would not be heard within seconds of itself, giving a more realistic feel.

Much of the work was relatively simple: a case of recording a large selection of foley sounds, cutting them up and processing them with minimal effects. The sound of rocks hitting one another, for example, required very little processing.

Other sounds required a more inventive approach to get the desired effect. The sound of meat hitting the floor, for example, was built up of various layers, using the sound of a hand hitting bare skin at different speeds. These layers were processed with pitch correction, slight reverb and EQ to reflect the size of the meat and the height it was dropped from.

Thanks Andy! You can keep up to date with Andy and his work by following him on Twitter (@Andy_Guuuy) and via his website (http://www.y-u-g-a.co.uk/). We can’t recommend Andy enough; working with him is always effortless on our part and we always end up with way more than we originally felt we needed.

Devlog 11 / EGX Rezzed (Sat, 08 Apr 2017)

It’s been one week since we got back from EGX Rezzed and it still hasn’t sunk in that almost all of the feedback about BeeBeeQ was good. We learned so much from bringing the game to the show, and we managed to act on most of the feedback each night, meaning the game we showed each day was way better and more balanced than the day before. Gathering this much QA in our own time would have taken so long; attending the show worked perfectly for us.

What we learned

I think the most interesting thing was that people seem to enjoy playing the game more than winning the game, and whether that’s something we need to address or not is still in discussion. There will always be a winner and a loser in BeeBeeQ, but it’s great to see that the game is just fun enough that people like to play it. I can’t imagine anyone ever getting ultra competitive while playing BeeBeeQ; almost everyone leaves the game with a smile on their face regardless of who won, which is the thing I think I’m most proud of.

Balancing is hell. On Day One we had a build that, due to a last-minute change to food spawn numbers, was heavily unbalanced toward the bee side (roughly 80/20 wins/losses). We had tested that build with just myself and Mark, who clearly know the game too well and got 50/50, which in hindsight wasn’t very smart. On Day Two we brought an experimental deathmatch build, which was still unbalanced, this time towards the chef side, but a little closer at roughly 35/65. On Day Three we introduced a button to switch between the two playable modes (Cook-off and Deathmatch, dubbed “Bee-t ’em up” in the Ready Up video below) as well as dynamic balancing, adjusting the bees’ health depending on the number of bees in the game, similar to the way we adjust the food provided in Cook-off mode, and the game finally settled around the 50/50 mark.

Oculus support was finally polished up and is now in. This became a necessity after our Vive was left at the show and we wanted to dev in the evenings to improve the game. It has proved invaluable, since now Mark and I can test new features easily and can even take the game out to small events to test/show without disrupting the development of the game.

Right at the end of the show, while it was quiet, I took the opportunity to shoot a quick video of a player’s first impressions of BeeBeeQ. Really I should have been doing that from day one, and in future I will be. We have a test day planned soon and I’m going to make sure I get some video of the players’ thoughts.

Sometimes it’s better to be lucky

So there were no bad points to the show; we had a great time and I don’t think we would have done anything differently, but we did get lucky a few times. The long queues alone for BeeBeeQ have led to some great opportunities that we can’t talk about yet. One of our players got a little excited, left the chaperone and knocked over a base station, which smashed the plastic casing; fortunately the base station still worked, else we would have been showing off the untested Oculus build or rushing out to buy a new Vive. We chatted to so many publishers who were full of great advice, which we are talking through. Most importantly, I think, we met up with YouTubers and Twitch streamers who are interested in receiving builds so they can make content; I’m looking forward to getting a stable version on Steam and getting keys out to everyone we spoke to.

The press at EGX were amazing. We had loads of fun chatting to people about the game; more often than not we were unaware that they were members of the press, which for us was a good thing, since it’s our first show with press involved and we really didn’t know what to expect. So far the articles/videos below have come out and we couldn’t be happier with the response.

Jamie from UploadVR wrote such a great article; it really felt like he just got it. Honestly, we’ve tried to put the game into words many times and never got close to capturing the spirit of the game the way Jamie did. We saw the article pop up on our Twitter feed on the evening of Day One and it really gave us a push and some confidence in what we’re doing.

The Gamescore Whores put this fantastic interview together, which was our first time being interviewed, and thankfully it went okay.

The Game Show put out this video, which gives a nice honest description of the game and once again gives us good feedback on the balancing, which we are really happy to have improved. Also, bee puns make Popup Asylum extremely happy!

Ready Up came along and played, and they gave us some great feedback which we have already started working on. The bee control systems are tricky, and I think it’s going to be an ongoing task to get them perfect; currently any control scheme that people suggest is being added in and tested, although we still feel that, with a little practice, our default scheme is the friendliest for six degrees of freedom of movement. On the evening of Day One we reduced the sensitivity of the controls and it made the game much easier to play. We were worried that it would make flying as a bee less fun, as you couldn’t do those quick getaways that bees tend to pull off, but in reality, for a three-minute round you don’t have enough time to get those sorts of manoeuvres down anyway. In the final release I think we will give the option to increase sensitivity but keep the default at the current low setting.

Mainly we were just really happy that they had fun playing and were kind enough to feature us in their podcast! Thanks guys!

Was it worth it?

Hell yeah it was! We didn’t know what to expect and the EGX team were really supportive. We learned so much, and given that we’re completely self-funded by Unity Asset Store sales, we had to be really careful where we invested money into marketing; I don’t think we could have used the money better. We got great press coverage, some exciting developments and an invaluable amount of feedback. Not to mention an archive of videos of players being completely ridiculous!

Firstly I just wanted to talk about the BeeBeeQ avatar and why we chose this cartoon visual style: we decided on the colourful and oversized visuals to match the fun but clumsy style of gameplay, and it felt like the right direction to take the avatar.

Technical (Rigging/Shortcuts)

We didn’t want to limit what we could do technically with the avatar, but being a two-man team we needed to quickly build a versatile rig that would accept animations from Mixamo, keeping the amount of animation we needed to do ourselves to a minimum. We decided to use the Mixamo autorig for the main rig of the character, then exported the weights from the Mixamo rig to textures. It was then a simple task to add in the joints for a custom face rig based on the Joint-Based Facial Rigging in Maya tutorial by Tim Callaway (http://www.digitaltutors.com/tutorial/1133-Joint-Based-Facial-Rigging-in-Maya). We have used this rigging system a lot in the past when working on Kindred, so it didn’t take long to implement. Once it was parented in to the Mixamo rig we just needed to import the weights back in and paint the weights for the face rig. The effect worked really well and gave us a lot of control when bringing the Chef to life!

This is a quick video showing the Chef Avatar rig and what it can do.

Blend Shapes

In BeeBeeQ, when the VR player gets stung by a bee on either their hands or face, they are affected by one of two penalties: if stung on the hands they drop whatever they are holding, and if stung on the face their vision is limited to a thin slit for a short time. To visualise this on the avatar we decided to add in blend shapes, so that the bee players would know that the Chef was being affected by a penalty. There is also a texture shift towards red when an area has been stung, making it even more obvious that the Chef’s ability to defend himself and the BBQ is impaired. In game, the effect lasts just long enough for the bees to make their move and get the upper hand; a full BBQ can be stripped of food quickly while the Chef fumbles around recollecting his tools and weapons.

Animation

The Chef’s animation is broken up into a few separate manageable clips: the face, the hands, the eyes and the body. For the face we are using several basic looping facial expressions with minor movement, each clip covering an emotion the Chef would be feeling when interacting with various objects inside the game.

The eyes, feet and facial expressions are all controlled by scripts, and the body is directly influenced by the VR player, all of which will be covered by Mark in our next article.

Thanks for reading

Devlog 9 / Mostly Eye Adaptation (Mon, 30 Jan 2017)

The last two weeks haven’t had as many revelations code-wise as some previous weeks, but there’s been plenty of development on the script, trailer and logo fronts. The code has moved along, and there are a couple of maybe-interesting bits below, along with over-explanatory descriptions and code to back them up.

The standard eye adaptation pipeline goes roughly like this:

1. Render the scene to an HDR buffer.
2. Average the luminosity of all pixels to approximate the amount of light entering the camera, giving an exposure value.
3. Divide (in some way) each pixel’s original value by the exposure.
4. Use a tonemapping curve to bring HDR values into LDR range.
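
Steps 3 and 4 boil down to a couple of lines at the end of a fragment shader. A minimal sketch, using a hypothetical _PA_Exposure uniform and a simple Reinhard curve as the example tonemapper (neither is necessarily what a production implementation would use):

//end of the fragment shader: apply exposure, then tonemap
float3 exposed = color.rgb / _PA_Exposure;
//Reinhard curve: maps [0, infinity) into [0, 1)
float3 ldr = exposed / (1.0 + exposed);
return float4(ldr, color.a);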

Intuitively this whole pipeline is an expensive operation, performed individually for each camera, and not the right approach for a 90fps, five-player split screen; I need it to be faster.

Getting Fast Approximate Exposure

I really liked Shadow of the Colossus’s exposure zones (and Valve did something similar for Half-Life 2: Lost Coast). These areas were set up ahead of runtime with exposure values; at runtime, when the camera entered a zone, the exposure would animate to the new values. That sounds like a fast-performing solution, but most implementations seem to only take position into account, and I imagine they took a long time to set up; in BeeBeeQ I’m trying to avoid as much time-consuming setup as possible in favour of automation.

I realized Unity already provides approximations of incoming light in any direction in the form of light probes; these store indirect (and optionally direct) light for specific positions and any direction in the scene, in a way that can be interpolated. Unity provides a function to get an interpolated probe but neglected to add a function to decode it, and though the shader side is accessible by downloading the built-in shaders or looking in the .cginc files, the process of converting a SphericalHarmonicsL2 into the seven float4 arrays they use in the shader is not documented. Luckily for me, Bas-Smit figured it out and shared it on the Unity forums.

So in OnPreRender I sample a probe’s luminosity and set a global shader value for exposure, which is then used at the end of our uber shader (we’re using a modified version of Valve’s The Lab Renderer) where exposure and tonemapping are applied. This allows us to perform full HDR and approximated exposure without requiring any HDR buffers or image effects. It could work fully on its own, since it’s possible for light probes to contain direct light, but in most setups probes only contain indirect light; in those cases the exposure contribution of the direct light has to be added, so I decided to raycast from the camera to the light and add its contribution if the ray passed.
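
A minimal sketch of the C# side, reusing the hypothetical _PA_Exposure uniform from above; the real version also adds the raycast direct light contribution and would likely smooth the value over time:

using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class ProbeExposure : MonoBehaviour
{
    void OnPreRender()
    {
        //sample the interpolated light probe at the camera position
        SphericalHarmonicsL2 probe;
        LightProbes.GetInterpolatedProbe(transform.position, null, out probe);

        //evaluate the probe in the view direction and take the luminosity of the result
        Vector3[] directions = { transform.forward };
        Color[] results = new Color[1];
        probe.Evaluate(directions, results);
        Color c = results[0];
        float luminance = c.r * 0.2126f + c.g * 0.7152f + c.b * 0.0722f;

        //feed it to the shader as the exposure divisor (hypothetical uniform name)
        Shader.SetGlobalFloat("_PA_Exposure", Mathf.Max(luminance, 0.01f));
    }
}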

This whole exercise was very much a reminder of how awesome the gamedev community is: people share information and techniques so freely, and everyone moves the industry forward. Moving forward, I’d get rid of the GetComponent() call that’s happening every frame, and consider only performing one raycast per frame, iterating over the lights in the scene.

[embedded video]

Smoother Bee Pickup

I switched the bee pickup mechanic back to a fixed joint from a spring joint. I had been working out how much stamina the bees should lose based on the stretch in the spring; this was supposed to encourage teamwork, since bees trying to move the same object in different directions would lose stamina at double the rate.

[embedded video]

The problem I was having was with mass: if I picked up a piece of meat on a spring, I would expect the meat to bounce on the spring and my hand to stay pretty still. In BeeBeeQ the bees have a much lower mass than the meat, so they ended up bouncing back onto the meat when they tried to lift it. Now it’s a fixed joint, and stamina loss is based on the user’s input, the mass of the carried object and the number of bees carrying it. The result should be more predictable stamina loss, but more importantly smoother and more predictable movement.
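
As a sketch, that drain calculation might look like the following; the names and the exact formula are illustrative rather than the shipped tuning:

using UnityEngine;

public static class BeeStamina
{
    //hypothetical per-second stamina drain while an object is held on a fixed joint
    public static float DrainPerSecond(float inputMagnitude, float carriedMass, int beesCarrying)
    {
        const float baseRate = 1f; //illustrative tuning constant
        //more input and heavier objects drain faster; sharing the load between bees drains each one slower
        return baseRate * inputMagnitude * carriedMass / Mathf.Max(beesCarrying, 1);
    }
}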

Bits and Bobs

BeeBeeQ was running slow, so I dug into the profiler and found some OnWillRenderObject calls that were having an impact; moving these into the update loop helped performance. I also removed any of my own code that was causing allocations, and for now it’s all back to 90 fps or more. Also, some tools now make velocity-based “swoosh” noises when the chef swings them around.

So that’s another couple of weeks and I’ve written too much again. Thanks for reading, and I’ll try to make it a bit briefer next time.

5 quick tips when building VR focused environments (Tue, 17 Jan 2017)

I just wanted to outline a few of the things I’ve discovered that have helped us keep our environments feeling rich while still working with the up-to-six-camera setup we have for BeeBeeQ.

2×2 meters and 5×5 meters: these are your magic boxes. Keep your VR player in the 2×2 meter box and your story in the 5×5 meter box.

Realtime reflections are not always your friend in VR; all that extra rendering can draw much-needed power away from making your game playable, so utilise reflection probes, and for anything that can be built with a straight edge, use a box projected reflection probe. Below you can see the lake from our park level, which ordinarily would probably have a slight organic wobble to it, but given the distance it is a fair trade-off to build the lake edge straight and use a box projected reflection probe. Aside from the signs not reflecting, you wouldn’t know the difference, and the signs can be duplicated, flipped vertically and rippled with a vertex offset in the shader to solve that (see the sketch below).
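
That ripple can be a couple of lines in the vertex function of the flipped copies’ material; a minimal sketch, with made-up property names:

//inside the vertex function of the flipped "fake reflection" sign copies
//(_RippleSpeed, _RippleFrequency and _RippleAmplitude are made-up material properties)
float ripple = sin(_Time.y * _RippleSpeed + v.vertex.y * _RippleFrequency) * _RippleAmplitude;
v.vertex.x += ripple; //a small horizontal offset gives the water-wobble look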

The details. Your players are going to get extremely close to everything you make, and this means a few things. Firstly, your textures have to be high res; fortunately, VR-ready graphics cards have plenty of memory for you to fill with textures, but if your textures can’t be high res then they are going to need to tile well. In BeeBeeQ we use a combination of both, and where tiling is visible in the distance we use maps in the DetailAlbedoMap channel, usually clouds or grime, something to break up the texture. The important thing is to make sure your players aren’t seeing pixels. Below you can see grass in the foreground and the same texture on a different material in the background with the DetailAlbedoMap. Likewise, in reverse, use modelled assets to create your tileable textures, then use the modelled assets to break up the textures around important game areas.

If at all possible, choose a simple style for your textures; high-contrast, high-detail textures won’t hold up as well close up compared to simple gradients and basic colours.

Light rays can be expensive, so model the shapes you need in non-dynamically-lit areas and use the built-in additive particle shader for simple light rays, or build your own shader for more elaborate effects. Finish the effect off with an emission texture to match where the light is casting onto the carpet.

Thanks for reading, I hope some of this helps.

Devlog 8 / Experiments in VR Player/Environment Collisions (Fri, 13 Jan 2017)

So I finally found some time to get creative with some VR UX, to improve how held objects behave when the VR player puts their hands somewhere they shouldn’t.

It’s pretty normal in VR (and in any game using a physics engine) to expect that an object acting under gravity will be prevented from intersecting unrealistically with other objects and the virtual environment by physics engine collisions. But in current generation VR the player can put their hands wherever they want, unconstrained by the virtual environment, and the behavior of a held object in that circumstance can’t really be resolved by the physics engine (as shown by the gif below) and is not totally established in VR as a whole yet.

[embedded gif]

Inspiration

One thing I spotted while playing “I Expect You To Die” (mostly in the office area, because in the levels I was too absorbed to do any analysis) was how collisions behaved differently if an object was held: a held object would be allowed to pass through the environment just as the hand does, and by setting it up that way the virtual hand and my real hand were always in sync, which just felt right.

Implementation

I went through a few iterations to work this into BeeBeeQ, which ended up in a neat solution, and now, having finished it, it feels like it should have been obvious… but this is gamedev; I’m not kicking myself too hard.

So my first thought was just to disable all collisions between the environment and the tool while it’s held, basically what “I Expect You To Die” does (though they disable all collisions, with some extra speed-based stuff, as described in this far more interesting blog post). This worked for them, but in BeeBeeQ it would have allowed the VR player to take a swing at bee players through walls; in order to keep things fair, a held object that’s intersecting a wall (or any part of the environment) should not be able to hit the bees.

After experimenting with changing layers in OnCollisionEnter (and then not being able to detect OnCollisionExit), then changing the colliders to triggers in OnCollisionEnter and back in OnTriggerExit (OnTriggerExit was not reliably called), I eventually found a solution I feel works.

In Awake of an interactive object, I duplicate all the colliders that make it up (we keep colliders and render objects separate, so this doesn’t result in any extra geometry) and set the duplicates to be triggers. When the object is held, the non-trigger colliders are set to a layer that doesn’t collide with the environment; the triggers still do. In OnTriggerEnter with an environment (static) collider, I store the static collider in a list and change the non-trigger colliders’ layer again, to one that doesn’t collide with the environment or the bees. Then in OnTriggerExit I remove the static collider from the list, and if the list is empty I re-enable collisions with the bees. Finally, when the object is dropped, I re-enable collisions with the environment.
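
A minimal sketch of that flow, assuming made-up layer names, that the trigger duplicates already exist, that the solid colliders live on their own child objects (so their layers can change independently of the triggers), and that the script sits next to the object’s Rigidbody so child trigger events reach it:

using System.Collections.Generic;
using UnityEngine;

public class HeldObjectCollisions : MonoBehaviour
{
    [SerializeField] List<Collider> solidColliders = new List<Collider>(); //the non-trigger colliders
    readonly List<Collider> overlappingEnvironment = new List<Collider>();
    int defaultLayer, heldLayer, intersectingLayer, environmentLayer;

    void Awake()
    {
        defaultLayer = gameObject.layer;
        //hypothetical layers: "HeldObject" ignores the environment,
        //"HeldObjectIntersecting" ignores the environment AND the bees
        heldLayer = LayerMask.NameToLayer("HeldObject");
        intersectingLayer = LayerMask.NameToLayer("HeldObjectIntersecting");
        environmentLayer = LayerMask.NameToLayer("Environment"); //hypothetical layer for static scenery
    }

    public void OnPickedUp() { SetSolidLayer(heldLayer); } //stop colliding with the environment

    public void OnDropped()
    {
        overlappingEnvironment.Clear();
        SetSolidLayer(defaultLayer); //restore normal collisions with everything
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject.layer != environmentLayer) return; //only environment colliders matter here
        overlappingEnvironment.Add(other);
        SetSolidLayer(intersectingLayer); //while intersecting, the object can't hit bees either
    }

    void OnTriggerExit(Collider other)
    {
        if (!overlappingEnvironment.Remove(other)) return;
        if (overlappingEnvironment.Count == 0)
            SetSolidLayer(heldLayer); //clear of the environment, bees are fair game again
    }

    void SetSolidLayer(int layer)
    {
        foreach (Collider c in solidColliders)
            c.gameObject.layer = layer;
    }
}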

The flow actually works very well, and though I don’t like how it needs double colliders, I can live with that.

The last thing to do was provide some visual feedback to the VR player to let them know they are performing an illegal move. This is done by changing the object’s material to a two-pass Fresnel shader: one pass renders the un-intersected area with ZTest LEqual, and the second pass renders the intersected area slightly more transparent with ZTest Greater. This shows the object through the environment but still makes it easy to see where the intersection is happening.
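
The skeleton of that kind of shader looks something like this; a sketch, with the Fresnel fragment reduced to the bare minimum and made-up names, not Popup Asylum’s actual shader:

Shader "Hypothetical/HeldObjectFeedback"
{
    Properties
    {
        _Color ("Tint", Color) = (1,1,1,1)
        _FresnelPower ("Fresnel Power", Float) = 3
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off

        CGINCLUDE
        #include "UnityCG.cginc"

        fixed4 _Color;
        float _FresnelPower;

        struct v2f
        {
            float4 pos : SV_POSITION;
            float3 worldNormal : TEXCOORD0;
            float3 worldViewDir : TEXCOORD1;
        };

        v2f vert (appdata_base v)
        {
            v2f o;
            o.pos = UnityObjectToClipPos(v.vertex);
            o.worldNormal = UnityObjectToWorldNormal(v.normal);
            o.worldViewDir = WorldSpaceViewDir(v.vertex);
            return o;
        }

        fixed4 fresnel (v2f i, float alphaScale)
        {
            //more opaque at glancing angles, so the silhouette reads clearly
            float f = pow(1 - saturate(dot(normalize(i.worldNormal), normalize(i.worldViewDir))), _FresnelPower);
            return fixed4(_Color.rgb, f * _Color.a * alphaScale);
        }
        ENDCG

        Pass //the un-intersected part of the object
        {
            ZTest LEqual
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            fixed4 frag (v2f i) : SV_Target { return fresnel(i, 1.0); }
            ENDCG
        }

        Pass //the part hidden behind the environment, slightly more transparent
        {
            ZTest Greater
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            fixed4 frag (v2f i) : SV_Target { return fresnel(i, 0.5); }
            ENDCG
        }
    }
}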

[embedded video]

On a lucky day I get two hours to work on BeeBeeQ, and this all happens pretty much automatically for any new interactable we add, which is exactly what I needed. Whether all of this will make it into the final game or not is going to depend on some intensive play testing, but so far I like this behavior a whole lot better than watching the physics engine struggle to maintain order!

The glass I see in games is usually handled in one of three ways. Most commonly it looks like it’s using an alpha blended material, possibly with cubemap based reflections. Sometimes transparency is forgone entirely, simply rendering the glass as a highly reflective opaque surface. Games with a bit more graphics performance to spare might render the opaque geometry first to a texture, then read that texture when rendering the glass surfaces, applying UV offsets to achieve some distortion. This is a nice effect, but it has a high-ish performance cost, can only refract objects that are on screen, and must be done for each camera, of which BeeBeeQ has up to six (Adrian Courrèges @ado_tan has written a great breakdown of Doom’s implementation of this glass style).

The Problem

BeeBeeQ is a VR game, so we want to hit a constant 90fps to prevent any sickness. All glass surfaces in BeeBeeQ can be seen by both the VR player’s and the bees’ cameras, and the user can pick up glass objects to inspect them closely, so it’s got to look good.

The Solution

Using box projected refraction seemed like a good way to leverage Unity’s built-in reflection probe system, where each renderer is automatically supplied with a cubemap representation of its local environment and, optionally, parameters for a proxy geometry box and a second probe for blending. This is normally used to look up a reflection texture from the cubemap based on the view direction vector reflected on the model’s normal. All that’s needed for refraction is to pass a world space position, a world space direction and a value for roughness, and having implemented it I think the result adds an extra level of interest that I like.

PLAY

Unlike the previous post here, I’m not really aiming for physically based realism, just something that looks glassy. The code I used is worked into the vr_standard shader from Valve’s The Lab Renderer on the Asset Store, but the variable names are easy to understand and the math is universal.
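
The full listing isn’t included in this excerpt, but the core of a box projected refraction lookup using Unity’s built-in probe data looks something like the following (a sketch assuming UnityCG.cginc and UnityStandardUtils.cginc are included; _PA_RefractionIndex and the roughness handling are illustrative):

//refract the view direction on the surface normal instead of reflecting it
float3 refractionDir = refract(-worldViewDir, worldNormal, 1.0 / _PA_RefractionIndex);
//remap the direction onto the probe's proxy box so the lookup lines up with the room
refractionDir = BoxProjectedCubemapDirection(refractionDir, worldPos, unity_SpecCube0_ProbePosition, unity_SpecCube0_BoxMin, unity_SpecCube0_BoxMax);
//sample the probe, blurring by roughness via the mip chain, and decode the HDR value
half4 encoded = UNITY_SAMPLE_TEXCUBE_LOD(unity_SpecCube0, refractionDir, roughness * UNITY_SPECCUBE_LOD_STEPS);
half3 refractionColor = DecodeHDR(encoded, unity_SpecCube0_HDR);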

Custom Refraction Probes

Just to extend it a bit, I added a variant to which I could pass my own cubemap and box parameters rather than Unity’s built-in ones. This allows objects to refract a different probe to the one they reflect; in the image below, the window is rendered as opaque geometry reflecting the garden while refracting a cubemap generated from the kitchen level, so we don’t have to load any of the kitchen geometry to have the environments feel connected.

[embedded video]

Taking it further

I’m happy with the solution for now, but there are plenty of cases where it couldn’t hurt to take it a bit further. Rather than using uniforms for the parameters, a texture could be used. Particularly in the case of the wine glass above, rays entering the stem and the rim of the glass would exit in a very different way to those entering the bowl; the bowl would have a _PA_NormalInfluence value of zero, since the inside and outside of the bowl glass act like a shell, while the stem and rim could have a _PA_NormalInfluence value of 1 to make them feel like solid glass. The _PA_Thickness value could also be baked into a texture to get a bit more accuracy and variation. While both of these are not hard to implement, generating the assets (particularly a texture for _PA_NormalInfluence) could be tricky and would add yet another texture to look up and keep in memory.

Physically Based Planar Refraction Shader (Sat, 29 Oct 2016)

A while ago I spent some time figuring out how to get glass “aquarium tank” style refraction working in a vertex shader targeting mobile VR, and I thought I’d share the results.

Background

I’m lucky enough to be working a great deal with desktop VR currently, both at Popup Asylum and my day job, but there have been a lot of mobile VR developments recently and I wanted to explore the capabilities in that area, thinking about what effects could be achieved on restricted hardware that might not have been tried before. With Popup Asylum having a fairly large library of mobile-ready underwater assets at my disposal, as well as PA Particle Field being fairly adept at handling large schools of fish, I began to consider a single-room non-Euclidean aquarium that would show off the assets but also fit well with the design and performance restrictions of mobile VR.

With that idea in mind I started googling aquariums for reference, and a common visual factor of aquarium videos was the way the glass refracted the contents of the tank. Refraction is usually considered a heavy effect, and I like a challenge, so I decided this would be a good place to start.

What struck me was that this kind of planar refraction could be achieved in the vertex shader, as each point in the real image maps to a single point in the refracted image, unlike rippled water and distorted glass, where multiple points in the refracted image can map to a single point of the real image. By defining a geometric plane with a refractive index in the shader, we can achieve a ray traced style refraction. The Refract function below is commented, but I’ll go through it step by step.

Shader Function

The function takes three arguments: the vertex position (or any position) in world space, a float4 describing the normal and position of the plane, and 1/refractive index. It returns a refracted point for that position in world space.
View the code on Gist.

Step by Step

The function starts by setting up the variables that define the initial ray, namely a position and a direction:

//ray origin is the camera position
float3 viewerPosition = _WorldSpaceCameraPos.xyz;
//ray end is the vertex's undistorted position
float3 vertexPosition = position;
//get the vector from the camera to the vertex
float3 worldRay = vertexPosition - viewerPosition;
//normalize it for direction
float3 worldRayDir = normalize(worldRay);

This takes the built-in _WorldSpaceCameraPos variable as the ray start, and the direction from the camera to the vertex as the ray direction.

Next the plane is defined, which consists of a normal and a position in world space along that normal:

//surface is a vector4 that defines a plane
float3 worldPlaneNormal = surface.xyz;
//define a known position on the plane
float3 worldPlaneOrigin = worldPlaneNormal * surface.w;

Now that the initial ray and plane are defined, the ray direction is refracted on the plane normal with the refractive index; cg/hlsl/glsl has a built-in function for this:

//get the vector result of the worldRay entering the water
float3 refraction = refract(worldRayDir, normalize(worldPlaneNormal), refractionIndex);

This gives us the direction that a ray crossing the plane would take. Normally in a ray tracing engine this direction would be queried to find where it intersects with some geometry, and the resulting pixel color returned; in this case we want to do the opposite. We already have the pixel color (it will be looked up in the fragment shader); what we need to know is where to draw it on screen. This can be approximately achieved by performing a ray-plane intersection from the vertex position to the plane, in the reversed refracted ray direction:
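
(The snippet for this step isn’t in the excerpts above, so this is a reconstruction from the surrounding definitions.)

//reverse the refracted direction and intersect it with the plane, starting from the vertex
float3 reversedRefraction = -refraction;
//standard ray-plane intersection: distance along the reversed ray to the plane
float hitDistance = dot(worldPlaneNormal, worldPlaneOrigin - vertexPosition) / dot(worldPlaneNormal, reversedRefraction);
float3 intersection = vertexPosition + reversedRefraction * hitDistance;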

Finally, getting the position from the camera through that intersection point, at the initial ray’s length, gives the refracted position:

//get the vector from the camera to the intersection, this is the perceived position
float3 originToIntersection = intersection - viewerPosition;
//starting from the camera, move along the perceived position vector by the original ray length
return viewerPosition + normalize(originToIntersection) * length(worldRay);

This can then be fed into the rest of the vertex shader.

The result is a refraction that behaves realistically and displays the geometry from a slightly different viewpoint, as a real refraction does. This isn’t totally accurate, since the refracted ray is calculated based on the initial ray direction; for full accuracy the refracted ray would need to be calculated using the vector from the camera to the intersection position, but this would require iteration and I felt the result was close enough without it.

This results in a very clean refraction, like the glass of a fish tank, but for a distorted refraction like rippled water (still planar overall) I would still use this approach. Usually a distorted refraction is a render texture of the scene from the current camera, looked up with UV offsets sourced from a texture; it’s not physically based at all, but it creates a nice mock refraction distortion effect. The style of refraction outlined above could be used to generate the render texture with some degree of realism, then the mock distortion effect could be applied to that texture.

The only other thing to add here is that it’s now quite easy for something to be unintentionally culled because it’s outside the camera’s regular field of view but still in view through the refraction. To prevent the object being culled, I used a behaviour that moves the object to its refracted position before the camera’s culling runs.
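
A minimal sketch of that behaviour, porting the shader’s Refract function to C# and assuming the plane is encoded the same way as the shader’s surface float4:

using UnityEngine;

public class RefractionCullingFix : MonoBehaviour
{
    public Vector4 surfacePlane;               //xyz = plane normal, w = distance along it
    public float refractionIndex = 1f / 1.33f; //1/refractive index, as in the shader
    Vector3 originalPosition;

    void OnEnable()  { Camera.onPreCull += MoveToRefracted; Camera.onPostRender += Restore; }
    void OnDisable() { Camera.onPreCull -= MoveToRefracted; Camera.onPostRender -= Restore; }

    void MoveToRefracted(Camera cam)
    {
        originalPosition = transform.position;
        transform.position = RefractPosition(originalPosition, surfacePlane, refractionIndex, cam.transform.position);
    }

    void Restore(Camera cam) { transform.position = originalPosition; }

    //C# port of the vertex shader's Refract function
    static Vector3 RefractPosition(Vector3 position, Vector4 surface, float eta, Vector3 viewerPosition)
    {
        Vector3 worldRay = position - viewerPosition;
        Vector3 worldRayDir = worldRay.normalized;
        Vector3 planeNormal = new Vector3(surface.x, surface.y, surface.z).normalized;
        Vector3 planeOrigin = planeNormal * surface.w;

        //cg-style refract() of the view ray on the plane normal
        float cosi = -Vector3.Dot(planeNormal, worldRayDir);
        float k = 1f - eta * eta * (1f - cosi * cosi);
        if (k < 0f) return position; //total internal reflection, leave the object alone
        Vector3 refraction = eta * worldRayDir + (eta * cosi - Mathf.Sqrt(k)) * planeNormal;

        //intersect the reversed refracted ray with the plane from the vertex position
        float t = Vector3.Dot(planeNormal, planeOrigin - position) / Vector3.Dot(planeNormal, -refraction);
        Vector3 intersection = position + -refraction * t;

        //perceived position: along the camera-to-intersection direction at the original ray length
        Vector3 originToIntersection = intersection - viewerPosition;
        return viewerPosition + originToIntersection.normalized * worldRay.magnitude;
    }
}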

#UnityTips – Preview Frustum Culling (Tue, 12 Jan 2016)

Thought I’d share this script. Unity does frustum culling based on each renderer’s bounds and the camera’s field of view and near and far clipping planes. This little script, attached to a camera, lets you preview that camera’s culling in the Scene view, which is useful for optimizing and debugging your levels.
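
The original script isn’t embedded here, but a minimal version of the idea, using GeometryUtility to run the same bounds-versus-frustum test Unity’s culling uses and tint each renderer’s bounds in the Scene view, might look like this:

using UnityEngine;

[ExecuteInEditMode]
[RequireComponent(typeof(Camera))]
public class PreviewFrustumCulling : MonoBehaviour
{
    void OnDrawGizmos()
    {
        Camera cam = GetComponent<Camera>();

        //draw the camera's frustum in the Scene view
        Gizmos.color = Color.white;
        Gizmos.matrix = Matrix4x4.TRS(transform.position, transform.rotation, Vector3.one);
        Gizmos.DrawFrustum(Vector3.zero, cam.fieldOfView, cam.farClipPlane, cam.nearClipPlane, cam.aspect);
        Gizmos.matrix = Matrix4x4.identity;

        //green = would be rendered, red = would be frustum culled
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(cam);
        foreach (Renderer r in FindObjectsOfType<Renderer>())
        {
            Gizmos.color = GeometryUtility.TestPlanesAABB(planes, r.bounds) ? Color.green : Color.red;
            Gizmos.DrawWireCube(r.bounds.center, r.bounds.size);
        }
    }
}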