Tag Archives for 3d Graphics

Those of us privileged enough to own a Virtual Reality (VR) headset have surely shown VR to someone who hasn't experienced it before. I like to tag these people as VR first-timers (jokes; I don't actually like labelling things, but for the sake of this post I will). Anyway, back to the subject at hand.

I own an HTC Vive and it's currently set up in my spare bedroom at home. I feel very privileged to own this hardware, and to actually have a PC that runs it, so I take advantage of this by letting anyone have a go on my VR rig.

Most of my family and friends have never heard of VR; they aren't exactly geeky tech heads like myself. One of my friends thought it was some weird spy camera rig. Having explained that it's the next big thing called Virtual Reality, my friend still didn't get it and just brushed it off, and I had to plead with him, practically begging, just to give it a go. I had a similar experience with my girlfriend as well.

I have demoed VR to a lot of people at various technology conferences. Typically at a Unity booth we would have some form of VR demo set up, whether it's an Oculus Rift, HTC Vive or GearVR. The advantage at a technology conference is that I don't have to beg people to try it.

What's the difference between the three? Quite drastic, in my opinion, and the question is relevant to this post because you need to think about what type of VR experience you would like to show a VR first-timer. I don't want to delve into it too deeply, but in simple terms: the Oculus Rift is a sitting or standing VR experience connected to a PC; GearVR is a mobile experience, a headset with a slot for your Samsung device; and the HTC Vive (saving the best 'till last) is a room-scale VR headset connected to a PC (it can also be a standing or sitting experience).

GearVR is probably the most accessible VR experience on the market at present; with hundreds of millions of VR-ready Samsung devices in the hands of consumers, Oculus and Samsung are in a very powerful position. However, I personally believe in using the highest-quality experience when showing VR to newbies, so that leaves the Oculus Rift and the HTC Vive. In my experience, demoing mobile-quality VR to first-timers hasn't delivered the same experience as the other headsets. I would love to hear if someone has seen the opposite, so please comment on this post below.

Let's start with the Oculus Rift. It delivers a higher level of immersion; from my experience demoing to consumers, reactions have been along the lines of higher-quality graphics and better immersion. This is partly due to the technology, which allows developers to push the boundaries of immersive experiences, considering the Rift runs off a high-end PC and not a mobile device (the same goes for the HTC Vive). I'm not saying GearVR apps cannot create highly immersive experiences; immersion is not defined by graphics quality or technology, but they can help.

Moving on to the HTC Vive: in my opinion, from my experience demoing to VR first-timers and talking to professional developers, the HTC Vive delivers the best VR experience on the market. This is partly due to the room-scale features it offers, which are a complete game changer. Being able to physically move around the virtual room or space you are experiencing is extremely powerful, and it's a superb feature for us developers to experiment with in new game designs.

What's really strange is that the ability to move around a virtual room seems to confuse consumers at first. Almost every VR first-timer who tried the HTC Vive didn't think about walking around the room; it seems to come across as an unnatural thing to do, which is understandable when you think about it. Almost all core gaming experiences are static; since the beginning of video games you have never had the opportunity to physically move around a virtual space to affect the gameplay. The Wii was really the first global success in using physical movement to affect gameplay (is there a term for that?), but you were limited because you had to face the TV.

Content on the HTC Vive has introduced various input designs that let you traverse virtual environments in different ways. There have been a few talks on this, and it's a constant topic at present, with many developers experimenting with new ideas and defining input standards within VR. Teleportation is one common technique, but it doesn't really work for all experiences; the ability to teleport already defines the genre of your game (surely your game has some sci-fi based advancements, right?). As teleportation is not a natural experience for a human, it can break the immersion of the content. On the positive side, VR empowers you to experience what teleportation might feel like, so in reality it's a win-win situation for the consumer.
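For the curious, the core of a teleport mechanic is very little code. Here's a minimal sketch of the idea in Unity C# (a hypothetical component of my own, not from any shipped title; a real Vive setup would query the SteamVR controller's trigger instead of the placeholder input axis used here):

```csharp
using UnityEngine;

// Minimal teleport sketch: aim a controller at the floor, press a
// button, and move the play-area rig to the hit point.
public class SimpleTeleporter : MonoBehaviour
{
    public Transform playArea;    // root of the play-area rig to move
    public Transform pointer;     // controller transform used for aiming
    public float maxDistance = 10f;

    void Update()
    {
        // Placeholder input for illustration; swap in the real
        // controller trigger query for an actual Vive project.
        if (Input.GetButtonDown("Fire1"))
        {
            RaycastHit hit;
            if (Physics.Raycast(pointer.position, pointer.forward, out hit, maxDistance))
            {
                // Keep the rig's height; only translate it horizontally.
                playArea.position = new Vector3(hit.point.x, playArea.position.y, hit.point.z);
            }
        }
    }
}
```

Most polished implementations add an arc renderer and a validity check on the target surface, but the move itself is just repositioning the rig.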

From a VR first-timer's perspective, the teleportation technique completely freaks people out at first. I tend to show first-timers Valve's demo "Aperture Robot Repair", which has a cool teleportation implementation, but it takes a few tries to get used to; it's especially amusing the first time they use it and mess it up by teleporting up against a wall, which gives them quite a shock. Other implementations involve various techniques which I won't go into in this post.

My brother tried a VR experience which placed the player up on a high beam; this totally freaked him out. Having a fear of heights didn't help, but to my amusement, watching him try to balance himself on a flat surface (i.e. the bedroom floor) was rather entertaining.

So, moving on from this, I would highly recommend showing VR first-timers the HTC Vive. It really delivers the best experience VR can currently offer, and the room-scale capabilities enhance the experience tenfold.

I start with theBlu: a cinematic experience which introduces VR first-timers to room-scale VR with no interaction required, limiting the demands on the senses and allowing them to adapt to the virtual environment and the power of room-scale VR. Everyone I have shown this to has been blown away by it, especially when a big whale swims by. Generally people are super convinced at this point.

The next experience adds an extra layer to room-scale VR with the ability to interact with objects in a virtual room. The human senses are going a bit more crazy at this point; I usually get a lot of nervous laughter and "whoa!!" outbursts. It also introduces the teleportation input technique, which freaks people out at first. Generally, feedback has been very positive with this experience.

Third is an artistic VR experience, empowering the user to be creative with painting and drawing in VR; the user can move around and paint strokes or objects into a virtual space. By this point a majority of users have adapted to room-scale VR and everything is starting to feel more natural. As far as I know there's no teleportation technique implemented.

After they are done with these three experiences, I offer a choice of content. This means they can pick something that might interest them, or something that plays on an emotion such as fear, as in horror games or a fear of heights. Here is my list, in no particular order:

Job Simulator

The Rose and I

Fantastic Contraption

Space Pirate Trainer

AudioShield

The Brookhaven Experiment

CloudLands: VR Minigolf

Skeet: VR Target Shooting

Selfie Tennis

The Lab

I acknowledge content and people's interests play a big part in personal VR experiences, and notice I said "personal": from what I've seen, everyone's VR experience is unique and personal. People react to different experiences differently, and it can affect them emotionally, making them excited, nervous, amused, scared, peaceful, etc. It would be great to hear what content you like to show VR first-timers, and the reactions you've seen, so please comment below.

So what have I learnt exactly? Let's summarise:

Demo the best technology on the market i.e. HTC Vive with room-scale VR

Ask the individual if they have any fears before letting them play

Always ensure the player doesn’t step on the headset cable

Start with the most basic / limited experience, for example theBlu, which is awesome

Notice I haven't covered nausea in this post; content on the HTC Vive and the technology itself shouldn't cause it, though some people may still experience it. I haven't had one person experience nausea with the HTC Vive, but I'm sure someone out there has.

Unity 5 has implemented a HUGE update on the rendering / graphics side of the engine, introducing new lighting workflows with realtime GI and a Physically Based Shader (Metallic & Specular workflows supported), among many other things.

I wanted to do an experiment today, where I test out HDR skyboxes in Unity 5 to see how drastically the lighting and mood of a scene can change.

HDR:

High Dynamic Range is a photography technique used to produce an image with a higher dynamic range of brightness. This is achieved by capturing different exposures of your chosen subject matter and combining them into one image.

In Unity we can use these HDR images to help blend 3D models into the environment, which can drastically add to the belief that the 3D model is actually in the environment.

I'll be showing how these three HDR images help create a completely different feel and look for the scene:

I have daytime, evening and night-time HDRs; they should all create drastically different lighting conditions in Unity.

Time’s up chumps..let’s do this:

Unity 5's new Standard shader has built-in support for image-based lighting (IBL), which really helps sell the belief that the 3D model is in the environment I am setting with the HDR skybox.

Here’s the model with just Unity 5’s current default procedural skybox:

With a bit of trickery (not really, just pushing a few buttons in Unity), let’s now see what happens when I add the daytime HDR Skybox:

I haven't messed with any lights at all; all I have is the default Directional Light in the scene with default settings. It looks rather impressive after just adding a new HDR skybox to the scene.
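The swap itself is just a few clicks in the Lighting settings, but for anyone who wants to do it at runtime, here's a small sketch (the material fields are placeholders you'd assign in the Inspector; `RenderSettings.skybox` and `DynamicGI.UpdateEnvironment` are the relevant Unity APIs):

```csharp
using UnityEngine;

// Swap the scene's skybox material at runtime and refresh the
// environment lighting so the ambient / GI contribution updates too.
public class SkyboxSwapper : MonoBehaviour
{
    public Material daytimeSky;
    public Material eveningSky;
    public Material nightSky;

    public void SetSky(Material sky)
    {
        RenderSettings.skybox = sky;
        // Re-evaluate ambient lighting from the new skybox.
        DynamicGI.UpdateEnvironment();
    }
}
```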

Let’s try the evening HDR Skybox now:

Awesome, I am really enjoying playing around with this, I love how quickly I can completely change the lighting conditions which in turn drastically changes the mood and atmosphere in the scene. I can’t wait to see how games are going to utilize these new features.

Okay, last one: the night-time HDR skybox. I'm a little skeptical about this one; I've no idea how it will look, and I imagine a night-time HDR image isn't really desirable to use. Anyway, let's see what it looks like:

It actually turned out rather well. The image doesn't show the white spots on the model as much as the editor does; these white spots are produced by the specular smoothness on the Standard shader, i.e. the smoother I make it, the more white-spot artifacts appear. I'm not sure why this is more noticeable in the night-time scene; I'm sure there's a setting I've missed or something.

Overall, I'm really impressed with this, especially how quickly I can change the mood and lighting of the scene, and with the visual output from Unity 5's new rendering / graphics update.

My job is based on supporting Unity's customer base in the EMEA region; to do a good job, that means I need to learn all of the Unity things, features and new services, at least at a high level.

I tend to experiment a lot with Unity's new features, and today I wanted to share some of my R&D with Unity 5's new graphics overhaul, including the new Physically Based Shader, Realtime Global Illumination and the new Scene Render Settings.

My experiments are usually aimed at producing a small-scale demo that squeezes in as many features as possible, which enables me to demonstrate these features easily to customers while in the field.

PBS (Physically Based Shader):

Unity's new Physically Based Shader, a.k.a. one shader to rule them all, a.k.a. the Standard shader (its actual name), allows us to create materials for a wide range of natural surfaces: metals and dielectrics (non-metals) such as rock, water, plastic, ceramic, wood and glass, as well as cloth and organic surfaces.
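You normally tweak these materials in the Inspector, but the same properties can be driven from code. A small sketch (the `_Metallic` and `_Glossiness` property names belong to the Standard shader's Metallic workflow; the values here are arbitrary examples):

```csharp
using UnityEngine;

// Configure a Standard shader material (Metallic workflow) from code
// and assign it to this object's renderer.
public class MaterialSetup : MonoBehaviour
{
    void Start()
    {
        Material mat = new Material(Shader.Find("Standard"));
        mat.color = new Color(0.8f, 0.8f, 0.85f);
        mat.SetFloat("_Metallic", 1.0f);   // fully metallic surface
        mat.SetFloat("_Glossiness", 0.7f); // smoothness slider
        GetComponent<Renderer>().material = mat;
    }
}
```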

The new PBS also plays nicely with IBL (Image Based Lighting); we can set up a skybox cubemap in the new Scene Render Settings to really help blend our objects into the surrounding environments:

One demo (not developed by me) shows a nice range of different surfaces used by the new standard shader in Unity 5:

We can see there are at least six different surfaces represented here with the usage of just one shader: ceramic, cloth, metal, glass, rock and wood. The Scene Render Settings really help blend the Doll model into the surrounding area, helping us believe that the Doll is in the forest environment.

The new shader includes many different texture slots, allowing you to add really nice detail to models. Under the hood it is compiled into many smaller shader variants, with versions for mobile and high-end platforms.

Our built-in shader code is always available for download, and with Unity 5 this will include the new Standard shader as well (this could change, but I doubt it).

Geometry needs to be marked lightmap static, but you can relight dynamic geometry using Light Probes that are updated in realtime with the GI generated from the static geometry. In my little demo I've combined Realtime GI, PBS, Reflection Probes and Light Probes, with the majority of the objects marked as static, apart from a few props which demonstrate the usage of Light Probes for non-static objects:
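For the non-static props, the only scripting detail worth noting is making sure the renderer actually samples the baked probes. A tiny sketch (in Unity 5 this is the `useLightProbes` flag; later versions replaced it with `lightProbeUsage`):

```csharp
using UnityEngine;

// Ensure a dynamic prop samples the baked Light Probes so it picks up
// the GI generated from the surrounding static geometry.
public class ProbeLitProp : MonoBehaviour
{
    void Start()
    {
        Renderer r = GetComponent<Renderer>();
        r.useLightProbes = true; // Unity 5 API; on by default for new renderers
    }
}
```

The probe positions themselves are placed in the editor with a Light Probe Group component.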

A couple of shout-outs: for most of the props I've used the Medieval Assets Pack by Naresh and the Medieval Boat from Mr Necturus, both available on the Asset Store. The wooden cottage model centerpiece is a free model from some site I can't remember.

Here's a Vine I recently sent out; it's a little outdated compared to the above screenshots, but it demonstrates the realtime GI in the editor with play mode on:

There's more of this to learn as Unity 5 develops through the beta stage. Note: screenshots are from a beta version of Unity 5 and things may look different when it's released.

I must say I experiment a lot with lightmapping in Unity using Beast, and I have never written anything about it, so I wanted to share what it is and how it might be useful to some of you.

Lightmapping has been around since the time of Quake, the first video game to utilize it, and Unity has integrated the Beast lightmapping technology into both the Pro and free versions. Yes, you can use expensive rendering power for free in your games. I know what you're thinking: this is awesome. And it is; lightmaps can really bring your environments to life.

What are lightmaps?

If you have read this far without knowing what lightmaps are, I'll give a quick overview: a typical lightmap is light data stored in a texture and applied to static objects in your scene. Lightmaps in Unity take meshes, textures, materials and lights into account.

Single lightmap mode uses the far lightmap only; this is a baked texture, so it's pre-computed and contains full illumination.

Dual lightmaps use a near and a far lightmap texture. The near lightmap texture contains indirect illumination from lights set to Auto, full illumination from lights set to BakeOnly, emissive materials and sky lights.

The near lightmap is only used within a set distance of the camera; inside that distance, Auto lights are rendered in realtime, allowing for specular, bump and realtime shadows. This is not a feature of single lightmaps.

On the right-hand side of the image showing dual lightmapping, the near lightmap texture is used because we are within the camera's distance, and it shows both specular and bump as if it were realtime lighting. If we were to pan the camera away from the leaf we would eventually stop seeing the bump and spec, because it falls back to the far lightmap, which is shown on the left.

Directional lightmapping in Unity can bake specular and normal maps into the texture without using any realtime lighting. The lightmaps encode the direction of the light so that the pixel shader can vary the lighting depending on the per-pixel normal, and specular is calculated for the average light direction.

The image below shows baked specular using directional lightmaps. I have a simple mouse-orbit script attached to the camera, which allows me to see the specular highlights on all the cubes in the scene.
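The orbit script is nothing fancy; something roughly along these lines does the job (a minimal sketch, not the exact script I used):

```csharp
using UnityEngine;

// Minimal mouse-orbit camera: hold the left mouse button and drag to
// rotate the camera around a target at a fixed distance.
public class MouseOrbit : MonoBehaviour
{
    public Transform target;
    public float distance = 10f;
    public float speed = 120f;

    float yaw;
    float pitch = 20f;

    void LateUpdate()
    {
        if (Input.GetMouseButton(0))
        {
            yaw += Input.GetAxis("Mouse X") * speed * Time.deltaTime;
            pitch -= Input.GetAxis("Mouse Y") * speed * Time.deltaTime;
            pitch = Mathf.Clamp(pitch, -80f, 80f); // avoid flipping over the poles
        }
        Quaternion rot = Quaternion.Euler(pitch, yaw, 0f);
        transform.position = target.position + rot * (Vector3.back * distance);
        transform.LookAt(target);
    }
}
```

Orbiting the camera moves the view direction relative to the baked light direction, which is exactly what makes the baked specular highlights shift convincingly.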

Other areas of lightmapping which I haven't mentioned in any real detail, or at all, include emissive materials, which can give some really interesting effects, as well as area lights, light probes and lightmapping dynamic objects; that's a whole new dimension of fun.

Maybe I’ll write about those some other time, for now I encourage you all to have a play around with lightmapping. Catch you soon!