Virtually Real
http://andytsen.com
Nonspecific Posts about Virtual Reality, Startups, and Myself
Sniper Scope in VR
http://andytsen.com/2017/05/21/sniper-scope-in-vr/
Sun, 21 May 2017

In a recent game jam with my buddies Tony and Larry, we came up with the idea of doing a tower defense game in VR. In this short blog post I’ll go over how to easily set up a sniper scope in Unity, so that your players can actually peer through a tactical scope in VR to snipe at their enemies from afar.

In my experience, there’s just something about a second screen experience that enhances immersion and the sense of presence. A second screen experience in VR is a mechanism that provides a perspective different from the player avatar’s viewpoint. Examples include watching a TV that projects the image of a security camera, watching someone in the real world do something live on a TV in VR, or, in our case today, using a sniper scope to zoom in on an enemy and take them out.

I think there are a few reasons why this type of experience may be compelling:

Anything that has several layers of interactability in VR is interesting. A sniper rifle with a scope is a much more interesting item than the gun by itself. Another example is a camera on a selfie stick, with which one can take pictures of our virtual representations in VR, à la Facebook Spaces.

Having a screen within a screen is a recursive pattern which “tricks” the brain into forgetting that it actually has a VR device on it. By adding a second screen, we are creating a more believable reality by adding reference points that are anchored to our reality.

These are just hypotheses, of course. One thing is certain, however: the subjective experience of peering through a second screen is just flat out cool. As it turns out, setting up a basic experience in VR without the bells and whistles is pretty simple. Here’s an example from a weekend game jam I did with Larry Charles and Tony Nguyen.

We were able to create this sniper rifle tower defense game over the course of the weekend and the sniper scope itself was easy to setup, as you’ll see below.

To setup a sniper scope of your own, follow this cookbook:

You can get an awesome ACOG scope and set of modern weapons from the asset store for free here: https://www.assetstore.unity3d.com/en/#!/content/14233

You’ll want to resize the scope on the model to at least double its original size. Although this makes the sniper rifle less realistic, from a gameplay perspective it makes it much easier to see what’s on the scope.

In my example, I rotated the scope so that the lens with the larger surface area was facing me.

Create a render texture. (A render texture is a texture which takes input from a camera and displays it on its surface.)

On Windows it’s Toolbar/Assets/Create/Render Texture

For the sniper rifle, because the player will be getting their face right up to the edge of the scope, I set the render texture to 1024×1024.

Create a material. A material is a wrapper for shader properties and other parameters that tell Unity how to render a particular object. In this case, it will be the thing that links your render texture with the actual rendering of the scope view on the plane you’ll create in step 5.

Toolbar/Assets/Create/New Material

Drag the render texture you created in step 1 onto the Albedo map of the material.

Create a second camera; this will serve as the camera view of the scope.

Drag the render texture onto the “Target Texture” field of your scope camera

Set the field of view for the camera to your desired zoom level. The smaller the field of view, the higher the zoom level.

Position the camera to be at the end of the barrel of the gun. Although this is another sacrifice for realism, you’ll obviously want the bullet firing where the gun is aiming.

Create a Plane Game Object — The plane is the object that you’ll drop your texture onto. It’s what renders the camera view you just created

Toolbar/GameObject/3D Object/Plane

Under the Materials section of the Mesh Renderer Component on the plane you created, drag in the material you created in step 3.

At this point, you should see the entire plane you created render the camera view that you assigned to the render texture.

Resize, reposition, and rotate the plane so that it matches the size of your scope. Make sure you drag the plane in the hierarchy to be a child of the sniper scope so it moves with the rifle.
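If you prefer wiring things up from code instead of the editor, the steps above can be sketched in a small script. This is a rough sketch under assumed names — `ScopeView`, `scopeCamera`, and `lensPlane` are my own placeholders for the objects you created above, not anything from the original project:

```csharp
using UnityEngine;

// Sketch: mirrors the editor steps above — create a render texture,
// point the scope camera at it, and show it on the scope's lens plane.
public class ScopeView : MonoBehaviour
{
    public Camera scopeCamera;   // the second camera positioned at the barrel
    public Renderer lensPlane;   // the plane parented under the scope
    [Range(1f, 60f)]
    public float zoomFov = 15f;  // smaller field of view = higher zoom

    void Start()
    {
        // 1024x1024 matches the resolution suggested above
        var rt = new RenderTexture(1024, 1024, 24);
        scopeCamera.targetTexture = rt;       // camera renders into the texture
        scopeCamera.fieldOfView = zoomFov;    // set the desired zoom level
        lensPlane.material.mainTexture = rt;  // texture shows up on the lens
    }
}
```

Attach it to any object in the scene, drag in the scope camera and lens plane, and press play; tweaking `zoomFov` in the inspector changes the zoom without touching the camera directly.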

At this point you should be able to press play and see the scope view on your lens when you zoom in. Further enhancements to this project could be a shader or a circular “plane” that renders the scope view so you don’t see the square edges, as well as some distortion to make the scope seem like an actual lens, but those are outside the scope of this tutorial.

Feel free to let me know if you have any questions about implementation on Twitter @Andy_Tsen or by dropping me a line at andy@andytsen.com.

Reducing Simulator Sickness Part I: Screenspace Shaders
http://andytsen.com/2017/05/10/reducing-simulator-sickness-part-i-screenspace-shaders/
Wed, 10 May 2017

This is a technical discussion of how to use render textures and screen space shaders to reduce nausea in VR. Readers should have a general understanding of programming concepts and Unity. I learned basic shaders by reading the excellent shader tutorial series by Alan Zucconi. You can find my updated shader code on GitHub under the MIT license.

One of my favorite experiences in VR is Google Earth. Seriously, though: who wouldn’t love flying through the world like Superman, visiting any destination from the comfort of their living room? One of the reasons Google is able to pull off dynamic movement is that they pay close attention to the factors that make users sick in VR. A few months ago, I spent a few weeks prototyping and building a quick and dirty version of Google Earth’s flight mechanics. In the following discussion, I will go over one of the techniques Google Earth VR uses to reduce simulator sickness: Field of View (FOV) reduction.

FOV reduction has become a popular technique used by VR games to reduce motion sickness. In addition to Google Earth VR, you may have also noticed this technique in Ubisoft’s Eagle Flight. If you are interested in the intricacies of why this works, check out the scientific study Columbia released on the subject. The gist of it is that simulator sickness is caused by a vestibular disconnect between your inner ear and visual sensory systems, similar to how vertigo and motion sickness affect some people. A lot of the visual information that can make you sick in VR comes from the movement you perceive in your peripheral vision. Therefore, by reducing the player’s peripheral vision, developers can significantly reduce the nausea the player experiences.

So how does Google do this? Well I’m not exactly sure! But I jerry-rigged a ghetto version myself in Unity, and it works pretty well, so I thought I’d share that today.

Here’s the abstract of how my system works:

Two cameras (one for the left eye, and one for the right eye) for the “background” world. These cameras will have culling masks set such that they only render the background world’s scene objects.

Two render textures that the two cameras render to. One render texture for the left eye, and one texture for the right eye.

One script, which is attached to the camera, that blends the two background render textures with the default VR view by passing the inputs into a screenspace shader.

Finally — One Shader (probably poorly written given my shader skillz) written in CG, that blends between the textures and returns an image.

Below is the rough outline of how I approached this on the Oculus Rift. Note, this is not meant to be a step-by-step tutorial because of time constraints, but it should definitely be enough to get you started. The Vive has a slightly different implementation due to how the projection matrices work on each platform, but if there is any interest, let me know and I’d be happy to walk through it.

Render Textures:

Create two render textures, one for each eye.

Set the resolution of these render textures to the per eye resolution of the HMD you are using.

Cameras:

If you have any objects you’d like to render outside of the scene horizon, create them, and give them a special layer.

I used the standard OVR Camera Rig. I also wrote my shader in a way that works with single pass rendering. Since you are adding two additional background cameras, we want to save as many render and draw calls as possible.

Attach the background cameras to the “Left Eye Anchor” and the “Right Eye Anchor”

You’ll need to set the FOV of each of these cameras manually to 96 degrees. (Oculus does this at runtime)

Set the culling mask of the camera to nothing, or the layer you set in the first step

Set the cameras to render to the render textures you define

As of Unity 5.5, there was no easy way (that I knew of) to tell Unity to send camera texture information from both eyes onto one texture, and therefore no way to have just one texture/one camera for both eyes. Hopefully someone finds a more efficient solution later.

Script

I programmatically define when the view begins to fade from the game world to the “background” world, although you could just as easily do this with an alpha mask. I wanted to be able to easily change the fade parameters to optimize the FOV that would allow users to see the most without feeling nauseous.

You’ll want the script to contain variables for each render texture, as well as the start fade, and end fade values to pass into the shader.

I dynamically generate the material at runtime based off of the shader, float values of the fade parameter, and the render textures in the Awake() call.

In OnRenderImage, Blit between the source texture (the default camera’s render texture) and the shader that you will use instead to render to the HMD.
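A minimal sketch of that script might look like the following. The variable and shader property names (`_LeftEyeTex`, `_FadeStart`, and so on) are my own assumptions for illustration, not necessarily the names in my repo:

```csharp
using UnityEngine;

// Sketch: builds the blend material at runtime and blits the camera
// image through the screenspace fade shader described above.
public class FovReduction : MonoBehaviour
{
    public Shader fadeShader;         // the screenspace blend shader
    public RenderTexture leftEyeTex;  // background camera, left eye
    public RenderTexture rightEyeTex; // background camera, right eye
    [Range(0f, 1f)] public float fadeStart = 0.4f;
    [Range(0f, 1f)] public float fadeEnd = 0.6f;

    Material mat;

    void Awake()
    {
        // Dynamically generate the material from the shader
        mat = new Material(fadeShader);
        mat.SetTexture("_LeftEyeTex", leftEyeTex);
        mat.SetTexture("_RightEyeTex", rightEyeTex);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        // Set every frame so the fade is tweakable in the inspector
        mat.SetFloat("_FadeStart", fadeStart);
        mat.SetFloat("_FadeEnd", fadeEnd);
        // Blit the default camera view through the blend shader
        Graphics.Blit(src, dest, mat);
    }
}
```

Attaching this to the main (eye) camera is what makes `OnRenderImage` fire with that camera's rendered frame as `src`.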

Shader

You can find the updated shader code on github here under MIT license. Feel free to use and distribute, but if you do find some value out of it, please feel free to send me a message :). Love to hear if I’ve made an impact in someone’s dev cycle.

I use a fragment shader to blend between the two render textures and the regular VR camera view.

I look at how far the point we are shading is from the center of the screen, and linearly interpolate between the BG camera and the center camera according to the fade parameters that are passed into the shader from the script.

unity_StereoEyeIndex can be used to determine which eye is being rendered (0 for the left eye and 1 for the right eye). This is very useful for choosing which render texture to use (you can lerp between the two).
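Putting those pieces together, the fragment function's blend logic can be sketched roughly like this. This is an illustrative CG snippet, not the exact code from my repo; the texture and float properties are assumed to be declared in the shader's Properties block and matching uniforms:

```hlsl
// Sketch: blend the regular camera view with the per-eye background
// texture based on radial distance from the screen center.
fixed4 frag (v2f_img i) : SV_Target
{
    fixed4 center = tex2D(_MainTex, i.uv);

    // unity_StereoEyeIndex: 0 = left eye, 1 = right eye,
    // so this lerp simply picks the texture for the current eye.
    fixed4 bgLeft  = tex2D(_LeftEyeTex, i.uv);
    fixed4 bgRight = tex2D(_RightEyeTex, i.uv);
    fixed4 bg = lerp(bgLeft, bgRight, unity_StereoEyeIndex);

    // t is 0 inside _FadeStart and 1 beyond _FadeEnd, so the center of
    // the view stays untouched while the periphery fades to the BG world.
    float dist = distance(i.uv, float2(0.5, 0.5));
    float t = saturate((dist - _FadeStart) / (_FadeEnd - _FadeStart));
    return lerp(center, bg, t);
}
```

The key design point is that the blend is purely radial: the player keeps full detail where they are looking, and only the nausea-inducing peripheral motion is replaced.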

You can combine the above techniques with your locomotion system (detecting changes in velocity in your character controller, etc.) to activate this reduced FOV mode when users are moving. It works pretty well in practice for me.

Different Methods for Character Movement in Unity
http://andytsen.com/2017/03/19/different-methods-for-character-movement-in-unity/
Sun, 19 Mar 2017

While the types of games and experiences you can build in Unity are practically endless, chances are any game you can think of will involve moving something across the screen, whether your game is about animating a word for a word puzzle game, stepping on monsters as a psychedelic-mushroom-eating plumber, or blasting away the forces of evil with a BFG. As a result, Unity provides many different methods for moving things around, which can be confusing to wrap your head around initially.

In this short article I provide a list of functions and a short description of how each is used. This list is not exhaustive; I’m sure there are others that I have missed. No movement method is strictly better than the others; which function to use depends on the scenario.

Basic Movement functions

These functions move the character based exactly on the inputs you provide. They are typically the simplest and most straightforward to use. The downside of these movement functions is that they typically ignore physics or have funky interactions with physics and other objects.

These types of movements require a character controller to be attached to the moving object. Character Controllers are a mix between physics/collision-driven movement and the static movement seen in methods like linear interpolation. They were built with games like first-person and third-person shooters in mind, allowing for finer-grained control than pure physics while still handling collisions and basic physics simulation. If you don’t have a complex movement scheme envisioned, use SimpleMove, as it does some basic physics for jumping. Use Move if you have custom behaviors you want to model. Also note that the model is self-contained: if you call a function such as AddForce on the parent GameObject, the velocity added by that force will not be reflected in the character controller.
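A minimal CharacterController mover might look like this (a sketch; the class name, input axes, and speed value are placeholders for whatever fits your game):

```csharp
using UnityEngine;

// Sketch: basic movement driven by a CharacterController.
[RequireComponent(typeof(CharacterController))]
public class BasicMover : MonoBehaviour
{
    public float speed = 5f;
    CharacterController controller;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f,
                                    Input.GetAxis("Vertical"));
        // SimpleMove expects units per second and applies gravity for you
        controller.SimpleMove(transform.TransformDirection(input) * speed);

        // For fully custom behavior (no built-in gravity), use Move instead:
        // controller.Move(transform.TransformDirection(input)
        //                 * speed * Time.deltaTime);
    }
}
```

Note the asymmetry in the two calls: SimpleMove takes a velocity and handles gravity, while Move takes an absolute displacement for this frame, which is why the commented alternative multiplies by Time.deltaTime.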

Movement functions based on Physics

These use the physics engine to simulate the application of a force on a mass. They are good for providing realistic physics simulations, but not always ideal for gameplay movement. It’s hard to do finely tuned controls with physics; for example, getting something to stop on a dime. However, this type of movement is good for simulating certain games such as realistic racing or flight sims, or for realistic collisions.
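As a quick illustration, physics-driven movement typically goes through a Rigidbody in FixedUpdate (a sketch with placeholder names and values):

```csharp
using UnityEngine;

// Sketch: force-based movement through the physics engine.
[RequireComponent(typeof(Rigidbody))]
public class PhysicsMover : MonoBehaviour
{
    public float thrust = 10f;
    Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    // Physics work belongs in FixedUpdate, in step with the physics tick
    void FixedUpdate()
    {
        // The engine integrates force, mass, and drag over time, which is
        // exactly why stopping "on a dime" is hard to tune with this approach.
        if (Input.GetKey(KeyCode.W))
            rb.AddForce(transform.forward * thrust);
    }
}
```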

Using the Animator or an animation curve to move character

Outside the scope of scripting, you can use animators to move characters or objects. Two popular ways to do this are to: 1) record an animation curve in the animation window so you can animate an object’s motion with keyframes, or 2) apply the root motion of an animation to make the character move based on what the animation is doing.

Don’t waste your time trying to talk to famous speakers after their talk. Most are friendly enough, but it’s akin to hitting on the hot girl or guy at the bar — they’ve got something everyone wants, and so their guard is up automatically.

If you are going to ask questions, ask questions that make the speaker talk about themselves. Since that’s everyone’s favorite subject, you are also likely to get more passionate, real responses this way.

The best way to meet people is by doing something together. I was a Conference Associate this year, and the networking opportunity was amazing because we all had a shared purpose and set of tasks we were achieving. Whether it’s volunteering, hacking together, or just going to an after hours event together, the best type of conversation is done not for the sake of itself, but because folks are sharing an experience together.

At after hours events, it’s much harder to meet random people than it is to get introduced into a group. The after hours events themselves tend to be pretty closed off to outsiders. Make sure you know people at the event that can introduce you to others.

Smile and maintain a positive attitude; your enthusiasm (or lack thereof) can be infectious. This may seem obvious, but I mean it literally. Even if you feel like crap, try smiling for a few minutes. Research shows that smiling can actually cause you to feel emotions of joy and happiness, even if you are “faking it” in the beginning. By the way, this is not license to be fake around people in general; smart folks can smell a phony from a mile away.

If there is a conversation you are not enjoying, do not feel obligated to stay. Politely excuse yourself whenever there is a slight lull.

Ask around about the good events; you never know when someone will have an extra ticket, etc. NOTE: Good events are not held in loud, obnoxious nightclubs. Ideally, you want an event that gives you room to breathe and meet other people.

Try to get into the event for free by volunteering your time — this is a better networking opportunity than the event itself as a matter of fact. I was surprised by how different GDC felt as a conference associate!

This weekend, I attended the Global Game Jam (GGJ for short) at the Unity HQ in San Francisco. The GGJ is a distributed, global hackathon where teams produce a game over the course of the weekend. This year, the major sites hosting included Facebook, Google, and Unity, among others. Bath Buddies is a virtual reality game set in a bathtub, where two players have to coordinate in order to defend the Rubber Ducky from the tyranny of toy pirate ships and torpedo-launching underwater subs. The game is free to download and all the code is open sourced and available for you to see, although a lot of it is admittedly hacky. Our team was made up of six people – Jono, Erik, Alex, Quinton, Nicholas, and I. Between the six of us we had four programmers, one modeler, and a composer.

View of “Commander” view from outside VR on the computer screen.

Overall, our project was a great hit — although it was hard to explain at the beginning (we didn’t have time to do a tutorial!), once people got it, they really had a blast playing.

Hackathons: you should go to them.

Anyone in tech should seriously consider attending hackathons regularly. It’s an incredibly efficient use of time, if you structure your weekend correctly. Here are a few reasons why you should attend hackathons.

Learn more about starting a company than from almost any other activity:

Team Formation – At the beginning, you’ll find a group of people that you gel well with and start working on an idea.

Ideation/Design – You’ll have the opportunity to brainstorm and decide as a team what you’ll spend the rest of the weekend working on

Scoping – You’ll need to scope down your project properly, especially given most hackathons are 1 – 2 days, in order to finish

Development – The bulk of your time will be spent developing your project!

Presentation – You’ll typically either give a presentation about your project or talk about it informally

User Testing – You’ll get to see people actually test your product and get real feedback

Learn to collaborate, and who you’d like to collaborate with – You will be working with several people, some of whom you’ve perhaps never met before. You’ll be forced to compromise, debate, and work together. Ultimately, in these environments, it quickly becomes clear how well you gel with the people you are working with. If there is someone you are considering for a long-term project, or as a cofounder, for example, invite them to a hackathon!

Accelerated skill learning – Whatever you end up working on, you’ll definitely learn a ton about how to do it. The time pressure forces you to think on your feet, and stay on the grind. Plus, if you choose your team wisely, you’ll each end up benefiting from each other’s experience.

Networking – You’ll get to meet tons of like minded people that you could potentially work on projects with again in the future! More importantly, you’ll actually get to know them over the course of the weekend. If you are like me, networking events can seem contrived and awkward. Not so with hackathons! A word to the wise, do not go to a hackathon with the express goal of networking and meeting people. Do the work, and the networking comes naturally. Nobody wants someone that spends the whole time hanging out and distracting other people.

Tips for having a great hack

Find a well organized hackathon, especially for your first one. Large well established hackathons tend to be well structured and lead to better experiences. Ask around and do your research.

Find a team at the hack, or bring a friend, but try not to work alone. The whole point of a hackathon is to meet people and collaborate on awesome projects!

Try to diversify your team. While most hackathons tend to have a fair amount of developers, make sure you have some other talent on your team. Especially if it’s a game jam, having a well rounded perspective is super important for success.

Everyone at a hack is volunteering their weekend. Don’t be that guy that tries to own the entire project and order people around. Try to let ideas flourish and let everyone on the team have their voice heard.

Work hard, but get sleep. You don’t want to crash during crunch time, an hour before submission.

That’s it for this week!

Designing Games
http://andytsen.com/2017/01/09/designing-games/
Mon, 09 Jan 2017

Recently, I’ve been reading a couple of really great books on game design. If you have ever been interested in designing games, or, indeed, consumer products in general, I would advise that you try your hand at prototyping a few games for fun. Game prototypes needn’t be digital works of art, or indeed even digital, and they can challenge you to think “outside the box.”

In this post, I’ll briefly summarize learnings I’ve had while beginning to prototype game designs over the past two weeks. For those of you who are interested in the books that these concepts derive from, check out Challenges for Game Designers and The Art of Game Design: A Book of Lenses. The first book is chock-full of challenging exercises for aspiring game designers to undertake, and the second is considered by many the canonical text for introductory game design courses.

Any game design books you think I should read next? Let me know!

What is a game?

The first thing to understand about being a game designer is that anyone can design games. What it takes is a desire and passion to understand how to create and design experiences that are fun. In The Art of Game Design, Schell implores readers to repeat the mantra “I am a game designer” until they believe it. One reason you might decide to design your first game is that, when it comes down to it, game design is the study of fun and how to manufacture it. Without getting too far into the weeds, in order to understand how to design games we first have to understand what exactly games are and what makes them fun.

What is fun? Anybody who has ever been engrossed at work on a challenging problem, or in an intense, exhausting sports match, or a difficult but rewarding problem set at school knows that it is possible to have fun even when performing an activity that wouldn’t necessarily be considered fun. Why is this? The human brain is wired for solving problems; when our brain is challenged appropriately, we are learning. Learning is a crucial part of mastery, and mastery of a skill or subject brings about great pleasure. Part of the reason it delivers this rush of dopamine and serotonin is that it makes us feel powerful and in control of our destinies. Games become addicting because they allow us to gain that sense of mastery, delivering it in regular, easily digestible chunks. In real life, it’s much more challenging to find activities which regularly and with relative predictability allow us to feel mastery while providing the creative outlet that games do.

Some of you may be familiar with the concept of flow. Coined by Mihály Csíkszentmihályi, “[flow] is the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity.” Basically, if you’ve ever felt like you are “in the zone,” completely in the moment, and enjoying the activity, then you know what flow is.

The funny thing is that a flow state can be induced in even the most monotonous of tasks. For factory workers that have to attach widgets on a manufacturing line, a game can be invented where the worker tries to get a new high score every day, coming up with new and innovative approaches to attaching the widget. What matters is the attitude and approach towards the activity in question. One corollary is that any sort of activity that is enforced and involuntary, and is approached from that perspective, will be hard to enjoy. For example, imagine if instead of browsing the internet and social media for fun, you were forced to do it 8 hours a day, ordered to like a certain percentage of posts, and create a certain number of updates. That could hardly be considered fun, right?

My favorite definition of a game (also from The Art of Game Design) is that it is “a problem solving activity approached with a playful attitude.” It is my favorite definition because it is elegant, and it captures perfectly what we laid out in the previous paragraphs. Therefore, when you are designing a game, what you are doing is creating a structure which maximizes players’ opportunity to experience flow.

How can I design my first game?

Much like the Nike slogan, just “Do It”! Game design is a discipline that takes practice. You can read all the books in the world, and play all the games, and you still wouldn’t be a game designer. The best way to become a game designer is just to design your first game. The more you do it, the better you’ll be, so you’d better get started as quickly as possible. Keep in mind that game design doesn’t require a fancy electronic prototype. In the past two weeks, for example, my friends and I have prototyped two board games, a trading card game, and several puzzles. In fact, the best way to test compelling game designs is to start non-digitally. While electronic video games and consoles allow for exciting experiences, they often obscure the design itself with window dressing and unnecessary engineering complexity.

If you want to flex your game design muscles, just buy some dice and some index cards and get started! A game is essentially a set of core game mechanics; core game mechanics are the rules of the game. Start simple. Invent some rules, and play with other people.

I’m out of time now, but next week I’ll cover some of the steps to prototyping games and maybe even give an exercise.

Meditations 3+ years and New Year Resolutions
http://andytsen.com/2017/01/03/23/
Tue, 03 Jan 2017

This past year has been relatively eventful:

Moved back to California after spending 3.3 years in Boston

Sold my AMG and bought a Civic for the gas mileage and reliability (*single tear rolls down cheek*)

Based on these life events, how I feel now, and my past 3.3 years in Boston, I want to share some things I think are important to overall happiness. The teachings hold true for me, but your mileage may vary. I’m definitely not claiming to be the first person to come up with these ideas.

Push your boundaries, do things that are challenging for the sake of the challenge. Life should always feel slightly uncomfortable.

Do what you love – You won’t be able to do your best work unless you love what you do. In the immortal words of the messiah Steve Jobs “Keep Looking. Don’t Settle.”

Corollary to the above: discovering love takes time and investment. It’s very hard to determine what you love just by intellectualizing about loving it. Take the time to learn the basics, and get past the hump. You won’t know if you have a knack for something until you can perform it proficiently.

Sometimes, it’s okay to let go – You can’t fix everything. There are certain things that, no matter how hard you try, you can’t fix. Try not to be bogged down by things outside your control.

Have a strong support network – It’s okay to need help. Especially if you are into startups and are just starting off, it can be lonely. For me at least, it’s been extremely valuable to have people (and cats) I can fall back on when the going gets tough. Who are the people around you that actually care about you?

Appreciate your friends – When I moved to Boston, I realized that I had taken my friends in California for granted. In Boston, it took me over 3 years to build a support network of friends. You don’t know what you have until it’s gone. It really made me appreciate what I have here. Some ways to appreciate friends:

Make time for them. Invite them out to things — don’t always be the invitee

Dinner parties are always fun. Lunch, if you are short on time.

You don’t have to be friends with everyone – Pretty self explanatory

Be Financially Responsible – Money buys you optionality. Debt shackles you. Once you are no longer slave to the paycheck, you start to naturally think about the things that are actually important to you. At least I did.

New Years Resolutions:

Become a better game designer – Design 50 puzzles/non-digital-games/digital games in aggregate

Become a better developer – Code at least 5 out of 7 days a week.

Get better at game production – Get an indie game green lit on Steam

Get more energy and be more healthy – Do 45 minutes of exercise at least 4 times a week

Improve Collaboration and Leadership – Get out of my comfort zone and collaborate with 10 teams on 10 separate projects.

Learn Game Rendering Pipeline – Learn the basics of 3D modeling in either Blender or Maya LT.

Be Happier – Be kinder, nicer, and more there for friends and family.

Are there any life lessons you’ve learned that you can share with me? I’d love to hear them.

Virtual Reality IK for Humanoid Avatars using Final IK in &lt; 15 minutes
http://andytsen.com/2016/12/24/virtual-reality-ik-for-humanoid-avatars-using-final-ik-in-15-minutes/
Sat, 24 Dec 2016

Note: to follow along with this week’s blog post on HTC Vive development, you may need to drop some dough on a Unity asset called Final IK for $90. Well worth the investment, IMO.

If you’ve ever worked on a VR application, then you know one of the biggest challenges facing developers is how to map motion from motion controls, such as the Oculus Touch, or the HTC Vive controllers into the virtual space. Games like Batman Arkham VR and Hover Junkers do a pretty good job mapping your hands, head rotation, and position onto your virtual avatars, but until recently, if you wanted to do this as an indie developer, you would need to develop your own custom IK solution, which would be a complicated process that would probably take several weeks to get right.

As of a few weeks ago, one of my favorite asset developers, Partel Lang, updated his IK framework, Final IK, to feature a super simple VR IK solver that allows developers to quickly take any rigged humanoid model and map its movements realistically onto your own movements in real-world space. In this blog post, I’ll describe how to easily set up your own humanoid model in VR. Disclaimer: these steps are based on some tutorials I found online, and have not been vetted by Partel. To the best of my knowledge, no documentation exists as of yet for the VRIK solver. If anyone has a better or more efficient way to do this, I’d love to know!

Create a new project and import all the assets

If you are using Morph3d, make sure to go to Assets/MORPH3D/packages/ and import the Shaders package first, then the character model second. Order matters.

Delete the camera in your default scene and add the [CameraRig] from SteamVR

You’ll want the position of the camera rig at origin (0,0,0)

Add your humanoid model into the scene view

If there are any animator controllers attached to your model’s animator make sure to remove them, as this can interfere with the IK.

Duplicate the right hand, left hand, and head

The bones may be named something different, but if it’s a standard humanoid model, you should be able to tell by the game object hierarchy, which bone to duplicate.

Drag the duplicate right hand, duplicate left hand, and duplicate head out to the top level of the hierarchy (right below the scene name), and set the position of all of these objects to (0,0,0)

Leave the rotation of the objects alone

Drag the game objects into the right places under [CameraRig]

right hand under Controller (right)

left hand under Controller (left)

head under Camera (head)

Attach the VRIK component to your model and assign the duplicated bones to the corresponding target slots

Right Hand to Right Arm, Left Hand to Left Arm, Head to Head

Press Play, and it should work!
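If you’d rather wire the targets up from code than drag them in the inspector, the steps above can be sketched as a small script. Since no official documentation exists for the VRIK solver yet, the field names below (`solver.spine.headTarget`, `solver.leftArm.target`, `solver.rightArm.target`) are assumptions based on what the component exposes in the editor, and may differ between Final IK versions:

```csharp
using UnityEngine;
using RootMotion.FinalIK; // Final IK's namespace

// Assigns the duplicated head/hand transforms (parented under the
// [CameraRig]) to the VRIK solver at startup.
public class VRIKTargetSetup : MonoBehaviour
{
    public VRIK ik;                   // the VRIK component on your humanoid model
    public Transform headTarget;      // duplicate head, under Camera (head)
    public Transform leftHandTarget;  // duplicate left hand, under Controller (left)
    public Transform rightHandTarget; // duplicate right hand, under Controller (right)

    void Start()
    {
        ik.solver.spine.headTarget = headTarget;
        ik.solver.leftArm.target = leftHandTarget;
        ik.solver.rightArm.target = rightHandTarget;
    }
}
```

Drop this on any object in the scene, assign the references in the inspector, and it should behave the same as hand-assigning the targets.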

Here is an example video I uploaded of my experiments with VRIK. It’s a simple, silly, character creation screen.

FAQ

I can see through my character mesh (I can see their eyeballs, teeth, etc!) What do I do?

A couple of methods can be used here. The easiest is to move the Camera (Eye) slightly forward, in front of the model, so you don’t see through it.

Another method is to use a mirror, so the player sees the character’s reflection rather than looking down at the model directly. I found this less intrusive than having the model always in view; Batman: Arkham VR does this really well. To implement mirrors for the Vive, use the HTC stereo rendering package, available for free on the Asset Store.

Note that the mirror camera will sometimes be occluded by the wall behind the mirror. You can set the mirror camera to ignore the wall mesh.
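Ignoring the wall mesh can be done through the camera’s culling mask. This sketch assumes you’ve placed the wall geometry on a custom layer called "Wall" (a layer name you’d have to create yourself):

```csharp
using UnityEngine;

// Stops a mirror camera from rendering the wall behind the mirror by
// removing the wall's layer from its culling mask. Assumes the wall
// geometry has been placed on a custom layer named "Wall".
public class MirrorCameraCulling : MonoBehaviour
{
    public Camera mirrorCamera;

    void Start()
    {
        int wallLayer = LayerMask.NameToLayer("Wall");
        mirrorCamera.cullingMask &= ~(1 << wallLayer);
    }
}
```

You can also just uncheck the layer in the camera’s Culling Mask dropdown in the inspector; the code does the same thing.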

Make sure you have removed the animator controller from the character. Even idle animation loops can screw this up.
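You can clear the Controller field on the Animator in the inspector, or do the same thing defensively at runtime; this sketch just nulls out whatever controller is attached:

```csharp
using UnityEngine;

// Clears any animator controller on the character so idle animation
// loops can't fight the IK solver for control of the bones.
public class DisableAnimatorController : MonoBehaviour
{
    void Awake()
    {
        var animator = GetComponent<Animator>();
        if (animator != null)
        {
            animator.runtimeAnimatorController = null;
        }
    }
}
```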

Learnings, Tips, and Tricks. One Week Into Funemployment
http://andytsen.com/2016/12/18/learnings-tips-and-tricks-one-week-into-funemployment/
Sun, 18 Dec 2016 22:34:55 +0000

Many people I respect have said that keeping a personal blog is valuable for many reasons, including, but not limited to: personal brand building, improving written communication, synthesizing thoughts, and testing out new concepts. I’m going to keep this blog updated at least once a week, even if it’s just a few random sentences.

This week, I’m going to provide some productivity tips and tricks for readers, focusing mainly on products and techniques that I’ve tried, and found useful.

Toggl – Track the amount of time it takes to complete each task – Useful for measuring efficiency and productivity. I’m using it to measure how long writing this blog article takes!

RescueTime – Track how much time you are spending on each website and app. If you have trouble focusing, this add-on tracks your computer usage throughout the day and splits it into categories. For example, yesterday I spent only 43% of my time on software development. I can definitely improve.

Sublime Text Editor – No-frills text editor that features syntax and code highlighting. Now I use it mostly for dumping random notes that aren’t important enough to be Google Doc’d or don’t need to be recorded, but in the past I’ve used it for everything from coding to SQL queries.

Tomato Timer – A timer that I use for the Pomodoro Technique to maximize focus. (If you haven’t heard of that technique, I highly recommend it. Read more about it in the next section below)

Google Docs – Self explanatory.

Productivity Techniques:

Set Goals – This is even more important when you are working for yourself. Personally, I set one week, one month, 3 month, 6 month, 12 month, and 2 year goals for myself. The one week and one month goals are very granular, whereas the goals with a longer time horizon are much more nebulous but give me something to strive towards. I’m using the OKR process used at Google, as well as at other companies I’ve previously worked at, but any quantifiable goal-setting system is probably good as well. Read more about OKRs here.

Pomodoro Technique – Short, focused bursts of intensity are more productive than hours of nonstop work. Spend 25 minutes working as hard as you can, then take a 5 minute break; on every 4th iteration, take a 15 minute break instead. Rinse and repeat. Read more about this technique here.

Mindfulness – Otherwise known as meditation. It is really great for refreshing the mind and retaining focus, but Mindfulness and Meditation have some branding issues. Many people think it’s some crazy hippie new-age thing with Oms and Chakras, but I’ve been meditating on and off for the better part of a decade now, and I can say that it works to enhance productivity and overall well-being; my experience is backed by many peer-reviewed studies. Some great apps to try are Calm and Headspace.

Exercise – 45 minutes of cardio and resistance training 3-4 times a week will give you more energy, help you work harder, relieve stress, and keep you in shape. I will run 2-3 miles and then follow that up with some resistance training.

Daily task review – Every morning while I sip on coffee, I take 15 minutes to think about the most important things to accomplish today. Before I clock out, I spend 15 minutes thinking about the things I have to do tomorrow.

Regular learning review – In order to help me retain more important information, I’ll write notes whenever I learn something new. At the end of the day I’ll spend 30 minutes reviewing what I’ve learned. At the end of the week I’ll quickly review all the notes that I took throughout the week, and then organize them into different subjects, putting them into Google Docs.

No Excuses – Don’t have them.

I’ve found these techniques to be useful, but I’d love to hear more suggestions on how I can be more efficient and productive with my time. If you have any, let me know!