Wednesday, August 5, 2015

I now have a VR ocean up and running with real-time GPU ray tracing for the Oculus Rift using G3D::VRApp. It's very soothing to watch the waves. Open-source code and a demo will be coming soon, after code cleanup.

You can see the calibration isn't quite right when I lean over to shut off the video at the end. I definitely need an automatic calibration method; manual calibration is too hard to get right (I spent way too long on it today) and too easy to knock out of whack.

My next steps are definitely:

Integrate an automatic calibration method for recovering the relative transform between the Oculus tracking camera and the Kinect (see the sketch after this list).

Wire up my second Kinect.

Try generating a mesh instead of a point cloud.
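For the calibration item, the approach I have in mind is the standard least-squares rigid alignment (Kabsch). Here is a minimal sketch, assuming matched point pairs observed by both devices (e.g., from tracking a marker) and the Eigen library; it's illustrative, not code from the project:

```cpp
#include <Eigen/Dense>
#include <vector>

// Recover the rigid transform mapping Kinect-space points onto
// Oculus-camera-space points from matched pairs (Kabsch algorithm).
Eigen::Isometry3d alignKinectToOculus(const std::vector<Eigen::Vector3d>& kinectPts,
                                      const std::vector<Eigen::Vector3d>& oculusPts) {
    // Centroids of each point set
    Eigen::Vector3d cK = Eigen::Vector3d::Zero(), cO = Eigen::Vector3d::Zero();
    for (size_t i = 0; i < kinectPts.size(); ++i) { cK += kinectPts[i]; cO += oculusPts[i]; }
    cK /= double(kinectPts.size()); cO /= double(oculusPts.size());

    // Cross-covariance of the centered pairs
    Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
    for (size_t i = 0; i < kinectPts.size(); ++i) {
        H += (kinectPts[i] - cK) * (oculusPts[i] - cO).transpose();
    }

    // Optimal rotation from the SVD, with a reflection guard
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0.0) {
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
    }

    Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
    T.linear()      = R;
    T.translation() = cO - R * cK;   // t = centroidO - R * centroidK
    return T;
}
```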

This was a super fun jam, and I know how I'd approach things differently if I had to do it all over.

Since my last post, I have worked on a couple of small things to make the hand tracking and visualization a little neater. I also set up the initial experience that led me to the Leap Motion in the first place.

For those who remember, my original goal was to create a VR experience in which the subject is placed in a dark environment and controls the only light source with their hands. Below is a video of what I have done so far.

In the original example, the light source is a torch that is always on. In my experience, the light source the player controls is an orb that can be summoned by looking at your right hand while it is flat and facing upwards. If you tilt or close your hand past a certain threshold, the light source disappears.
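The visibility test is roughly this. A minimal sketch, assuming the Leap SDK v2 C++ API; the thresholds are invented for illustration and the gaze ("looking at the hand") condition is omitted:

```cpp
#include "Leap.h"

// Should the summoned orb be visible this frame? The 0.8 and 0.3
// thresholds are placeholders that would need tuning. With the
// head-mounted Leap, the palm normal would also need transforming
// into world space before this test.
bool orbShouldBeVisible(const Leap::Frame& frame) {
    for (const Leap::Hand& hand : frame.hands()) {
        if (!hand.isRight()) { continue; }

        // Palm flat and facing up: normal roughly aligned with +Y
        const bool facingUp =
            hand.palmNormal().dot(Leap::Vector(0.0f, 1.0f, 0.0f)) > 0.8f;

        // Hand open: grabStrength() is 0 for an open hand, 1 for a fist
        const bool open = hand.grabStrength() < 0.3f;

        if (facingUp && open) { return true; }
    }
    return false;   // tilted or closed past the threshold: orb disappears
}
```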

Anyone using VRApp now has access to a virtual 2.0m-wide monitor (the DK2 is so low resolution that you need the virtual monitor to be large). It is in body space, so it is always near you, but you can move your head to see various parts of the screen at higher resolution. Everything is open source and committed to http://g3d.codeplex.com.

I'm Jamie Lesser, a student at Williams College and a researcher in the Williams College Graphics Lab. I will be joining for the second half of the VR Jam and hope to explore sea motion. Is the visual component of sitting on a bobbing boat on the ocean enough to make you feel like you are really bobbing? I find this question especially interesting because it requires no movement from the user, and so should be convincing in most settings. The main drawback would be increased motion sickness for those who are prone to seasickness.

I made some changes last night, working from my laptop without a DK2. This verifies that you can use VRApp without the physical hardware installed (my brand-new laptop with Apple's idea of a "fast" GPU can only render at 6 fps, so you wouldn't want a DK2 attached).

Instead of hitting "publish" on the last blog post and going to sleep, I stayed up and manually aligned the camera (by flying it around and micro-adjusting, nothing even smart) to get the first video of the generated avatar. Ugly as heck (I even left a debug coordinate frame in...), but really cool in person. I can't wait to improve it, but I really do need sleep to make this jam the best it can be.

Monday, August 3, 2015

Lots of hardware issues and some software ones mean this update isn't the most exciting, but it is obviously better than before! I am rendering a point cloud generated from one Kinect (which wasn't the one I started with... hardware issues...) stably in world space. Here I did a programmer dance for you:
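For reference, the core of generating the point cloud is a pinhole unprojection of the depth image. A minimal sketch of the math, assuming a depth image of millimeter values and the depth camera's intrinsics (fx, fy, cx, cy); the Kinect SDK's coordinate mapper provides an equivalent, properly calibrated mapping:

```cpp
#include <cstdint>
#include <vector>

struct Point3 { float x, y, z; };

// Unproject a depth image into a camera-space point cloud with a
// pinhole model. Pixels with no depth reading (0) are skipped.
std::vector<Point3> depthToPointCloud(const uint16_t* depthMM, int width, int height,
                                      float fx, float fy, float cx, float cy) {
    std::vector<Point3> cloud;
    cloud.reserve(width * height);
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            const uint16_t d = depthMM[v * width + u];
            if (d == 0) { continue; }
            const float z = d * 0.001f;   // millimeters -> meters
            cloud.push_back({ (u - cx) * z / fx,
                              (v - cy) * z / fy,
                              z });
        }
    }
    return cloud;
}
```

World-space stability then comes from transforming each camera-space point by the Kinect's fixed pose in the room.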

Next steps:

Turn off the computer

Instead of taking a break, do wire management and figure out a decent setup for capture in my apartment.

Pressing the Tab key now moves the G3D GUI (everything in onGraphics2D) from the debugging mirror display into the VR world. It appears in front of the HMD, floating about 1.5m away from the viewer. The effect is surprisingly cool even with a boring debugging GUI when you're in the HMD--it is augmented reality for a virtual reality world.
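A sketch of the idea (not VRApp's actual implementation): render the 2D GUI into an offscreen texture instead of the mirror display, then show that texture on the floating quad. The names, the m_guiInVR flag, and the 1024x1024 size are placeholders:

```cpp
#include <G3D/G3DAll.h>

// Created once, e.g., in App::onInit():
//   m_guiTexture     = Texture::createEmpty("GUI", 1024, 1024, ImageFormat::RGBA8());
//   m_guiFramebuffer = Framebuffer::create("GUI");
//   m_guiFramebuffer->set(Framebuffer::COLOR0, m_guiTexture);

void App::onGraphics2D(RenderDevice* rd, Array<shared_ptr<Surface2D>>& posed2D) {
    if (m_guiInVR) {
        // Redirect all 2D GUI rendering into the offscreen framebuffer...
        rd->push2D(m_guiFramebuffer);
        rd->setColorClearValue(Color4::clear());
        rd->clear();
        Surface2D::sortAndRender(rd, posed2D);
        rd->pop2D();
        // ...m_guiTexture is then shown on the in-world quad.
    } else {
        // Normal path: draw the GUI on the debugging mirror display
        Surface2D::sortAndRender(rd, posed2D);
    }
}
```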

At the end of my last post, I said I was going to start working with the Leap Motion controller in a VR environment. Well, I have attached it to the front of an Oculus DK2 using masking tape and have begun working.

After a bit of confusion about the relative coordinate frames of the Leap Motion controller and the Oculus, I was able to get the controller sort of working. Below is a video showing me wiggling both hands in front of the Oculus.
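The resolution of that confusion is just a chain of coordinate frames. A minimal sketch, assuming G3D's CFrame and the Leap C++ API; the mount offset is eyeballed, since the controller is attached with masking tape:

```cpp
#include <G3D/G3DAll.h>
#include "Leap.h"

// Take a hand position reported by the head-mounted Leap into world space.
// headFrame comes from the Oculus head tracker; mountOffset is the fixed
// (eyeballed) pose of the Leap relative to the HMD, which also accounts
// for the controller's rotated axis convention when worn on the face.
Point3 leapToWorld(const CFrame& headFrame, const CFrame& mountOffset,
                   const Leap::Vector& p) {
    // The Leap reports positions in millimeters; G3D works in meters
    const Point3 inLeap(p.x * 0.001f, p.y * 0.001f, p.z * 0.001f);

    // Compose: world <- head <- mount <- Leap
    return headFrame.pointToWorldSpace(mountOffset.pointToWorldSpace(inLeap));
}
```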

The red rectangle in the center of the view is a 2D surface directly composited onto the HMD. The key bug was that I had previously been positioning it behind the head.

It appears that changing the HMD compositing quality may have been affecting the vertical axis direction and the rendering rate. I'm not sure how this happened, but for the moment I have disabled it to avoid the problem. Now, on to actually rendering a 2D GUI into that red square.

I created an ovrLayerType_QuadHeadLocked layer and initialized it. I'm just rendering it solid red for the moment. Such a layer has a 3D position. From looking at the Oculus samples, it appears that rotation is ignored on such a layer and the translation should have a negative Z value.
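For the record, the setup looks roughly like this. A sketch assuming the 0.6-era Oculus SDK that was current at the time; the texture set, quad size, eye layer, and frame index are placeholders from elsewhere in the app:

```cpp
#include <OVR_CAPI.h>

ovrLayerQuad guiLayer = {};
guiLayer.Header.Type  = ovrLayerType_QuadHeadLocked;
guiLayer.Header.Flags = 0;
guiLayer.ColorTexture = guiSwapTextureSet;            // currently just cleared to red
guiLayer.Viewport.Pos  = { 0, 0 };
guiLayer.Viewport.Size = { texWidth, texHeight };
guiLayer.QuadPoseCenter.Orientation = { 0, 0, 0, 1 }; // identity; rotation is ignored
guiLayer.QuadPoseCenter.Position    = { 0.0f, 0.0f, -1.5f }; // negative Z: in front of the head
guiLayer.QuadSize = { 0.6f, 0.4f };                   // quad dimensions in meters

// Submitted each frame alongside the main eye layer:
ovrLayerHeader* layers[] = { &eyeLayer.Header, &guiLayer.Header };
ovrHmd_SubmitFrame(hmd, frameIndex, nullptr, layers, 2);
```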

Right now the state is not good. Not only does the GUI layer not appear on screen correctly, but about 30% of the time the 3D view flips upside down, which is extremely unpleasant when I'm wearing the HMD. I suspect that this may be an unintended consequence of my automatic effect-disabling code based on frame rate.

The right eye also starts flashing cyan (the G3D default background color) every now and then. I think that both visual errors are separate from the GUI work that I've been doing.

Thanks to the extremely helpful instructions on the Leap Motion site, I have made good progress. I have the Leap Motion SDK hooked up to my project and have begun experimenting with the hand tracking. While I would eventually like to include a proper hand model, for now I have just been drawing the joint positions. Below is a photo of my current representation of a hand: the blue spheres represent the joints in the hand, while the green sphere is the center of the palm.
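The drawing loop behind that image is straightforward. A minimal sketch, assuming the Leap SDK v2 C++ API and G3D's Draw helpers; toWorld() is a stand-in for whatever maps Leap coordinates into the scene, and the radii are invented:

```cpp
#include <G3D/G3DAll.h>
#include "Leap.h"

// Hypothetical mapping from Leap coordinates into the scene.
Point3 toWorld(const Leap::Vector& v);

// Draw a blue sphere at each finger joint and a green one at the palm center.
void drawHands(const Leap::Frame& frame, RenderDevice* rd) {
    for (const Leap::Hand& hand : frame.hands()) {
        Draw::sphere(Sphere(toWorld(hand.palmPosition()), 0.02f), rd, Color3::green());

        for (const Leap::Finger& finger : hand.fingers()) {
            // Each finger has four bones; nextJoint() is the end nearer the tip
            for (int b = 0; b < 4; ++b) {
                const Leap::Bone bone = finger.bone(static_cast<Leap::Bone::Type>(b));
                Draw::sphere(Sphere(toWorld(bone.nextJoint()), 0.008f), rd, Color3::blue());
            }
        }
    }
}
```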

I am Daniel Evangelakos, a recent graduate of Williams College and a researcher at NVIDIA. For this VR jam, I was inspired by a video of a VR demo shown to me by Morgan McGuire (https://www.youtube.com/watch?v=3qEBK1p4ct8).

In the video, the player is in a dark cavern with a single light source. The light source is a torch held out in front of the player. He or she can illuminate different parts of the cavern by moving the torch around.

For this VR jam, I want to create something similar to that demo: the player controls the only light source. Furthermore, I want the player to control the light source not with conventional inputs such as a controller or a mouse, but with their hands.

To do this, I will be working with the Leap Motion controller, pictured below. This device tracks hands and other objects in real time and returns 3D models of them.

Since G3D currently has no support for the Leap Motion controller, I will first have to get that working. Therefore, while I will still keep my eventual goal in mind, I will consider this VR jam a success if I can get the Leap Motion working with G3D.

I'm Michael Mara, a graduate student at Stanford University advised by Pat Hanrahan, and a researcher at NVIDIA. For the past week I have been waffling back and forth among a large number of possible projects for the VR Jam. This is a partial list of what I seriously considered:

AR by scanning the room with a Kinect beforehand and importing the model into the VR scene.

Sunday, August 2, 2015

I'm Morgan McGuire, professor at Williams College and researcher at NVIDIA. For the game jam, I'm designing an adapter class that will allow using the G3D Innovation Engine GUI inside of the HMD view. I'll work on the G3D::VRApp class directly using Oculus DK2.