Demonstrators were on hand when it came to most of the titles, with the press contingent being able to get their hands on each of the games in turn. We rocked up just as the smallest cheeseburgers in the world were being handed out by the waiting staff, but decided that we’d forgo the refreshments and jump right on in.

Kinect for Xbox One

The first thing to be shown was Kinect. In a purposefully-darkened corner of the venue, we took our seats on a big comfy sofa, and waited to be wowed.

If we’re entirely honest, we thought that we knew what to expect from Kinect. Just like last time, we thought, sometimes it will work perfectly, and sometimes it won’t. We assumed that the device would still need masses of space in order to work properly, despite claims to the contrary. We thought that if the lights were down low, the tracking would be thrown off.

How wrong we were.

The Microsoft demonstrator showed how “Active IR” works. In very murky conditions, which we’d compare to late evening in the lounge with the lights off, the demonstrator was absolutely crystal clear on the screen, with facial features easily picked up. To try to throw the sensor off, he held up his phone with a torchlight app running. In “regular” mode, the extra ambient light from the phone obscured his face entirely, but after switching to Active IR mode, that light was completely removed from the scene and tracking continued to work perfectly. This means there’s no need to mess around with shutting the curtains when you want to play, or making sure the right combination of lights is switched on to improve tracking. Kinect uses the regular and Active IR modes at the same time, we’re told, meaning that tracking should be spot on no matter what conditions you play in. The claim was made that this would work in absolute pitch darkness, if you wanted to play a game that way, but in the interest of total honesty we have to tell you that this ability wasn’t shown to us. Based on what we saw, though, it’s something we certainly believe would be the case.
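To picture how running both modes at once might work in principle, here’s a minimal hypothetical sketch in Python. To be clear, this is not actual Kinect code and the saturation threshold is purely an assumption; it simply illustrates the idea of falling back to the IR reading wherever visible light blows a pixel out:

```python
# Hypothetical sketch (not the real Kinect pipeline): fuse a visible-light
# frame with an Active IR frame by preferring the IR reading wherever the
# visible pixel is saturated by ambient light (e.g. a torchlight app).

SATURATION_THRESHOLD = 240  # assumed cut-off on an 8-bit intensity scale

def fuse_pixels(visible, active_ir, threshold=SATURATION_THRESHOLD):
    """Return a fused intensity frame, using IR where visible light saturates."""
    fused = []
    for vis_row, ir_row in zip(visible, active_ir):
        fused.append([ir if vis >= threshold else vis
                      for vis, ir in zip(vis_row, ir_row)])
    return fused

# A bright light saturates part of the visible frame (255s), but the IR
# frame still carries usable readings for those pixels.
visible = [[100, 255], [255, 90]]
active_ir = [[102, 130], [128, 95]]
print(fuse_pixels(visible, active_ir))  # [[100, 130], [128, 90]]
```

The point of the sketch is simply that the two streams fail in different conditions, so combining them per pixel keeps tracking usable whichever one is being interfered with.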

Skeletal tracking has also been improved, to the point that each finger has a tracking point at its tip and the thumb has multiple points, meaning that the device can tell whether your hand is open or closed. On top of that, the sensor can tell which direction your limbs are facing. This may not seem like a big deal, but the example we were given was that of a tennis game. The game would be able to tell whether you were playing a backhand shot or a forehand shot based not only on your movement, but on the direction your forearm is facing. Theoretically this means better fidelity, as there are multiple ways of checking the type of shot being played. We did notice that, as the room had a stone floor which threw up a few reflections, the skeletal tracking of the demonstrator's feet was a little all over the place from time to time. We were assured that the device was being confused by having so many people in the room at once (there were seven of us including the demonstrators, plus a Kinect Sports Rivals demo going on in the background, in view of the sensor), but we can't say for sure whether that was the case. Again, based on what we saw, we'd be surprised if something such as a reflection from a floor caused an issue.
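The tennis example boils down to requiring two independent signals to agree. Purely as a hypothetical illustration (again, not Kinect SDK code, and the names and sign conventions are our own assumptions), the logic might look like this:

```python
# Hypothetical illustration of the tennis example: classify a shot only
# when the swing direction AND the forearm orientation agree, which is the
# "multiple ways of checking" fidelity idea. Positive x is assumed to be
# the player's forehand side.

def classify_shot(swing_dx, forearm_facing_dx):
    """swing_dx: hand velocity along x; forearm_facing_dx: x component of
    the direction the inner forearm faces."""
    swing_says = "forehand" if swing_dx > 0 else "backhand"
    forearm_says = "forehand" if forearm_facing_dx > 0 else "backhand"
    return swing_says if swing_says == forearm_says else "uncertain"

print(classify_shot(1.5, 0.8))    # forehand
print(classify_shot(-1.2, -0.3))  # backhand
print(classify_shot(1.0, -0.5))   # uncertain
```

With only the movement signal, a mistracked frame could flip a forehand into a backhand; requiring the limb orientation to corroborate it is what would make misclassification less likely.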

It was confirmed that Skype will be available in full 1080p, and that the sensor can be used in a much smaller playspace than was required before. The demonstrator got to about 4ft from the screen before his head was unceremoniously chopped from the picture, so the field of view is clearly much, much more usable this time around; Microsoft claims it’s been improved by some 60%, all told. Facial features were tracked well enough that the console could tell whether the demonstrator was looking at the screen, even when he shifted his gaze away without moving his head. Facial expression tracking was in place, but looked a little rough and didn’t seem to be recognising things properly. We were reminded that we were looking at in-development features, and that this is being worked on specifically to ensure that it works as it should. We don’t want to write it off before it’s finished, but we’d say that facial expression recognition was working around 60% of the time at this stage.

One of the most impressive things to be shown during this demonstration, though, was the ability for Kinect to read and interpret forces. When the demonstrator jumped, his on-screen skeleton went green while he was in the air, indicating that no force was being applied by his body. As he landed, his knees, quadriceps, and core areas turned red on screen, showing that Kinect could tell exactly where force was being applied. This could be used to further improve tracking accuracy, but we instantly thought of the Wii Balance Board. Imagine a snowboarding or skiing game where you didn’t have to make wild movements in order to tweak your direction. Or how about the fish catcher mini-game in Wii Fit, where you’re tilting the iceberg to the left and right by putting more or less weight on either foot? That could – again, theoretically – be carried out without using anything other than the equipment that comes with your Xbox One console. The demonstration also showed how the power of punches could be tracked. The demonstrator threw a weak punch, and a small circle appeared on screen. When he really threw his shoulder into it, the circle was much bigger. This obviously has ramifications for fighting titles, and even fitness products, to an extent.
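One plausible way the punch-power demo could work, and we stress this is a hypothetical sketch with assumed frame rate and mass values rather than anything Microsoft showed us code for, is to estimate an impulse from how quickly the tracked hand changes speed between frames and scale the feedback circle from that:

```python
# Hypothetical sketch of the punch-power demo: estimate force from the
# largest frame-to-frame change in tracked hand speed (F = m * dv/dt),
# then scale a feedback circle accordingly. All constants are assumptions.

FRAME_DT = 1 / 30     # assumed 30fps skeletal tracking
HAND_MASS_KG = 0.5    # rough assumed effective hand mass

def punch_circle_radius(speeds_m_per_s, base_radius=10.0, scale=2.0):
    """Map per-frame hand speeds to a feedback circle radius."""
    peak_dv = max((abs(b - a) for a, b in
                   zip(speeds_m_per_s, speeds_m_per_s[1:])), default=0.0)
    force = HAND_MASS_KG * peak_dv / FRAME_DT  # F = m * dv/dt
    return base_radius + scale * force

weak = punch_circle_radius([0.5, 1.0, 0.2])    # gentle jab
strong = punch_circle_radius([2.0, 6.0, 0.5])  # shoulder thrown into it
print(weak < strong)  # True
```

A harder punch accelerates and stops more sharply, so the peak frame-to-frame speed change, and therefore the circle, grows with it.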

We went in assuming that Kinect was a gimmick, very much as it was last time around. With the exception of the unfinished facial expression tracking though, and a couple of minor quirks that could very well be down to the fact that we were looking at in-development software in a room packed full of people, we came out with our heads swimming with game ideas. We know that a lot of the things that we've mentioned are pretty much common knowledge. The Active IR, the smaller space requirements, improved fidelity, and the extra skeletal tracking points have been pointed out many times before. However, words mean very little, and it’s only when you actually see it in action as we did that you really start to believe that the bullet-point lists of improvements are actually true. Perhaps that's because we were so let down by the original Kinect sensor. Perhaps it's because Microsoft have truly got it right this time around. Only time will tell.

Some people just don’t want motion-controlled games, and that’s fair enough. But for those of us that do, there’s an absolute ton of promise here. More than we realised up to this point, in fact. It’s also promise that, to some extent, we've seen fulfilled with our own eyes by the likes of Project Spark and Kinect Sports Rivals, two titles we had a chance to take a closer look at, and that really caused us to start believing.

Good to hear. Zelda: Skyward Sword, Metroid Prime 3: Corruption, and Mass Effect 3 were three games last gen that made good use of motion control and voice commands. Hopefully these improvements mean Kinect won't mistake crotches and knees for limbs and slice bombs in Fruit Ninja anymore, assuming they make that for the Xbox One. What a wonderful way to lose, right? The new Kinect's accuracy sounds impressive. In a normal household, no matter what the conditions for the most part (except if it's crowded with people walking around, of course), Kinect seems like it will be pretty accurate. It sounds more worthwhile than the last one, and with Microsoft promoting it more, it will get used more often where it can add subtle improvements (Mass Effect 3 did a good job of this with the previous sensor). More good games completely dependent on it, like Zelda: Skyward Sword and Metroid Prime 3: Corruption were on the Wii, might appear on the Xbox One now as well.