Videosynth and augmented reality at Microsoft TechFest

I'm at Microsoft HQ just outside Seattle to see what the company has up its sleeve in terms of research and early-stage development - an event called TechFest. Research is a huge part of what Microsoft does, and the company funds labs across the world to try to develop ideas that could be useful to it in the future.

This morning we were treated to a brief Q&A session with two big cheeses: Craig Mundie (who took over part of Bill Gates's remit to become chief research and strategy officer) and Rick Rashid, who has headed up Microsoft Research (MSR) since its inception in 1991.

From those early days, says Rashid, research inside Microsoft has really opened up to cover a huge variety of different areas. "Biology, astronomy, chemistry, physics - a broad collection of things that computer science is now relevant to," he says.

"We think of it as software," says Mundie, although he admits that "sometimes we have to do a little hardware around the sides".

Enough wibbling, though: what we're really here to see are the gadgets and software that are under development.

First up is a video version of Photosynth, the innovative image-stitching application that got a lot of press for creating huge panoramas from images taken by lots of different people (see CNN's huge picture of Barack Obama's inauguration for a straightforward example).

Researcher Ayman Kaheel says that the system can take video being shot on multiple mobile phones and piece it together to create a bigger, real-time shot (of, say, a street scene or the inside of a sports stadium).

"It picks up similarities in the different videos and stitches them together," he says - which includes understanding the frame rate and timing information in each video so that they match up in real time.
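Kaheel didn't share any code, but the frame-rate matching he describes can be sketched in its simplest form: to sync two clips shot at different frame rates, you convert a frame index in one clip to a timestamp and find the nearest frame in the other. The function name and arguments here are my own invention, purely for illustration:

```python
def matching_frame(frame_idx_a, fps_a, fps_b):
    """Return the frame index in video B closest in time to
    frame `frame_idx_a` of video A.

    Hypothetical sketch: real stitching would also align the
    clips' start times and match visual features, not just rates.
    """
    timestamp = frame_idx_a / fps_a   # seconds into video A
    return round(timestamp * fps_b)   # nearest frame in video B

# Frame 30 of a 30fps phone clip (t = 1.0s) lines up with
# frame 24 of a 24fps clip shot alongside it.
```

The actual system would then compare visual features between the time-aligned frames to decide how the two shots overlap spatially.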

There is plenty of work being done in Augmented Reality - systems that overlay digital data onto the real world - a sort of HUD for life. One researcher, Darren Edge, is behind a system that lets you keep virtual sticky notes in an imaginary space off your computer screen - a little reminiscent of Minority Report, I have to say.

Another researcher, Simon Winder, showed me a system that can smartly project information over footage you're looking at through your phone's video camera (say directions, or data about the place you are looking at).

Using a database of squillions of images, the system is able to work out where you are and feed information back. Your phone's GPS first slims the dataset down to your rough location, and then computer vision pinpoints exactly where you're standing and projects the augmented information on top. At the moment they are running a test project that takes in the whole of Seattle - and even so, he says, the entire process of locating you and rendering the information takes less than a second.
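Winder didn't go into implementation details, but the two-stage lookup he describes - GPS narrows the candidate images, then visual matching picks the winner - could be roughly sketched like this. The function names, the toy number-tuple "descriptors" standing in for real image features, and the database layout are all assumptions of mine:

```python
import math

def gps_filter(database, lat, lon, radius_km=0.5):
    """Stage 1: keep only images geotagged within radius_km of the GPS fix."""
    def dist_km(a, b):
        # Equirectangular approximation - fine at city scale
        dx = math.radians(b[1] - a[1]) * math.cos(math.radians(a[0]))
        dy = math.radians(b[0] - a[0])
        return 6371 * math.hypot(dx, dy)
    return [img for img in database if dist_km((lat, lon), img["geo"]) <= radius_km]

def best_match(candidates, query_descriptor):
    """Stage 2: pick the candidate whose descriptor is closest to the query's."""
    return min(candidates,
               key=lambda img: sum((a - b) ** 2
                                   for a, b in zip(img["descriptor"], query_descriptor)))

# Toy database: two Seattle landmarks with made-up feature vectors
db = [
    {"label": "space_needle", "geo": (47.6205, -122.3493), "descriptor": (1.0, 0.0)},
    {"label": "pike_place",   "geo": (47.6097, -122.3425), "descriptor": (0.0, 1.0)},
]

nearby = gps_filter(db, 47.620, -122.349)          # GPS prunes the far-away entry
hit = best_match(nearby, (0.9, 0.1))               # vision-style match on what's left
```

The pruning step is what makes the sub-second response plausible: the expensive visual comparison only ever runs against a handful of nearby candidates, not the whole city-scale database.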

At the moment they're playing treasure hunt-style games using it, but the applications are multiple - directions, recommendations, location-sensitive information and so on.

It's actually one of the best implementations of AR that I've seen - using some of the systems behind Photosynth and other MSR projects to build a very smooth system.

More from TechFest coming soon; I'm doing some more exploring and interviews as the day goes on.