If you think you’ve seen what’s possible with Photosynth, then you’ve seen nothing yet. The collaborative research team from the University of Washington and Microsoft Research that published the paper “Photo Tourism” and the technology demonstration “Photosynth” only two years ago, in 2006, has again pushed the boundaries of what can be achieved by intelligently processing the abundance of digital images shared on the web.

This week at SIGGRAPH 2008 they’re sharing with the world some even better technology they’ve been working on, which they call “Finding Paths through the World’s Photos”. Don’t let the name fool you, it’s damn cool. If, like me, you’re not much of a reader, take a look at this video demonstration. (Watch it till the end.)

This technology is much better than Photosynth simply because instead of just presenting individual photographs in a cool 3D environment, it actually manipulates the photos to give you a seamless and more lifelike experience. It’s one thing to click around different photos taken at a particular museum; it’s a whole other story to “walk through” the museum.

Now if you want to know exactly how they did it, and you’re a rocket scientist, take a look at their conference paper. For the rest of us, just take it for granted.

Put enough of these frames together and you could have a 360-degree 3D movie sans glasses.

Speaking of Microsoft Research projects: go to the MS Research project page and check out their super cool astronomy project.

Now if only they could eliminate the people in these pictures. With enough of them, assuming high enough quality, we could get the best 3D models from just pictures… I think I sound random, but the possibilities of this technology are almost endless! I would like to see this implemented in Live Maps very soon!

@fred: It’s pretty evident that they are deriving 3D data from the photos, and it’s clear they are working on presenting that 3D data. Perhaps one problem is isolating the object in question from the background.

@himanshu: If you try the Photosynth demo, you’ll see that it is a viewer only. The viewer works pretty well, but it works on images that have already been analyzed. Microsoft hasn’t released the software that analyzes the images, finds common keypoints, and assembles them into some kind of 3D data set, and it’s not clear if they ever will.
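To make the “find common keypoints” step concrete, here is a minimal, hypothetical sketch of how descriptors from two images can be matched: nearest-neighbour search with a ratio test (accept a match only when the best candidate beats the second best by a clear margin). The descriptor vectors below are toy data, not real image features, and this is only one simple matching strategy, not Microsoft’s actual pipeline.

```python
import math

def dist(a, b):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_keypoints(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B with a ratio test:
    keep a match only when the nearest neighbour is clearly closer
    than the second-nearest (rejects ambiguous matches)."""
    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(da, desc_b[best]) < ratio * dist(da, desc_b[second]):
            matches.append((i, best))
    return matches

# Toy descriptors: two features that reappear in image B, plus a distractor.
img_a = [[0.0, 1.0], [5.0, 5.0]]
img_b = [[0.1, 1.0], [9.0, 9.0], [5.1, 5.0]]
print(match_keypoints(img_a, img_b))  # [(0, 0), (1, 2)]
```

Matched keypoints like these are what a reconstruction stage would then lift into a 3D point cloud.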

@chustar: Flickr is owned by Yahoo. Microsoft was trying to buy Yahoo recently, but the deal fell through.

@Fred: that’s the point of the algorithm, picking out key features and mapping them into 3D space. It’s not a complete model, but with more data (more, better pictures) it could be.

@dexotaku: I disagree, Microsoft funds a lot of research; and after the manipulative behavior they used to corner the market, and the damage they’ve done to software development’s progress for the last 30 years, it’s really the least they can do.

I can’t help but wonder what an organization like the NSA or CIA could do with technology like this. Imagine including satellite data, IR data, Landsat data, LIDAR, blueprints, etc., all on a large multi-touch screen! That would be fun.

The computation required to correlate images is high, but not too high for multi-core desktop computers, never mind the computers of 2013. One strong implication of this technology is that it can be applied – with enough correlating input – towards facial recognition, or to combine multiple images of a given car model, or similar-looking dogs… all kinds of applications as a heuristic algorithm useful to streamlining image analysis of all kinds. Implications for intelligence gathering are clear, and I shudder to think what will happen when this technology is combined with porn.

That is the tool that calculates the 3D data points. Since it actually comes from the Photo Tourism / University of Washington side of the aisle, I don’t know whether it produces data that can be fed directly into Photosynth or not.

Well, it seems interesting but not really mind-blowing to me. I’d like to see a view with complete photos; that white fuzzy background is horrible. Security is going to be a huge issue too, as multiple groups of people would and could utilize this for the wrong reasons… Hell! You can already walk the streets in Google Maps…

I wish there were a way to enable much cleaner transitions. It looks as though they had some good ideas about removing that sort of strobing effect from night and day pics being mixed together, but it didn’t really seem to do much other than remove all the night pics and leave the day ones in their place, creating a less animated effect…

I dunno, maybe I’m stuck on older concepts like PhotoTriage, Phodeo, and TimeQuilt for browsing my personal photo collection, not something mashed together with someone else’s pics as well…

I would still like to check it out, though, because like I said, it does look interesting!

I honestly think that Google’s freeware program SketchUp is much more useful for viewing real 3D objects. With the ability to create complex 3D shapes quickly and easily using a diverse range of extruding options, people have made accurate, scale models of ships, buildings, city streets, and even animals. I’ve personally made accurate 3D models of some of my products, as well as a few real buildings – buildings that look almost identical when I walk in them.

This program uses a lot of creative technologies that make 3D imaging easy, but it’s not useful 3D imaging. Google SketchUp, Rhino 3D, AutoCAD, SolidWorks, and other Computer Aided Drafting programs allow any degree of detail to exist, and the context of the image (which is actually an environment consisting of an infinite number of images) is far more valuable for any professional application.

No, Matt – QuickTime VR is used to view a panorama (or object) composed of photos taken under very carefully controlled conditions.

This stuff takes a random collection of pictures of a scene, recovers the 3D geometry of the scene from all the pictures, and then from that geometry can place where the pictures were taken. Then you can do panoramas or flybys of objects using the original, uncontrolled shots.
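One small, concrete piece of that geometry recovery: once two camera positions and viewing directions are known, a scene point seen in both photos can be triangulated. The sketch below uses the classic midpoint method (find the closest approach of the two viewing rays and take its midpoint) with made-up camera positions; it is an illustration of the general principle, not the paper’s actual algorithm.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Midpoint triangulation: given two viewing rays p1 + t*d1 and
    p2 + s*d2, find the closest approach between them and return its
    midpoint as the estimated 3D scene point."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only when the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * v for p, v in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + s * v for p, v in zip(p2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two hypothetical cameras at x = -1 and x = +1, both looking at a
# scene point that sits at (0, 0, 2).
print(triangulate([-1, 0, 0], [1, 0, 2], [1, 0, 0], [-1, 0, 2]))
# [0.0, 0.0, 2.0]
```

With noisy real-world keypoints the two rays never quite intersect, which is exactly why the midpoint (or a least-squares variant) is used instead of an exact intersection.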

It’s really awesome, and Steve and his team have had two (or maybe three) consecutive best-of-SIGGRAPH papers based on this work, which is unprecedented.

Nice, but is this actual technology… or a presentation of what’s possible? More vaporware? Please release a demo rather than just spreading the word that there’s a good idea! Because these are very old ideas!

Also, Jay, you pretty much have no idea how university-level graduate computer science research works. People don’t get PhDs for just coming up with an idea; they have to implement it as well. Everything you see in that video is the result of a computer algorithm running on a PC – it’s not a PowerPoint presentation or something like that. Furthermore, the predecessor to this, Photosynth, IS actually available as a demo program from Microsoft Labs.

BTW, I used to work on videos about research at MIT’s Media Lab, and I’m as excited as the next gal about research. I’ve seen a lot of exciting demos and code over the years. Now, however, my primary interest is how intellectual property is transferred, over years, to the consumer market, and how it can change the world.

@Some CADD geek: Comparing this to SketchUp or other 3D modeling tools is pointless. It’s not about 3D modeling. It’s about displaying a large image library in such a way that you can navigate through it and tell where each picture was taken.

I just wanted to point out that the path-finding work has finally begun to be integrated into Photosynth.

Since April 21, 2009, when you use the Highlights list (on the right-hand side of any photosynth whose author has added highlights), you will be flown along a path calculated through any photos between your current position in the synth and the highlight you have just selected, just as shown in the Photo Tourism work above.
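The fly-along behaviour described above boils down to generating intermediate camera positions between where you are and where the highlight is. As a rough illustration only (the real system constrains the path to pass near actual photos, which this toy does not), here is a hypothetical sketch using linear interpolation with ease-in/ease-out timing:

```python
def smoothstep(u):
    # Ease-in/ease-out weighting: zero velocity at both endpoints,
    # so the virtual camera starts and stops gently.
    return u * u * (3 - 2 * u)

def fly_path(start, end, steps):
    """Generate intermediate camera positions from `start` to `end`.
    A toy stand-in for the path calculation described above."""
    path = []
    for i in range(steps + 1):
        w = smoothstep(i / steps)
        path.append(tuple(a + w * (b - a) for a, b in zip(start, end)))
    return path

# Hypothetical flight from the current view to a highlight's position.
for pos in fly_path((0.0, 0.0, 0.0), (4.0, 0.0, 2.0), 4):
    print(pos)
```

The same weighting could be applied to camera orientation and zoom, which is what makes a transition read as a smooth “flight” rather than a jump cut.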

Also of note is that this new feature coincides with the switch to the Photosynth viewer being written in Silverlight 2.0, which means that all synths should be viewable on all Intel Macs. If all goes well with Moonlight, Linux users will be able to view synths by September 2009.