Photosynth 3D: Microsoft’s plan to turn the entire world into an explorable 3D panorama

This site may earn affiliate commissions from the links on this page. Terms of use.

When Blaise Agüera y Arcas (then of Microsoft Research, but now at Google) demonstrated Photosynth at TED 2007, it was an immediate hit, and it has since become one of the most-watched and most-discussed tech demos of all time. While the original iteration of Photosynth was certainly cool, the new version — Photosynth 3D — will blow your mind.

The Photosynth project was started almost a decade ago for the purpose of “reinventing the whole enterprise of photography for ordinary people,” Agüera y Arcas said. The first iteration of Photosynth, released in 2007, stitched together thousands of photos — cobbled together from all over the web — to create a seamless 2D image that you could explore. It was basically a clever computer vision algorithm combined with a “gigapixel” panorama builder/viewer. While the underlying tech was undoubtedly cool, it was the slickness of the interface that really wowed people. The original Photosynth video is embedded at the end of the story.

The new version, Photosynth 3D, takes that clever computer vision algorithm and incredible interface slickness into the third dimension. Photosynth can now take a bunch of photos and turn them into four different 3D views: Spin, Panorama, Walk, and Wall. The best way to demonstrate these four views is to watch the video, or to play around with some of the embedded synths below.

As you can see, all four views resemble the original Photosynth, but now it’s also possible to move through space, rather than just panning and zooming across a 2D plane. The Panorama and Wall views in particular are close to the original experience, but the addition of 3D parallax makes it feel like you’re actually there, or that you’re watching a video.

Spin is a new mode that basically turns the panorama inwards, towards an object. Instead of turning on your feet to shoot a panorama of a scene, a Spin view is created by walking around a subject and taking dozens of photos. The Walk view, as the name implies, is basically a series of photos captured while you walk forward, and stitched together to create a 3D space. For all four modes, remember that when you stop the camera, you have full access to the original high-res images — it’s still like a gigapixel panorama in that regard. (Read: Autodesk Catch: Make a 3D print of anything, just by walking around it with a camera.)

Technologically, while the original Photosynth used computer vision to align a large number of images in two dimensions, Photosynth 3D uses the spatial gap between each image to generate 3D models of the objects in each scene. Then, depending on your position in the scene, textures (which have been cut out of the original photos) are overlaid on those objects. It’s fairly ingenious, and the new, mega-slick Photosynth viewer really adds to the experience. If you get a chance, try hitting the “c” or “m” keys while in the new Photosynth viewer; C reveals the 3D interpretation for each image, while M shows you the (scarily accurate) reconstructed path taken by the camera.
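The core geometric idea here — recovering depth from the parallax between photos — can be sketched in a few lines. For two photos taken a known distance apart, a feature’s apparent pixel shift (its disparity) between the images is inversely proportional to its distance from the camera. This is a deliberately simplified two-view sketch, not Microsoft’s actual pipeline, and the focal length and baseline numbers below are made up for illustration:

```python
# Simplified depth-from-parallax sketch (illustrative only, NOT
# Photosynth's actual reconstruction pipeline).
#
# For a feature matched between two photos whose cameras sit a known
# baseline apart, depth falls out of similar triangles:
#     depth = focal_length_px * baseline_m / disparity_px

def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulate a feature's depth (meters) from its stereo disparity (pixels)."""
    if disparity_px <= 0:
        # A feature that doesn't shift between views carries no depth signal.
        raise ValueError("feature must shift between views to triangulate")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: ~1000 px focal length, cameras 0.5 m apart.
# Nearby objects shift a lot between photos; distant ones barely move.
near = depth_from_disparity(1000, 0.5, 100)  # large shift -> close: 5.0 m
far = depth_from_disparity(1000, 0.5, 25)    # small shift -> far: 20.0 m
print(near, far)
```

A real system like Photosynth does this over many features across dozens of photos at once (bundle adjustment), solving for the camera positions and the 3D points simultaneously — which is why the M-key camera path it recovers is so accurate.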

The future of Photosynth

As exciting as the original Photosynth was, we never really saw the tech come to fruition. In theory it is built into Bing Maps, which can bring up geo-tagged synths, but it never hit the critical mass required. For the most part, the Photosynth website seems to be Yet Another Gigapixel Panorama repository. (Read: Ricoh Theta: The first camera that can take spherical 360-degree panoramas.)

With these new 3D views, though, it’s easy to see the parallel with competing services such as Google’s Street View, especially when you consider that the Photosynth team moved from Microsoft Research to the Bing Maps department a few years ago. For now, Photosynth 3D is just a tech preview, but hopefully Microsoft can find a way to bring it to the mass market. The tech is simply too cool to keep hidden away in the vaults. Copyright issues aside, imagine if Microsoft just left a few hundred Photosynth servers running in the background, joining up all of the photos on Flickr and Facebook to create a 3D panorama of the entire world…


What he showed in the TED talk was pretty impressive, with the 3D models that were created, though looking at the actual panoramas was a bit disappointing because of how jumpy it was between each frame. Seems it would be smarter to do this with video so that the frame transitions are smoother, though I am sure people will argue that you’ll lose resolution. But seriously, a smart idea to get the 3D information from the parallax and use it for this. Can’t wait to see where this goes.

But the main point is creating 3D scenes from disparate still images, I think — not from user-uploaded sequential pics.

Dozerman

Video is much, much different than Structure From Motion, although the two are compatible. SFM from video is something I’ve been asking for for years on the internet, first with BundlerSFM, then with Photosynth, and finally with 123D Catch (which is, honestly, much better than Photosynth). So far, it still hasn’t happened. Guess I’ll just have to write my own… God, I wish I had the time…

Dozerman

No, you’re right, video would be a better option. From my experience, when it comes to this kind of software, the number of frames is more important than the resolution. Back in 2010, I got a new camera that produced 14MP images. My old one had a burst mode that produced 3MP images at 7 FPS. I did two synths of the same guitar: one with 30 of the 14MP images, the other with 50 of the 3MP images. The 3D reconstruction from the 50 images turned out way better than the one from 30. The reason is that Photosynth looks for the same features across a series of pictures, the 3D effect being drawn directly from the motion between the images. If the features show up in a lower-resolution image, they’ll show up in a hi-res version as well, but with fewer images you don’t capture as much of the motion, so you get a sparser point cloud.

Dozerman

This is not new. I’ve been using Photosynth in this form for over four years. What the author refers to as the “original” Photosynth is actually a renamed project that started as Microsoft ICE. Microsoft destroyed the real Photosynth through neglect and poor coding, eventually deprecating it because no one used it anymore, which was, ironically enough, Microsoft’s fault. They then gave ICE PS’s name and created mobile versions of it. They are essentially releasing a much-needed upgrade as a new product.

chojin999

Very old technology. And the results from the Microsoft implementation are really bad.


NothingIsTrueAll_IsPermitted

Seems like in the future, the “art department” of AAA video game developers will be one schmuck jet-setting around the world taking 3D photos of visually interesting places and spaces. If they get to where you can navigate these reconstructed environments using WASD, then I imagine games will soon become literally photorealistic, because they’ll actually use photos to make the graphical assets on some of them.

