Topic: Video

Possibly take the video and separate the frames, using each frame as an image for the synth?
Sounds cool, and would be useful: it would super-simplify the process of synthing. Shoot video of an area, walk around a bit, upload the video, done!
-Z
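The "separate the frames" step above can be done with a stock tool like ffmpeg; a minimal sketch, assuming the video is called walkthrough.mp4 (the filenames and rate here are just examples):

```shell
# Pull roughly 2 frames per second out of the video as high-quality JPEGs,
# which can then be uploaded to Photosynth like ordinary photos.
mkdir -p frames
ffmpeg -i walkthrough.mp4 -vf "fps=2" -q:v 2 frames/frame_%04d.jpg
```

Sampling a couple of frames per second (rather than all 30) keeps the synth job manageable and skips most near-duplicate frames.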

A few people have done this. Here's one example: http://photosynth.net/view.aspx?cid=2c954b62-5526-4ddb-bc52-bdc09e1c2592. It combines a few shots taken with a still camera plus some video.
The two main problems with video are:
1) Motion blur on the frames. (You need to move very slowly, use only a small fraction of the frames, and have good lighting.)
2) Low resolution. Even with the nicest video cameras you're not going to capture anything that's worth zooming in to.
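The motion-blur problem above is exactly why you'd keep "only a small fraction of the frames". A minimal sketch of how to pick that fraction, using the variance of the Laplacian as a sharpness score (frames here are plain 2-D lists of grayscale pixel values; with real video you'd decode frames first, e.g. with OpenCV):

```python
# Keep only the sharpest frames from a video: score each grayscale frame
# by the variance of its 4-neighbour Laplacian (a common blur measure;
# higher variance = more crisp edges), then keep the top fraction.

def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian over interior pixels."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def keep_sharpest(frames, fraction=0.25):
    """Return the indices of the sharpest `fraction` of frames, in order."""
    ranked = sorted(range(len(frames)),
                    key=lambda i: laplacian_variance(frames[i]),
                    reverse=True)
    keep = max(1, int(len(frames) * fraction))
    return sorted(ranked[:keep])
```

A blurred frame has soft edges, so its Laplacian values cluster near zero and the variance drops; throwing away low-variance frames discards most motion-blurred shots before the synther ever sees them.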

Of course this only really works if there are enough stationary objects in the scene that remain the same between the photosynth and the video.
Were you talking about just the ability to see videos move around synths, or specifically the ability to see that happen while the video is being sent to you live? Either one is exciting to me.

From a strictly practical standpoint, video has a few great uses in the current version of Photosynth. The most compelling use is the "Rotational Move" that produces the donut-like functional button in the Photosynth viewer. Using video to circle a single object greatly increases the fluidity of the motion and naturally creates a path of images for the rotation to follow. As long as a circular arc is shot around a set point, Photosynth should have an easier time following a handheld video arc than a shot-by-shot series of single-image guesses.
I recommend manually setting cameras to "shoot fast" (a short shutter speed) when capturing such videos where the camera actually moves, because otherwise the frames will be blurred. Consumer video cameras also lag far behind SLRs with their many interchangeable lenses, making matters even more difficult. Furthermore, I expect that few people use video decompilers often. Nonetheless, with recent advances in HD camera tech, I want to see testing.

Have you ever realised that many videos uploaded to the web were shot at the same show, but from different points of view? I mean public events like concerts, parades, plays, press conferences...
I wonder whether those videos could be of any use if you could collect enough of them and find a way to align them in time. You could then grab one frame from each and every video source and use the frames for a photosynth. Then step one frame ahead (or some fixed time fraction) and repeat.
Wouldn't that give you some kind of "videoSync"?
(Hope this makes any sense at all. Sorry for my English.)
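The time-alignment step in the "videoSync" idea above can be sketched with simple arithmetic: once each video has a known start offset on a shared clock (in practice you'd recover offsets from audio alignment or a visible cue; the numbers below are made up), stepping to any shared moment t gives you one frame index per video.

```python
# "videoSync" sketch: each video is (start_offset_seconds, fps, n_frames),
# where start_offset is when that video began relative to a common clock.
# For a shared moment t, compute the matching frame index in every video;
# those frames together form one synth, then t advances and you repeat.

def frames_at_time(videos, t):
    """Return one frame index per video for global time t,
    or None for a video that wasn't recording at that moment."""
    out = []
    for start, fps, n_frames in videos:
        idx = round((t - start) * fps)
        out.append(idx if 0 <= idx < n_frames else None)
    return out
```

For example, with one video starting at t=0.0 at 30 fps and another starting 2.5 s later at 25 fps, `frames_at_time(videos, 10.0)` picks frame 300 from the first and frame 188 from the second.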

More along the lines of what you describe has actually been demonstrated in a research project from ETH Zurich and University College London called Unstructured Video-Based Rendering. It does need a synth to register the videos to, but that shouldn't be a problem.
Video summary: http://www.vimeo.com/12062502
Project website with downloadable interactive demo and additional videos: http://cvg.ethz.ch/research/unstructured-vbr/

When one uses video instead of still images, several problems occur in my synth:
1- The frames extracted from the video do not carry any information about focal length or the other camera metadata.
2- There is great distortion in the synth. For example, I shot a video from a moving van and extracted about 300 frames; my point cloud was distorted into a panorama-like shape, whereas it should show straight-line surfaces (http://photosynth.net/view.aspx?cid=9cab958c-596f-414e-ad77-6869a5601f52).
3- Also, the camera calibration parameters have very large errors: instead of a focal length of around 28 mm, I got around 200 mm.
4- The relative positions of the cameras are also wrong.
I think the problem occurs because no camera information is available in the headers of the frames extracted from the video.
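One workaround for the missing-metadata problem above: if you know (or can measure) your camera's horizontal field of view, you can compute the 35 mm-equivalent focal length yourself and write it into each extracted frame's EXIF with a tool such as exiftool, so the solver doesn't have to guess. A minimal sketch; the 65-degree FOV is an assumed value for a typical wide phone/camcorder lens, not from the post:

```python
import math

def focal_35mm_equiv(hfov_deg):
    """35 mm-equivalent focal length from horizontal field of view.
    A full-frame sensor is 36 mm wide, so f = (36 / 2) / tan(hfov / 2)."""
    return 18.0 / math.tan(math.radians(hfov_deg) / 2.0)

# An assumed ~65 degree horizontal FOV gives roughly 28 mm equivalent,
# close to the value the poster expected the solver to recover.
print(focal_35mm_equiv(65.0))
```

Seeding every frame with a sane focal length like this should also pull the camera positions back toward plausibility, since the solver no longer trades focal length against baseline to explain the motion.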