Hey fusionWannbe, you might wanna check out this video from Eric about UV unwrapping for the animation on the floor and stuff. I think it should work; I roughly drafted it out for you (see comp).
You can use the same technique for the flowers part. For the buildings & unfolding trees, you might wanna check out Raf's Fold tool, but some hand animation and setting up the buildings has to be done.

pavement.zip.txt (169.27 KB, 8 downloads)
remove the (.txt)
Anyway, for me it's a mixture of a very good track, Raf's new Shape Create 3D tools, and lots of hair pulling.
Good luck.

Nice work, Dunn!
SE plus Fusion would be perfect for that job. The thing is: you can use that UV unwrapping as a guide (especially for roto), but ultimately you don't have to. I'd just put those animations (with alpha) onto ImagePlanes that are aligned in Syntheyes, or in Fusion based on a point cloud.
The animations can be done with various Krokodove tools for sure. Fold 3D comes to mind.

@Dunn - your pavement comp certainly works well for drawing the moving colored ribbons with shadows. Thanks!! It's quite complex with all the different renderers. I'm going to have to wrap my head around it to understand how it all works.

@Tilt - so I should be able to use this ribbon comp example to draw ribbons with shadows onto ImagePlanes - possibly using other Krokodove tools to accomplish interesting effects?

The ImagePlane itself will be a 'static' surface to draw on, but as the camera moves around in a room, the geometric shape that may start out as a nearly square rectangle may become something very different (at least in the image sequence).

Will I be able to draw on a constant size nearly square rectangle and let Fusion take care of warping it to fit the change in camera angle - or will the ImagePlane itself change in proportions (I'm thinking of the windows in the SynthEyes Planar Tracking tutorials that change aspect as the camera moves around the building) so that I'm drawing onto a surface that's changing?

BTW - Really looks like Syntheyes has done a nice job with the new additions - and your exporter should really make it seamless for use with Fusion.

I'm close to publishing the whole toolset. The python scripts and the regular Syntheyes exporter serve two different purposes: The regular exporter works the way all exporters in Syntheyes work and it's meant to write a new comp file from scratch with as many features as possible.

The Python connection from Syntheyes to Fusion is able to create a comp, but most importantly, it can update a comp intelligently. It will add new trackers or cameras but it will also find and update existing cameras if you run the script a second time (if you don't touch the tool names in Fusion). The downside is, it doesn't (yet) export obj meshes (I would need to write a Python obj exporter) and probably lacks lots of other advanced features of Syntheyes because I'm not a good enough matchmover to even know about these features. (LIDAR? surveys? motion capture? )
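The "create or update" logic described above can be sketched in plain Python. This is only a toy stand-in to show the idea of matching on tool names: a dict plays the role of the Fusion comp, whereas a real script would go through Fusion's scripting API (roughly `comp.FindTool(name)` / `comp.AddTool(...)`; check the scripting docs for exact signatures).

```python
# Toy sketch of "add new, update existing" matching on tool names.
# A dict stands in for the Fusion comp; in a real script you'd use
# Fusion's scripting API (FindTool / AddTool) instead.

def sync_cameras(comp, syntheyes_cameras):
    """Add new cameras; update existing ones matched by tool name."""
    for name, keyframes in syntheyes_cameras.items():
        if name in comp:                    # a tool with this name exists
            comp[name]["keyframes"] = keyframes  # update it in place
            comp[name]["updated"] = True
        else:                               # no match -> create a new one
            comp[name] = {"keyframes": keyframes, "updated": False}
    return comp

# One existing camera in the comp, two coming from the tracker:
comp = {"Camera3D_shot1": {"keyframes": [0.0], "updated": False}}
tracked = {"Camera3D_shot1": [0.0, 1.5, 2.0],  # existing -> updated
           "Camera3D_shot2": [0.0, 0.5]}       # new -> added
sync_cameras(comp, tracked)
```

As the post notes, this breaks as soon as someone renames a tool in Fusion, which is exactly the trade-off discussed below.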

The third script, the one that sends a clip from Fusion to Syntheyes, is the simplest one, and it's just meant to be a useful shortcut if you already have a Loader with the footage in a comp.

Your efforts are certainly spurring me to spend some time with SynthEyes and get up to speed.

I would guess that once you get the next release working and have a video, if you send them to Russ, he might have some ideas for you - or possibly the other folks on the SynthEyes forum? Maybe he'd even put in some time himself on the coding, as he'd certainly benefit.

If he added the video - or created one of his own - for the SynthEyes YouTube channel, that would also give Fusion more visibility!

The Python connection from Syntheyes to Fusion is able to create a comp, but most importantly, it can update a comp intelligently. It will add new trackers or cameras but it will also find and update existing cameras if you run the script a second time (if you don't touch the tool names in Fusion).

Maybe you should use node metadata to identify and update - you can create and search for your own data set.
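For what it's worth, the tagging idea could look roughly like this. Again a toy sketch with a dict per tool; in Fusion itself you'd store the tag through the tool's user-data mechanism (the key name `SyntheyesID` here is purely hypothetical):

```python
# Toy sketch: tag tools with a custom metadata key so a script can
# find them again regardless of their display name.
# In Fusion you'd use the tool's user-data facilities instead of a dict.

SYN_KEY = "SyntheyesID"  # hypothetical custom key, not a Fusion built-in

def tag(tool, syn_id):
    """Attach a Syntheyes identifier to a tool."""
    tool[SYN_KEY] = syn_id

def find_by_id(tools, syn_id):
    """Return the names of all tools carrying the given Syntheyes ID."""
    return [name for name, data in tools.items()
            if data.get(SYN_KEY) == syn_id]

tools = {"Camera3D_1": {}, "Merge3D_1": {}}
tag(tools["Camera3D_1"], "shot42_cam")
find_by_id(tools, "shot42_cam")  # -> ["Camera3D_1"]
```

The lookup survives a rename in Fusion, but as the reply below points out, the tag is copied along with the tool, so duplicates all answer to the same ID.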

I tried this once, but tool metadata is only nice in theory: it gets copied along with the tool, and you have no means in Fusion to view or edit that user data. Unlike with a comment field, you don't even get an icon showing whether a tool has metadata. So you might end up with two cameras (if you want to do projections, for example) that both indicate they are based on that particular Syntheyes track, but one might now be a projector without animated keyframes, or simply a copy you made to import new animations from a different app. And if you rename a camera in Syntheyes, my script would no longer be able to find the correct camera in Fusion, with or without metadata.