I'm working on some long tracks (1500 frames) with the optical flow tracker. When solving, there is no progress in the Camera Tracker solve window; it stays at 0%. I think it is doing its job, but it would be great to see the progress.

It is a very difficult shot. In previous attempts solving took ~20 minutes, but I never managed to get an average solve error below 5. Same thing here, though: the window showed 0% progress. For this run I tracked an enormous number of trackers. The file size of the comp is 800 MB and it has now been solving for 3 hours. I hesitate to kill it, because it might finish in a minute! I'd love to see the %.

I have to add that my background is in the animation industry and I do not have professional tracking experience.

If the solve is that large, you should try dividing the track up into shorter pieces and see if it will track. A 3D tracker needs to find a way to make sure that *everything* it tracks matches across the whole range of frames. The longer a shot is, the longer a solve will take (and, most of the time, the more residual errors will appear).

You say it is a difficult shot: why is that? Is there any chance you can share it, so we can help a little better?

Fusion's 3D tracker can do quite a lot for what it is, but it's not a dedicated tracker like Syntheyes is for example.

I haven't actually tried the Camera Tracker in Fusion yet, but apparently the default settings have auto-select seed frames turned on, and that analysis takes a long time: it just sits at 0% for a while. Try turning auto-select off, set the seed frames manually, and then try to solve.

It's a pro bono project. I will ask the director if it is possible to upload footage.

It is a handheld camera shot in the mountains. This is the hero shot in a sequence with sky replacements and, if possible, fog to be added. I just began working on it and these are early tests. At the moment I don't have the original footage, so I'm using HD MP4s of it.

I'm considering purchasing SynthEyes, but I'm not too happy about it. As I said, I work in the animation industry and I probably won't use it much apart from this project.

Depending on how much the MP4s are compressed, that could also be part of the issue. If your tracking feature is really a compression artefact, things get difficult pretty quickly.

I keep being surprised by the willingness of directors to ask for the *most difficult things possible* as a pro bono project. If it's a one-minute continuous take, I'm pretty sure there are actors in the shot as well? This complicates tracking even more. Did you mask all the actors out before tracking? Either way, this is a gargantuan undertaking, and unless you're personally invested in the project, I feel that no one should ask weeks and weeks of free work from anyone else.

The most time-consuming part of the solve is allowing Fusion to choose its seed frames. If you set them manually, it's much quicker. And the progress stays at 0% during that time because it actually hasn't started the solve itself yet.

The advice you've already been given is good, but it's the seed frames that are killing the solve time. To choose seed frames, look for two frames that have many continuous trackers in common but that show as much parallax change between them as possible.
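To build intuition for the "many shared trackers, maximum parallax" rule, here is a rough Python sketch of scoring candidate seed-frame pairs. The track data and the scoring formula are invented for illustration; this is not what Fusion does internally, just the idea behind the advice.

```python
# Hypothetical sketch: score every pair of frames by how many trackers
# they share and how far those shared trackers move between the two
# frames (a crude parallax proxy). Pick the highest-scoring pair.
from itertools import combinations

# tracks: {track_id: {frame: (x, y)}} -- made-up normalized positions
tracks = {
    "t1": {0: (0.10, 0.50), 10: (0.15, 0.50), 20: (0.30, 0.52)},
    "t2": {0: (0.80, 0.40), 10: (0.78, 0.41), 20: (0.60, 0.45)},
    "t3": {10: (0.50, 0.50), 20: (0.55, 0.48)},
}

def pair_score(f_a, f_b):
    """Shared-tracker count times mean 2D displacement between frames."""
    shared = [t for t in tracks.values() if f_a in t and f_b in t]
    if not shared:
        return 0.0
    disp = sum(((t[f_b][0] - t[f_a][0]) ** 2 +
                (t[f_b][1] - t[f_a][1]) ** 2) ** 0.5 for t in shared)
    return len(shared) * (disp / len(shared))

frames = sorted({f for t in tracks.values() for f in t})
best = max(combinations(frames, 2), key=lambda p: pair_score(*p))
print(best)  # the frame pair with the best shared-trackers/parallax trade-off
```

In this toy data the winner is the pair with large camera movement between the frames, even though a closer pair shares one more tracker, which mirrors the trade-off described above.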

Thanks for the replies. Don't worry, the director is a good friend and he has helped me out with my poor screenwriting countless times over the last decade. We have an agreement that the effort I put into the project has to be reasonable.

Good to know about the seed frames. I will cancel the solver now (it's still busy at 0%) and run another test with manually chosen seed frames.

Yes, the actors, horses, and sky are masked out.

BTW, is it possible to delete images after posting them here in the forum? I think the director would be more comfortable with the images being online only temporarily.

Yes, I am aware of that. All tasks are related to the weather. Weather/lighting conditions changed a lot during shooting. A gloomy and, if doable, foggy look is desired for this sequence. For this shot my idea is to get some fog behind the hill in the background, for which I believe I need a 3D track. But if this shot can't be tracked in a reasonable amount of time, I would stick to the sky replacement only.

I had some issues with manually chosen seed frames: Fusion crashes a few seconds after I start the solve. I added a fresh CameraTracker node and I'm back to fewer tracking points with auto-seed (5254 tracks). The solver is doing its job again, but I am unable to get a decent result. A solve error of 5.4 is the best I achieved, but maybe I'm not using it right. Are there any tools to smooth out the tracking data?

I will wait until I get the HDD with the Alexa-footage and see if this makes a difference.

There is a Smooth Points command in the context menu of the Spline view. Select some keyframes, right-click, and smooth the points. I frequently find it better to just delete bad keyframes and use the tangent handles to manage the curve rather than trying to use a smoothing algorithm. Obviously that's a matter of personal preference, though.

You'd want to do your smoothing on a copy of the actual exported camera keyframes. I don't think there's a way to filter the motion of the trackers themselves in Fusion, other than just having more of them, which it sounds like you've accomplished.
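As a toy illustration of the kind of filtering you could apply to a copy of the exported camera channels outside Fusion, here is a minimal centered moving average in Python. The channel values are invented, and a real cleanup would likely use proper spline fitting rather than this crude filter.

```python
# A minimal sketch of smoothing one exported camera channel
# (e.g. camera X position, one sample per frame) with a centered
# moving average. Endpoints use a shrunken window.

def smooth(values, radius=1):
    """Centered moving average over a list of per-frame values."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - radius)
        hi = min(len(values), i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# A noisy, made-up camera-X channel:
x = [0.0, 1.2, 0.9, 1.1, 2.3, 2.0]
print(smooth(x))  # the same channel with the jitter averaged down
```

Running this on a duplicate of the exported keyframes (never the originals) keeps the raw solve around for comparison, as suggested above.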

Sander de Regt wrote:If the solve is that large, you should try dividing the track up in shorter pieces and see if it will track.

Makes sense, but how would I actually do that? In the Track tab I can define a track range, but when I track only parts, the solver refuses to work because the tracks from the parts aren't connected to it.

Like I mentioned before: without seeing the shot it's difficult to give you good advice. Looking at the images you provided, it appears that there is also focus pulling in the shot, which makes tracking even more difficult, since whatever you're tracking will change shape and size as well.

How much distance is traversed in the shot? If you need fog in the background maybe a 2D track will work as well. But it all comes down to: what is the shot?

but when I track only parts, the solver refuses to work because the tracks from the parts aren't connected to it.

I am not sure what you mean by this. If you set your render range, for example from 0-100, and you do your track in that range, it will come back with a solve (unless it's not solvable). In any case, the images you showed us have quite a lot of detail, so there should be plenty to track.

When tracking in pieces, you'll wind up with several different Camera3D nodes. It's an entirely separate track for each segment. You'll need to do some transforming to get the cameras into registration with one another for a seamless hand-off.
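As a very simplified sketch of that registration step, assume each solved segment gives you per-frame camera positions and the two segments share one overlap frame. The example below aligns translation only; a real hand-off also needs rotation and scale matched, and all the numbers here are made up.

```python
# Hypothetical illustration: shift the second camera segment so its
# position coincides with the first segment at a shared overlap frame.
# Each segment is {frame: (x, y, z)} camera positions in its own space.

def register(seg_a, seg_b, overlap_frame):
    """Return seg_b translated so it matches seg_a at overlap_frame."""
    ax, ay, az = seg_a[overlap_frame]
    bx, by, bz = seg_b[overlap_frame]
    dx, dy, dz = ax - bx, ay - by, az - bz
    return {f: (x + dx, y + dy, z + dz) for f, (x, y, z) in seg_b.items()}

seg_a = {0: (0.0, 0.0, 0.0), 50: (5.0, 0.0, 1.0)}
seg_b = {50: (0.0, 0.0, 0.0), 100: (4.0, 1.0, 0.0)}  # solved in its own origin
aligned = register(seg_a, seg_b, 50)
print(aligned[50])  # now coincides with seg_a's camera at the overlap frame
```

Because each segment's solve has an arbitrary origin and scale, an overlap of several frames (not just one) makes it much easier to also estimate the rotation and scale difference between the two camera paths.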