Double Point Tracking Tutorial?

Thanks to Jamie and others I'm now comfortable with the basics of single point tracking, and am finding it quite useful.

Double point tracking seems simple enough in theory, but in practice I don't seem able to get it working. I've watched a couple dozen vids on YouTube, but they are mostly about single point tracking, or more advanced techniques like Mocha which I'm not ready for yet.

Let's say I'm tracking a face which is moving towards the camera, i.e. scaling. The red track box and any attached object should scale right along with the head, yes? When I do it, the attached object gets smaller while the face gets bigger. I'm clearly doing something wrong, but I'm not sure what.

For starters, can anyone link to a vid tutorial specifically about double point tracking, perhaps with included materials shown in the vid so I can try to replicate what the teacher does in their tutorial?

Regarding your point about the red box not getting bigger: you're looking for functionality that isn't there. It stays the same size because you told the tracker that the area inside the red box is what you want tracked. It's your focal point, the sweet spot. The green box is the area where the tracker searches for the red box's content when the frame changes. Both of these can be changed per frame as you track, so if your footage scales and the area in the red box becomes bigger than the original red box, you need to manually scale it up and start tracking again. Sorry, I know it sounds like a pain, but that's just the way it is.

I did watch that video, and will watch it again. The video also introduced other interesting concepts which I hope to explore further down the road.

Let's see if I understand you.

With single point tracking of a face I need only track a small area on the face, say an eye. The eye will go up/down, left/right with the rest of the face, and that's enough to tell my top layer where to go and when. I know how to do this.

With double point tracking I have to define the entire face with the red box, and I have to adjust the red box continually to keep it covering the face as the face scales and rotates. Yes?

If true, then double point tracking might be described as being more powerful than single point tracking, but also less automated. Is that a fair description? Do I get it?

If I do get it, the next question would be: is there a method available in HitFilm which provides scale and rotation tracking in a more automated manner? Mocha perhaps? Something else?

@PhilTanny As you said, with single point tracking of a face you only need to track a small area on the face such as an eye.

With double point tracking, you need to track two small areas of the face, such as both eyes. Hence, double point tracking.

When you set the track type to "Double Points" you should see two sets of the red and green boxes. One for each point.

So you would put one over each eye (or you could use eyebrows, in case the actor blinks), and then HitFilm will track both points in the same way as for single-point tracking, but now because there are two points, it is also able to calculate scale and rotation.
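To illustrate why two points are enough, here's a rough Python sketch of the geometry (purely illustrative, not HitFilm's actual code): the vector between the two tracked points acts like a ruler, so its change in length gives scale and its change in angle gives rotation.

```python
import math

def scale_rotation(p1_start, p2_start, p1_now, p2_now):
    """Derive scale and rotation from two tracked points.

    Each argument is an (x, y) tuple: the two points' positions on
    the first frame and on the current frame. The length change of
    the vector between them gives scale; the angle change gives
    rotation.
    """
    dx0, dy0 = p2_start[0] - p1_start[0], p2_start[1] - p1_start[1]
    dx1, dy1 = p2_now[0] - p1_now[0], p2_now[1] - p1_now[1]
    scale = math.hypot(dx1, dy1) / math.hypot(dx0, dy0)
    rotation = math.degrees(math.atan2(dy1, dx1) - math.atan2(dy0, dx0))
    return scale, rotation

# Eyes 100 px apart on the first frame, 150 px apart now -> 1.5x scale
print(scale_rotation((0, 0), (100, 0), (10, 5), (160, 5)))  # (1.5, 0.0)
```

A single point can't do this: one position alone carries no information about length or angle, which is why single point tracking only gives you position.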

HitFilm doesn't know face vs background. It's simply looking for the pattern of pixels that you established when you first set up the tracking boundaries. The red box defines the pattern you want it to match, while the green box is the area where the tracker will look on the following (or previous) frame to find that same pattern as it tracks forward or backward. That's why areas with high contrast -- such as dark eyebrows against lighter skin -- are best for tracking. The video that @HitfilmSensei linked points this out as well.
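The pattern-matching idea can be sketched in a few lines. This is a toy sum-of-squared-differences search over a tiny grayscale grid, just to illustrate the concept; real trackers are far more sophisticated. The "pattern" plays the role of the red box and the "search" grid plays the role of the green box.

```python
def find_pattern(search, pattern):
    """Slide the 'red box' pattern over the 'green box' search area
    and return the (row, col) offset with the smallest sum of
    squared differences -- the same basic idea a tracker applies on
    each new frame."""
    ph, pw = len(pattern), len(pattern[0])
    best, best_pos = None, None
    for r in range(len(search) - ph + 1):
        for c in range(len(search[0]) - pw + 1):
            diff = sum(
                (search[r + i][c + j] - pattern[i][j]) ** 2
                for i in range(ph) for j in range(pw)
            )
            if best is None or diff < best:
                best, best_pos = diff, (r, c)
    return best_pos

# A bright 2x2 'eyebrow' against a darker 4x4 background
frame = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
eyebrow = [[9, 9], [9, 9]]
print(find_pattern(frame, eyebrow))  # (1, 1)
```

This also shows why high contrast matters: if the eyebrow pixels were barely different from the background, many offsets would score almost the same and the match would drift.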

Let's say I'm attempting to replace a face in the video with a new face. If I'm going to have to continually tweak the red box as the original face scales, why not just scale the replacement face?

That's what I do in Premiere Elements. I position the replacement face on top of the original face, and then advance the timeline, adjusting the replacement face every so many frames. Elements creates the in-between frames, of course.

This does work, sort of. I would describe the results as not bad, but not that good either. The challenge is that when I adjust the position, scale, etc. of the replacement face, I often introduce small imperfections which tend to make the face's movement look unnatural on playback.

I've improved on this somewhat by being careful to introduce as few of my own edits as possible. In the beginning I was creating lots of my own keyframes thinking that would improve accuracy, but as it turned out that isn't the way to go.

What I like (a lot) about single point tracking is that it's automated, which not only makes it easier, but more importantly makes the end result more accurate, smooth, and realistic. Single point tracking removes me from the equation, which removes the imperfections I introduce. But of course I can only go up/down, right/left.

Anyway, getting back on track here...

If I want a replacement face to scale along with the original face in HitFilm, don't I need to define the original face with the red box? Doesn't HitFilm need to know which object in the video the replacement face should be imitating?

@PhilTanny Maybe it's not explained clearly, but in the HitFilm interface it says "Single Point (Position Only)" or "Double Points (Position/Scale/Rotation)". As @Triem23 said, using two points for your tracking allows you to track scale and rotation, which is not possible with only one point.

One way to learn two-point tracking would be to film yourself holding a picture frame facing the camera and try to replace the image inside it. Start a few meters away and walk towards the camera, keeping the frame in place so that it always faces the camera.

To track this, place one point in one corner of the frame and the second point in the opposite corner, then track your footage. Check that when you play the video, the points stay on the corners. When you are happy with your tracking, you can do step 2 to "apply" the tracking, to a point for example. Import any image, resize it to match the original in the first frame, and then parent the image to the point. The image should now scale up and down to match your movement.
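Numerically, the picture-frame exercise comes down to this: the distance between the two tracked corners on each frame, divided by the distance on the first frame, is the scale factor applied to the parented image. A small Python sketch with made-up corner positions:

```python
import math

def frame_scales(corner_a, corner_b):
    """Given per-frame positions of two tracked corners, return the
    scale factor for each frame relative to the first frame."""
    base = math.dist(corner_a[0], corner_b[0])
    return [math.dist(a, b) / base for a, b in zip(corner_a, corner_b)]

# Opposite corners of the picture frame over three frames, drifting
# apart as the actor walks toward the camera (hypothetical numbers)
top_left = [(100, 100), (90, 92.5), (80, 85)]
bottom_right = [(300, 250), (310, 257.5), (320, 265)]
print(frame_scales(top_left, bottom_right))  # [1.0, 1.1, 1.2]
```

This is why the replacement image only needs to be sized correctly on the first frame: every later frame's size follows from the tracked corner distance.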

I went over these steps a bit quickly, but hopefully it helps. This is an old tutorial but still relevant for understanding 2D tracking:

Everybody above, thanks for the tips and vids. I think I understand what I'm supposed to be doing, and how it's supposed to work now. You've helped clear up some concepts I was obviously confused about.

I'm still having trouble getting useful results, but I think the solution now is better footage and more practice. On to that!