Basic premise

When working with still images, one rarely has to worry about the alignment of an image after it has been placed. If you are inserting a logo into a photo for an ad, you position it based on space and readability, and it stays put until you say otherwise. We are not so lucky with moving footage.

The uses of motion tracking software are many.

  • Matching an inserted element with a background. For instance, if your hero is running away from a dragon, it won't do at all to film such an energetic scene with an unmoving camera. The camera needs to move, and the actor has to appear to move with the background.

Reference points

Almost no motion tracking software works by examining the entire scene you present it with. Trying to rebuild a 3D scene from 2D footage is very processor-intensive and not always conducive to a good workflow. Even with good software and a powerful computer, the nature of 3D re-creation introduces far more opportunities for error than simple 2D tracking does, so it should be reserved for complex visual effects where information has to be sent back and forth between the footage and the effects software (for instance, a 3D modeler would need live-action camera information to know what angle to render a computer-generated monster from before it can actually be placed into that footage).

Instead, it is usually cleaner to assign "points" within a piece of footage that a computer can track over time. These points are typically user-defined to start with; the computer then uses a combination of image values and heuristics to predict the movement of those points through the footage.

Let's look at what makes for smart tracking points.

  • High-contrast areas.
  • Parts of the image that stay in focus.
  • Literal points: small, distinct features such as corners, rather than long edges or flat regions.
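As a sketch of why these criteria matter, the Harris-style corner score below (a common heuristic assumed here for illustration, not any particular program's method) measures how strongly the pixel gradients in a small window vary in both directions. It comes out high at a literal point such as a corner, near zero in flat or out-of-focus areas, and negative along a straight edge, which a tracker could otherwise slide along without noticing.

```python
def corner_score(frame, cx, cy, r=2):
    """Harris-style corner response for a small window centred on
    (cx, cy) in a grayscale frame (list of lists of pixel values).
    High where gradients vary in both directions (a corner), near zero
    in flat regions, negative along a straight edge."""
    sxx = sxy = syy = 0
    for y in range(cy - r, cy + r + 1):
        for x in range(cx - r, cx + r + 1):
            # central-difference gradients in x and y
            gx = frame[y][x + 1] - frame[y][x - 1]
            gy = frame[y + 1][x] - frame[y - 1][x]
            sxx += gx * gx
            sxy += gx * gy
            syy += gy * gy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - 0.05 * trace ** 2  # Harris response with k = 0.05
```

On a synthetic frame containing a bright square, the score ranks the square's corner above both a flat region and a point along one of its straight edges, which is exactly the ordering a tracker wants when choosing points.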