When tracking multiple objects, does the visual system encode the location and trajectory of tracked objects? Is encoding triggered only by the abrupt changes that typically occur in the real world, such as when objects disappear behind other objects? We extend our 2009 work examining the role of location coding in Multiple Object Tracking (MOT) using a novel blink-contingent method, which lets us simultaneously control item disappearance and abrupt transitions.1 Here we introduce backward masking to the eye-blink paradigm to further control onset transitions. Observers were instructed to blink their eyes when a brief tone was presented midway through each 5 s tracking trial (4 of 8 circles). Eye-blinks triggered two events: item disappearance (for 150, 450, or 900 ms) and onset of a mask occluding the entire display of items (either for the full disappearance interval, or for 75 ms followed by a blackout for the remainder of the interruption). During their disappearance, objects either continued moving along their trajectories or halted until reappearance; thus “move” objects reappeared further along their trajectory, whereas “halt” objects did not. Results replicate Keane & Pylyshyn (2006) and Aks et al. (2009), with better tracking when items halt [though here only reliably in the 450 and 900 ms trials]. These trends indicate that trajectory information is not encoded during tracking, and that the visual system may refer back to past position samples as a ‘best guess’ for where tracked items are likely to reappear. Importantly, the “halt” advantage occurred in both blocked and randomized presentations of object motion, suggesting an automatic, data-driven tracking mechanism, one not inclined to predict objects' trajectories even when presented in a repeated, and thus predictable, context. [1Although an eye-blink is a sudden event optically, we are typically unaware of it, and it is likely not encoded as a transient event.]
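The geometric difference between the “move” and “halt” conditions can be sketched as follows. This is a minimal illustration of the reappearance logic described above, not the authors' actual stimulus code; the function name, coordinate units, and velocity representation are all assumptions.

```python
# Sketch of "move" vs. "halt" reappearance positions after a
# blink-triggered disappearance. Names and units are illustrative.

def reappearance_position(pos, velocity, gap_ms, condition):
    """Where a tracked item reappears after a gap of gap_ms.

    pos:       (x, y) position at the moment of disappearance
    velocity:  (vx, vy) in pixels per millisecond
    gap_ms:    disappearance duration (150, 450, or 900 ms in the study)
    condition: "halt" -> item froze until reappearance
               "move" -> item kept moving along its trajectory while hidden
    """
    if condition == "halt":
        # "halt" items reappear exactly where they vanished
        return pos
    x, y = pos
    vx, vy = velocity
    # "move" items reappear further along their (linear) trajectory
    return (x + vx * gap_ms, y + vy * gap_ms)

# Example: an item moving rightward at 0.2 px/ms, hidden for 450 ms
print(reappearance_position((100, 100), (0.2, 0.0), 450, "halt"))  # (100, 100)
print(reappearance_position((100, 100), (0.2, 0.0), 450, "move"))  # (190.0, 100.0)
```

The better tracking observed for “halt” items is consistent with observers relying on the last sampled position (`pos`) rather than extrapolating along `velocity`.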