When preparing to intercept a ball in flight, humans make predictive saccades ahead of the ball's current position, to a location along the ball's future trajectory. Such visual prediction is highly accurate even in non-athletes and can reach at least 400 ms into the future. Furthermore, prediction is not simple extrapolation, but draws upon prior experience to account for likely target dynamics. This was demonstrated in a virtual-reality ball-interception task in which subjects were asked to intercept an approaching virtual ball shortly after its bounce upon the ground. Subjects' hand and eye movements were tracked with motion capture and an eye tracker as they attempted to intercept a virtual ball seen through a head-mounted display. On the majority of trials, subjects made pre-bounce saccades to a location along the ball's eventual post-bounce trajectory, where they fixated until the ball passed within 2° of the fixation location. Furthermore, the saccades demonstrated prediction of the eventual height of the ball at the time of the catch. In the current study, we use computational models to better understand the guidance of these eye movements. Subjects performed an interception task in which fast-moving balls left little time to guide the interceptive movement on the basis of post-bounce visual information. Thus, at the time of the bounce, subjects' hand height was predictive of the ball's arrival height. We modeled predictive hand placement as a combination of pre-bounce visual information and the predicted final arrival height (as indicated by the predictive saccades). Through computational modeling, we can differentiate between behavior that is biased toward a central tendency, behavior that suggests a learned mapping between hand position and pre-bounce kinematics, and behavior that indicates reliance on a Bayesian prior.
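One way to make the third hypothesis concrete is standard Bayesian cue combination: with a Gaussian pre-bounce visual estimate and a Gaussian prior over arrival heights, the posterior mean is a precision-weighted average of the two, and a strong prior pulls responses toward a central tendency. The sketch below is only an illustration of that general computation, not the study's actual model; the function name, parameter names, and the numbers in the comments are hypothetical.

```python
def posterior_arrival_height(visual_estimate, visual_sd, prior_mean, prior_sd):
    """Precision-weighted (Bayesian) combination of a pre-bounce visual
    estimate of the ball's arrival height with a prior over arrival heights.

    For Gaussian cue and prior, the posterior mean is a weighted average,
    with each source weighted by its precision (inverse variance).
    """
    visual_precision = visual_sd ** -2
    prior_precision = prior_sd ** -2
    w_visual = visual_precision / (visual_precision + prior_precision)
    return w_visual * visual_estimate + (1.0 - w_visual) * prior_mean

# Equally reliable cue and prior: the estimate lands halfway between them.
halfway = posterior_arrival_height(1.2, 0.2, 1.0, 0.2)   # 1.1

# A far more reliable visual cue dominates; the prior barely matters.
vision_led = posterior_arrival_height(1.2, 0.01, 1.0, 10.0)  # ~1.2
```

Under this framing, a heavy reliance on the prior (large `visual_sd` relative to `prior_sd`) predicts the regression toward a central arrival height that the modeling is designed to detect.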