Many perceptual cue-combination studies have shown that humans integrate information across modalities, as well as within a modality, in a manner that is statistically close to optimal. Here we asked whether the same rules hold for movement planning. We tested this in a pointing task in which information about the target location was provided by haptic feedback, by visual feedback, or by both during the pointing movement. Visual information was provided by briefly flashing three dots sampled from a Gaussian centered on the target position with a standard deviation of 4 cm. Haptic information was provided by pushing the index finger upwards with a PHANToM haptic interface; the strength of the force pulse (1 N to 3.5 N) indicated the target position. We measured the distance from the hit point to the target location, and subjects earned money for minimizing this distance. We could account well for these data after extending the standard maximum-a-posteriori (MAP) model of cue combination with a term that compensates for motor noise. Our model assumes that subjects select an aim point by optimally combining a prior with all available sensory information; in addition, motor noise is present in both unimodal and bimodal trials and cannot be reduced further. The model parameters were fitted to all conditions simultaneously on a trial-by-trial basis. The model accurately predicts the visual and haptic weights as well as subjects' performance. To test whether synchronicity influences how the nervous system combines cues, we also analyzed conditions in which visual and haptic information was presented with a temporal disparity. For our sensorimotor task, temporal disparity had no effect. Sensorimotor learning thus appears to converge on the same near-optimal cue-combination rules used by perception and to make use of all the available information.
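The MAP scheme described above can be sketched numerically: each Gaussian cue and the prior are weighted by their precisions (inverse variances), and the irreducible motor noise is added to the posterior variance to predict endpoint scatter. This is a minimal illustration, not the authors' fitting code; all noise parameters below are hypothetical placeholders for the per-subject values the model would estimate.

```python
import numpy as np

def map_estimate(cues, sigmas, prior_mu, prior_sigma):
    """Precision-weighted (MAP) combination of Gaussian cues with a Gaussian prior.

    Returns the posterior mean (the aim point), the posterior variance,
    and the weights assigned to each cue and to the prior.
    """
    precisions = [1.0 / s**2 for s in sigmas] + [1.0 / prior_sigma**2]
    values = list(cues) + [prior_mu]
    w = np.array(precisions) / sum(precisions)   # weights sum to 1
    est = float(np.dot(w, values))               # posterior mean
    var = 1.0 / sum(precisions)                  # posterior variance
    return est, var, w

# Hypothetical noise parameters (cm); the study fits such values per subject.
sigma_v, sigma_h, sigma_m = 0.8, 1.5, 0.4        # visual, haptic, motor SDs
prior_mu, prior_sigma = 0.0, 4.0                 # assumed Gaussian prior

x_v, x_h = 1.0, 2.0                              # noisy cue readings on one trial
est, var, w = map_estimate([x_v, x_h], [sigma_v, sigma_h], prior_mu, prior_sigma)

# Predicted endpoint variance adds irreducible motor noise to the posterior.
endpoint_var = var + sigma_m**2
```

Under this scheme the more reliable cue receives the larger weight, and the bimodal posterior variance is always smaller than either unimodal one, which is what the optimal-integration prediction amounts to.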