In my head, I would assume there is some speed gain on the processor side, since it only has to scan a much smaller area for highest contrast, and it also doesn't have to guess which AF point you wanted.

However, if it guesses wrong and racks the lens in the wrong direction, wouldn't it still have to travel the full range of the lens, so it essentially becomes limited by lens speed? Or perhaps the algorithm is smart enough to realize it's going the wrong way and reverse direction before reaching the end of the range?
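For what it's worth, contrast-detection AF is usually described as a hill climb: step the focus motor, watch whether the contrast reading goes up or down, and reverse as soon as it drops. So a wrong initial guess typically costs only a step or two, not a full rack of the lens. Here's a toy sketch of that idea; the contrast curve, step size, and position range are all made up for illustration:

```python
def contrast(pos, peak=70):
    # Toy contrast curve: highest at the in-focus position (peak).
    return 100 - abs(pos - peak)

def focus_hill_climb(start=0, step=5, lo=0, hi=100):
    """Hill-climb toward the contrast peak, reversing on the first drop."""
    pos = start
    direction = 1          # initial guess for which way to rack
    best = contrast(pos)
    steps = 0
    while True:
        nxt = max(lo, min(hi, pos + direction * step))
        c = contrast(nxt)
        steps += 1
        if c > best:
            pos, best = nxt, c       # contrast improving: keep going
        elif direction == 1:
            direction = -1           # first drop: guessed wrong, reverse
        else:
            return pos, steps        # dropped on both sides: at the peak

# Starting on the far side of the peak, the wrong initial guess
# is detected after a single step and the search reverses:
print(focus_hill_climb(start=80))
```

In this toy model, starting at 80 with the wrong initial direction still converges on the peak at 70 in just a few steps, because the first contrast drop triggers the reversal.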

Yes, I've noticed it's faster, and it makes sense to me. I've assumed the speed-up comes from the processor not first needing to run an algorithm over the data to decide which focus point to select automatically.