Implicit learning allows humans to exploit visual regularities without explicit awareness. For such a mechanism to provide maximal utility, it should be neither too stimulus-specific nor too general. Some previous studies report task-general learning, while others report task-specific learning, and it is unknown why these results differ. What determines the generalizability of implicit spatial learning? Here, we manipulated task difficulty as a novel test of this question. We employed a probability cueing manipulation, in which search targets are presented more frequently in one “rich” quadrant of the display than in the remaining “sparse” quadrants. Previous work has shown that observers gradually bias their spatial attention toward the rich quadrant, yielding faster responses to targets in that quadrant. In this study, during an initial training phase, easy and difficult visual search trials were intermixed, and each had its own rich quadrant: targets appeared more often in one quadrant on easy trials (the “easy rich quadrant”) and in another quadrant on difficult trials (the “difficult rich quadrant”). During the test phase, we transferred observers to a search task of intermediate difficulty, in which targets appeared equally often in each quadrant. We found a bias toward the easy rich quadrant not only on easy trials but also on difficult trials, whereas the bias toward the difficult rich quadrant appeared only on difficult trials. Moreover, the bias toward the easy rich quadrant – but not the difficult rich quadrant – generalized to the intermediate-difficulty trials at test. Further experiments showed that the failure to generalize from the difficult task was due neither to weak probability cueing (Experiment 2) nor to interference between the two simultaneous rich quadrants (Experiment 3).
These findings accord well with learning theories that predict asymmetric generalization based on task difficulty and extend those theories to the domain of implicit spatial learning.