Description

We have seen this issue on one of our clusters: when running a terasort MapReduce job, some mappers failed after the reducers had started, and the MR AM then tried to preempt reducers in order to schedule these failed mappers.

After that, the MR AM enters an infinite loop; on every RMContainerAllocator#heartbeat run, it:

As a result, the total number of requested containers increases by 1024 on every MRAM-RM heartbeat (one heartbeat per second). The AM hung for 18+ hours, so we ended up with 18 * 3600 * 1024 ≈ 66M+ requested containers on the RM side.
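The growth rate above can be sanity-checked with a small sketch; the constants (1024 requests per heartbeat, 1-second heartbeat interval, 18 hours) are taken from the observations in this report, not from the code itself:

```java
public class RequestGrowthEstimate {
    public static void main(String[] args) {
        long requestsPerHeartbeat = 1024; // observed increase per MRAM-RM heartbeat
        long heartbeatsPerHour = 3600;    // one heartbeat per second
        long hoursHanging = 18;           // how long the AM was observed hanging

        long total = hoursHanging * heartbeatsPerHour * requestsPerHeartbeat;
        // 18 * 3600 * 1024 = 66,355,200, i.e. the ~66M requested containers seen on the RM
        System.out.println(total);
    }
}
```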

This bug also triggered YARN-4844, which caused the RM to stop scheduling anything.