Description

Currently we limit the size of listStatus requests to a default of 1000 entries. This works fine except in the case of listLocatedStatus, where the location information can be quite large. As an example, for a directory with 7000 entries, 4 blocks per file, and 3-way replication, a listLocatedStatus response is over 1MB. This can chew up very large amounts of memory in the NN if many clients issue such requests simultaneously.

Seems like it would be better if we also considered the amount of location information being returned when deciding how many files to return.
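
For illustration, here is a rough back-of-envelope estimate of how those numbers add up to over 1MB per response. The per-entry and per-location byte costs are assumptions for the sake of the sketch, not measured protobuf sizes:

    public class LocatedStatusSizeEstimate {
      public static void main(String[] args) {
        int entriesPerBatch = 1000; // default dfs.ls.limit
        int blocksPerFile = 4;      // from the example above
        int replication = 3;        // 3-way replication
        // Assumed serialized sizes (illustrative only, not measured):
        int bytesPerEntry = 200;    // per-file status overhead
        int bytesPerLocation = 70;  // per block replica location
        long locations = (long) entriesPerBatch * blocksPerFile * replication;
        long bytes = (long) entriesPerBatch * bytesPerEntry
            + locations * bytesPerLocation;
        // Prints ~12,000 locations and roughly 1MB with these assumptions.
        System.out.printf("%d locations, ~%.2f MB per response%n",
            locations, bytes / (1024.0 * 1024.0));
      }
    }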

Suresh Srinivas
added a comment - 16/Jan/14 18:35

"a listLocatedStatus response is over 1MB"
These are short-lived objects and are garbage collected in the young generation. Does this cause a lot of issues?

"Seems like it would be better if we also considered the amount of location information being returned when deciding how many files to return."
Can you please add details about the solution?

Jason Lowe
added a comment - 16/Jan/14 19:51

They are usually short-lived, but they live a bit longer when we can't push them out over the network in a timely manner. Then, due to the lack of flow control in the RPC layer, we can fill up the heap with these, given a large enough average response buffer per call and enough clients. See HADOOP-8942.

This change mitigates the issue for listLocatedStatus, since a much smaller response payload means it takes far more simultaneous clients to consume the same amount of heap space.

Suresh Srinivas
added a comment - 16/Jan/14 22:42

"Then due to lack of flow control in the RPC layer we can fill up the heap with these given a large enough average response buffer per call and enough clients."
Jason Lowe, thanks for the pointer.
We can certainly reduce the number of files returned in each iteration, but that would increase the number of requests the NameNode has to process.
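
To make that tradeoff concrete (using the 7000-entry directory from the description, with 4 blocks x 3 replicas = 12 locations per file): today the listing takes ceil(7000/1000) = 7 RPCs at over 1MB each. If each batch were also capped at roughly 1000 locations, a batch would carry about 1000/12 ≈ 83 files, so the same listing would take around 85 RPCs, but with the illustrative byte costs above each response would be on the order of 100KB rather than 1MB.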

Nathan Roberts
added a comment - 16/Jan/14 22:59

A simple solution is:
Restrict the size to dfs.ls.limit (default 1000) files OR dfs.ls.limit block locations, whichever comes first (obviously always returning only whole entries, so we could send more than this number of locations).

Yes, it will require more RPCs. However, it would seem to lower the risk of a DoS.
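
A minimal sketch of that dual-limit idea (illustrative only; Entry and locationCount are hypothetical names, not the actual HDFS types or the real directory-listing code):

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    public class DualLimitBatcher {
      // Hypothetical stand-in for a located file status.
      interface Entry {
        int locationCount(); // total block locations for this file
      }

      // Stop a batch when either limit is reached. The location limit is
      // only checked before adding an entry, so the batch may overshoot it
      // by the final entry -- i.e., whole entries are always returned.
      static List<Entry> nextBatch(Iterator<Entry> dir, int lsLimit) {
        List<Entry> batch = new ArrayList<>();
        int locations = 0;
        while (dir.hasNext() && batch.size() < lsLimit && locations < lsLimit) {
          Entry e = dir.next();
          batch.add(e);
          locations += e.locationCount();
        }
        return batch;
      }
    }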

Daryn Sharp
added a comment - 23/Jan/14 16:45

For a bit more context, we had about 6-7k tasks (erroneously) issuing listLocatedStatus. Each limited response was over 1MB. The handler attempts a non-blocking write for the response; if the entire response cannot be written, the call is handed off to the background responder thread. The kernel accepts well below 1MB for a non-blocking write, so all of the responses ended up on the responder thread.

The call response byte buffers track the position of the last write, so the entire response buffer is retained until the full response is sent. Re-allocating a buffer holding only the unsent portion would likely introduce additional memory pressure, so the most logical and simplest change is to limit the response size of the located status.

The end result in our case was the heap bloating by over 8GB. Full GC kicked in, and the NN was unresponsive for up to 5 minutes at a time. Each time it woke up, it marked DNs as dead, causing a flurry of re-replications that further aggravated the memory pressure. Due to other exposed bugs, the NN required a restart.

Although more RPCs are required to satisfy large requests, I believe the tradeoff is reasonable. It's also not likely to be a common occurrence.
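
The retention pattern described above looks roughly like the following (an illustrative sketch with hypothetical names, not Hadoop's actual ipc.Server internals):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    public class ResponderSketch {
      // A call whose response could not be fully written; the same buffer,
      // with its position advanced past the bytes already sent, is kept
      // until the responder thread drains it.
      static final class PendingCall {
        final SocketChannel channel;
        final ByteBuffer response;
        PendingCall(SocketChannel channel, ByteBuffer response) {
          this.channel = channel;
          this.response = response;
        }
      }

      private final Queue<PendingCall> responderQueue =
          new ConcurrentLinkedQueue<>();

      void sendResponse(SocketChannel channel, ByteBuffer response)
          throws IOException {
        channel.write(response); // non-blocking: accepts far less than 1MB
        if (response.hasRemaining()) {
          // The entire >1MB buffer stays on the heap until fully sent.
          // Thousands of slow clients means gigabytes of retained buffers.
          responderQueue.add(new PendingCall(channel, response));
        }
      }
    }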

Kihwal Lee
added a comment - 23/Jan/14 16:51

The location counting can be off if blocks are under-replicated or over-replicated, but spending more cycles to make it perfect will be a waste. So I am okay with this approach.

+1