[ https://issues.apache.org/jira/browse/HADOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12573510#action_12573510 ]
Raghu Angadi commented on HADOOP-2907:
--------------------------------------
Regarding buffers: for readers, HADOOP-2578 will remove one of the three BUFFER_SIZE buffers. We should
file another jira to use a smaller buffer size for the crc file; then we will have only one large
buffer per reader. The same applies while writing data.
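To see why this matters for DataNode heap, here is a back-of-the-envelope sketch of per-reader buffer memory. This is not Hadoop code; the buffer size (64 KB) and reader count (500) are illustrative assumptions, not values from this issue.

```java
// Hypothetical estimate of heap consumed by per-reader stream buffers
// on a DataNode. All figures are illustrative assumptions.
public class ReaderBufferEstimate {

    // Total bytes of buffer heap for a given number of concurrent readers.
    static long heapForReaders(int readers, int buffersPerReader, int bufferSize) {
        return (long) readers * buffersPerReader * bufferSize;
    }

    public static void main(String[] args) {
        final int BUFFER_SIZE = 64 * 1024; // assumed io.file.buffer.size
        final int READERS = 500;           // assumed concurrent readers

        // Today: 3 BUFFER_SIZE buffers per reader.
        long before = heapForReaders(READERS, 3, BUFFER_SIZE);
        // After HADOOP-2578 plus a smaller crc buffer: 1 large buffer per reader.
        long after = heapForReaders(READERS, 1, BUFFER_SIZE);

        System.out.println("3 buffers/reader: " + (before >> 20) + " MB");
        System.out.println("1 buffer/reader:  " + (after >> 20) + " MB");
    }
}
```

With these assumed numbers, cutting from three buffers to one reduces buffer heap to a third, which is the kind of headroom that keeps a loaded DataNode from hitting OutOfMemoryError.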
> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>
> Key: HADOOP-2907
> URL: https://issues.apache.org/jira/browse/HADOOP-2907
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Christian Kunz
>
> We see more dead datanodes than in previous releases. The common exception is found in
the out file:
> Exception in thread "org.apache.hadoop.dfs.DataBlockScanner@18166e5" java.lang.OutOfMemoryError:
Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java
heap space