Akira Ajisaka resolved HDFS-6682.
---------------------------------
Resolution: Not A Problem
Target Version/s: (was: 2.8.0)
Closing this issue since HDFS-10341 was fixed.
bq. As Andrew suggested, recording the rate of addition/removal from
UnderReplicatedBlocks would be useful and straightforward to me.
If someone needs this, please create a separate jira and link to this issue.
> Add a metric to expose the timestamp of the oldest under-replicated block
> -------------------------------------------------------------------------
>
> Key: HDFS-6682
> URL: https://issues.apache.org/jira/browse/HDFS-6682
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Akira Ajisaka
> Assignee: Akira Ajisaka
> Labels: metrics
> Attachments: HDFS-6682.002.patch, HDFS-6682.003.patch,
> HDFS-6682.004.patch, HDFS-6682.005.patch, HDFS-6682.006.patch, HDFS-6682.patch
>
>
> In the following case, data in HDFS is lost and the client needs to put
> the same file again:
> # A client puts a file to HDFS.
> # A DataNode crashes before replicating a block of the file to other DataNodes.
> I propose a metric to expose the timestamp of the oldest
> under-replicated/corrupt block. That way, a client can know which file to
> retain for the retry.
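The metric proposed in the description above could be sketched roughly as follows. This is a hypothetical illustration, not the actual HDFS-6682 patch: the class name, method names, and use of a LinkedHashMap are assumptions for the sketch, and the real NameNode tracks under-replicated blocks in UnderReplicatedBlocks with different internals.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: record when each under-replicated block was first
// detected, and expose the timestamp of the oldest one as a metric value.
public class UnderReplicatedBlockTracker {
    // LinkedHashMap preserves insertion order, so the first entry is the
    // oldest under-replicated block still outstanding.
    private final Map<Long, Long> detectedAt = new LinkedHashMap<>();

    public synchronized void blockUnderReplicated(long blockId, long nowMillis) {
        // Keep the original detection time if the block is reported again.
        detectedAt.putIfAbsent(blockId, nowMillis);
    }

    public synchronized void blockRecovered(long blockId) {
        detectedAt.remove(blockId);
    }

    // Returns 0 when no block is under-replicated, a typical metric default.
    public synchronized long oldestUnderReplicatedTimestamp() {
        for (long ts : detectedAt.values()) {
            return ts; // first value in insertion order = oldest
        }
        return 0L;
    }
}
```

A client could compare this timestamp against the time it wrote a file: any file put before the oldest under-replicated timestamp and now reported healthy would not need to be retained for a retry.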
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org