Hadoop QA
added a comment - 21/Feb/12 21:38

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12515484/h2981_20120221.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests:
org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
org.apache.hadoop.hdfs.server.datanode.TestMulitipleNNDataBlockScanner
+1 contrib tests. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1888//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1888//console
This message is automatically generated.

Todd Lipcon
added a comment - 21/Feb/12 23:24

I disagree that it should be true by default. Apps like HBase, which are latency-sensitive, don't want to wait for a whole block to be re-transferred when a node in the pipeline fails. Apps with long-running, non-latency-sensitive writes (e.g. log collection) can flip this to true on their own client, no?

Tsz Wo Nicholas Sze
added a comment - 22/Feb/12 01:32

Hi Todd,

Enabling the feature does not mean re-transferring a block whenever a node in the pipeline fails. There is another conf property, dfs.client.block.write.replace-datanode-on-failure.policy, for configuring the policy. The default is

DEFAULT:
Let r be the replication number.
Let n be the number of existing datanodes.
Add a new datanode only if r is greater than or equal to 3 and either
(1) floor(r/2) is greater than or equal to n; or
(2) r is greater than n and the block is hflushed/appended.

Also, individual applications can set the policy to NEVER if desired. (A sketch of this condition in code follows below.)
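
To make the DEFAULT condition concrete, here is a minimal Java sketch of the rule exactly as stated above. It is illustrative only, not the actual DFSClient implementation; the class, method, and parameter names are invented for this example.

// Illustrative sketch of the DEFAULT policy condition quoted above;
// not the actual Hadoop implementation.
public class ReplaceDatanodePolicySketch {
  /**
   * @param r the replication number
   * @param n the number of existing datanodes in the pipeline
   * @param hflushedOrAppended whether the block has been hflushed or appended
   */
  static boolean shouldAddNewDatanode(int r, int n, boolean hflushedOrAppended) {
    if (r < 3) {
      return false; // the rule only applies when r >= 3
    }
    // (1) floor(r/2) >= n: half or more of the replicas are already gone, or
    // (2) r > n and the block is hflushed/appended.
    // For positive r, Java integer division r / 2 computes floor(r/2).
    return (r / 2 >= n) || (r > n && hflushedOrAppended);
  }

  public static void main(String[] args) {
    // r=3, one pipeline node lost (n=2), block not hflushed:
    // floor(3/2) = 1 < 2 and not hflushed, so no replacement datanode is added.
    System.out.println(shouldAddNewDatanode(3, 2, false)); // false
    // Same failure, but the block was hflushed: r=3 > n=2, so one is added.
    System.out.println(shouldAddNewDatanode(3, 2, true));  // true
  }
}

This also illustrates the point above: for a plain (not yet hflushed) write with r=3 and a single pipeline failure, the DEFAULT policy does not trigger a datanode replacement.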

Tsz Wo Nicholas Sze
added a comment - 22/Feb/12 02:43

The default in the code is currently set to true.

// DFSConfigKeys
public static final boolean DFS_CLIENT_WRITE_REPLACE_DATANODE_ON_FAILURE_ENABLE_DEFAULT = true;
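
For completeness, a hedged sketch of the per-application override mentioned above: the ...policy key is the one named earlier in this thread, while the ...enable key and the exact usage are assumptions to verify against your Hadoop version.

// Hedged sketch: how a client application might override the defaults
// discussed above. The "...enable" key is an assumption; the "...policy"
// key is the one named earlier in this thread.
import org.apache.hadoop.conf.Configuration;

public class ReplaceDatanodeOverrideExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Disable the feature entirely for this client...
    conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", false);
    // ...or leave it enabled but set the policy to NEVER, as suggested above.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    // Any FileSystem obtained from this conf would pick up these settings.
  }
}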

Hadoop QA
added a comment - 22/Feb/12 19:00

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12515484/h2981_20120221.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed unit tests in .
+1 contrib tests. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1891//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1891//console
This message is automatically generated.