s3a failure due to integer overflow bug in AWS SDK

Description

Under high load writing to Amazon AWS S3 storage, a client can be throttled enough to encounter 24 retries in a row.
The Amazon HTTP client code (in the aws-java-sdk jar) has a bug in its exponential backoff retry logic: the delay calculation overflows an integer, and Thread.sleep() is then called with a negative value, which causes the client to bail out with an exception (see below).
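For illustration only, here is a minimal, hypothetical sketch of the failure mode (the class name, method name, and constants are made up and are not the actual com.amazonaws.http.AmazonHttpClient source): an exponential backoff delay computed in 32-bit int arithmetic wraps to a negative value once the retry count is large enough, and Thread.sleep() then rejects it with the same "timeout value is negative" error seen in the stack trace below.

// Hypothetical sketch -- not the real AWS SDK code. Demonstrates how an
// int-based exponential backoff delay can overflow to a negative value.
public class BackoffOverflowSketch {

    // delay = baseDelayMillis * 2^retries, computed entirely in int arithmetic
    static int pauseMillis(int retries, int baseDelayMillis) {
        return baseDelayMillis * (1 << retries);  // overflows for large retry counts
    }

    public static void main(String[] args) throws InterruptedException {
        int base = 500;  // base delay in ms; the value is purely illustrative
        for (int retries = 20; retries <= 24; retries++) {
            System.out.println("retries=" + retries
                + " -> delay=" + pauseMillis(retries, base) + " ms");
        }
        try {
            // After enough retries the computed delay has wrapped negative...
            Thread.sleep(pauseMillis(24, base));
        } catch (IllegalArgumentException e) {
            // ...and Thread.sleep() throws "timeout value is negative".
            System.out.println("Thread.sleep failed: " + e.getMessage());
        }
    }
}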

Error: java.io.IOException: File copy failed: hdfs://path-redacted --> s3a://path-redacted
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:284)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:252)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://path-redacted to s3a://path-redacted
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:280)
... 10 more
Caused by: com.amazonaws.AmazonClientException: Unable to complete transfer: timeout value is negative
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.unwrapExecutionException(AbstractTransfer.java:300)
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.rethrowExecutionException(AbstractTransfer.java:284)
at com.amazonaws.services.s3.transfer.internal.CopyImpl.waitForCopyResult(CopyImpl.java:67)
at org.apache.hadoop.fs.s3a.S3AFileSystem.copyFile(S3AFileSystem.java:943)
at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:357)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.promoteTmpToTarget(RetriableFileCopyCommand.java:220)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:137)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:100)
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
... 11 more
Caused by: java.lang.IllegalArgumentException: timeout value is negative
at java.lang.Thread.sleep(Native Method)
at com.amazonaws.http.AmazonHttpClient.pauseBeforeNextRetry(AmazonHttpClient.java:864)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:353)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:1507)
at com.amazonaws.services.s3.transfer.internal.CopyCallable.copyInOneChunk(CopyCallable.java:143)
at com.amazonaws.services.s3.transfer.internal.CopyCallable.call(CopyCallable.java:131)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.copy(CopyMonitor.java:189)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:134)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Lei (Eddy) Xu
added a comment - 29/Jul/15 20:27 Hi, Aaron Fabbri. As we discussed offline, we will close this as a duplicate of HADOOP-12269. Let's bump the aws-sdk version in both trunk and branch-2 in HADOOP-12269.
Thanks again for this effort.

Lei (Eddy) Xu
added a comment - 29/Jul/15 20:07 Hi, Aaron Fabbri. The patch itself looks good to me. Would you coordinate with Thomas Demoor and Steve Loughran regarding the version of aws-sdk to pull in?
Thanks a lot for the efforts.

Aaron Fabbri
added a comment - 28/Jul/15 02:22 Tested these two v2 patches with S3, ensuring behavior is the same around the 2GB-1 boundary for fs.s3a.multipart.threshold.
Thomas Demoor already started on patches for trunk, which will use latest-greatest aws-java-sdk.
I think we should move forward with these patches for the 2.6.x and 2.7.x branches (fixes bugs for existing customers who can't upgrade to trunk).

Thomas Demoor
added a comment - 24/Jul/15 09:06 Hi Aaron,
in HADOOP-11684 I have bumped to 1.9.x (we have been testing this for a month now and all is well). Note that other bugs fixed in the aws-sdk (multi-part threshold from int -> long) require some code changes in s3a; see the sketch after this comment.
You will see in the comments that Steve Loughran requested pulling the aws-sdk upgrade out into a separate patch. I am doing that today and will link to the new issue then.
Another main benefit of 1.9+ is that S3 is a separate library, so we no longer need to pull in the entire SDK.
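As background on the int -> long note above, here is a minimal, hypothetical sketch (plain Java, not the actual s3a code): an fs.s3a.multipart.threshold value at or above 2 GiB does not fit in a 32-bit int, which is why the 2GB-1 boundary matters and why the option has to be handled as a long.

// Hypothetical sketch -- not the actual S3AFileSystem code. Shows why a
// multipart threshold of 2 GiB or more cannot be held in a 32-bit int.
public class MultipartThresholdSketch {
    public static void main(String[] args) {
        long requestedThreshold = 2L * 1024 * 1024 * 1024;  // 2 GiB, as a user might configure
        int asInt = (int) requestedThreshold;                // silently wraps to a negative value

        System.out.println("Integer.MAX_VALUE     = " + Integer.MAX_VALUE);   // 2147483647 (2 GiB - 1)
        System.out.println("threshold as long     = " + requestedThreshold);  // 2147483648
        System.out.println("threshold cast to int = " + asInt);               // -2147483648
    }
}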

Aaron Fabbri
added a comment - 24/Jul/15 05:58 The bug fix was backported to AWS SDK 1.7.14. Officially, only the last two releases are supported by Amazon; currently these are 1.10.x and 1.9.x.
I suggest the 1.7.14 SDK jar for the 2.6.x and 2.7.x branches, and then moving to the latest 1.10.x for trunk. Adding patches.
I tested the patches with some basic hdfs fs s3a:// commands.