
Conversation

@taklwu
Contributor

@taklwu taklwu commented Jun 21, 2022

Description of PR

Add an option to make 400 Bad Request responses retryable. The new property fs.s3a.fail.on.aws.bad.request defaults to true, so behaviour is unchanged unless the option is explicitly turned off.
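
For illustration, this is roughly how the option from this PR would be set in core-site.xml; a value of false opts in to the new retry-on-400 behaviour (a sketch only, mirroring the property name and default described above):

<property>
  <name>fs.s3a.fail.on.aws.bad.request</name>
  <!-- default is true: fail fast on HTTP 400, i.e. the existing behaviour -->
  <value>false</value>
</property>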

How was this patch tested?

Add a new unit test.

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

Integration Tests executed

I configured auth-keys.xml to use AssumedRoleCredentialProvider backed by a TemporaryAWSCredentialsProvider. The region is us-west-2 and the endpoint is s3.us-west-2.amazonaws.com.
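
For reference, a sketch of the relevant auth-keys.xml entries for such a setup (the role ARN is a placeholder; the property names are the standard S3A assumed-role configuration keys, and this is one plausible wiring rather than the exact file used):

<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider</value>
</property>
<property>
  <name>fs.s3a.assumed.role.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider</value>
</property>
<property>
  <name>fs.s3a.assumed.role.arn</name>
  <value>arn:aws:iam::ACCOUNT:role/ROLE</value>
</property>
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.us-west-2.amazonaws.com</value>
</property>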


% mvn -Dparallel-tests clean verify -DtestsThreadCount=12 | tee ~/s3-test.log
% grep "\[ERROR] Tests run" ~/s3-test.log

### default-integration-test
[ERROR] Tests run: 14, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 222.383 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3ATemporaryCredentials
[ERROR] Tests run: 11, Failures: 0, Errors: 11, Skipped: 0, Time elapsed: 9.573 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.auth.delegation.ITestSessionDelegationInFileystem
[ERROR] Tests run: 11, Failures: 0, Errors: 11, Skipped: 0, Time elapsed: 10.08 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.auth.delegation.ITestRoleDelegationInFileystem
[ERROR] Tests run: 7, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 21.432 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.auth.delegation.ITestRoleDelegationTokens
[ERROR] Tests run: 20, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 40.107 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AConfiguration
[ERROR] Tests run: 6, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 70.07 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob

### overall first set of tests for default-integration-test has 36 errors 
[ERROR] Tests run: 1088, Failures: 0, Errors: 36, Skipped: 96



### sequential-integration-tests

[ERROR] Tests run: 9, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 546.824 s <<< FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 15.9 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.tools.ITestMarkerToolRootOperations

### second set of tests for the sequential-integration-tests has 4 errors 
[ERROR] Tests run: 117, Failures: 0, Errors: 4, Skipped: 77

The default-integration-test failed for the following reasons; they look fine to me.

  1. com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException: Cannot call GetSessionToken with session credentials
  2. java.io.IOException: org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider instantiation exception: java.lang.IllegalArgumentException: Proxy error: fs.s3a.proxy.username or fs.s3a.proxy.password set without the other.
  3. java.nio.file.AccessDeniedException: s3a://abc/fork-0006/test: getFileStatus on s3a://abc/fork-0006/test: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
  4. org.apache.hadoop.service.ServiceStateException: java.io.IOException: Unset property fs.s3a.assumed.role.arn; this one is strange because I have set fs.s3a.assumed.role.arn in auth-keys.xml

For the sequential-integration-tests, the errors are:

  1. org.junit.runners.model.TestTimedOutException: test timed out after 180000 milliseconds
     • the timeout is mainly caused by getFileStatus never completing: ERROR contract.ContractTestUtils (ContractTestUtils.java:cleanup(383)) - Error deleting in TEARDOWN - /test: java.io.InterruptedIOException: getFileStatus on s3a://taklwu-thunderhead-dev/test: com.amazonaws.AbortedException
  2. [ERROR] test_100_audit_root_noauth(org.apache.hadoop.fs.s3a.tools.ITestMarkerToolRootOperations) Time elapsed: 7.774 s <<< ERROR! 46: Marker count 2 out of range [0 - 0]

I wonder whether, with a clean and simple credential setup (without assumed roles), all of the above tests would pass, avoiding both the permission errors and the special GetSessionToken problem.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 55s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 70m 5s trunk passed
+1 💚 compile 0m 53s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 0m 45s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 0m 43s trunk passed
+1 💚 mvnsite 0m 51s trunk passed
+1 💚 javadoc 0m 39s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 0m 40s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 1m 26s trunk passed
+1 💚 shadedclient 23m 36s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 0m 47s the patch passed
+1 💚 compile 0m 41s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javac 0m 41s the patch passed
+1 💚 compile 0m 32s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 javac 0m 32s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 0m 26s the patch passed
+1 💚 mvnsite 0m 38s the patch passed
+1 💚 javadoc 0m 20s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 0m 27s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 1m 12s the patch passed
+1 💚 shadedclient 23m 39s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 40s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 44s The patch does not generate ASF License warnings.
133m 43s
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/2/artifact/out/Dockerfile
GITHUB PR #4483
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux edee9c8b9866 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / f9292dffe2c3cdc8d351a9f87010fea36c003074
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/2/testReport/
Max. process+thread count 601 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/2/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@steveloughran
Contributor

which s3 endpoint did you run the hadoop-aws integration tests against, and what was the full mvn command line used? thanks

@apache apache deleted a comment from hadoop-yetus Jun 22, 2022
Contributor

@steveloughran steveloughran left a comment


Made some comments. I do want that s3 endpoint test run declaration before I look at it again.

Contributor


  1. needs to be all lower case with "." between words
  2. and javadocs with {@value}
  3. and something in the documentation

Contributor Author


ack and thanks, I will update it soon.

Contributor


Should the normal retry policy (which is expected to handle network errors) be applied here, or something else?

Contributor Author


Correct me if I'm wrong, but before our change the response surfacing as AWSBadRequestException is in fact coming back with an HTTP 400 error code. That is different from the other network failures to which fail/RetryPolicies.TRY_ONCE_THEN_FAIL is applied.

@taklwu
Contributor Author

taklwu commented Jun 23, 2022

@steveloughran I should have provided the integration test results in the description; they're not perfect, but we can discuss how to move forward.

@taklwu taklwu requested a review from steveloughran June 23, 2022 04:17
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 59s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 40m 10s trunk passed
+1 💚 compile 0m 52s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 0m 45s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 0m 42s trunk passed
+1 💚 mvnsite 0m 52s trunk passed
+1 💚 javadoc 0m 38s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 0m 40s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 1m 28s trunk passed
+1 💚 shadedclient 23m 33s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 0m 35s the patch passed
+1 💚 compile 0m 41s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javac 0m 41s the patch passed
+1 💚 compile 0m 31s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 javac 0m 31s the patch passed
-1 ❌ blanks 0m 0s /blanks-eol.txt The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 💚 checkstyle 0m 23s the patch passed
+1 💚 mvnsite 0m 39s the patch passed
+1 💚 javadoc 0m 19s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 0m 27s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 1m 12s the patch passed
+1 💚 shadedclient 23m 9s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 45s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 43s The patch does not generate ASF License warnings.
103m 10s
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/4/artifact/out/Dockerfile
GITHUB PR #4483
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 1e6c7306374e 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / f57025b
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/4/testReport/
Max. process+thread count 535 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/4/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 56s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 markdownlint 0m 1s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 40m 21s trunk passed
+1 💚 compile 0m 53s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 0m 45s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 0m 42s trunk passed
+1 💚 mvnsite 0m 52s trunk passed
+1 💚 javadoc 0m 39s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 0m 40s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 1m 27s trunk passed
+1 💚 shadedclient 23m 40s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 0m 33s the patch passed
+1 💚 compile 0m 41s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javac 0m 41s the patch passed
+1 💚 compile 0m 32s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 javac 0m 32s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 0m 24s the patch passed
+1 💚 mvnsite 0m 38s the patch passed
+1 💚 javadoc 0m 20s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 0m 27s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 1m 11s the patch passed
+1 💚 shadedclient 23m 16s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 45s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 43s The patch does not generate ASF License warnings.
103m 28s
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/5/artifact/out/Dockerfile
GITHUB PR #4483
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 1c7232363729 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 60df0f6
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/5/testReport/
Max. process+thread count 581 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/5/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@mukund-thakur
Contributor

Do we really need to introduce this config? It seems like overkill.
I think 400 Bad Request responses are supposed to be non-retryable.

@taklwu
Contributor Author

taklwu commented Jun 24, 2022

Reading from the hadoop-aws page:

The status code 400, Bad Request usually means that the request is unrecoverable; it’s the generic “No” response. Very rarely it does recover, which is why it is in this category, rather than that of unrecoverable failures.

That does not match the code: S3A does not in fact retry on 400 Bad Request (please correct me if I'm incorrect). In the case I'm reporting (and yes, sorry, it's rarer than the usual ones), retrying did help, and it lets us use the beauty of the retry policy API.

Contributor

@steveloughran steveloughran left a comment


Historically, 400 errors have always been completely unrecoverable, apart from one or two stack traces we haven't seen again. Failing fast is the correct action in all of those situations. We have had problems in the past where bad credentials caused things to hang for a long time as they kept retrying, which is completely the wrong behaviour.

I understand the situation here is one where a provider is issuing credentials which are out of date, and this does not become visible until the request is made of AWS. The retrying is attempted in the hope that eventually the credential provider will notice that it needs to fetch new credentials, so a reattempted request will get through. As such it should help correct for clock-skew problems where the client is still caching credentials and has not noticed that they are out of date.

However, we have a few things to consider here.

The normal retry policy is intended to cope with all failures other than throttling, and is targeted at transient network failures. It does a basic back-off with increasing delays and some jitter. Is this really the right recovery strategy for ID Broker, or should it immediately go for a delay of a few seconds and keep retrying with a fixed interval of 10s for, say, a minute or two?

Next question: do we only retry on calls we consider idempotent, or do we retry on everything? The policy your PR has put up says idempotent only, whereas I believe it should be something similar to connection failures, where we assume that the request never got as far as the object store and can be retried.

Finally, it would be better if we could somehow explicitly tell the credential providers that the request was rejected and that they need to trigger a refresh. The current PR hopes that if we retry with delays, things will sort themselves out; it may be better to go through each of the credential providers in turn and tell them to refresh their values, letting them know that the last request was rejected. For EC2 IAM credentials provided to the VM, that would involve making a new GET request to the URL serving up credentials. For the ID Broker client, we would want it to send a request to ID Broker notifying IDB that its last request was rejected.

We could even think about doing this always, but precisely once. If your credential provider can't refresh when it's been told the previous call failed, then there is no hope for anyone. We could be slightly more efficient here in that we would look to see if any of the credential providers in the list is capable of refreshing, and only force a refresh and retry if so. But: we would have to think hard about how to wire this up, as a retry policy simply catches exceptions and decides what to do; credential providers are wired up deep inside the AWS code. We would probably need to add something into the callbacks AWS provides and force that refresh on a 400 failure, and have the retry policy only enable retries on 400s if the policy was instantiated in an S3A client whose authentication chain is potentially recoverable.


Testing. You know, I was wondering if we could actually add a real integration test here. Seriously. We would need a new credential provider which would be programmed to return some invalid credentials every N requests and the rest of the time provide no credentials at all. If this provider was placed at the head of the provider chain and N was greater than the retry count of the policy, then the entire hadoop-aws test suite would be expected to run to completion. This would let us catch problems like retry around retry, and even failure to retry.
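
A rough sketch of what such a provider could look like, assuming the SDK v1 AWSCredentialsProvider interface that S3A uses at the time of this PR, and assuming S3A's AWSCredentialProviderList falls through to the next provider in the chain when one throws. The class name and the specific bad-credential values are hypothetical:

import java.util.concurrent.atomic.AtomicInteger;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;

/**
 * Hypothetical test-only provider: every Nth request gets invalid
 * credentials; otherwise this provider declines, so the chain falls
 * through to the real provider behind it.
 */
public class EveryNthRequestBadCredentials implements AWSCredentialsProvider {
  private final int n;
  private final AtomicInteger count = new AtomicInteger();

  public EveryNthRequestBadCredentials(int n) {
    this.n = n;
  }

  @Override
  public AWSCredentials getCredentials() {
    if (count.incrementAndGet() % n == 0) {
      // a bogus session token should be rejected by S3, surfacing as a 400
      return new BasicSessionCredentials("key", "secret", "expired-token");
    }
    // throwing lets the provider chain move on to the next, real provider
    throw new AmazonClientException("no credentials from this provider");
  }

  @Override
  public void refresh() {
  }
}

Placed at the head of fs.s3a.aws.credentials.provider, every Nth request across the suite would be signed with bad credentials and the retry path would be expected to recover.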

policyMap.put(AWSBadRequestException.class, fail);
RetryPolicy awsBadRequestExceptionRetryPolicy =
    configuration.getBoolean(FAIL_ON_AWS_BAD_REQUEST, DEFAULT_FAIL_ON_AWS_BAD_REQUEST) ?
        fail : retryIdempotentCalls;
Contributor


  1. should retry on all calls, rather than just idempotent ones, as long as we are confident that the request is never executed before the failure
  2. I don't believe the normal exponential backoff strategy is the right one, as the initial delays are very short (500ms), whereas if you are hoping that credential providers will fetch new credentials, an initial delay of a few seconds would seem better. I wouldn't even bother with exponential growth here; just, say, 6 attempts at 10 seconds.

I think we would also want to log at warn that this is happening, assuming this is rare.
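
A minimal sketch of that alternative policy, built with Hadoop's stock RetryPolicies helpers; the 6 x 10s figures are the ones suggested above, and failOnBadRequest, policyMap and fail mirror names in the snippet under review rather than shipped code:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

// fixed-interval retries: up to 6 attempts, 10 seconds apart, no exponential growth
RetryPolicy retryOn400 =
    RetryPolicies.retryUpToMaximumCountWithFixedSleep(6, 10, TimeUnit.SECONDS);

// applied to all calls, not only idempotent ones, when fail-fast is disabled
policyMap.put(AWSBadRequestException.class,
    failOnBadRequest ? fail : retryOn400);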

}

@Test
public void testRetryBadRequestIdempotent() throws Throwable {
Contributor


test looks ok.

* If it's disabled and set to false, the failure is treated as retryable.
* Value {@value}.
*/
public static final String FAIL_ON_AWS_BAD_REQUEST = "fs.s3a.fail.on.aws.bad.request";
Contributor

@steveloughran steveloughran Jun 24, 2022


I now think "fs.s3a.retry.on.400.response.enabled" would be better, with the default flipped. Docs would say "experimental".

and assuming we do have a custom policy, adjacent

fs.s3a.retry.on.400.response.delay  // delay between attempts, default "10s"
fs.s3a.retry.on.400.response.attempts // number of attempts, default 6
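
Rendered as configuration, the proposal would look roughly like this; all three property names are suggestions from this comment, not shipped settings:

<property>
  <name>fs.s3a.retry.on.400.response.enabled</name>
  <value>false</value>  <!-- experimental; fail fast by default -->
</property>
<property>
  <name>fs.s3a.retry.on.400.response.delay</name>
  <value>10s</value>
</property>
<property>
  <name>fs.s3a.retry.on.400.response.attempts</name>
  <value>6</value>
</property>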

