add disable_chunked_encoding configuration #44052
Conversation
Pinging @elastic/es-distributed

@elasticmachine test this please

@wujinhu This looks like a great change, thanks for contributing this!

Thanks @martijnvg, very glad to contribute to ES. :)

Two tests failed because of a timeout. They pass if I run the REPRODUCE commands.

@wujinhu That failure looks unrelated to your change. I will trigger another build.

@elasticmachine run elasticsearch-ci/2
DaveCTurner left a comment:
Thanks @wujinhu for the PR. I left a few minor comments. I also think we should randomly set this setting in the integration tests, and adjust AmazonS3Fixture to handle non-chunked uploads when the setting is set.
Slight preference for this, avoiding the boolean parameter:

```diff
- builder.withChunkedEncodingDisabled(true);
+ builder.disableChunkedEncoding();
```
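For context, a minimal sketch of how such a flag might be wired into the AWS SDK v1 client builder; the method name comes from the suggestion above, but the surrounding code and variable names are illustrative, not the actual Elasticsearch implementation:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

class S3ClientSketch {
    // disableChunkedEncoding would come from the repository's settings
    static AmazonS3 buildClient(boolean disableChunkedEncoding) {
        AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard();
        if (disableChunkedEncoding) {
            // suppresses the aws-chunked Content-Encoding on uploads
            builder.disableChunkedEncoding();
        }
        return builder.build();
    }
}
```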
Whitespace nit:
Thanks @DaveCTurner for your suggestions; I have optimized the code and documentation accordingly.
Before you go too much further @wujinhu, we have some concerns about this PR. On closer inspection it seems that Alibaba Object Storage Service does support chunked uploads: it's mentioned explicitly in their docs at https://www.alibabacloud.com/blog/how-to-use-node-js-to-upload-files-to-alibaba-cloud-object-storage_594077 and https://www.alibabacloud.com/help/doc-detail/31978.htm at least. Arguably, any service that doesn't support chunked uploads isn't really S3-compatible. I think you'll need to seek support from Alibaba to determine why it's not working for you.
Thanks @DaveCTurner. Before I created this PR I talked with them: this feature is not enabled in all regions, and it will take some time to enable the remaining regions according to their plan, so they suggested we disable chunked encoding for now. Let me explain the problem in detail. If we use HTTPS, the AWS SDK uses single chunked encoding, which is OK. However, if we use HTTP, the AWS SDK uses multiple chunked encoding, which Alibaba Object Storage Service does not support (just as the error message says), and it will take some time for them to enable it. So we added this configuration to disable it.
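To make "multiple chunked encoding" concrete: under SigV4 the SDK streams the body as signed chunks with `Content-Encoding: aws-chunked`. The following wire sketch follows AWS's published example of that format (sizes and signatures abbreviated); it is background material, not something taken from this thread:

```
PUT /bucket/object HTTP/1.1
Content-Encoding: aws-chunked
x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD
x-amz-decoded-content-length: 66560

10000;chunk-signature=<sig1>   <- 65536 bytes of data follow
...
400;chunk-signature=<sig2>     <- final 1024 bytes of data
...
0;chunk-signature=<sig3>       <- zero-length terminating chunk
```

It is this multi-chunk framing that the OSS endpoint rejected with `Aws MultiChunkedEncoding is not supported`.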
OK, to be clear, the absolute earliest this change could possibly be delivered is 7.4.0, and we have only just released 7.2.0, so it will be quite some time yet before this could be in a released version. Can you link to some docs showing which regions do and don't support chunked content-encoding? The docs do not necessarily need to be in English if that helps. Do you know how long it will be before all regions support it? It might be simpler to wait. We also have a concern about whether there is a difference in the memory usage of snapshots when using this setting. We suspect it isn't significant, but this will need to be tested carefully. Apart from those two points, I think it is worth proceeding with this PR.
@DaveCTurner Thanks for your support. I have talked with their support team. They plan to support multiple chunked encoding in the India/Japan/Singapore/Malaysia/Indonesia/Dubai regions this year, and in the remaining regions next year (not certain). There is another advantage to setting this setting: just as the AWS SDK documentation says, enabling chunked encoding has performance implications. I will proceed with this PR if you have no concerns. :)
I think you mean disadvantage. Chunked encoding allows the SDK to upload the file having read it only once, but if chunked encoding is disabled then the file must be read twice, which is slower. It seems that Alibaba users like yourself have no choice, however, and will not have a choice for some time. I will suggest some wording for the docs to explain this more clearly, but please go ahead with the changes to the tests and AmazonS3Fixture.
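To spell out where the second read comes from: SigV4 needs a SHA-256 of the payload in the request signature, and without chunked encoding that hash must be computed over the whole body before any of it is sent. A self-contained illustration of that first pass (ordinary Java, not SDK or Elasticsearch code):

```java
import java.io.InputStream;
import java.security.MessageDigest;

class PayloadHashSketch {
    // First pass: stream the entire payload just to compute the signature hash.
    // The file must then be read a second time to actually send the bytes.
    static byte[] fullBodySha256(InputStream body) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] buf = new byte[8192];
        int n;
        while ((n = body.read(buf)) != -1) {
            sha256.update(buf, 0, n);
        }
        return sha256.digest();
    }
}
```

With chunked encoding enabled, the SDK instead signs each chunk as it streams, so a single pass over the file suffices.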
@DaveCTurner I have spent some time running the integration tests and submitted this change. Please tell me if I misunderstood your idea. :)
DaveCTurner left a comment:
Yep, great, that's pretty much exactly what I meant. I asked for a couple of extra checks; see inline comments.
@DaveCTurner Please take another look; I added the extra checks you suggested.
DaveCTurner left a comment:
Thanks, I suggested a couple of minor changes but this all looks OK. However, I find that `./gradlew :plugins:repository-s3:integTest` sometimes fails for me, seemingly because chunked encoding is supposed to be disabled but some requests are still coming in using chunked encoding. I'm not sure why.
plugins/repository-s3/build.gradle
I'd prefer this to be deterministic. You can use `new Random(Long.parseUnsignedLong(project.rootProject.testSeed.tokenize(':').get(0), 16))` to construct a seeded Random.
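A short sketch of how that suggestion might look in build.gradle; the variable name `s3DisableChunkedEncoding` appears later in this thread, but the exact wiring here is assumed:

```groovy
// Derive the randomised fixture option from Gradle's test seed so a failing
// run can be reproduced by rerunning with the same seed.
Random random = new Random(Long.parseUnsignedLong(project.rootProject.testSeed.tokenize(':').get(0), 16))
boolean s3DisableChunkedEncoding = random.nextBoolean()
```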
Style nit:

```diff
- if (!disableChunkedEncoding) {
+ if (disableChunkedEncoding == false) {
```

Experience has shown that a vital `!` is too easy to miss, so we prefer `== false` instead.
@DaveCTurner Maybe I found the reason why the tests fail randomly. If I run the command below, it's OK:

```sh
for ((i = 1; i <= 20; ++i)); do
  rm -f plugins/repository-s3/build/fixtures/s3Fixture.properties && ./gradlew :plugins:repository-s3:integTest
done
```

It seems this issue relates to the updating of s3Fixture.properties. I will confirm and update later.
I found that s3Fixture.properties is only updated after task s3Fixture is configured. I added some printlns to check the ordering:

```diff
 // If all these variables are missing then we are testing against the internal fixture instead, which has the following
 // credentials hard-coded in.
@@ -244,6 +244,7 @@ task s3FixtureProperties {
   doLast {
     file(s3FixtureFile).text = s3FixtureOptions.collect { k, v -> "$k = $v" }.join("\n")
+    println 'doLast in task s3FixtureProperties'
   }
 }
@@ -251,6 +252,7 @@ task s3FixtureProperties {
 task s3Fixture(type: AntFixture) {
   dependsOn testClasses
   dependsOn s3FixtureProperties
+  println 'in task s3Fixture'
   inputs.file(s3FixtureFile)
```

Result: 'in task s3Fixture' is printed before 'doLast in task s3FixtureProperties', i.e. the properties file is only written after s3Fixture has already registered it as an input. So, if we move the update of s3Fixture.properties out of doLast and into the configuration phase:

```diff
@@ -242,9 +242,7 @@ task s3FixtureProperties {
     "s3Fixture.disableChunkedEncoding" : s3DisableChunkedEncoding
   ]
-  doLast {
-    file(s3FixtureFile).text = s3FixtureOptions.collect { k, v -> "$k = $v" }.join("\n")
-  }
+  file(s3FixtureFile).text = s3FixtureOptions.collect { k, v -> "$k = $v" }.join("\n")
 }
```
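The underlying Gradle behaviour, for anyone following along (a minimal standalone example, not from this build):

```groovy
// Statements in a task body run at configuration time, while Gradle is
// building the task graph; doLast blocks run at execution time.
task demo {
    println 'configuration time'   // printed whenever Gradle configures this project
    doLast {
        println 'execution time'   // printed only when the task actually executes
    }
}
```

That is why writing the properties file inside doLast happened too late for a task that had already captured the file as an input.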
Yes, that seems to have fixed it. This LGTM but I would like @original-brownbear to pass judgement too (particularly on the changes to `build.gradle`).
@wujinhu the change is merged, can you merge `master`?
@original-brownbear OK, I'm testing now.
@original-brownbear it seems s3Fixture.properties is not updated after I rebased onto your change.
@wujinhu yes, that's right :) The file was a hack to pass the address that the Minio Docker container was bound to into the JUnit tests. Now that hack and all the problems it was causing are gone, and it seems like it executed successfully for you now as well? Thanks again for your help on this one!
(force-pushed from 18b30bc to 7558195)
@original-brownbear Sorry, I am a little confused about your meaning; do you mean I need to revert this hack or not?

```diff
+ outputs.upToDateWhen { false }
```

I have rebased master.
@wujinhu yes, please revert the hack; it shouldn't be necessary any longer.
@original-brownbear OK, please trigger the test, thanks.
Jenkins test this. Thanks @wujinhu!
original-brownbear left a comment:
LGTM, thanks @wujinhu.
Want to take another look as well @DaveCTurner?
DaveCTurner left a comment:
LGTM. @wujinhu in future please don't rebase (or force-push to) PR branches, because doing so loses history, comments, and so on; `git merge master` is friendlier to reviewers.
* Add disable_chunked_encoding setting to S3 repo plugin to support S3 implementations that don't support chunked encoding
@DaveCTurner @original-brownbear OK, thanks! :)
Hi, ES team. We are using repository-s3 to access an S3-compatible service (Alibaba Cloud Object Storage Service). However, it does not support chunked encoding, and repository verification fails:
{ "error": { "root_cause": [{ "type": "repository_verification_exception", "reason": "[backup] path is not accessible on master node" }], "type": "repository_verification_exception", "reason": "[backup] path is not accessible on master node", "caused_by": { "type": "i_o_exception", "reason": "Unable to upload object [tests-jI3CAPLSTeWReGDUP_Pf_w/master.dat] using a single upload", "caused_by": { "type": "amazon_s3_exception", "reason": "Aws MultiChunkedEncoding is not supported. (Service: Amazon S3; Status Code: 400; Error Code: NotImplemented; Request ID: 5D22E6C1F73A3FB709ECAB2F; S3 Extended Request ID: hadoop-oss-test.oss-cn-zhangjiakou.aliyuncs.com)" } } }, "status": 500 }After we add chunked encoding as a configuration(disable_chunked_encoding), it works!
{"acknowledged":true}I created this PR for this change. :)