HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature #3249
Conversation
Testing: S3 London; full suite in progress with ACLs set in auth-keys.xml. Also verified via a "hadoop fs -touchz s3a://stevel-london/acl4" call with

<property>
  <name>fs.s3a.acl.default</name>
  <value>LogDeliveryWrite</value>
</property>

then verified through the AWS CLI.
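(The exact AWS CLI invocation isn't shown in the comment; a check along these lines, using the bucket and key from the touchz call above, would list the resulting grants:)

aws s3api get-object-acl --bucket stevel-london --key acl4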
Force-pushed from 3ffb85e to 1ed1e66.
Rebased onto a trunk commit a few patches behind HEAD, before the CSE support went in; that's triggering failures, and I need to differentiate CSE-related regressions from anything caused by this PR.
Changes LGTM 👍 .
This change makes a bit of a point with regard to createRequestFactory. Maybe it's better in the future to move to a method which explicitly requires all of its dependencies as parameters instead of relying on the object's state, i.e. a pure function.
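To illustrate the suggestion, a minimal sketch with hypothetical names (not the actual S3A classes):

// Illustrative only -- hypothetical names, not the real S3A code.
final class RequestFactorySketch {

  /** Minimal stand-in for the request factory. */
  static final class Factory {
    final String bucket;
    final String cannedAcl;
    Factory(String bucket, String cannedAcl) {
      this.bucket = bucket;
      this.cannedAcl = cannedAcl;
    }
  }

  private String bucket;      // fields the state-based method reads implicitly
  private String cannedAcl;

  /** Current style: the result silently depends on whatever object state has been set so far. */
  Factory createRequestFactory() {
    return new Factory(bucket, cannedAcl);
  }

  /** Suggested style: all inputs are parameters, so call sites show what must already be initialised. */
  static Factory createRequestFactory(String bucket, String cannedAcl) {
    return new Factory(bucket, cannedAcl);
  }
}

With the second form the caller has to supply the ACL explicitly, so an initialisation-order bug like this one is harder to write in the first place.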
I've done a full test run; this breaks all the assumed role tests, because the roles aren't being created with the s3:PutObjectAcl permission.
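For context, object ACLs can only be set if the role policy allows it; a policy statement fragment roughly like the following (the bucket name is a placeholder) is the kind of permission the generated roles were missing:

{
  "Effect": "Allow",
  "Action": ["s3:PutObject", "s3:PutObjectAcl"],
  "Resource": "arn:aws:s3:::example-bucket/*"
}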
Force-pushed from 5cbe4de to 8ad0f7b.
Fixes the regression caused by HADOOP-17511 by moving where the cannedACL properties are inited - so guaranteeing that they are valid before the RequestFactory is created.

Adds
* A unit test in TestRequestFactory to verify the ACLs are set on all file write operations
* A new ITestS3ACannedACLs test which verifies that ACLs really do get all the way through.

Change-Id: Ic96bc93edfc182d88f6d4ebc43c594a29f94d2cf
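A minimal sketch of the ordering problem and the fix, using hypothetical names rather than the real S3AFileSystem.initialize() code:

import org.apache.hadoop.conf.Configuration;

/** Sketch only: hypothetical names, not the actual S3AFileSystem code. */
class AclInitOrderSketch {
  private String cannedAcl;
  private Object requestFactory;

  /** Pre-fix shape: the factory is built before the ACL option is read. */
  void initializeBroken(Configuration conf) {
    requestFactory = buildRequestFactory(cannedAcl);        // cannedAcl is still null here
    cannedAcl = conf.getTrimmed("fs.s3a.acl.default", "");
  }

  /** Fixed shape: resolve fs.s3a.acl.default first, then build the factory. */
  void initializeFixed(Configuration conf) {
    cannedAcl = conf.getTrimmed("fs.s3a.acl.default", "");
    requestFactory = buildRequestFactory(cannedAcl);        // factory now captures a valid ACL
  }

  private Object buildRequestFactory(String acl) {
    return "factory(acl=" + acl + ")";                      // stand-in for the real RequestFactory
  }
}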
Force-pushed from 8ad0f7b to 0af71d6.
🎊 +1 overall
This message was automatically generated.
+1, some nits.
Tested on ap-south-1, using #3249 (comment).
"Grants": [
{
"Grantee": {
"ID": "<ID>",
"Type": "CanonicalUser"
},
"Permission": "FULL_CONTROL"
},
{
"Grantee": {
"Type": "Group",
"URI": "http://acs.amazonaws.com/groups/s3/LogDelivery"
},
"Permission": "WRITE"
},
{
"Grantee": {
"Type": "Group",
"URI": "http://acs.amazonaws.com/groups/s3/LogDelivery"
},
"Permission": "READ_ACP"
}
]
Review comment on hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACannedACLs.java (outdated; resolved).
Review comment on a diff hunk with extra blank lines after a closing brace:
nit: 2 extra line spaces.
Review comment on the diff hunk adding:

public static final String CANNED_ACL_LOG = "LogDeliveryWrite";

Javadoc to explain the constant?
Actually that wasn't in use; I'd also added it to S3ATestConstants and forgot about this one. Cut it.
Change-Id: I3174baf5b0f2e10f0199bd5b762e07f762f76c33
LGTM +1.
Ran raw tests without setting this new ACL key, using an ap-south-1 bucket. All good apart from some timeouts caused by the VPN.
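For reference, the kind of invocation used for such runs (per the hadoop-aws testing docs; the thread count here is just an example), from hadoop-tools/hadoop-aws:

mvn clean verify -Dparallel-tests -DtestsThreadCount=8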
…3249)

Fixes the regression caused by HADOOP-17511 by moving where the option fs.s3a.acl.default is read - doing it before the RequestFactory is created.

Adds
* A unit test in TestRequestFactory to verify the ACLs are set on all file write operations.
* A new ITestS3ACannedACLs test which verifies that ACLs really do get all the way through.
* S3A Assumed Role delegation tokens to include the IAM permission s3:PutObjectAcl in the generated role.

Contributed by Steve Loughran

Change-Id: I3abac6a1b9e150b6b6df0af7c2c70093f8f518cb
💔 -1 overall
This message was automatically generated.
have I just broken everything?

It is not you, it is we! So did anything break? I re-ran the tests on a PR up to date with trunk and nothing suspicious broke.

ok. If it was a full breakage, someone would have noticed, rolled back, reopened, emailed me, etc. It does happen from time to time.
…pache#3249)

Fixes the regression caused by HADOOP-17511 by moving where the option fs.s3a.acl.default is read - doing it before the RequestFactory is created.

Adds
* A unit test in TestRequestFactory to verify the ACLs are set on all file write operations.
* A new ITestS3ACannedACLs test which verifies that ACLs really do get all the way through.
* S3A Assumed Role delegation tokens to include the IAM permission s3:PutObjectAcl in the generated role.

Contributed by Steve Loughran
apache#2807)

The S3A connector supports "an auditor", a plugin which is invoked at the start of every filesystem API call, and whose issued "audit span" provides a context for all REST operations against the S3 object store.

The standard auditor sets the HTTP Referrer header on the requests with information about the API call, such as process ID, operation name, path, and even job ID. If the S3 bucket is configured to log requests, this information will be preserved there and so can be used to analyze and troubleshoot storage IO.

Contributed by Steve Loughran.

MUST be followed by:
CDPD-28457. HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature (apache#3249)
CDPD-24982. HADOOP-17801. No error message reported when bucket doesn't exist in S3AFS

Conflicts:
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/AbstractStoreOperation.java
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DeleteOperation.java
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java
  hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
  hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestPartialDeleteFailures.java

Mostly related to shaded guava. This patch really needs CDPD-10473. HADOOP-16645. S3A Delegation Token extension point to use StoreContext; had to CP a file in, and even then the auditing may not be complete there. Will revisit, even though Knox and Ranger will both need a matching change.

Change-Id: Ic0a105c194342ed2d529833ecc42608e8ba2f258
…dit feature (apache#3249)

Fixes the regression caused by HADOOP-17511 by moving where the option fs.s3a.acl.default is read - doing it before the RequestFactory is created.

Adds
* A unit test in TestRequestFactory to verify the ACLs are set on all file write operations.
* A new ITestS3ACannedACLs test which verifies that ACLs really do get all the way through.
* S3A Assumed Role delegation tokens to include the IAM permission s3:PutObjectAcl in the generated role.

Contributed by Steve Loughran

Change-Id: I3abac6a1b9e150b6b6df0af7c2c70093f8f518cb
Fixes the regression caused by HADOOP-17511 by moving where the cannedACL properties are inited - so guaranteeing that they are valid before the RequestFactory is created.

Adds
* A unit test in TestRequestFactory to verify the ACLs are set on all file write operations
* A new ITestS3ACannedACLs test which verifies that ACLs really do get all the way through.