
Conversation

@mukund-thakur
Contributor

Testing Bucket used : https://mthakur-data.s3.ap-south-1.amazonaws.com/file2
Ran all the unit and integration tests using default settings with S3Guard.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 24m 22s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 23m 46s trunk passed
+1 💚 compile 0m 36s trunk passed
+1 💚 checkstyle 0m 24s trunk passed
+1 💚 mvnsite 0m 40s trunk passed
+1 💚 shadedclient 17m 11s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 27s trunk passed
+0 🆗 spotbugs 1m 3s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 1m 1s trunk passed
_ Patch Compile Tests _
+1 💚 mvninstall 0m 34s the patch passed
+1 💚 compile 0m 29s the patch passed
+1 💚 javac 0m 29s the patch passed
+1 💚 checkstyle 0m 18s the patch passed
+1 💚 mvnsite 0m 32s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 shadedclient 15m 42s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 23s the patch passed
+1 💚 findbugs 1m 14s the patch passed
_ Other Tests _
+1 💚 unit 1m 25s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 32s The patch does not generate ASF License warnings.
91m 0s
Subsystem Report/Notes
Docker Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/2/artifact/out/Dockerfile
GITHUB PR #1838
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint
uname Linux 482fb6d9e33f 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / d5467d2
Default Java 1.8.0_242
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/2/testReport/
Max. process+thread count 340 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/2/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.


@bgaborg bgaborg left a comment


LGTM, but given the ongoing discussion in https://issues.apache.org/jira/browse/HADOOP-16711, I think @steveloughran should also take a look.
(Just a nit: you don't need to specify the bucket you've tested against; specify the endpoint instead.)

Contributor

@steveloughran steveloughran left a comment

Production code looks fine; I've commented more on the test suite, which, without disabling FS instance caching, isn't going to do what you want.

  1. I'd like to see a stack trace from something like getFileStatus() on failure here, as I don't want a missing bucket to be interpreted as "nothing at this specific path".

For example, FileSystem.exists() catches that FNFE and will return false, mapping a bucket-not-found onto something which will delay the failure.
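The exists() behaviour described here can be sketched as a minimal, self-contained reproduction (the method bodies are hypothetical stand-ins, not the real S3A code):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Sketch: getFileStatus() throws FileNotFoundException for a missing
// bucket, and exists() swallows it, so a missing bucket becomes
// indistinguishable from a missing path.
public class ExistsSwallowsFnfe {

    // Hypothetical stand-in for getFileStatus() against a nonexistent bucket.
    static Object getFileStatus(String path) throws IOException {
        throw new FileNotFoundException("Bucket does not exist: " + path);
    }

    // Mirrors the try/catch shape of FileSystem#exists.
    static boolean exists(String path) throws IOException {
        try {
            getFileStatus(path);
            return true;
        } catch (FileNotFoundException e) {
            return false;   // the "no bucket" signal is lost here
        }
    }

    public static void main(String[] args) throws IOException {
        // The caller cannot tell a missing bucket from a missing key.
        System.out.println(exists("s3a://no-such-bucket/file"));
    }
}
```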

  2. We'll need a new entry in troubleshooting, and any existing entry on missing buckets reviewed. And in the performance section, make clear that if you set the check to 0, then missing buckets will surface later as FileNotFoundExceptions, so may confuse applications. That is: while it is an optimization, whoever uses it will have moved the failure point around, and gets to deal with that problem.

Actually, on that topic: what if you set the fs.s3a.endpoint to "unknownhost.example.org" with check = 0 and instantiate an FS? We won't see any problems until that first IO operation (presumably), so again, errors will surface later.

The docs should make that clear: "Any problems with connectivity, authentication or the bucket's existence will surface when method calls are made of the filesystem."
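For context, the performance-section advice maps onto a configuration like the following (a sketch: `fs.s3a.bucket.probe` is the option this change introduces per HADOOP-16711, with 0 disabling the startup existence check):

```xml
<!-- core-site.xml (sketch): disable the bucket existence probe at
     filesystem creation. With this set, a nonexistent bucket only
     surfaces later, on the first actual IO operation. -->
<property>
  <name>fs.s3a.bucket.probe</name>
  <value>0</value>
</property>
```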

Contributor

What's the full stack here? Because I don't want the FNFE from a missing path to be confused with a getFileStatus failure, as that could go on to confuse other things

Contributor Author

Added the text to be contained for verification.

Contributor

Potentially brittle, but we can deal with that when the text changes. We have found in the past that any test coded to look for AWS error messages is brittle against SDK updates.

Let's just go with this and, when things break, catch up.

Contributor

You get a problem in this code because the FileSystem cache keys on the URI only; if there's an FS in the cache, your settings aren't picked up and you will always get the previous instance. That often causes intermittent problems with test runs.

  1. Use S3ATestUtils.disableFilesystemCaching to turn off caching of the filesystems you get via FileSystem.get,
  2. and close() them at the end of each test case. You can do this with try/finally or a try-with-resources clause.
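The URI-only cache key can be illustrated with a tiny self-contained sketch (hypothetical names; Hadoop's real cache code is more involved):

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Sketch: a cache keyed on scheme + authority. A second get() with a
// different configuration value still returns the instance created first,
// which is why per-test settings silently don't take effect.
public class FsCacheSketch {

    static final Map<String, String> CACHE = new HashMap<>();  // key -> "fs instance"

    static String get(URI uri, String confSetting) {
        String key = uri.getScheme() + "://" + uri.getAuthority();
        // Later callers' confSetting is never consulted once the key exists.
        return CACHE.computeIfAbsent(key, k -> "fs(" + confSetting + ")");
    }

    public static void main(String[] args) {
        URI bucket = URI.create("s3a://example-bucket/path");
        System.out.println(get(bucket, "probe=2"));  // creates the instance
        System.out.println(get(bucket, "probe=0"));  // same instance; probe=0 ignored
    }
}
```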

Contributor Author

I was worried about this, but somehow the new conf settings were getting picked up; I need to figure out how. Anyway, I have disabled filesystem caching so that we don't see intermittent failures.

Contributor

Unless you want to share across test cases (you don't) or want cleanup in the teardown code, move this into a local variable in each test case.

Contributor

Actually, given it's non-static, it will be unique to each test case. You can just override the teardown() method and add an IOUtils.cleanupWithLogger(LOG, fs) call to close the fs variable robustly if it is set. Do call the superclass afterwards.

FWIW, the S3A base test suite already retrieves an FS instance for each test case, so you could pick that up; it's just a bit fiddlier to configure. Don't worry about it here, but you will eventually have to learn your way around that test code.

@apache apache deleted a comment from hadoop-yetus Feb 10, 2020
@mukund-thakur mukund-thakur force-pushed the HADOOP-16711-bucket-exists branch from dd6122e to a5798ac on February 11, 2020 12:01
Contributor Author

Adding this extra cleanup throws "FileSystem is closed!" because of the AbstractFSContractTestBase.deleteTestDirInTeardown() call in the superclass teardown after each test.

Contributor

OK. So what's happening is that your tests are picking up a shared FS instance, not the one you've just configured with different bucket init settings. Your tests aren't doing what you think they are.

Contributor Author

My tests are running against a new FS instance only; I confirmed that with the IDE debugger. I think what is happening is that we are calling fs.close() twice, once on the shared instance and once on my private instance, which stops all the services for that FS and leads to the mismatch.

Contributor Author

@mukund-thakur mukund-thakur Feb 11, 2020

Found the issue. Rather than overriding the teardown() method I implemented it, which caused JUnit to call teardown() twice, causing all the above problems.
Sorry, my bad. :(
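The double-teardown failure mode boils down to a non-idempotent close(); a minimal, self-contained illustration (hypothetical class, not the S3A code):

```java
// Sketch: a close() that throws on the second call, which is what
// surfaces as "FileSystem is closed!" when teardown() runs twice.
public class DoubleClose {
    static class FakeFs implements AutoCloseable {
        private boolean closed;
        @Override public void close() {
            if (closed) {
                throw new IllegalStateException("FileSystem is closed!");
            }
            closed = true;   // stop all services for this instance
        }
    }

    public static void main(String[] args) {
        FakeFs fs = new FakeFs();
        fs.close();                       // first teardown: fine
        try {
            fs.close();                   // second teardown: fails
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```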

Contributor

no worries

@mukund-thakur mukund-thakur force-pushed the HADOOP-16711-bucket-exists branch from a5798ac to 81c4162 on February 11, 2020 14:13
@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 0m 30s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 21m 40s trunk passed
+1 💚 compile 0m 31s trunk passed
+1 💚 checkstyle 0m 23s trunk passed
+1 💚 mvnsite 0m 36s trunk passed
+1 💚 shadedclient 16m 13s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 25s trunk passed
+0 🆗 spotbugs 0m 59s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 0m 57s trunk passed
_ Patch Compile Tests _
+1 💚 mvninstall 0m 31s the patch passed
+1 💚 compile 0m 26s the patch passed
+1 💚 javac 0m 26s the patch passed
+1 💚 checkstyle 0m 18s the patch passed
+1 💚 mvnsite 0m 30s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 shadedclient 15m 17s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 22s the patch passed
+1 💚 findbugs 1m 0s the patch passed
_ Other Tests _
+1 💚 unit 1m 25s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 28s The patch does not generate ASF License warnings.
62m 52s
Subsystem Report/Notes
Docker Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/4/artifact/out/Dockerfile
GITHUB PR #1838
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint
uname Linux cf0667027565 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / cc8ae59
Default Java 1.8.0_242
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/4/testReport/
Max. process+thread count 342 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/4/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@apache apache deleted a comment from hadoop-yetus Feb 11, 2020
@mukund-thakur mukund-thakur force-pushed the HADOOP-16711-bucket-exists branch from 81c4162 to b01cfad on February 11, 2020 18:56
@steveloughran
Contributor

OK, production code all LGTM; just that test tuning.

If closing the fs value triggers failures in superclass cleanup, then you are sharing an FS instance between test cases (i.e. you are actually picking up the last one created). If you disable caching you should get a new one, which you can then close safely.

@mukund-thakur
Contributor Author

All review comments addressed.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 1m 28s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 22m 45s trunk passed
+1 💚 compile 0m 35s trunk passed
+1 💚 checkstyle 0m 23s trunk passed
+1 💚 mvnsite 0m 38s trunk passed
+1 💚 shadedclient 16m 26s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 26s trunk passed
+0 🆗 spotbugs 0m 59s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 0m 57s trunk passed
_ Patch Compile Tests _
+1 💚 mvninstall 0m 34s the patch passed
+1 💚 compile 0m 25s the patch passed
+1 💚 javac 0m 25s the patch passed
+1 💚 checkstyle 0m 18s the patch passed
+1 💚 mvnsite 0m 30s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 shadedclient 15m 16s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 23s the patch passed
+1 💚 findbugs 1m 2s the patch passed
_ Other Tests _
+1 💚 unit 1m 23s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 28s The patch does not generate ASF License warnings.
65m 21s
Subsystem Report/Notes
Docker Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/6/artifact/out/Dockerfile
GITHUB PR #1838
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint
uname Linux 75831ca7b3e5 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 9709afe
Default Java 1.8.0_242
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/6/testReport/
Max. process+thread count 341 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/6/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@apache apache deleted a comment from mukund-thakur Feb 12, 2020
@apache apache deleted a comment from hadoop-yetus Feb 12, 2020
@steveloughran
Contributor

Testing myself, setting validation to 0 for the entire test run to field-test it better.
One failure so far in the new test:

[INFO] Running org.apache.hadoop.fs.s3a.ITestS3GuardTtl
[ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.739 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3ABucketExistence
[ERROR] testNoBucketProbing(org.apache.hadoop.fs.s3a.ITestS3ABucketExistence)  Time elapsed: 1.258 s  <<< ERROR!
java.lang.IllegalArgumentException: Path s3a://random-bucket-442f6634-4892-4239-bd8c-ac5a2b0a3700 is not absolute
	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:216)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.checkPath(DynamoDBMetadataStore.java:1851)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:718)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:205)
	at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getWithTtl(S3Guard.java:900)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2729)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2696)
	at org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.lambda$testNoBucketProbing$0(ITestS3ABucketExistence.java:66)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:498)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:453)
	at org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.testNoBucketProbing(ITestS3ABucketExistence.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

@steveloughran
Contributor

Also, once I fix that by adding a trailing /, the getFileStatus("/") fails to raise an FNFE. That is because S3Guard is enabled for all buckets on my test setup, and the S3Guard DDB store will create a stub FileStatus for a root entry.

java.lang.AssertionError: Expected a java.io.FileNotFoundException to be thrown, but got the result: : S3AFileStatus{path=s3a://random-bucket-11df1b68-2535-4a9b-9fd5-c6a4d5a6c192/; isDirectory=true; modification_time=0; access_time=0; owner=stevel; group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null versionId=null

	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:499)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:453)
	at org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.testNoBucketProbing(ITestS3ABucketExistence.java:59)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
	at java.util.concurrent.FutureTask.run(FutureTask.java)
	at java.lang.Thread.run(Thread.java:748)

We could consider shortcutting some of the getFileStatus queries against / in the S3A FS itself; it's always a dir, after all.

@mukund-thakur
Contributor Author

Two tests are failing. Will debug. Also will rebase from trunk and fix merge conflicts.

[INFO] Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
[ERROR] Tests run: 9, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 48.592 s <<< FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
[ERROR] testListEmptyRootDirectory(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 15.053 s <<< FAILURE!
java.lang.AssertionError:
Expected no results from listLocatedStatus(/), but got 1 elements:
S3ALocatedFileStatus{path=s3a://mthakur-data/fork-0002; isDirectory=true; modification_time=1582129763970; access_time=0; owner=mthakur; group=mthakur; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false}[eTag='', versionId='']
at org.junit.Assert.fail(Assert.java:88)
at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.assertNoElements(AbstractContractRootDirectoryTest.java:218)
at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testListEmptyRootDirectory(AbstractContractRootDirectoryTest.java:202)
at org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir.testListEmptyRootDirectory(ITestS3AContractRootDir.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
[ERROR] testSimpleRootListing(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.819 s <<< FAILURE!
java.lang.AssertionError:
listStatus(/) vs listLocatedStatus(/) with
listStatus =S3AFileStatus{path=s3a://mthakur-data/test; isDirectory=true; modification_time=0; access_time=0; owner=mthakur; group=mthakur; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE eTag=null versionId=null listLocatedStatus = S3ALocatedFileStatus{path=s3a://mthakur-data/fork-0002; isDirectory=true; modification_time=1582129765040; access_time=0; owner=mthakur; group=mthakur; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false}[eTag='', versionId='']
S3ALocatedFileStatus{path=s3a://mthakur-data/test; isDirectory=true; modification_time=1582129765040; access_time=0; owner=mthakur; group=mthakur; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false}[eTag='', versionId=''] expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testSimpleRootListing(AbstractContractRootDirectoryTest.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)

mukund-thakur and others added 4 commits February 19, 2020 23:48
Adds a new exception UnknownStoreException to indicate
"there's no store there"

* raised in verify bucket existence checks
* and when translating AWS exceptions into IOEs
* The S3A retry policy fails fast on this
* And s3GetFileStatus recognises the same failure and raises it

Except when the metastore short-circuits S3 IO, this means all
operations against a nonexistent store will fail with a unique exception.

ITestS3ABucketExistence is extended to
* disable metastore (getFileStatus(/) was returning a value)
* always create new instances
* invoke all the operations which catch and swallow FNFEs
  (exists, isFile, isDir, delete)

Change-Id: Ide630ec9738ef971eba603b618bd612456fa064b
remove the @links to protected methods; add @value

Change-Id: I24d6a922cc6d3de48aeb39cd47713430011f41ab
Created a new class org.apache.hadoop.fs.s3a.impl.ErrorTranslation;
future work related to mapping from AWS exceptions to IOEs&C can
go in there rather than S3AUtils.

Moved the checks for an AmazonServiceException being caused by
a missing bucket to there; this cleans up uses of the probe.

Add a unit test for the recognition/translation.

Change-Id: If81573b0c379def4bae715e4395f3ac19857c08e
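A hedged sketch of the error-translation idea described in the commit messages above (class and method names here are illustrative, not the exact ones in the patch):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Sketch: a 404 whose AWS error code indicates a missing *bucket* maps to
// a dedicated UnknownStoreException, so callers can distinguish "no store
// there" from an ordinary missing path (FileNotFoundException).
public class BucketErrorTranslation {

    // Illustrative stand-in for the new UnknownStoreException.
    static class UnknownStoreException extends IOException {
        UnknownStoreException(String msg) { super(msg); }
    }

    static IOException translate(int statusCode, String awsErrorCode, String path) {
        if (statusCode == 404 && "NoSuchBucket".equals(awsErrorCode)) {
            return new UnknownStoreException("Bucket does not exist: " + path);
        }
        return new FileNotFoundException("No such file: " + path);
    }

    public static void main(String[] args) {
        System.out.println(
            translate(404, "NoSuchBucket", "s3a://missing-bucket/").getClass().getSimpleName());
        System.out.println(
            translate(404, "NoSuchKey", "s3a://bucket/missing-key").getClass().getSimpleName());
    }
}
```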
@mukund-thakur mukund-thakur force-pushed the HADOOP-16711-bucket-exists branch from 4cf556b to 9448a8f on February 19, 2020 19:21
@mukund-thakur
Contributor Author

mukund-thakur commented Feb 19, 2020

Two tests are failing. Will debug. Also will rebase from trunk and fix merge conflicts.

Fixed merge conflicts. Above tests are succeeding now.

…or fixes

Change-Id: I379afa2a10dc7691abb2bd09014fd52a73e3f7f6
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 1m 25s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 7 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 22m 42s trunk passed
+1 💚 compile 0m 31s trunk passed
+1 💚 checkstyle 0m 24s trunk passed
+1 💚 mvnsite 0m 37s trunk passed
+1 💚 shadedclient 16m 19s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 24s trunk passed
+0 🆗 spotbugs 1m 0s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 0m 56s trunk passed
_ Patch Compile Tests _
+1 💚 mvninstall 0m 32s the patch passed
+1 💚 compile 0m 26s the patch passed
-1 ❌ javac 0m 26s hadoop-tools_hadoop-aws generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)
-0 ⚠️ checkstyle 0m 18s hadoop-tools/hadoop-aws: The patch generated 2 new + 30 unchanged - 0 fixed = 32 total (was 30)
+1 💚 mvnsite 0m 32s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 xml 0m 3s The patch has no ill-formed XML file.
+1 💚 shadedclient 15m 35s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 23s the patch passed
+1 💚 findbugs 1m 1s the patch passed
_ Other Tests _
+1 💚 unit 1m 12s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 28s The patch does not generate ASF License warnings.
65m 10s
Subsystem Report/Notes
Docker Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/9/artifact/out/Dockerfile
GITHUB PR #1838
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint xml
uname Linux 3f134164c323 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 181e6d0
Default Java 1.8.0_242
javac https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/9/artifact/out/diff-compile-javac-hadoop-tools_hadoop-aws.txt
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/9/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/9/testReport/
Max. process+thread count 425 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/9/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@apache apache deleted a comment from hadoop-yetus Feb 21, 2020
@steveloughran
Contributor

+1; merged with a couple of final fixups of the test (adding @deprecated to the test class) to shut up checkstyle.

thanks!
