
Conversation

@saxenapranav
Contributor

@saxenapranav saxenapranav commented Sep 5, 2023

JIRA: https://issues.apache.org/jira/browse/HADOOP-18883

This is in line with the JDK bug: https://bugs.openjdk.org/browse/JDK-8314978.

With the current implementation of HttpURLConnection, if the server rejects the “Expect 100-continue” request, a 'java.net.ProtocolException' is thrown from the 'expect100Continue()' method.

After the exception is thrown, if we call any other method on the same instance (e.g. getHeaderField() or getHeaderFields()), it will internally call getOutputStream(), which invokes writeRequests(), which makes an actual server call.

In AbfsHttpOperation, after sendRequest() we call the processResponse() method from AbfsRestOperation. Even if conn.getOutputStream() fails due to the expect-100 error, we consume the exception and let the code go ahead. As a result, getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered after getOutputStream() has failed. These invocations lead to server calls.

This PR aims to prevent this. In the solution, if conn.getOutputStream() fails due to the expect-100 error, we do not invoke getHeaderField(), getHeaderFields(), getHeaderFieldLong() or getInputStream(). getResponseCode() is safe: when expect-100 fails, the JDK sets the responseCode field in the HttpURLConnection object, so that call is just a getter for the field.
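Roughly, the approach is sketched below. This is a simplified illustration rather than the actual AbfsHttpOperation code; the field name connectionDisconnectedOnError follows the review discussion further down, and everything else is assumed structure.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.ProtocolException;

// Simplified sketch of the approach; not the actual AbfsHttpOperation code.
class Expect100GuardSketch {
  static final String EXPECT_100_JDK_ERROR = "Server rejected operation";

  private final HttpURLConnection connection;
  private int statusCode;
  private boolean connectionDisconnectedOnError = false;

  Expect100GuardSketch(HttpURLConnection connection) {
    this.connection = connection;
  }

  void sendRequest(byte[] buffer, int offset, int length) throws IOException {
    try (OutputStream out = connection.getOutputStream()) {
      out.write(buffer, offset, length);
    } catch (IOException e) {
      // The underlying connection is dead from this point on.
      connectionDisconnectedOnError = true;
      if (e instanceof ProtocolException && EXPECT_100_JDK_ERROR.equals(e.getMessage())) {
        // Expect-100 was rejected: the JDK has already recorded the status code,
        // so reading it here does not touch the network.
        statusCode = connection.getResponseCode();
        return; // caller inspects the failed status and retries without expect-100
      }
      throw e; // any other IOException goes back to the retry layer
    }
  }

  void processResponse() throws IOException {
    statusCode = connection.getResponseCode(); // safe even after an expect-100 failure
    if (connectionDisconnectedOnError) {
      // Do NOT call getHeaderField(), getHeaderFields(), getHeaderFieldLong()
      // or getInputStream(): each would silently fire a new server call.
      return;
    }
    // Normal path: read the response headers and input stream here.
  }
}
```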

For a network trace without the change:
withoutchange.txt
Trace 27 fails the expect-100 in abfsHttpOp.getOutputStream().
Traces 36, 45 and 54 are the server calls made from conn.getHeaderFields(), conn.getHeaderField() and getHeaderFieldLong() in abfsHttpOp.
Trace 65 is the retry from the client (where we disable expect-100 in the header and retry).

Network trace with the change:
withchange.txt
Trace 28 is from getOutputStream(); as you can see, there is no repeated server call after it.
Trace 39 is the retry from the client (where we disable expect-100 in the header and retry).

Test run:

:::: AGGREGATED TEST RESULT ::::

HNS-OAuth

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:344->lambda$testAcquireRetry$6:345 » TestTimedOut
[INFO]
[ERROR] Tests run: 572, Failures: 0, Errors: 1, Skipped: 54
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

HNS-SharedKey

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut test timed o...
[INFO]
[ERROR] Tests run: 572, Failures: 0, Errors: 1, Skipped: 54
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

NonHNS-SharedKey

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 11
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89 There should not be any network I/O (elapsedTimeMs=38).
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:344->lambda$testAcquireRetry$6:345 » TestTimedOut
[INFO]
[ERROR] Tests run: 572, Failures: 1, Errors: 1, Skipped: 277
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 44

AppendBlob-HNS-OAuth

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut test timed o...
[INFO]
[ERROR] Tests run: 572, Failures: 0, Errors: 1, Skipped: 54
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

Time taken: 45 mins 6 secs.
azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
commit b15468f (HEAD -> expect100resolution, origin/expect100resolution)
Author: Pranav Saxena <>
Date: Tue Sep 5 03:18:01 2023 -0700

remove redundant check

&& EXPECT_100_JDK_ERROR.equals(e.getMessage())) {
LOG.debug(
"Getting output stream failed with expect header enabled, returning back ",
e);
Contributor

351 and 352 lines can be merged

Contributor Author

Sure, it was done by the IDE. I think keeping it as before shouldn't be a problem; will take this.

+ "and inputStream.");
return;
}
processConnHeadersAndInputStreams(buffer, offset, length);
Contributor

I think the method name can remain the same as before.

Contributor Author

Did this to make it more testable: it abstracts out the logic which actually reads the response headers and the inputStream. In future, if more logic needs to be added, it can go into processConnHeadersAndInputStreams.

Contributor

Got it.

* and should qualify for retry.
*/
public static final int HTTP_CONTINUE = 100;
public static final String EXPECT_100_JDK_ERROR = "Server rejected operation";
Contributor

Can this error string be returned by the server only for this exception?


Mockito.anyInt(), Mockito.anyInt());
}

private void readyMocksForAppendTest(final AbfsHttpOperation[] httpOpForAppendTest,
Contributor

Name can be changed

Contributor Author

Can you please suggest a name that would be more apt? The reasoning for this name is that we indeed ready the mocks for the append test. Thanks.

Contributor

mockSetupForAppend ?

Contributor Author

taken: mockSetupForAppend
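For context on the naming, a hypothetical sketch of what such a mock setup could look like is below; the helper, its signature and the mocked connection are illustrative assumptions, not the actual test code from this PR.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.ProtocolException;

import org.mockito.Mockito;

// Hypothetical helper, not the test code from this PR.
final class AppendMockSketch {
  static final String EXPECT_100_JDK_ERROR = "Server rejected operation";

  // mockConnection is assumed to be Mockito.mock(HttpURLConnection.class).
  static void mockSetupForAppend(HttpURLConnection mockConnection) throws IOException {
    // Make the mocked connection fail the expect-100 handshake the way the JDK does.
    Mockito.when(mockConnection.getOutputStream())
        .thenThrow(new ProtocolException(EXPECT_100_JDK_ERROR));
    // getResponseCode() stays cheap: it only returns the already-recorded status.
    Mockito.when(mockConnection.getResponseCode())
        .thenReturn(HttpURLConnection.HTTP_NOT_FOUND);
  }
}
```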

LambdaTestUtils.intercept(FileNotFoundException.class, () -> {
os.close();
});
Assertions.assertThat(httpOpForAppendTest[0].getExpect100failureReceived())
Contributor

Assertion statements missing

Contributor Author

The assertion is happening in isTrue(). Do you mean something else?

Contributor

Sorry for the confusion, I meant the assertion description.

Contributor Author

Added.

@saxenapranav
Contributor Author


:::: AGGREGATED TEST RESULT ::::

HNS-OAuth

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89 There should not be any network I/O (elapsedTimeMs=168).
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[ERROR] ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154 » TestTimedOut
[INFO]
[ERROR] Tests run: 576, Failures: 1, Errors: 2, Skipped: 54
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

HNS-SharedKey

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89 There should not be any network I/O (elapsedTimeMs=57).
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[INFO]
[ERROR] Tests run: 572, Failures: 1, Errors: 1, Skipped: 54
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

NonHNS-SharedKey

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 11
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:344->lambda$testAcquireRetry$6:345 » TestTimedOut
[INFO]
[ERROR] Tests run: 589, Failures: 0, Errors: 1, Skipped: 277
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 44

AppendBlob-HNS-OAuth

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[INFO]
[ERROR] Tests run: 572, Failures: 0, Errors: 1, Skipped: 54
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

Time taken: 45 mins 49 secs.
azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
commit ef871b3 (HEAD -> expect100resolution, origin/expect100resolution)
Author: Pranav Saxena <>
Date: Wed Sep 6 02:43:44 2023 -0700

review comment refactor.

Contributor

@snvijaya snvijaya left a comment

Added a few queries around the expectation of any exception case in getOutputStream().

* and should qualify for retry.
*/
public static final int HTTP_CONTINUE = 100;
public static final String EXPECT_100_JDK_ERROR = "Server rejected operation";
Contributor

Won't the JDK bug where redundant connections are attempted be an issue for any exception case in getOutputStream()? Shouldn't the handling apply irrespective of the explicit server-rejected-request case?

Contributor Author

For any IOException thrown by getOutputStream(), no headers / inputStream will be parsed. The flow of the code is such that, for an IOException other than the expect-100 error, the exception is thrown back to AbfsRestOperation, which retries as per the retry policy.

private long sendRequestTimeMs;
private long recvResponseTimeMs;
private boolean shouldMask = false;
private boolean expect100failureReceived = false;
Contributor

Though the significance of the bug is seen in the 100-continue flow, the state of the previously established connection is the reason to prevent the getHeaderField* APIs. Would suggest renaming this field to connectionDisconnectedOnError and also adding a code comment that this is a workaround for the JDK bug, with a link.

Contributor Author

I believe getOutputStream() can throw only two major types of exception:

  1. Expect-100 error
  2. Other IOException

In the case of any other IOException, we should immediately throw back to AbfsRestOperation to do the retry. We can take the responseCode only in the case of the expect-100 error, because a valid server call was made. Other IOExceptions can be CT (connection timeout), connection reset, etc.

Contributor Author

Have taken it. Refactored the variable name.
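For illustration, the renamed field with the suggested code comment could look something like the sketch below; the exact wording is an assumption, not the patch text.

```java
/**
 * Workaround for the JDK bug https://bugs.openjdk.org/browse/JDK-8314978:
 * once getOutputStream() fails, the underlying connection is dead, and any
 * subsequent getHeaderField(), getHeaderFields(), getHeaderFieldLong() or
 * getInputStream() call would silently open a new connection and replay the
 * request. When this flag is true, those calls are skipped.
 */
private boolean connectionDisconnectedOnError = false;
```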

if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE)) {
if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE)
&& e instanceof ProtocolException
&& EXPECT_100_JDK_ERROR.equals(e.getMessage())) {
Contributor

As mentioned in an earlier comment, the HttpURLConnection that we hold is already disconnected while inside this catch block.

Do we want to prevent later API calls that trigger connections irrespective of throttled failures? If so, shouldn't connectionDisconnectedOnError be set at the start of the catch block?

Contributor Author

Taken. connectionDisconnectedOnError is now set at the start of the catch block.

Contributor

I guess the question by @snvijaya is: do we want to prevent later API calls that trigger connections irrespective of any failures?
If yes, then why?

Contributor Author

At httpUrlConnection.getOutputStream(), the error can be either an IOException (including ConnectionTimeout and ReadTimeout) or the expect-100 error (which raises a ProtocolException, a child of IOException). Server errors, if any, would be caught in processResponse() and the treatment would be the same as for all other APIs (analyse whether a retry is needed, and then RestOperation would retry it).

In the JDK's implementation of getOutputStream(), the connection is killed for IOExceptions. So if further APIs are let go ahead, they would fire a new server call altogether. Other APIs, like getHeaderField() etc., would then return data from that new server call, which is undesirable.

Also, the implementation of HttpURLConnection is such that the other APIs (like getHeaderField()) internally call getInputStream(), which would first call getOutputStream() (if the sendData flag is true and it doesn't hold a strOutputStream object). Now, two things can happen here:

  1. Expect-100 failure: no data is captured, and again any next API on the HttpURLConnection would fire a new call.
  2. Status 100: since we are not in the block where data can be put into the outputStream, the stream will be closed, which raises an IOException, and from there it goes back to the retry loop. Ref: https://github.com/openjdk/jdk8/blob/master/jdk/src/share/classes/sun/net/www/protocol/http/HttpURLConnection.java#L1463-L1471

Hence, any further API call is prevented on an HttpURLConnection object that has hit an IOException in getOutputStream().
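To make the call chain concrete, here is a small stand-alone illustration of the pattern described above. The endpoint URL is a placeholder (any server that rejects "Expect: 100-continue" with a 4xx reproduces it), and the snippet only demonstrates raw HttpURLConnection behaviour, not ABFS code.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.ProtocolException;
import java.net.URL;

public class Expect100CallChainDemo {
  public static void main(String[] args) throws IOException {
    // Placeholder endpoint, only for illustration.
    URL url = new URL("https://example.invalid/append");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setRequestProperty("Expect", "100-continue");

    try (OutputStream os = conn.getOutputStream()) {
      os.write(new byte[]{1, 2, 3});
    } catch (ProtocolException e) {
      // "Server rejected operation": the connection behind conn is already dead.
      System.out.println("rejected with status " + conn.getResponseCode()); // safe, cached
      // Each of the calls below would internally go through getInputStream()
      // -> getOutputStream() and silently replay the request on a NEW connection:
      // conn.getHeaderFields();
      // conn.getHeaderField("x-ms-request-id");
      // conn.getInputStream();
    }
  }
}
```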


this.statusDescription = getConnResponseMessage();

/*
Contributor

Won't the above getResponseCode() and getResponseMessage() also lead to connections being established? If yes, we should probably return right at the start of the method itself if connectionDisconnectedOnError is true.

Contributor Author

@saxenapranav saxenapranav Sep 6, 2023

In the case of the expect-100 error:
A ProtocolException is raised: https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/sun/net/www/protocol/http/HttpURLConnection.java#L1364.
It is caught by getOutputStream0(), and we do: https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/sun/net/www/protocol/http/HttpURLConnection.java#L1488-L1493 -> wherein the responseCode is saved in an object field.

In the case of getResponseCode(): it checks whether the field is != -1. If yes, it returns the responseCode it already has; otherwise it goes via the getInputStream() route: https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/net/HttpURLConnection.java#L520-L522. Here we already have it in the field and it is not -1.

getResponseMessage() is essentially a getter; it internally calls getResponseCode(): https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/net/HttpURLConnection.java#L596-L599.

Hence, these two methods are safe.
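A simplified paraphrase of the JDK logic linked above is sketched below, to show why these two calls do not re-issue the request once the expect-100 failure has recorded a status. It is illustrative only, not the actual JDK source.

```java
import java.io.IOException;
import java.io.InputStream;

// Simplified paraphrase of the linked java.net.HttpURLConnection logic.
abstract class ResponseCodeParaphrase {
  protected int responseCode = -1;
  protected String responseMessage;

  abstract InputStream getInputStream() throws IOException;

  public int getResponseCode() throws IOException {
    if (responseCode != -1) {
      // Already recorded, e.g. when the server rejected "Expect: 100-continue".
      return responseCode;
    }
    // Only when no status has been recorded yet does the JDK fall back to
    // getInputStream(), which is what would trigger a fresh request.
    getInputStream();
    return responseCode; // in the real JDK, parsed from the HTTP status line
  }

  public String getResponseMessage() throws IOException {
    getResponseCode(); // delegates; no extra network call when the code is cached
    return responseMessage;
  }
}
```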

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 54s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 45m 57s trunk passed
+1 💚 compile 0m 37s trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 compile 0m 32s trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 checkstyle 0m 30s trunk passed
+1 💚 mvnsite 0m 38s trunk passed
+1 💚 javadoc 0m 36s trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javadoc 0m 31s trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 spotbugs 1m 4s trunk passed
+1 💚 shadedclient 38m 50s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 0m 29s the patch passed
+1 💚 compile 0m 30s the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javac 0m 30s the patch passed
+1 💚 compile 0m 26s the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 javac 0m 26s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 0m 19s the patch passed
+1 💚 mvnsite 0m 29s the patch passed
+1 💚 javadoc 0m 25s the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javadoc 0m 24s the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 spotbugs 1m 3s the patch passed
+1 💚 shadedclient 38m 36s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 1m 59s hadoop-azure in the patch passed.
+1 💚 asflicense 0m 34s The patch does not generate ASF License warnings.
139m 3s
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/3/artifact/out/Dockerfile
GITHUB PR #6022
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 00cd4433fec4 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / ef871b3
Default Java Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/3/testReport/
Max. process+thread count 576 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/3/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@saxenapranav
Contributor Author


:::: AGGREGATED TEST RESULT ::::

HNS-OAuth

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89 There should not be any network I/O (elapsedTimeMs=120).
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[INFO]
[ERROR] Tests run: 572, Failures: 1, Errors: 1, Skipped: 54
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

HNS-SharedKey

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89 There should not be any network I/O (elapsedTimeMs=90).
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[INFO]
[ERROR] Tests run: 589, Failures: 1, Errors: 1, Skipped: 54
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

NonHNS-SharedKey

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 11
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:344->lambda$testAcquireRetry$6:345 » TestTimedOut
[INFO]
[ERROR] Tests run: 589, Failures: 0, Errors: 1, Skipped: 277
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 44

AppendBlob-HNS-OAuth

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[INFO]
[ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329->Object.hashCode:-2 » TestTimedOut
[INFO]
[ERROR] Tests run: 572, Failures: 0, Errors: 1, Skipped: 54
[INFO] Results:
[INFO]
[WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

Time taken: 51 mins 2 secs.
azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
commit 75c722a (HEAD -> expect100resolution, origin/expect100resolution)
Author: Pranav Saxena <>
Date: Thu Sep 7 02:50:45 2023 -0700

refactor undo

@saxenapranav saxenapranav changed the title Expect100resolution [ABFS]: Expect-100 JDK bug resolution: prevent multiple server calls Sep 7, 2023
@saxenapranav saxenapranav marked this pull request as ready for review September 7, 2023 11:21
@saxenapranav saxenapranav changed the title [ABFS]: Expect-100 JDK bug resolution: prevent multiple server calls HADOOP-18883. [ABFS]: Expect-100 JDK bug resolution: prevent multiple server calls Sep 7, 2023
@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 6s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 46m 28s trunk passed
+1 💚 compile 0m 40s trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 compile 0m 32s trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 checkstyle 0m 29s trunk passed
+1 💚 mvnsite 0m 38s trunk passed
+1 💚 javadoc 0m 39s trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javadoc 0m 35s trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 spotbugs 1m 16s trunk passed
+1 💚 shadedclient 39m 51s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 0m 30s the patch passed
+1 💚 compile 0m 31s the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javac 0m 31s the patch passed
+1 💚 compile 0m 27s the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 javac 0m 27s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 0m 21s the patch passed
+1 💚 mvnsite 0m 34s the patch passed
+1 💚 javadoc 0m 29s the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javadoc 0m 26s the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 spotbugs 1m 15s the patch passed
+1 💚 shadedclient 38m 46s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 1s hadoop-azure in the patch passed.
+1 💚 asflicense 0m 38s The patch does not generate ASF License warnings.
141m 45s
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/5/artifact/out/Dockerfile
GITHUB PR #6022
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux b63840926cab 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 75c722a
Default Java Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/5/testReport/
Max. process+thread count 531 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/5/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 56s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 51m 8s trunk passed
+1 💚 compile 0m 45s trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 compile 0m 37s trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 checkstyle 0m 34s trunk passed
+1 💚 mvnsite 0m 42s trunk passed
+1 💚 javadoc 0m 38s trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javadoc 0m 32s trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 spotbugs 1m 10s trunk passed
+1 💚 shadedclient 38m 34s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 0m 33s the patch passed
+1 💚 compile 0m 34s the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javac 0m 34s the patch passed
+1 💚 compile 0m 29s the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 javac 0m 29s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 0m 21s the patch passed
+1 💚 mvnsite 0m 33s the patch passed
+1 💚 javadoc 0m 29s the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javadoc 0m 26s the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 spotbugs 1m 10s the patch passed
+1 💚 shadedclient 38m 34s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 5s hadoop-azure in the patch passed.
+1 💚 asflicense 0m 34s The patch does not generate ASF License warnings.
145m 35s
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/4/artifact/out/Dockerfile
GITHUB PR #6022
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 781acc985df6 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 75c722a
Default Java Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/4/testReport/
Max. process+thread count 586 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/4/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

Contributor

@snvijaya snvijaya left a comment

+1

@saxenapranav
Contributor Author

@steveloughran @mehakmeet , requesting your review please. Thank you so much.

@saxenapranav
Contributor Author

Hi @steveloughran @mehakmeet , requesting your kind review. Thank you so much.

@saxenapranav
Contributor Author

Hi @steveloughran @mehakmeet. Requesting you to kindly review the PR, please. This PR prevents a day-0 JDK bug around expect-100 in ABFS. Would be really awesome to get your feedback on this. Thank you so much.

@saxenapranav
Contributor Author

Hi @steveloughran @mehakmeet @mukund-thakur, requesting you to kindly review the PR, please. This PR prevents a day-0 JDK bug around expect-100 in ABFS. Would be really awesome to get your feedback on this. Thank you so much.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 49s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 47m 42s trunk passed
+1 💚 compile 0m 38s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 compile 0m 32s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 checkstyle 0m 29s trunk passed
+1 💚 mvnsite 0m 37s trunk passed
+1 💚 javadoc 0m 35s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 0m 31s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 spotbugs 1m 3s trunk passed
+1 💚 shadedclient 38m 16s branch has no errors when building and testing our client artifacts.
-0 ⚠️ patch 38m 37s Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.
_ Patch Compile Tests _
-1 ❌ mvninstall 0m 25s /patch-mvninstall-hadoop-tools_hadoop-azure.txt hadoop-azure in the patch failed.
-1 ❌ compile 0m 29s /patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt hadoop-azure in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.
-1 ❌ javac 0m 29s /patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt hadoop-azure in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.
-1 ❌ compile 0m 25s /patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt hadoop-azure in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08.
-1 ❌ javac 0m 25s /patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt hadoop-azure in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08.
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 0m 18s the patch passed
-1 ❌ mvnsite 0m 26s /patch-mvnsite-hadoop-tools_hadoop-azure.txt hadoop-azure in the patch failed.
+1 💚 javadoc 0m 24s the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 0m 23s the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
-1 ❌ spotbugs 0m 25s /patch-spotbugs-hadoop-tools_hadoop-azure.txt hadoop-azure in the patch failed.
+1 💚 shadedclient 42m 23s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 0m 31s /patch-unit-hadoop-tools_hadoop-azure.txt hadoop-azure in the patch failed.
+1 💚 asflicense 0m 32s The patch does not generate ASF License warnings.
139m 13s
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/6/artifact/out/Dockerfile
GITHUB PR #6022
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 8f2afbd21040 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / bd38659
Default Java Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/6/testReport/
Max. process+thread count 533 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/6/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 55s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 48m 1s trunk passed
+1 💚 compile 0m 39s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 compile 0m 34s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 checkstyle 0m 29s trunk passed
+1 💚 mvnsite 0m 37s trunk passed
+1 💚 javadoc 0m 35s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 0m 32s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 spotbugs 1m 4s trunk passed
+1 💚 shadedclient 39m 24s branch has no errors when building and testing our client artifacts.
-0 ⚠️ patch 39m 44s Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.
_ Patch Compile Tests _
+1 💚 mvninstall 0m 29s the patch passed
+1 💚 compile 0m 30s the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javac 0m 30s the patch passed
+1 💚 compile 0m 26s the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 javac 0m 26s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 0m 19s the patch passed
+1 💚 mvnsite 0m 29s the patch passed
+1 💚 javadoc 0m 25s the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 0m 23s the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 spotbugs 1m 5s the patch passed
+1 💚 shadedclient 37m 53s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 1m 59s hadoop-azure in the patch passed.
+1 💚 asflicense 0m 36s The patch does not generate ASF License warnings.
141m 1s
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/7/artifact/out/Dockerfile
GITHUB PR #6022
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 0797b91afd29 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 01cc8eb
Default Java Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/7/testReport/
Max. process+thread count 599 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6022/7/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@saxenapranav
Contributor Author


:::: AGGREGATED TEST RESULT ::::

HNS-OAuth

[INFO] Results:
[INFO]
[WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89 There should not be any network I/O (elapsedTimeMs=91).
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[ERROR] ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154 » TestTimedOut
[INFO]
[ERROR] Tests run: 541, Failures: 1, Errors: 2, Skipped: 24
[INFO] Results:
[INFO]
[WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41

HNS-SharedKey

[INFO] Results:
[INFO]
[WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[ERROR] ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154 » TestTimedOut
[INFO]
[ERROR] Tests run: 550, Failures: 0, Errors: 2, Skipped: 24
[INFO] Results:
[INFO]
[WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41

NonHNS-SharedKey

[INFO] Results:
[INFO]
[WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 11
[INFO] Results:
[INFO]
[WARNING] Tests run: 529, Failures: 0, Errors: 0, Skipped: 264
[INFO] Results:
[INFO]
[WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44

AppendBlob-HNS-OAuth

[INFO] Results:
[INFO]
[WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:344->lambda$testAcquireRetry$6:345 » TestTimedOut
[ERROR] ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154 » TestTimedOut
[INFO]
[ERROR] Tests run: 545, Failures: 0, Errors: 2, Skipped: 24
[INFO] Results:
[INFO]
[WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41

Time taken: 50 mins 25 secs.
azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$
azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
commit 01cc8eb (HEAD -> expect100resolution, origin/expect100resolution)
Author: Pranav Saxena <>
Date: Thu Jan 4 03:44:43 2024 -0800

compile error fix.

@saxenapranav
Contributor Author

Hi @steveloughran @mehakmeet @mukund-thakur, requesting you to kindly review the PR, please. This PR prevents a day-0 JDK bug around expect-100 in ABFS. Would be really awesome to get your feedback on this. Thank you so much.

@apache apache deleted a comment from hadoop-yetus Jan 5, 2024
@apache apache deleted a comment from hadoop-yetus Jan 5, 2024
is enabled, we return back without throwing an exception to
the caller. The caller is responsible for setting the correct status code.
If expect header is not enabled, we throw back the exception.
connectionDisconnectedOnError = true;
Contributor

Setting this field here and using it in processResponse() means that we won't process the response for any IOException. But isn't the intent to skip processing only in the case of the JDK error?
So shouldn't this go inside the if (EXPECT_100_JDK_ERROR.equals(e.getMessage()...) check?

Contributor Author

We want to prevent any further API from being called on the HttpURLConnection if it throws an IOException, for the reason shared in https://github.com/apache/hadoop/pull/6022/files#r1444240890.

Contributor

@steveloughran steveloughran left a comment

LGTM
+1

@steveloughran steveloughran merged commit 7dc166d into apache:trunk Jan 21, 2024
saxenapranav added a commit to saxenapranav/hadoop that referenced this pull request Jan 22, 2024
… server calls (apache#6022)

Address JDK bug JDK-8314978 related to handling of HTTP 100
responses.

https://bugs.openjdk.org/browse/JDK-8314978

In the AbfsHttpOperation, after sendRequest() we call processResponse()
method from AbfsRestOperation.
Even if the conn.getOutputStream() fails due to expect-100 error,
we consume the exception and let the code go ahead.
This may call getHeaderField() / getHeaderFields() / getHeaderFieldLong() after
getOutputStream() has failed. These invocations all lead to server calls.

This commit aims to prevent this.
If connection.getOutputStream() fails due to an Expect-100 error,
the ABFS client does not invoke getHeaderField(), getHeaderFields(),
getHeaderFieldLong() or getInputStream().

getResponseCode() is safe as on the failure it sets the
responseCode variable in HttpUrlConnection object.

Contributed by Pranav Saxena
@saxenapranav
Contributor Author

Thank you so much @steveloughran for the review. Have raised a backport PR on branch-3.3: #6484. Thank you so much.

steveloughran pushed a commit that referenced this pull request Jan 25, 2024
… server calls (#6022)



Address JDK bug JDK-8314978 related to handling of HTTP 100
responses.

https://bugs.openjdk.org/browse/JDK-8314978

In the AbfsHttpOperation, after sendRequest() we call processResponse()
method from AbfsRestOperation.
Even if the conn.getOutputStream() fails due to expect-100 error,
we consume the exception and let the code go ahead.
This may call getHeaderField() / getHeaderFields() / getHeaderFieldLong() after
getOutputStream() has failed. These invocations all lead to server calls.

This commit aims to prevent this.
If connection.getOutputStream() fails due to an Expect-100 error,
the ABFS client does not invoke getHeaderField(), getHeaderFields(),
getHeaderFieldLong() or getInputStream().

getResponseCode() is safe as on the failure it sets the
responseCode variable in HttpUrlConnection object.

Contributed by Pranav Saxena
saxenapranav added a commit to saxenapranav/hadoop that referenced this pull request Jan 29, 2024
… server calls (apache#6022)



Address JDK bug JDK-8314978 related to handling of HTTP 100
responses. 

https://bugs.openjdk.org/browse/JDK-8314978

In the AbfsHttpOperation, after sendRequest() we call processResponse()
method from AbfsRestOperation.
Even if the conn.getOutputStream() fails due to expect-100 error, 
we consume the exception and let the code go ahead.
This may call getHeaderField() / getHeaderFields() / getHeaderFieldLong() after
getOutputStream() has failed. These invocations all lead to server calls.

This commit aims to prevent this.
If connection.getOutputStream() fails due to an Expect-100 error,
the ABFS client does not invoke getHeaderField(), getHeaderFields(),
getHeaderFieldLong() or getInputStream().

getResponseCode() is safe as on the failure it sets the
responseCode variable in HttpUrlConnection object.

Contributed by Pranav Saxena
steveloughran pushed a commit that referenced this pull request Jan 30, 2024
… server calls (#6022)


Address JDK bug JDK-8314978 related to handling of HTTP 100
responses. 

https://bugs.openjdk.org/browse/JDK-8314978

In the AbfsHttpOperation, after sendRequest() we call processResponse()
method from AbfsRestOperation.
Even if the conn.getOutputStream() fails due to expect-100 error, 
we consume the exception and let the code go ahead.
This may call getHeaderField() / getHeaderFields() / getHeaderFieldLong() after
getOutputStream() has failed. These invocations all lead to server calls.

This commit aims to prevent this.
If connection.getOutputStream() fails due to an Expect-100 error,
the ABFS client does not invoke getHeaderField(), getHeaderFields(),
getHeaderFieldLong() or getInputStream().

getResponseCode() is safe as on the failure it sets the
responseCode variable in HttpUrlConnection object.

Contributed by Pranav Saxena
jiajunmao pushed a commit to jiajunmao/hadoop-MLEC that referenced this pull request Feb 6, 2024
… server calls (apache#6022)



Address JDK bug JDK-8314978 related to handling of HTTP 100
responses. 

https://bugs.openjdk.org/browse/JDK-8314978

In the AbfsHttpOperation, after sendRequest() we call processResponse()
method from AbfsRestOperation.
Even if the conn.getOutputStream() fails due to expect-100 error, 
we consume the exception and let the code go ahead.
This may call getHeaderField() / getHeaderFields() / getHeaderFieldLong() after
getOutputStream() has failed. These invocations all lead to server calls.

This commit aims to prevent this.
If connection.getOutputStream() fails due to an Expect-100 error,
the ABFS client does not invoke getHeaderField(), getHeaderFields(),
getHeaderFieldLong() or getInputStream().

getResponseCode() is safe as on the failure it sets the
responseCode variable in HttpUrlConnection object.

Contributed by Pranav Saxena
slfan1989 pushed a commit that referenced this pull request Feb 10, 2024
… server calls (#6022)


Address JDK bug JDK-8314978 related to handling of HTTP 100
responses. 

https://bugs.openjdk.org/browse/JDK-8314978

In the AbfsHttpOperation, after sendRequest() we call processResponse()
method from AbfsRestOperation.
Even if the conn.getOutputStream() fails due to expect-100 error, 
we consume the exception and let the code go ahead.
This may call getHeaderField() / getHeaderFields() / getHeaderFieldLong() after
getOutputStream() has failed. These invocations all lead to server calls.

This commit aims to prevent this.
If connection.getOutputStream() fails due to an Expect-100 error,
the ABFS client does not invoke getHeaderField(), getHeaderFields(),
getHeaderFieldLong() or getInputStream().

getResponseCode() is safe as on the failure it sets the
responseCode variable in HttpUrlConnection object.

Contributed by Pranav Saxena