
Conversation

@cbaenziger

Description of PR

This change ensures that we have a file and have set permissions on it before writing out data. I simply rearranged the current logic; I am not aware whether there is a better pattern to follow elsewhere in Hadoop. A hedged sketch of the intended ordering is shown below.
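
For context, a minimal sketch of the ordering this PR aims for: illustrative only, not the actual LocalKeyStoreProvider code. The helper name and the use of java.nio.file here are assumptions.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

public class KeystoreWriteOrderSketch {
  /** Hypothetical helper: ensure the file exists and is locked down before returning a stream. */
  public static OutputStream openForWrite(File file, Set<PosixFilePermission> permissions)
      throws IOException {
    // 1. Make sure the keystore file exists; createNewFile() returns false when the file
    //    is already there, so only treat it as an error if the file still does not exist
    //    afterwards (silently ignoring this return value is what SpotBugs flags later in
    //    this thread).
    if (!file.createNewFile() && !file.exists()) {
      throw new IOException("Could not create " + file);
    }
    // 2. Restrict permissions before any secret material is written.
    Files.setPosixFilePermissions(file.toPath(), permissions);
    // 3. Only now hand back the stream used to write the keystore contents.
    return new FileOutputStream(file);
  }
}
```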

How was this patch tested?

This is an untested PR. I have merely verified that it builds.

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • [N/A] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • [N/A] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • [N/A] If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

Contributor

@lmccay left a comment


You indicate that it is untested.
Have you tested that it actually works in an environment?
I do recall adding this logic, and that I had trouble with the proposed order, but thought it was likely just me doing something wrong. Obviously, someone has to test this though. :)

I did add a comment that I think needs to be addressed as well.

} finally {
  super.getWriteLock().unlock();
}
super.flush();
Contributor


Should this be inside the try block if we are attempting not to write to an open-permission keystore?
I think this would currently not address your #2 concern with CredentialShell, or a node failure during the permission set.

Author


My initial assumptions were:

If we fail to set permissions, I would expect an IOException or the like to interrupt execution, so we would never get to the flush call.

I am a bit of a novice with Java ReadWriteLocks, and was thinking we could deadlock if we do not release the write lock, since flush in the superclass also attempts to acquire the write lock.

Lastly, I was thinking that nothing in Hadoop would be setting more permissive permissions on this file.

My updated understanding thanks to your question:

Reading the JavaDocs for ReentrantReadWriteLock, I think the locks are handled per-thread rather than per-scope. It also appears one can acquire the write lock multiple times, so I think flush can move inside the initial lock.

Further, in reading the JavaDocs and other uses in Hadoop, it looks like the write lock acquisition should be outside the try/finally block.

Code updated as such.
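
To illustrate the two points above, here is a small standalone sketch (not Hadoop code): ReentrantReadWriteLock is reentrant per thread, and the usual idiom acquires the lock just before the try block so the finally clause never unlocks a lock it failed to acquire.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReentrantWriteLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  public void outer() {
    lock.writeLock().lock();   // acquisition outside the try block
    try {
      inner();                 // re-acquires the same write lock: no deadlock,
                               // the hold count goes to 2 and back down to 1
    } finally {
      lock.writeLock().unlock();
    }
  }

  private void inner() {
    lock.writeLock().lock();
    try {
      // ... work that also needs the write lock ...
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```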

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 45s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+1 💚 mvninstall 39m 24s trunk passed
+1 💚 compile 23m 22s trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
+1 💚 compile 20m 51s trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 29s trunk passed
+1 💚 mvnsite 1m 55s trunk passed
+1 💚 javadoc 1m 31s trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 1m 8s trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 2m 56s trunk passed
+1 💚 shadedclient 23m 37s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 1m 4s the patch passed
+1 💚 compile 22m 47s the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
+1 💚 javac 22m 47s the patch passed
+1 💚 compile 20m 58s the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
+1 💚 javac 20m 58s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 1m 30s the patch passed
+1 💚 mvnsite 2m 0s the patch passed
+1 💚 javadoc 1m 22s the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 1m 2s the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
-1 ❌ spotbugs 2m 54s /new-spotbugs-hadoop-common-project_hadoop-common.html hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 shadedclient 23m 20s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 18m 58s hadoop-common in the patch passed.
+1 💚 asflicense 1m 15s The patch does not generate ASF License warnings.
215m 22s
Reason Tests
SpotBugs module:hadoop-common-project/hadoop-common
Exceptional return value of java.io.File.createNewFile() ignored in org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush() At LocalKeyStoreProvider.java:ignored in org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush() At LocalKeyStoreProvider.java:[line 147]
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/artifact/out/Dockerfile
GITHUB PR #4998
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 44ecefdc1706 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / b408f76
Default Java Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/testReport/
Max. process+thread count 3159 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 2s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+1 💚 mvninstall 42m 8s trunk passed
+1 💚 compile 25m 35s trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
+1 💚 compile 22m 1s trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 26s trunk passed
+1 💚 mvnsite 1m 54s trunk passed
+1 💚 javadoc 1m 25s trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 0m 57s trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 3m 0s trunk passed
+1 💚 shadedclient 25m 53s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 1m 6s the patch passed
+1 💚 compile 24m 51s the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
+1 💚 javac 24m 51s the patch passed
+1 💚 compile 22m 6s the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
+1 💚 javac 22m 6s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 1m 18s the patch passed
+1 💚 mvnsite 1m 52s the patch passed
+1 💚 javadoc 1m 16s the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 0m 57s the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
-1 ❌ spotbugs 3m 1s /new-spotbugs-hadoop-common-project_hadoop-common.html hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 shadedclient 26m 20s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 18m 34s hadoop-common in the patch passed.
+1 💚 asflicense 1m 10s The patch does not generate ASF License warnings.
228m 23s
Reason Tests
SpotBugs module:hadoop-common-project/hadoop-common
Exceptional return value of java.io.File.createNewFile() ignored in org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush() At LocalKeyStoreProvider.java:ignored in org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush() At LocalKeyStoreProvider.java:[line 147]
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/2/artifact/out/Dockerfile
GITHUB PR #4998
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 9019c7bbd85e 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 5d04590
Default Java Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/2/testReport/
Max. process+thread count 3137 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/2/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@steveloughran
Contributor

where are we with this patch? can/should we get it into 3.3.5

FsPermission fsPermission = FsPermission.valueOf(
    "-" + PosixFilePermissions.toString(permissions));
FileUtil.setPermission(file, fsPermission);
super.getWriteLock().lock();
Contributor

@steveloughran Jan 30, 2023


no need for the super. prefix here

but: we do now require lock() to be reentrant

@steveloughran self-assigned this Jan 30, 2023
Contributor

@steveloughran left a comment


With this change we are relying on the lock being re-entrant; super.flush() also acquires the lock.

This holds for the standard impl, but it's not guaranteed by the field type, and there's an unused setWriteLock() which can change it.

What to do?

If anything did set the locks, only bad things could happen, including decoupling the read and write locks or swapping a lock while a locked operation is in progress.

Really those methods should be cut or made test-only, though it's a bit late. This patch is, however, the first place where re-entrancy is expected.
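
One way to make that re-entrancy assumption explicit, shown as a hypothetical sketch (the field and accessor names here are assumptions, not the actual AbstractJavaKeyStoreProvider API), is to type the lock field concretely, make it final, and drop any setter so the lock cannot be swapped mid-operation:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public abstract class LockedProviderSketch {
  // final + concrete type documents that callers may rely on re-entrancy
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  protected ReentrantReadWriteLock.WriteLock getWriteLock() {
    return lock.writeLock();
  }

  protected ReentrantReadWriteLock.ReadLock getReadLock() {
    return lock.readLock();
  }

  // Note: no setWriteLock()/setReadLock(); exchanging the lock while another
  // thread holds it would silently break the locking protocol.
}
```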


FsPermission fsPermission = FsPermission.valueOf(
    "-" + PosixFilePermissions.toString(permissions));
FileUtil.setPermission(file, fsPermission);
}
Contributor


In the method getOutputStreamForKeystore(), before returning the outputStream, should we check that the file is empty? The reason being that, between creating the file and setting permissions, some other process could put something into the file.

Contributor


@saxenapranav I don't believe this is an issue. If this process has successfully obtained a write handle, then it is assumed no one else is actively writing to the file.

Contributor


What I mean to say is: what if some other process writes into the file between file.createNewFile() and FileUtil.setPermission(file, fsPermission)? In that case, the file would contain corrupted data. Kindly correct me if this looks wrong. Thanks.
@arp7
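
For what it's worth, one way to narrow the window being discussed is to create the file atomically with its restrictive permissions instead of createNewFile() followed by a separate chmod. This is a minimal sketch assuming a POSIX filesystem and java.nio.file, not what the patch currently does:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class AtomicCreateSketch {
  public static void createOwnerOnly(Path keystore) throws IOException {
    Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-------");
    FileAttribute<Set<PosixFilePermission>> attr = PosixFilePermissions.asFileAttribute(perms);
    try {
      // The file is created with these permissions in a single step, so no other
      // local process can open it world-readable between create and chmod.
      Files.createFile(keystore, attr);
    } catch (FileAlreadyExistsException e) {
      // Existing keystore: fall back to tightening permissions in place.
      Files.setPosixFilePermissions(keystore, perms);
    }
  }
}
```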

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 6m 42s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+1 💚 mvninstall 34m 55s trunk passed
+1 💚 compile 9m 1s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 compile 8m 17s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 checkstyle 0m 43s trunk passed
+1 💚 mvnsite 0m 56s trunk passed
+1 💚 javadoc 0m 45s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javadoc 0m 36s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 spotbugs 1m 30s trunk passed
+1 💚 shadedclient 21m 16s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 0m 32s the patch passed
+1 💚 compile 8m 33s the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javac 8m 33s the patch passed
+1 💚 compile 8m 9s the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 javac 8m 9s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 0m 39s the patch passed
+1 💚 mvnsite 0m 55s the patch passed
+1 💚 javadoc 0m 43s the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javadoc 0m 36s the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
-1 ❌ spotbugs 1m 40s /new-spotbugs-hadoop-common-project_hadoop-common.html hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 shadedclient 21m 19s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 16m 3s hadoop-common in the patch passed.
+1 💚 asflicense 0m 43s The patch does not generate ASF License warnings.
146m 57s
Reason Tests
SpotBugs module:hadoop-common-project/hadoop-common
Exceptional return value of java.io.File.createNewFile() ignored in org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush() At LocalKeyStoreProvider.java:ignored in org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush() At LocalKeyStoreProvider.java:[line 147]
Subsystem Report/Notes
Docker ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/artifact/out/Dockerfile
GITHUB PR #4998
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 8a78c1cdd11e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 5d04590
Default Java Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/testReport/
Max. process+thread count 3020 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.
