HADOOP-18235. vulnerability: we may leak sensitive information in LocalKeyStoreProvider #4998
base: trunk
Conversation
You indicate that it is untested.
Have you tested that it actually works in an environment?
I do recall adding this logic, and I had trouble with the proposed order but thought it was likely just me doing something wrong. Obviously, someone has to test this, though. :)
I did add a comment that I think needs to be addressed as well.
```java
} finally {
  super.getWriteLock().unlock();
}
super.flush();
```
Should this be inside the try block if we are attempting not to write to a keystore with open permissions?
I think this currently would not address your #2 concern with CredentialShell, or a node failure during the permission-setting step.
My initial assumptions were:
If we fail to set permissions, I would expect an IOError or the like to interrupt execution, so we would never reach the flush call.
I am a bit of a novice with Java ReadWriteLocks, and was thinking we could deadlock if we did not release the write lock, since flush in the superclass also attempts to acquire the write lock.
Lastly, I was assuming nothing in Hadoop would be setting more permissive permissions on this file.
My updated understanding, thanks to your question:
Reading the JavaDocs for ReentrantReadWriteLock, the locks are held per-thread rather than per-scope, and a thread can acquire the write lock multiple times, so I think flush can move inside the initial lock.
Further, from the JavaDocs and other uses in Hadoop, it looks like the write lock acquisition should sit outside the try/finally block.
Code updated as such.
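To make the locking discussion concrete, here is a minimal sketch of the pattern described above (not the actual Hadoop code; the class and method names are invented for illustration): the write lock is acquired outside the try/finally, and because ReentrantReadWriteLock write locks are reentrant per thread, the nested acquisition inside flush() does not deadlock.
```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReentrantFlushSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  public void setPermissionsAndFlush() {
    lock.writeLock().lock();    // acquired outside the try/finally, matching the pattern above
    try {
      // ... create the keystore file and set restrictive permissions here ...
      flush();                  // re-acquires the same write lock; fine, because the lock is reentrant
    } finally {
      lock.writeLock().unlock();
    }
  }

  private void flush() {
    lock.writeLock().lock();    // second acquisition by the same holder thread succeeds
    try {
      // ... write the keystore bytes out ...
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```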
💔 -1 overall
This message was automatically generated.
💔 -1 overall
This message was automatically generated.
Where are we with this patch? Can/should we get it into 3.3.5?
```java
FsPermission fsPermission = FsPermission.valueOf(
    "-" + PosixFilePermissions.toString(permissions));
FileUtil.setPermission(file, fsPermission);
super.getWriteLock().lock();
```
no need for the super. prefix here
but: we do now require lock() to be reentrant
With this change we are relying on the lock being re-entrant; super.flush() also acquires the lock.
This holds for the standard implementation, but it's not guaranteed by the field type, and there's an unused setWriteLock() which can change it.
What to do?
If anything did set the locks, only bad stuff could happen, including decoupling the read and write locks or changing the lock while a locked operation is in progress.
Really those methods should be cut or made test-only, though it's a bit late for that. This patch is, however, the first place where re-entrancy is expected.
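As a hypothetical illustration of that risk (not code from this patch): if setWriteLock() were ever used to swap in a non-reentrant lock, for example the write view of a StampedLock, the nested acquisition performed by flush() would block the calling thread forever.
```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.StampedLock;

public class NonReentrantHazard {
  public static void main(String[] args) {
    Lock nonReentrant = new StampedLock().asWriteLock();  // a non-reentrant write lock
    nonReentrant.lock();
    System.out.println("outer acquisition succeeded");
    // nonReentrant.lock();  // the same thread acquiring again would block forever (self-deadlock)
    nonReentrant.unlock();
  }
}
```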
```java
FsPermission fsPermission = FsPermission.valueOf(
    "-" + PosixFilePermissions.toString(permissions));
FileUtil.setPermission(file, fsPermission);
}
```
In the method getOutputStreamForKeystore(), before returning the outputStream, should it be checked that the file is empty? The reason being that, between creating the file and setting permissions, some other process could put something in the file.
@saxenapranav I don't believe this is an issue. If this process has successfully got a write handle then it is assumed no one else is actively writing to the file.
What I mean to say is: what if some other process writes into the file between file.createNewFile() and FileUtil.setPermission(file, fsPermission)? In that case, the file would contain corrupted data. Kindly correct me if this looks wrong. Thanks.
@arp7
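For what it's worth, one way to remove that window entirely would be to create the file with restrictive permissions in a single call, so it never exists with the default, wider permissions. This is only a sketch under the assumption of a POSIX file system, and it is not what this patch does; the class and method names are invented for illustration.
```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class AtomicCreateSketch {
  // Creates the file owner-read/write only; the permissions are applied
  // atomically as part of creation, and the call fails if the file already exists.
  public static Path createOwnerOnlyFile(String pathName) throws IOException {
    Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-------");
    FileAttribute<Set<PosixFilePermission>> attr =
        PosixFilePermissions.asFileAttribute(perms);
    return Files.createFile(Paths.get(pathName), attr);
  }
}
```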
💔 -1 overall
This message was automatically generated.
Description of PR
This change ensures we have a file and have set permissions on it before writing out data. I simply rearranged the current logic and am not aware whether there is a better pattern to follow elsewhere in Hadoop.
How was this patch tested?
This is an untested PR. I have merely verified it builds.
For code changes:
If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?