
Conversation

@joachimdraeger
Contributor

Use Apache commons IO to copy streams in repository S3 plugin to avoid SecurityException. A plugin is only allowed to use its own jars when performing privileged operations. The S3 client might open a new Socket on close(). #25192

@joachimdraeger
Contributor Author

I haven't added NOTICE and LICENSE files for commons-io for now. I can do so if we want to go ahead with this change.

@elasticmachine
Collaborator

Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?

Member

@jasontedor jasontedor left a comment


I left a comment. Unless there's a good reason that I'm not seeing, we should not add a new dependency here.

httpasyncclient = 4.1.2
commonslogging = 1.1.3
commonscodec = 1.10
commonsio = 2.5
Member


I do not think we need a whole new dependency for this.

Contributor Author


I can implement the stream copying; it's not too much code.

@abeyad

abeyad commented Jun 13, 2017

+1, we shouldn't introduce a dependency on Apache Commons IO here. I think as long as you wrap the whole try block in the SocketAccess.doPrivilegedIOException call and keep everything else exactly the same, it should take care of the problem.

Thank you @joachimdraeger
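
For reference, a minimal sketch of that wrapping, under stated assumptions: plain AccessController.doPrivileged stands in for the plugin's SocketAccess.doPrivilegedIOException helper so the snippet compiles on its own, and createOutput/copyStream are hypothetical stand-ins for the existing blob-store code.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

// Sketch only, not the actual S3BlobContainer code.
abstract class BlobWriterSketch {

    // Hypothetical stand-in for opening the S3 upload stream.
    protected abstract OutputStream createOutput(String blobName) throws IOException;

    // Hypothetical stand-in for the copy routine (core's Streams.copy in the suggestion).
    protected abstract void copyStream(InputStream in, OutputStream out) throws IOException;

    void writeBlob(String blobName, InputStream inputStream) throws IOException {
        try {
            AccessController.doPrivileged((PrivilegedExceptionAction<Void>) () -> {
                // close() at the end of the try-with-resources may open a new
                // socket, so it has to run inside the privileged block as well.
                try (OutputStream out = createOutput(blobName)) {
                    copyStream(inputStream, out);
                }
                return null;
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }
}
```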

@joachimdraeger
Contributor Author

@abeyad, the Streams.copy(), which issues the close, is already wrapped by doPrivileged. The problem is that, as mentioned in the docs, all calls made from within doPrivileged need to come from the plugin's own jars. It's not even possible to use code from the core Elasticsearch jar. See also #25192.

@jasontedor
Member

@joachimdraeger Yes, that's right. Here I'm okay with duplicating the code unless a broader need arises.

@jasontedor
Member

I had a few thoughts while running tonight:

  • isn't the problem here the flush method on S3 output streams; can we not solve this by truncating the stack on flush calls?
  • alternatively, and still operating under the assumption that the problem is the flush method, can we not solve this by adding an additional overload of Streams#copy that accepts a consumer that wraps the streams' flush calls, similarly truncating the stack?
  • I think we need a test for this situation; this is subtle

Between the two alternatives I present, I prefer option one (sketched below); no need to muddy up core with a method that has one caller for one purpose. I think that a test is mandatory.

Does this make sense? What do you think?
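
A minimal sketch of option one under stated assumptions: the upload triggered by flush() runs inside doPrivileged in the stream class itself, so the permission check never walks past frames outside the plugin. The class and method names here are illustrative, not the actual DefaultS3OutputStream code.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

// Illustrative only: a flush() that truncates the stack before the
// socket-opening upload happens.
abstract class PrivilegedFlushOutputStream extends OutputStream {

    // Hypothetical stand-in for the part of the stream that talks to S3.
    protected abstract void doUpload() throws IOException;

    @Override
    public void flush() throws IOException {
        try {
            // doPrivileged truncates the stack walk here, so only this class's
            // protection domain (the plugin jar) needs the socket permission.
            AccessController.doPrivileged((PrivilegedExceptionAction<Void>) () -> {
                doUpload();
                return null;
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause(); // doUpload only throws IOException
        }
    }
}
```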

… to avoid SecurityException. A plugin is only allowed to use its own jars when performing privileged operations. The S3 client might open a new Socket on close(). elastic#25192"

This reverts commit 429f22f.
…o avoid SecurityException. A plugin is only allowed to use its own jars when performing privileged operations. The S3 client might open a new Socket on close(). elastic#25192
@joachimdraeger
Contributor Author

I implemented the stream copying. It's great that the try-with-resources construct takes care of correctly reporting exceptions thrown by close().

I will have a look at whether it would make sense to move the doPrivileged into the DefaultS3OutputStream.flush() method.

What specifically would you like to test, and how, @jasontedor?
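
For illustration, a minimal sketch of what plugin-local stream copying can look like (hypothetical names, not the PR's actual diff). Because the loop lives in the plugin jar, no core-jar frames sit between the SecurityManager's permission check and the plugin's doPrivileged boundary, and a surrounding try-with-resources reports exceptions thrown by close(), or attaches them as suppressed if the body already failed.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of a copy utility owned by the plugin itself.
final class PluginStreams {

    private static final int BUFFER_SIZE = 8 * 1024;

    private PluginStreams() {}

    // Copies all bytes from in to out and returns the number of bytes copied.
    // Neither stream is closed here; callers close them (typically with
    // try-with-resources) inside the same privileged block.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[BUFFER_SIZE];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }
}
```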

@jasontedor
Member

@joachimdraeger This is off the cuff, and I have not looked closely enough to see if what I'm thinking here is feasible, but maybe you can explore this? We know that DefaultS3OutputStream is ultimately going to connect to S3, and that is the problem: the connection is not privileged. Can we override DefaultS3OutputStream#doUpload to try to connect to a local socket? I think that should blow up right now, which would let us add a failing test. Then, with the change we are discussing above about truncating the stack in the stream's flush, the test should pass?
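
A rough, self-contained sketch of the local-socket stand-in; the hook into DefaultS3OutputStream#doUpload and the test SecurityManager setup are left out, since their exact signatures do not appear in this thread. A doUpload override in a test could do something like the connect below instead of talking to S3, which should trip the SocketPermission check until the flush path is privileged.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch only: connect to a loopback server instead of S3 so the
// SecurityManager's socket permission check is still exercised without any
// network access. Under the plugin's test policy this connect should fail
// with a SecurityException when reached from an unprivileged code path.
public final class LocalSocketUploadSketch {

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
             Socket upload = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort())) {
            upload.getOutputStream().write(new byte[] {1, 2, 3});
            upload.getOutputStream().flush();
            try (Socket accepted = server.accept()) {
                byte[] received = new byte[3];
                int n = accepted.getInputStream().read(received);
                System.out.println("received " + n + " bytes");
            }
        }
    }
}
```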

@joachimdraeger
Contributor Author

I added a second PR #25254 which moves doPrivileged into DefaultS3OutputStream.flush(). I think both solutions are viable. I slightly prefer #25206, which re-implements stream copying, because it keeps all doPrivileged calls in the same class and draws a clean line between privileged and unprivileged code.

@joachimdraeger
Contributor Author

Your idea of connecting to a local socket might be more feasible than I first thought, @jasontedor. Indeed, S3BlobStoreContainerTests exercises all the relevant code paths.

Have a look at the experiment that I did and let me know what you think.
joachimdraeger@e924786

@jasontedor
Member

@joachimdraeger I think that test is a good start; can you bring it to #25254 so we can assess it more carefully?

@jasontedor
Member

I'm going to close this PR; I think we will favor the approach in #25254. Thanks for taking the lead on this @joachimdraeger.

@jasontedor jasontedor closed this Jun 21, 2017
