java.lang.SecurityException: access denied ("java.lang.reflect.ReflectPermission" "suppressAccessChecks") for Plugin Repository HDFS #26513

@risdenk

Description

Elasticsearch version (bin/elasticsearch --version):
Version: 5.5.2, Build: b2f0c09/2017-08-14T12:33:14.154Z, JVM: 1.8.0_121

Plugins installed:

  • repository-hdfs
  • x-pack

JVM version (java -version):
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-tdc1-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

OS version (uname -a if on a Unix-like system):
Linux HOSTNAME 3.0.101-0.113.TDC.1.R.0-default #1 SMP Fri Dec 9 04:51:20 PST 2016 (ca32437) x86_64 x86_64 x86_64 GNU/Linux

Description of the problem including expected versus actual behavior:
Listing snapshots in a repository-hdfs snapshot repository should succeed, but the request instead fails with java.lang.SecurityException: access denied ("java.lang.reflect.ReflectPermission" "suppressAccessChecks") thrown by the JVM security manager.

Steps to reproduce:

  1. Install Elasticsearch
  2. Install repository-hdfs plugin
  3. Create Elasticsearch snapshot repository pointing to HDFS
  4. Try to list snapshots from that repository (curl -i -u USERNAME -H 'Accept: application/json' -H 'Content-Type: application/json' "https://$(hostname -f):9200/_snapshot/CLUSTERNAME/_all?pretty")
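
Step 3 in the report has no accompanying command. A hypothetical registration request is sketched below; the repository name mirrors step 4, but the `uri` and `path` settings are placeholders, not values from this issue:

```shell
# Hypothetical HDFS repository settings; NAMENODE, uri, and path are placeholders.
BODY='{"type":"hdfs","settings":{"uri":"hdfs://NAMENODE:8020","path":"/elasticsearch/snapshots"}}'

# Registration call (needs a running cluster, so shown commented out):
# curl -i -u USERNAME -H 'Content-Type: application/json' \
#   -X PUT "https://$(hostname -f):9200/_snapshot/CLUSTERNAME" -d "$BODY"

echo "$BODY"
```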

Provide logs (if relevant):
Stack trace caused by the missing security policy permission:

...

# first call to curl -i -u USERNAME -H 'Accept: application/json' -H 'Content-Type: application/json' "https://$(hostname -f):9200/_snapshot/CLUSTERNAME/_all?pretty"

org.elasticsearch.transport.RemoteTransportException: [master-HOSTNAME][IP_ADDRESS:9301][cluster:admin/snapshot/get]
Caused by: java.lang.SecurityException: access denied ("java.lang.reflect.ReflectPermission" "suppressAccessChecks")
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_121]
        at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_121]
        at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_121]
        at java.lang.reflect.AccessibleObject.setAccessible(AccessibleObject.java:128) ~[?:1.8.0_121]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:396) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) ~[?:?]
        at com.sun.proxy.$Proxy34.getServerDefaults(Unknown Source) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:640) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.shouldEncryptData(DFSClient.java:1755) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:1761) ~[?:?]
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:210) ~[?:?]
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160) ~[?:?]
        at org.apache.hadoop.hdfs.DFSUtilClient.peerFromSocketAndKey(DFSUtilClient.java:581) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2933) ~[?:?]
        at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:815) ~[?:?]
        at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:740) ~[?:?]
        at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385) ~[?:?]
        at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:706) ~[?:?]
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:647) ~[?:?]
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:918) ~[?:?]
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:974) ~[?:?]
        at java.io.DataInputStream.read(DataInputStream.java:100) ~[?:1.8.0_121]
        at org.elasticsearch.common.io.Streams.copy(Streams.java:79) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.common.io.Streams.copy(Streams.java:60) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:762) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.snapshots.SnapshotsService.getRepositoryData(SnapshotsService.java:140) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:97) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:55) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:87) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.doRun(TransportMasterNodeAction.java:166) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.5.2.jar:5.5.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
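
The failing frame is Hadoop's `RetryInvocationHandler.invokeMethod` calling `AccessibleObject.setAccessible`, which the security manager only permits when the calling code is granted `java.lang.reflect.ReflectPermission "suppressAccessChecks"`. As a sketch only (the proper fix is for the plugin to ship this grant in its own `plugin-security.policy`, scoped to the plugin codebase; a global grant weakens the sandbox), such a grant in Java policy-file syntax would look like:

```
grant {
  // Hypothetical grant; in practice it should be limited to the
  // repository-hdfs plugin rather than applied to all code.
  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
};
```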

...

# second call to curl -i -u USERNAME -H 'Accept: application/json' -H 'Content-Type: application/json' "https://$(hostname -f):9200/_snapshot/CLUSTERNAME/_all?pretty"

[2017-09-01T14:08:10,615][WARN ][r.suppressed             ] path: /_snapshot/CLUSTERNAME/_all, params: {pretty=, repository=CLUSTERNAME, snapshot=_all}
org.elasticsearch.transport.RemoteTransportException: [master-HOSTNAME][IP_ADDRESS:9301][cluster:admin/snapshot/get]
Caused by: java.lang.IllegalStateException
        at com.google.common.base.Preconditions.checkState(Preconditions.java:129) ~[?:?]
        at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:116) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:160) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) ~[?:?]
        at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1681) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1665) ~[?:?]
        at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:257) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1806) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1802) ~[?:?]
        at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1808) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1767) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1726) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobStore.lambda$execute$0(HdfsBlobStore.java:132) ~[?:?]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_121]
        at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_121]
        at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:129) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:930) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:908) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:746) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.snapshots.SnapshotsService.getRepositoryData(SnapshotsService.java:140) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:97) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:55) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:87) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.doRun(TransportMasterNodeAction.java:166) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.5.2.jar:5.5.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
