
Red Cluster State: failed to obtain in-memory shard lock #23199

@speedplane

Description


Elasticsearch version: 5.2.0

Plugins installed: [repository-gcs, repository-s3, x-pack, io.fabric8:elasticsearch-cloud-kubernetes]

JVM version: 1.8.0_121

OS version: Ubuntu Xenial, running in a container managed by Kubernetes

Description of the problem including expected versus actual behavior: A shard left the cluster; the index had no replica configured, so this resulted in data loss :(
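In hindsight, the settings change below is what I would have needed up front so the index kept at least one replica copy of each shard. This is just a sketch using the standard index-settings API; the host is a placeholder and the index name is the one from this incident:

    # give the single-copy index a replica so losing one shard copy is not fatal
    curl -XPUT 'http://localhost:9200/da-prod8-other/_settings' -d '
    {
      "index": {
        "number_of_replicas": 1
      }
    }'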

Steps to reproduce:

  1. Had a 5-node cluster that had been mostly indexing for a full week (about 1B docs) across 5 different indices.
  2. When it was almost done, I ramped up to 10 nodes.
  3. Things were working fine for a while, then one shard on one of the nodes failed, and the cluster went into red state.

I looked through the logs and it appears there is a lock error. It may have resulted from a sporadic network failure, but I'm not sure. The error logs refer to a few indices, but the only one that went into red state and did not come back is da-prod8-other, probably because it did not have a replica.
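In case it helps anyone hitting the same thing, these are the kinds of calls I would run to dig in and then ask the master to retry the failed allocations once the shard lock clears. They are the stock 5.x cluster APIs; the host, index name, and shard number are placeholders taken from this report, not anything specific to the fix:

    # overall state and which shards are unassigned, with the reason
    curl -XGET 'http://localhost:9200/_cluster/health?pretty'
    curl -XGET 'http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason'

    # ask the cluster why the red primary is not being allocated
    curl -XGET 'http://localhost:9200/_cluster/allocation/explain?pretty' -d '
    {
      "index": "da-prod8-other",
      "shard": 3,
      "primary": true
    }'

    # once the in-memory lock is released, retry allocations that hit the failure limit
    curl -XPOST 'http://localhost:9200/_cluster/reroute?retry_failed=true'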

Provide logs (if relevant):

[2017-02-15T00:17:46,359][WARN ][o.e.c.a.s.ShardStateAction] [node-2-data-pod] [da-prod8-ttab][0] unexpected failure while sending request [internal:cluster/shard/failure] to [{es-master-714112077-ae5jq}{KfcqAA57R02arAOj1kshuw}{8YGdiFciTWeiidJXI4uh3A}{10.0.3.58}{10.0.3.58:9300}] for shard entry [shard id [[da-prod8-ttab][0]], allocation id [-rtlh5w4QAqqVf_nLd8cVw], primary term [16], message [failed to perform indices:data/write/bulk[s] on replica [da-prod8-ttab][0], node[uviqqBXkR9a63SRtoW28Wg], [R], s[STARTED], a[id=-rtlh5w4QAqqVf_nLd8cVw]], failure [RemoteTransportException[[node-1-data-pod][10.0.25.4:9300][indices:data/write/bulk[s][r]]]; nested: IllegalStateException[active primary shard cannot be a replication target before  relocation hand off [da-prod8-ttab][0], node[uviqqBXkR9a63SRtoW28Wg], [P], s[STARTED], a[id=-rtlh5w4QAqqVf_nLd8cVw], state is [STARTED]]; ]]
org.elasticsearch.transport.RemoteTransportException: [es-master-714112077-ae5jq][10.0.3.58:9300][internal:cluster/shard/failure]
Caused by: org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [16] did not match current primary term [17]
        at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:291) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.executeTasks(ClusterService.java:674) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.calculateTaskOutputs(ClusterService.java:653) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:612) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) ~[elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-02-15T00:17:46,372][WARN ][o.e.d.z.ZenDiscovery     ] [node-2-data-pod] master left (reason = failed to ping, tried [12] times, each with  maximum [2s] timeout), current nodes: nodes:
   {es-master-714112077-ae5jq}{KfcqAA57R02arAOj1kshuw}{8YGdiFciTWeiidJXI4uh3A}{10.0.3.58}{10.0.3.58:9300}, master
   {es-master-714112077-19hsw}{B9ss9idVQN-5EITg9jhhtw}{Rzabp7WaS_SilvEBWHuI9A}{10.0.4.53}{10.0.4.53:9300}
   {node-7-data-pod}{Zs7Q_tpgTZmpnwBNFZYi6w}{pGi8rVIqTE6j5OIceuFxdg}{10.0.36.3}{10.0.36.3:9300}
   {node-4-data-pod}{AuYiFtGDTvqJrBeI2wU_sA}{2wPJrJUiR6GSCPd6ZnObfA}{10.0.35.3}{10.0.35.3:9300}
   {node-2-data-pod}{HyrtTYWWRVODlbTCKTxdzw}{Nkv_gLwwQnyuCe5tTXD8fg}{10.0.34.3}{10.0.34.3:9300}, local
   {node-6-data-pod}{S_GBUaOfRHS4XW65x9OIhw}{Egd5QuxvTMOmNOehbZTtQQ}{10.0.33.3}{10.0.33.3:9300}
   {node-5-data-pod}{0aaunpa-Qkab66Ti5mFoTw}{BC27Z_hiRcGgaDQcmHgEaA}{10.0.37.3}{10.0.37.3:9300}
   {node-9-data-pod}{tcTjGDFLRCWlQMND7vlL6A}{rgaljuXoRr2SpjWuXCTqaA}{10.0.26.4}{10.0.26.4:9300}
   {node-3-data-pod}{kzr2o00tSzyuY-ekWuiNng}{x3RMwZicQ46ljZS-muWy-g}{10.0.32.6}{10.0.32.6:9300}
   {node-1-data-pod}{uviqqBXkR9a63SRtoW28Wg}{K9PqwDXLSuO5XTPmQds0aw}{10.0.25.4}{10.0.25.4:9300}
   {node-8-data-pod}{ATEayeK_SZWydO1cFfsZfg}{QE63FwDJQY2HL0S1Nys2gg}{10.0.27.4}{10.0.27.4:9300}
   {es-master-714112077-kh7ur}{nKYzKbxWRv-kvQBZVZJuGA}{SfV1jqmWSiS1jbWqnl-TPQ}{10.0.1.50}{10.0.1.50:9300}
   {node-0-data-pod}{uUZCt9RrS2aY_gqkSmNV5A}{SUYJlFEGQzyrF1tyocLj3w}{10.0.28.4}{10.0.28.4:9300}

[2017-02-15T00:17:46,439][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.1.50, transport_address 10.0.1.50:9300
[2017-02-15T00:17:46,438][WARN ][o.e.i.c.IndicesClusterStateService] [node-2-data-pod] [[da-prod8-ttab][0]] marking and sending shard failed due to [shard failure, reason [primary shard [[da-prod8-ttab][0], node[HyrtTYWWRVODlbTCKTxdzw], [P], s[STARTED], a[id=kgFYdXusT6ObzYvLd74PTQ]] was demoted while failing replica shard]]
org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [16] did not match current primary term [17]
        at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:291) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.executeTasks(ClusterService.java:674) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.calculateTaskOutputs(ClusterService.java:653) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:612) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) ~[elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-02-15T00:17:46,440][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.25.4, transport_address 10.0.25.4:9300
[2017-02-15T00:17:46,440][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.26.4, transport_address 10.0.26.4:9300
[2017-02-15T00:17:46,440][WARN ][o.e.c.a.s.ShardStateAction] [node-2-data-pod] [da-prod8-ttab][0] no master known for action [internal:cluster/shard/failure] for shard entry [shard id [[da-prod8-ttab][0]], allocation id [kgFYdXusT6ObzYvLd74PTQ], primary term [0], message [shard failure, reason [primary shard [[da-prod8-ttab][0], node[HyrtTYWWRVODlbTCKTxdzw], [P], s[STARTED], a[id=kgFYdXusT6ObzYvLd74PTQ]] was demoted while failing replica shard]], failure [NoLongerPrimaryShardException[primary term [16] did not match current primary term [17]]]]
[2017-02-15T00:17:46,440][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.27.4, transport_address 10.0.27.4:9300
[2017-02-15T00:17:46,440][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.28.4, transport_address 10.0.28.4:9300
[2017-02-15T00:17:46,440][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.3.58, transport_address 10.0.3.58:9300
[2017-02-15T00:17:46,440][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.32.6, transport_address 10.0.32.6:9300
[2017-02-15T00:17:46,440][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.33.3, transport_address 10.0.33.3:9300
[2017-02-15T00:17:46,441][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.35.3, transport_address 10.0.35.3:9300
[2017-02-15T00:17:46,441][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.36.3, transport_address 10.0.36.3:9300
[2017-02-15T00:17:46,441][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.37.3, transport_address 10.0.37.3:9300
[2017-02-15T00:17:46,441][INFO ][i.f.e.d.k.KubernetesUnicastHostsProvider] [node-2-data-pod] adding endpoint /10.0.4.53, transport_address 10.0.4.53:9300
[2017-02-15T00:17:46,493][WARN ][o.e.i.c.IndicesClusterStateService] [node-2-data-pod] [[da-prod8-pacer][2]] marking and sending shard failed due to [shard failure, reason [primary shard [[da-prod8-pacer][2], node[HyrtTYWWRVODlbTCKTxdzw], [P], s[STARTED], a[id=VSyGabxyQPKQf9q9ow1F_Q]] was demoted while failing replica shard]]
org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [7] did not match current primary term [8]
        at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:291) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.executeTasks(ClusterService.java:674) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.calculateTaskOutputs(ClusterService.java:653) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:612) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) ~[elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-02-15T00:17:46,494][WARN ][o.e.c.a.s.ShardStateAction] [node-2-data-pod] [da-prod8-pacer][2] no master known for action [internal:cluster/shard/failure] for shard entry [shard id [[da-prod8-pacer][2]], allocation id [VSyGabxyQPKQf9q9ow1F_Q], primary term [0], message [shard failure, reason [primary shard [[da-prod8-pacer][2], node[HyrtTYWWRVODlbTCKTxdzw], [P], s[STARTED], a[id=VSyGabxyQPKQf9q9ow1F_Q]] was demoted while failing replica shard]], failure [NoLongerPrimaryShardException[primary term [7] did not match current primary term [8]]]]
[2017-02-15T00:17:49,472][INFO ][o.e.c.s.ClusterService   ] [node-2-data-pod] detected_master {es-master-714112077-ae5jq}{KfcqAA57R02arAOj1kshuw}{8YGdiFciTWeiidJXI4uh3A}{10.0.3.58}{10.0.3.58:9300}, reason: zen-disco-receive(from master [master {es-master-714112077-ae5jq}{KfcqAA57R02arAOj1kshuw}{8YGdiFciTWeiidJXI4uh3A}{10.0.3.58}{10.0.3.58:9300} committed version [1538]])
[2017-02-15T00:17:54,466][INFO ][o.e.i.s.TransportNodesListShardStoreMetaData] [node-2-data-pod] [da-prod8-ttab][0]: failed to obtain shard lock
org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-ttab][0]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.store.Store.readMetadataSnapshot(Store.java:383) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.listStoreMetaData(TransportNodesListShardStoreMetaData.java:153) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:64) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:145) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:270) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:266) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1488) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-02-15T00:17:54,467][INFO ][o.e.i.s.TransportNodesListShardStoreMetaData] [node-2-data-pod] [da-prod8-pacer][2]: failed to obtain shard lock
org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-pacer][2]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.store.Store.readMetadataSnapshot(Store.java:383) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.listStoreMetaData(TransportNodesListShardStoreMetaData.java:153) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:64) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:145) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:270) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:266) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1488) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-02-15T00:17:55,007][WARN ][o.e.i.c.IndicesClusterStateService] [node-2-data-pod] [[da-prod8-other][3]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-other][3]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.0.jar:5.2.0]
        ... 15 more
[2017-02-15T00:18:00,044][WARN ][o.e.i.c.IndicesClusterStateService] [node-2-data-pod] [[da-prod8-scotus][0]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-scotus][0]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.0.jar:5.2.0]
        ... 15 more
[2017-02-15T00:18:05,075][INFO ][o.e.i.s.TransportNodesListShardStoreMetaData] [node-2-data-pod] [da-prod8-scotus][0]: failed to obtain shard lock
org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-scotus][0]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.store.Store.readMetadataSnapshot(Store.java:383) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.listStoreMetaData(TransportNodesListShardStoreMetaData.java:153) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:64) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:145) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:270) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:266) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1488) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-02-15T00:18:10,120][WARN ][o.e.i.c.IndicesClusterStateService] [node-2-data-pod] [[da-prod8-other][3]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-other][3]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.0.jar:5.2.0]
        ... 15 more
[2017-02-15T00:18:20,175][WARN ][o.e.i.c.IndicesClusterStateService] [node-2-data-pod] [[da-prod8-other][3]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-other][3]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.0.jar:5.2.0]
        ... 15 more
[2017-02-15T00:18:30,250][WARN ][o.e.i.c.IndicesClusterStateService] [node-2-data-pod] [[da-prod8-other][3]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-other][3]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.0.jar:5.2.0]
        ... 15 more
[2017-02-15T00:18:38,070][WARN ][o.e.i.c.IndicesClusterStateService] [node-2-data-pod] [[da-prod8-pacer][2]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-pacer][2]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.0.jar:5.2.0]
        ... 15 more
[2017-02-15T00:18:43,071][WARN ][o.e.i.c.IndicesClusterStateService] [node-2-data-pod] [[da-prod8-ttab][0]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-ttab][0]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.0.jar:5.2.0]
        ... 15 more
[2017-02-15T00:18:48,107][INFO ][o.e.i.s.TransportNodesListShardStoreMetaData] [node-2-data-pod] [da-prod8-ttab][0]: failed to obtain shard lock
org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-ttab][0]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.store.Store.readMetadataSnapshot(Store.java:383) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.listStoreMetaData(TransportNodesListShardStoreMetaData.java:153) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:64) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:145) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:270) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:266) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1488) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-02-15T00:18:48,107][INFO ][o.e.i.s.TransportNodesListShardStoreMetaData] [node-2-data-pod] [da-prod8-pacer][2]: failed to obtain shard lock
org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-pacer][2]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.store.Store.readMetadataSnapshot(Store.java:383) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.listStoreMetaData(TransportNodesListShardStoreMetaData.java:153) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:64) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:145) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:270) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:266) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1488) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-02-15T00:18:48,115][WARN ][o.e.i.c.IndicesClusterStateService] [node-2-data-pod] [[da-prod8-other][3]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.0.jar:5.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [da-prod8-other][3]: obtaining shard lock timed out after 5000ms
        at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.0.jar:5.2.0]
        at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.0.jar:5.2.0]
        ... 15 more

Labels: :Distributed Indexing/Distributed, >docs
