Description
Elasticsearch version: 2.2.1
JVM version: java version "1.8.0_31"
Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)
OS version: CentOS release 6.7 (Final)
Description of the problem including expected versus actual behavior:
Thanks in advance for your help.
I inherited a poorly-behaving Elasticsearch cluster that I've been tuning on and off for a few months. Some things have improved, others have not.
About a month ago, another team needed to reboot the (AWS-hosted) nodes. I turned off shard allocation in the cluster with an API call and got a success response, but the setting didn't take effect, so I also turned off allocation in kopf. Even so, rebooting a node caused shards to be reallocated anyway, so we rebooted one box per day and let the cluster rebalance each time.
Some time after the reboots, I noticed the cluster was still yellow: it had stopped allocating shards. I figured the API call had finally taken effect, so I turned allocation back on (and got another success response). According to kopf, allocation is enabled, but the cluster does not automatically assign the unassigned shards.
I tried toggling shard allocation on and off with both kopf and the API. But the cluster would not allocate shards.
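For reference, the toggle calls I have been using look roughly like this (host, port, and exact values are reconstructed from memory, so treat them as approximate):

```
# Disable shard allocation (what I ran before the reboots)
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}'

# Re-enable shard allocation (what I ran afterwards)
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}'
```

Both calls come back acknowledged, but the unassigned shards stay unassigned.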
I complained about this on Twitter and an Elastic employee recommended upgrading. So, I upgraded the cluster from 1.5.2 to 2.2.1. However, the behavior persists.
If I assign shards manually with the API, they allocate just fine.
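The manual assignment that does work looks roughly like this (index name, shard number, and node name are placeholders):

```
# Manually allocate a single unassigned shard (2.x reroute syntax)
curl -XPOST 'http://localhost:9200/_cluster/reroute' -d '{
  "commands": [
    {
      "allocate": {
        "index": "my-index",
        "shard": 0,
        "node": "node-1"
      }
    }
  ]
}'
```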
Steps to reproduce:
- Turn off shard allocation with API call and kopf
- Turn on shard allocation with API call and kopf
- Cry a lot
Provide logs (if relevant): I didn't see any relevant logs, but if you tell me what to look for I can grep. I'm happy to gather diagnostics for you, but please note that I no longer own this cluster as of 3pm PDT Friday.
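If it helps, I can also dump the effective cluster settings before the handoff; I'd be running something along these lines (host is a placeholder):

```
# Show the transient and persistent cluster settings as the cluster sees them
curl -XGET 'http://localhost:9200/_cluster/settings?pretty'
```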