34 changes: 20 additions & 14 deletions docs/reference/cluster/reroute.asciidoc
@@ -83,8 +83,26 @@ Reasons why a primary shard cannot be automatically allocated include the following
the cluster. To prevent data loss, the system does not automatically promote a stale
shard copy to primary.

[float]
=== Retry failed shards

The cluster will attempt to allocate a shard a maximum of
`index.allocation.max_retries` times in a row (defaults to `5`) before giving
up and leaving the shard unallocated. This scenario can be caused by
structural problems such as an analyzer which refers to a stopwords
file that doesn't exist on all nodes.
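As an illustration of this setting only (a sketch, not a recommendation: the
index name `my-index` is a placeholder, and this assumes the setting can be
updated dynamically on a live index), the retry budget could be raised with
the index settings API:

[source,console]
----
// "my-index" is a placeholder index name
PUT /my-index/_settings
{
  "index.allocation.max_retries": 10
}
----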

Once the problem has been corrected, allocation can be manually retried by
calling the <<cluster-reroute,`reroute`>> API with `?retry_failed`, which
will attempt a single retry round for these shards.
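A minimal sketch of such a call, shown here with the flag set explicitly to
`true`:

[source,console]
----
POST /_cluster/reroute?retry_failed=true
----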

[float]
=== Forced allocation on unrecoverable errors

The following two commands are dangerous and may result in data loss. They are
meant to be used in cases where the original data cannot be recovered and the cluster
administrator accepts the loss. If you have suffered a temporary issue that has since been
fixed, please see the `retry_failed` flag described above.

`allocate_stale_primary`::
Allocate a primary shard to a node that holds a stale copy. Accepts the
@@ -108,15 +126,3 @@
this command requires the special field `accept_data_loss` to be
explicitly set to `true` for it to work.
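For illustration only, a sketch of such a command as it might be passed to the
reroute API; the index name `my-index`, shard number `0`, and node name
`node-1` are placeholders:

[source,console]
----
// "my-index", 0, and "node-1" below are placeholder values
POST /_cluster/reroute
{
  "commands": [
    {
      "allocate_stale_primary": {
        "index": "my-index",
        "shard": 0,
        "node": "node-1",
        "accept_data_loss": true
      }
    }
  ]
}
----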
