 [[disk-allocator]]
 === Disk-based Shard Allocation
 
-Elasticsearch factors in the available disk space on a node before deciding
-whether to allocate new shards to that node or to actively relocate shards
-away from that node.
+Elasticsearch considers the available disk space on a node before deciding
+whether to allocate new shards to that node or to actively relocate shards away
+from that node.
 
 Below are the settings that can be configured in the `elasticsearch.yml` config
 file or updated dynamically on a live cluster with the
@@ -15,29 +15,33 @@ file or updated dynamically on a live cluster with the
 
 `cluster.routing.allocation.disk.watermark.low`::
 
-    Controls the low watermark for disk usage. It defaults to 85%, meaning ES will
-    not allocate new shards to nodes once they have more than 85% disk used. It
-    can also be set to an absolute byte value (like 500mb) to prevent ES from
-    allocating shards if less than the configured amount of space is available.
+    Controls the low watermark for disk usage. It defaults to `85%`, meaning
+    that Elasticsearch will not allocate shards to nodes that have more than
+    85% disk used. It can also be set to an absolute byte value (like `500mb`)
+    to prevent Elasticsearch from allocating shards if less than the specified
+    amount of space is available. This setting has no effect on the primary
+    shards of newly-created indices or, specifically, any shards that have
+    never previously been allocated.
 
 `cluster.routing.allocation.disk.watermark.high`::
 
-    Controls the high watermark. It defaults to 90%, meaning ES will attempt to
-    relocate shards to another node if the node disk usage rises above 90%. It can
-    also be set to an absolute byte value (similar to the low watermark) to
-    relocate shards once less than the configured amount of space is available on
-    the node.
+    Controls the high watermark. It defaults to `90%`, meaning that
+    Elasticsearch will attempt to relocate shards away from a node whose disk
+    usage is above 90%. It can also be set to an absolute byte value (similarly
+    to the low watermark) to relocate shards away from a node if it has less
+    than the specified amount of free space. This setting affects the
+    allocation of all shards, whether previously allocated or not.
 
 `cluster.routing.allocation.disk.watermark.flood_stage`::
 +
 --
-    Controls the flood stage watermark. It defaults to 95%, meaning ES enforces
-    a read-only index block (`index.blocks.read_only_allow_delete`) on every
-    index that has one or more shards allocated on the node that has at least
-    one disk exceeding the flood stage. This is a last resort to prevent nodes
-    from running out of disk space. The index block must be released manually
-    once there is enough disk space available to allow indexing operations to
-    continue.
+    Controls the flood stage watermark. It defaults to `95%`, meaning that
+    Elasticsearch enforces a read-only index block
+    (`index.blocks.read_only_allow_delete`) on every index that has one or more
+    shards allocated on the node that has at least one disk exceeding the flood
+    stage. This is a last resort to prevent nodes from running out of disk space.
+    The index block must be released manually once there is enough disk space
+    available to allow indexing operations to continue.
 
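One way to release the block manually, once enough disk space is available again, is to reset the `index.blocks.read_only_allow_delete` setting to `null`; the `twitter` index name below is only an example:

[source,js]
--------------------------------------------------
PUT /twitter/_settings
{
  "index.blocks.read_only_allow_delete": null
}
--------------------------------------------------
// CONSOLE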
 NOTE: You can not mix the usage of percentage values and byte values within
 these settings. Either all are set to percentage values, or all are set to byte
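For example, all three watermarks could be set to absolute byte values together in `elasticsearch.yml`; the values below are only illustrative, not recommendations:

[source,yaml]
--------------------------------------------------
cluster.routing.allocation.disk.watermark.low: 100gb
cluster.routing.allocation.disk.watermark.high: 50gb
cluster.routing.allocation.disk.watermark.flood_stage: 10gb
--------------------------------------------------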
@@ -67,12 +71,12 @@ PUT /twitter/_settings
 `cluster.routing.allocation.disk.include_relocations`::
 
     Defaults to +true+, which means that Elasticsearch will take into account
-    shards that are currently being relocated to the target node when computing a
-    node's disk usage. Taking relocating shards' sizes into account may, however ,
-    mean that the disk usage for a node is incorrectly estimated on the high side,
-    since the relocation could be 90% complete and a recently retrieved disk usage
-    would include the total size of the relocating shard as well as the space
-    already used by the running relocation.
+    shards that are currently being relocated to the target node when computing
+    a node's disk usage. Taking relocating shards' sizes into account may,
+    however, mean that the disk usage for a node is incorrectly estimated on
+    the high side, since the relocation could be 90% complete and a recently
+    retrieved disk usage would include the total size of the relocating shard
+    as well as the space already used by the running relocation.
 
 
 NOTE: Percentage values refer to used disk space, while byte values refer to
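Like the watermark settings, this setting can be updated dynamically on a live cluster with the cluster settings API, for example:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.include_relocations": false
  }
}
--------------------------------------------------
// CONSOLE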