Merged
Changes from all commits
23 commits
f9dd093
Update shards per resource guidance
original-brownbear May 9, 2022
a15ef7d
CR comments
original-brownbear May 9, 2022
70c1491
non dedicated master comment
original-brownbear May 9, 2022
5a4917d
Merge remote-tracking branch 'elastic/master' into drop-obviously-bro…
original-brownbear May 9, 2022
869ffc9
CR: david
original-brownbear May 9, 2022
dcb5067
meh wl
original-brownbear May 9, 2022
9e460a8
small b'
original-brownbear May 9, 2022
b75194f
Merge remote-tracking branch 'elastic/master' into drop-obviously-bro…
original-brownbear May 10, 2022
936d9da
CR: adjustments
original-brownbear May 10, 2022
af59baa
fix double space
original-brownbear May 11, 2022
5dcc0b9
Merge remote-tracking branch 'elastic/master' into drop-obviously-bro…
original-brownbear May 11, 2022
f3428c6
Update docs/reference/how-to/size-your-shards.asciidoc
original-brownbear May 11, 2022
cca3ac9
Update docs/reference/how-to/size-your-shards.asciidoc
original-brownbear May 11, 2022
b4f0e0f
Wordsmithery
DaveCTurner May 25, 2022
0df5ef7
Merge remote-tracking branch 'elastic/master' into drop-obviously-bro…
original-brownbear Jun 1, 2022
d8b6484
Update docs/reference/how-to/size-your-shards.asciidoc
original-brownbear Jun 1, 2022
f6d2e87
Merge remote-tracking branch 'origin/drop-obviously-broken-docs' into…
original-brownbear Jun 1, 2022
ad6af0f
CR: comments
original-brownbear Jun 1, 2022
86c22fe
Merge remote-tracking branch 'elastic/master' into drop-obviously-bro…
original-brownbear Jun 2, 2022
43d0bc6
add require more phrase
original-brownbear Jun 2, 2022
38c95b6
Merge remote-tracking branch 'origin/drop-obviously-broken-docs' into…
original-brownbear Jun 9, 2022
4d1a8c1
Merge remote-tracking branch 'elastic/master' into drop-obviously-bro…
original-brownbear Jun 9, 2022
588bc87
remove duplicate mapping details
original-brownbear Jun 9, 2022
37 changes: 27 additions & 10 deletions docs/reference/how-to/size-your-shards.asciidoc
@@ -55,7 +55,7 @@ thread pool>>. This can result in low throughput and slow search speeds.

[discrete]
[[each-shard-has-overhead]]
-==== Each index and shard has overhead
+==== Each index, shard and field has overhead

Every index and every shard requires some memory and CPU resources. In most
cases, a small set of large shards uses fewer resources than many small shards.
@@ -175,17 +175,17 @@ index prirep shard store

[discrete]
[[shard-count-recommendation]]
-==== Aim for 20 shards or fewer per GB of heap memory
+==== Aim for 3000 indices or fewer per GB of heap memory on each master node

-The number of shards a data node can hold is proportional to the node's heap
-memory. For example, a node with 30GB of heap memory should have at most 600
-shards. The further below this limit you can keep your nodes, the better. If
-you find your nodes exceeding more than 20 shards per GB, consider adding
-another node.
+The number of indices a master node can manage is proportional to its heap
+size. The exact amount of heap memory needed for each index depends on various
+factors such as the size of the mapping and the number of shards per index.

-Some system indices for {enterprise-search-ref}/index.html[Enterprise Search]
-are nearly empty and rarely used. Due to their low overhead, you shouldn't
-count shards for these indices toward a node's shard limit.
+As a general rule of thumb, you should aim for 3000 indices or fewer per GB of
+heap on master nodes. For example, if your cluster contains 12000 indices then
+each dedicated master node should have at least 4GB of heap. For non-dedicated
+master nodes, the same rule applies, and this heap requirement is in addition
+to the heap needed by the node's other roles.

To check the configured size of each node's heap, use the <<cat-nodes,cat nodes
API>>.
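As an aside from the diff itself: the 3000-indices-per-GB rule of thumb added above boils down to simple arithmetic. The sketch below is purely illustrative; the function name, the rounding choice, and the defaults are my own assumptions, not from the docs:

```python
import math

def min_master_heap_gb(index_count, indices_per_gb=3000):
    """Rule of thumb from the section above: a master node should have
    at least 1GB of heap for every 3000 indices it manages."""
    return math.ceil(index_count / indices_per_gb)

# The worked example from the text: a cluster with 12000 indices
# needs dedicated master nodes with at least 4GB of heap each.
print(min_master_heap_gb(12000))  # 4
```

For non-dedicated master nodes, this figure would sit on top of the heap the node's other roles require.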
@@ -205,6 +205,23 @@ GET _cat/shards?v=true
----
// TEST[setup:my_index]

+[discrete]
+[[field-count-recommendation]]
+==== Allow 1kB of heap per field per index on data nodes, plus overheads
+
+The exact resource usage of each mapped field depends on its type, but a rule
+of thumb is to allow for approximately 1kB of heap overhead per mapped field
+per index held by each data node. You must also allow enough heap for {es}'s
+baseline usage as well as for your workload: indexing, searches and
+aggregations. An extra 0.5GB of heap will suffice for many reasonable
+workloads; you may need even less if your workload is very light, while heavy
+workloads may require more.
+
+For example, if a data node holds shards from 1000 indices, each containing
+4000 mapped fields, then you should allow approximately 1000 × 4000 × 1kB = 4GB
+of heap for the fields and another 0.5GB of heap for its workload and other
+overheads, and therefore this node will need a heap size of at least 4.5GB.

[discrete]
[[avoid-node-hotspots]]
==== Avoid node hotspots