19 changes: 10 additions & 9 deletions pages/clustering/high-availability.mdx
@@ -59,15 +59,6 @@ metrics which reveal p50, p90 and p99 latencies of RPC messages, the duration
in the cluster. We also count the number of different RPC messages exchanged and the number of failed requests, since this can give
us information about the parts of the cluster that need further care. You can see the full list of metrics [here](/database-management/monitoring#system-metrics).
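As a quick illustration, this is one way you might pull those metrics for inspection. It is a minimal sketch that assumes the system metrics HTTP endpoint described in the monitoring docs is enabled and listening on its default port; the port and response shape are assumptions, so verify them against the linked page.

```bash
# Assumes the Enterprise system metrics HTTP endpoint is enabled and
# reachable on its default port; adjust host/port to your configuration.
curl -s http://localhost:9091 | python3 -m json.tool
```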

<Callout type="info">

When deploying coordinators to servers, you can use the instance of almost any size. Instances of 4GiB or 8GiB will suffice since coordinators'
job mainly involves network communication and storing Raft metadata. Coordinators and data instances can be deployed on same servers (pairwise)
but from the availability perspective, it is better to separate them physically.

</Callout>


## Bolt+routing

Directly connecting to the main instance isn't preferred in the HA cluster since
@@ -127,6 +118,16 @@ queries, not coordinator-based queries.

## System configuration


When deploying coordinators to servers, you can use instances of almost any size. Instances with 4 GiB or 8 GiB of memory will suffice, since the coordinators'
job mainly involves network communication and storing Raft metadata. Coordinators and data instances can be deployed on the same servers (pairwise),
but from an availability perspective it is better to separate them physically.

When setting up disk space, always make sure there is room for at least `--snapshot-retention-count`+1 snapshots plus a few WAL files. That's
because Memgraph first creates the (N+1)th snapshot and only then deletes the oldest one, so it can guarantee that the new snapshot was created successfully. This is
especially important when running Memgraph HA in K8s, since K8s deployments usually set a limit on the disk space used.


<Callout type="warning">
Important note if you're using a native Memgraph deployment on Red Hat.

4 changes: 3 additions & 1 deletion pages/getting-started/install-memgraph/kubernetes.mdx
@@ -326,14 +326,16 @@ reference guide](/database-management/configuration).

A Helm chart for deploying Memgraph in [high availability (HA)
setup](/clustering/high-availability). This helm chart requires [Memgraph
Enterprise license](/database-management/enabling-memgraph-enterprise).
Enterprise license](/database-management/enabling-memgraph-enterprise). We recommend reading
the Memgraph high availability documentation [here](https://memgraph.com/docs/clustering/high-availability) first.

The Memgraph HA cluster includes 3 coordinators and 2 data instances by default. Since
multiple Memgraph instances are used, it is advised to use multiple worker nodes in Kubernetes,
ideally with each Memgraph instance on its own node. The size of the nodes on which
data pods will reside depends on the computing power and the memory you need to store data.
Coordinator nodes can be smaller, and machines meeting basic requirements (8-16 GB of RAM) will be enough.
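To make the one-instance-per-node advice concrete, the sketch below shows a generic Kubernetes pod anti-affinity rule that keeps pods carrying the same label off the same node. Whether the HA Helm chart exposes this exact structure through its values, and which labels its pods carry, are assumptions here, so check the chart's `values.yaml` before relying on it.

```yaml
# Generic pod anti-affinity: no two pods labelled app: memgraph are scheduled
# onto the same node. The label and the values path are assumptions.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: memgraph
        topologyKey: kubernetes.io/hostname
```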


### Installing the Memgraph HA Helm chart

To include the Memgraph HA cluster as part of your Kubernetes cluster, you need to