diff --git a/source/core/replication-internals.txt b/source/core/replication-internals.txt index dcfe2beced4..a2096683d9d 100644 --- a/source/core/replication-internals.txt +++ b/source/core/replication-internals.txt @@ -211,6 +211,8 @@ aware of the following conditions and possible situations: :ref:`replica-set-node-priority-configuration`, and :data:`replica configuration `. +.. _replica-set-elections-and-network-partitions: + Elections and Network Partitions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/tutorial/deploy-geographically-distributed-replica-set.txt b/source/tutorial/deploy-geographically-distributed-replica-set.txt index 877a324521d..69a802ce3ae 100644 --- a/source/tutorial/deploy-geographically-distributed-replica-set.txt +++ b/source/tutorial/deploy-geographically-distributed-replica-set.txt @@ -4,23 +4,19 @@ Deploy a Geographically Distributed Replica Set .. default-domain:: mongodb -This document describes the procedure for deploying a replica set with -members in multiple locations, and addresses both three member replica -sets, four member replica sets, and replica sets with more than four -members. +This tutorial describes how to deploy a :term:`replica set` with members +in multiple locations. The tutorial addresses three-member sets, +four-member sets, and sets with more than four members. -.. seealso:: ":doc:`/core/replication`" and - ":doc:`/administration/replication-architectures`" for appropriate - background. - - The ":doc:`/tutorial/deploy-replica-set`" and - ":doc:`/tutorial/expand-replica-set`" tutorials provide - documentation of related operations. +.. seealso:: For appropriate background, see :doc:`/core/replication` + and :doc:`/administration/replication-architectures`. For related + tutorials, see :doc:`/tutorial/deploy-replica-set` and + :doc:`/tutorial/expand-replica-set`. Overview -------- -While replica sets provide basic protection against single-instance +While :term:`replica sets ` provide basic protection against single-instance failure, when all of the members of a replica set reside within a single facility, the replica set is still susceptible to some classes of errors within that facility including power outages, networking @@ -31,50 +27,53 @@ geographically distinct facility or data center. Requirements ------------ -For a three-member replica set you will need two instances in a +For a three-member replica set you need two instances in a primary facility (hereafter, "Site A") and one member in a secondary facility (hereafter, "Site B".) Site A should be the same facility or very close to your primary application infrastructure (i.e. application servers, caching layer, users, etc.) -For a four-member replica set you will need two systems within Site A, -two members in Site B (or one member in Site B, and one member in Site +For a four-member replica set you need two members in Site A, +two members in Site B (or one member in Site B and one member in Site C,) and a single :term:`arbiter` in Site A. -If you wish to deploy additional members in the secondary facility or -multiple secondary facilities, the requirements are the same with the +For replica sets with additional members in the secondary facility or with +multiple secondary facilities, the requirements are the same as above but with the following notes: -- Ensure that a majority of the total number of :ref:`voting nodes +- Ensure that a majority of the :ref:`voting members ` are within Site A. This includes :ref:`secondary-only members ` and - :ref:`arbiters `. 
+ :ref:`arbiters `. For more information on the + need to keep the voting majority on one site, see + :ref:`replica-set-elections-and-network-partitions`. - If you deploy a replica set with an uneven number of members, deploy - an :ref:`arbiter ` within Site A. - -Procedure ---------- + an :ref:`arbiter ` in Site A. The arbiter must + be in Site A to keep the majority there. -Although you may to deploy more than one replica set member on a -single system, this configuration reduces the redundancy and capacity +For all configurations in this tutorial, deploy each replica set member +on a separate system. Although you may deploy more than one replica set member on a +single system, doing so reduces the redundancy and capacity of the replica set. Such deployments are typically for testing purposes and beyond the scope of this tutorial. -Three Member Replica Set -~~~~~~~~~~~~~~~~~~~~~~~~ +Procedures +---------- -Consider the following features of this deployment: +.. _replica-set-deploy-distributed-three-member: -- Each member of the replica set, except for the :term:`arbiter` (see - below), will reside on it's own machine, and the MongoDB processes - will all bind to port ``27017``, or the standard MongoDB port. +Deploy a Distributed Three-Member Replica Set +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A geographically distributed three-member deployment has the following +features: -- :doc:`Configuration files ` - provide runtime configuration rather than as :doc:`command line - options `. +- Each member of the replica set resides on its own machine, and the + MongoDB processes all bind to port ``27017``, which is the standard + MongoDB port. -- Each member of the replica set needs to be accessible by way of +- Each member of the replica set must be accessible by way of resolvable DNS or hostnames in the following scheme: - ``mongodb0.example.net`` @@ -84,16 +83,15 @@ Consider the following features of this deployment: Configure DNS names appropriately, *or* set up your systems' ``/etc/host`` file to reflect this configuration. Ensure that one system (e.g. ``mongodb2.example.net``) resides in Site B. Host all - other hosts systems in Site A. + other systems in Site A. - Ensure that network traffic can pass between all members in the network securely and efficiently. Consider the following: - Establish a virtual private network between the systems in Site A - and Site B (and Site C if it exists) to encrypt all traffic - between the sites and remains private. Ensure that your network - topology routs all traffic between members within a single site - over the local area network. + and Site B so that traffic between the sites is encrypted and + remains private. Ensure that your network topology routes all + traffic between members within a single site over the local area + network. - Configure authentication using :setting:`auth` and :setting:`keyFile`, so that only servers and processes with @@ -103,134 +101,153 @@ Consider the following features of this deployment: (incoming and outgoing packets) on the default MongoDB port (e.g. ``27017``) from *within* your deployment. - .. seealso:: The ":ref:`replica-set-security`" section for more information regarding security and firewalls. + .. seealso:: For more information on security and firewalls, see :ref:`replica-set-security`. + +- Specify run-time configuration on each system in a + :doc:`configuration file ` stored in + ``/etc/mongodb.conf`` or in a related location. *Do not* specify + run-time configuration through command line options.
+ + For each MongoDB instance, use the following configuration, with values + set appropriately for your systems: + + .. code-block:: cfg + + port = 27017 + + bind_ip = 10.8.0.10 + + dbpath = /srv/mongodb/ + + fork = true + + replSet = rs0/mongodb0.example.net,mongodb1.example.net,mongodb2.example.net -Use the following configuration for each MongoDB instance: + Modify :setting:`bind_ip` to reflect a secure interface on your + system that is able to access all other members of the set *and* + that is accessible to all other members of the replica set. The DNS or + host names need to point and resolve to this IP address. Configure + network rules or a virtual private network (i.e. "VPN") to permit this + access. -.. code-block:: cfg + .. note:: - port = 27017 + The portion of the :setting:`replSet` following the ``/`` provides + a "seed list" of known members of the replica set. + :program:`mongod` uses this list to fetch configuration changes + following restarts. It is acceptable to omit this section entirely, + and have the :setting:`replSet` option resemble: - bind_ip = 10.8.0.10 + .. code-block:: cfg - dbpath = /srv/mongodb/ + replSet = rs0 - fork = true + For more documentation on the above run-time configuration settings, as well as + additional configuration options, see + :doc:`/reference/configuration-options`. - replSet = rs0/mongodb0.example.net,mongodb1.example.net,mongodb2.example.net +To deploy a geographically distributed three-member set: -Modify the :setting:`bind_ip` to reflect a secure interface on your -system that will be able to access all other members of the set *and* -that is accessible to all other members of the replica set. The DNS or -host names need to point and resolve to this IP address. Configure -network rules or a virtual private network (i.e. "VPN") to permit this -access. +1. On each system start the :program:`mongod` process by issuing a + command similar to the following: -.. note:: + .. code-block:: sh - The portion of the :setting:`replSet` following the ``/`` provides - a "seed list" of known members of the replica - set. :program:`mongod` uses this list to fetch configuration - changes following restarts. It is acceptable to omit this section - entirely, and have the :setting:`replSet` option resemble: + mongod --config /etc/mongodb.conf - .. code-block:: cfg + .. note:: - replSet = rs0 + In production deployments you likely want to use and configure a + :term:`control script` to manage this process based on this + command. Control scripts are beyond the scope of this document. -Store this file on each system, located at ``/etc/mongodb.conf`` on -the file system. See the documentation of the configuration options -used above: :setting:`dbpath`, :setting:`port`, :setting:`replSet`, -:setting:`bind_ip`, and :setting:`fork`. Also consider any additional -:doc:`configuration options ` that -your deployment requires. +#. Open a :program:`mongo` shell connected to this host: -On each system issue the following command to start the -:program:`mongod` process: + .. code-block:: sh -.. code-block:: sh + mongo - mongod --config /etc/mongodb.conf
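+
+   Optionally, you can confirm from this shell that the member in Site B
+   is reachable across the VPN before you initiate the set. The following
+   is a minimal sketch, not part of the core procedure, that uses the
+   example hostname from this tutorial; the ``Mongo()`` shell constructor
+   throws an error if the connection fails:
+
+   .. code-block:: javascript
+
+      // returns { "ok" : 1 } if the connection and ping succeed
+      siteB = new Mongo("mongodb2.example.net:27017")
+      siteB.getDB("admin").runCommand( { ping: 1 } )
+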
+#. Use :method:`rs.initiate()` to initiate a replica set consisting of + the current member and using the default configuration: -.. note:: + .. code-block:: javascript - In production deployments you likely want to use and configure a - :term:`control script` to manage this process based on this - command. Control scripts are beyond the scope of this document. + rs.initiate() -Log in with the :program:`mongo` shell to this host using the -:program:`mongo` command at the system prompt. Call the following -shell helper to initiate a replica set consisting of the current -instance, using the default configuration: +#. Display the current :doc:`replica configuration `: -.. code-block:: javascript + .. code-block:: javascript - rs.initiate() + rs.conf() -Use the following shell function to display the current :doc:`replica -configuration `: +#. Add the remaining members to the replica set by issuing a sequence of + commands similar to the following. The example commands assume the + current :term:`primary` is ``mongodb0.example.net``: -.. code-block:: javascript + .. code-block:: javascript - rs.config() + rs.add("mongodb1.example.net") + rs.add("mongodb2.example.net") -Now, issue the following sequence of commands to add the remaining -members to the replica set. The following example assumes that the -current primary is ``mongodb0.example.net``. +#. Make sure that you have configured the member located in Site B + (i.e. ``mongodb2.example.net``) as a :ref:`secondary-only member + `: -.. code-block:: javascript + a. Issue the following command to determine the + :data:`members[n]._id` value for ``mongodb2.example.net``: - rs.add("mongodb1.example.net") - rs.add("mongodb2.example.net") - rs.add("mongodb3.example.net") + .. code-block:: javascript -Make sure that you have configured the member located in Site B -(i.e. ``mongodb3.example.net``) as a :ref:`secondary-only member -`. First, issue the following -command determine the :data:`members[n]._id` value for -``mongodb3.example.net``: + rs.config() -.. code-block:: javascript + #. In the :data:`member array `, save the + :data:`members[n]._id` value. The example in the next step assumes + this value is ``2``. - rs.config() + #. In the :program:`mongo` shell connected to the replica set's + primary, issue a command sequence similar to the following: -In the :data:`member array ` for this host, save -the :data:`members[n]._id` value. The next example assumes that this -value is ``2``. Next, in the shell connected to the replica set's -:term:`primary`, issue the following command sequence: + .. code-block:: javascript -.. code-block:: javascript + cfg = rs.conf() + cfg.members[2].priority = 0 + rs.reconfig(cfg) - cfg = rs.conf() - cfg.members[2].priority = 0 - rs.reconfig(cfg) + .. note:: -.. note:: + In some situations, the :method:`rs.reconfig()` shell command + can force the current primary to step down and cause an + election. When the primary steps down, all clients will + disconnect. This is the intended behavior. Although this + typically takes only 10-20 seconds, attempt to make these changes + during scheduled maintenance periods. - The :method:`rs.reconfig()` shell command can force the current - primary to step down and causes an election in some - situations. When the primary steps down, all clients will - disconnect. This is the intended behavior. While, this typically - takes 10-20 seconds, attempt to make these changes during scheduled - maintenance periods. + After these commands return you have a geographically distributed + three-member replica set. -Congratulations! You have now deployed a geographically distributed -three-member replica set. +#. To check the status of your replica set, issue :method:`rs.status()`.
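+
+   For example, the following loop, a minimal sketch, prints the name and
+   state of each member using the ``name``, ``stateStr``, and ``health``
+   fields of the :method:`rs.status()` output. In a healthy deployment,
+   one member in Site A reports ``PRIMARY`` and the other members,
+   including ``mongodb2.example.net`` in Site B, report ``SECONDARY``:
+
+   .. code-block:: javascript
+
+      // print "<host> : <state> (health: <1 or 0>)" for every member
+      rs.status().members.forEach( function(member) {
+          print( member.name + " : " + member.stateStr + " (health: " + member.health + ")" )
+      } )
+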
-Four Member Replica Set -~~~~~~~~~~~~~~~~~~~~~~~ +.. seealso:: For more information, see the documentation of the + following shell functions: -Consider the following features of this deployment: + - :method:`rs.initiate()` + - :method:`rs.conf()` + - :method:`rs.reconfig()` + - :method:`rs.add()` -- Each member of the replica set, except for the arbiter (see - below), will reside on it's own machine, and the MongoDB processes - will all bind to port ``27017``, or the standard MongoDB port. +.. _replica-set-deploy-distributed-four-member: -- :doc:`Configuration files ` - provide runtime configuration rather than as :doc:`command line - options `. +Deploy a Distributed Four-Member Replica Set +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- Each member of the replica set needs to be accessible by way of +A geographically distributed four-member deployment has the following +features: + +- Each member of the replica set, except for the :term:`arbiter` (see + below), resides on its own machine, and the MongoDB processes all bind + to port ``27017``, which is the standard MongoDB port. + +- Each member of the replica set must be accessible by way of resolvable DNS or hostnames in the following scheme: - ``mongodb0.example.net`` @@ -241,9 +258,9 @@ Consider the following features of this deployment: Configure DNS names appropriately, *or* set up your systems' ``/etc/host`` file to reflect this configuration. Ensure that one system (e.g. ``mongodb2.example.net``) resides in Site B. Host all - other hosts systems in Site A. + other systems in Site A. -- One host (e.g. ``mongodb3.example.net``) will be an ":term:`arbiter`" +- One host (e.g. ``mongodb4.example.net``) will be an :term:`arbiter` and can run on a system that is also used for an application server or some other shared purpose. @@ -251,15 +268,12 @@ Consider the following features of this deployment: - Two members in Site A, two :ref:`secondary-only members ` in Site B, and an - :term:`arbiter` in Site A. + arbiter in Site A. - - Three members in Site A and one :ref:`secondary-only member - ` in Site B. + - Three members in Site A and one secondary-only member in Site B. - - Two members in Site A, one :ref:`secondary-only member - ` in Site B, one - :ref:`secondary-only member ` in - Site C and an :term:`arbiter` in site A. + - Two members in Site A, one secondary-only member in Site B, one + secondary-only member in Site C, and an arbiter in Site A. In most cases the first architecture is preferable because it is the least complex. @@ -281,151 +295,169 @@ Consider the following features of this deployment: (incoming and outgoing packets) on the default MongoDB port (e.g. ``27017``) from *within* your deployment. - .. seealso:: The ":ref:`replica-set-security`" section for - more information regarding security practices with replica - sets. + .. seealso:: For more information on security and firewalls, see :ref:`replica-set-security`. -Use the following configuration for each MongoDB instance: +- Specify run-time configuration on each system in a + :doc:`configuration file ` stored in + ``/etc/mongodb.conf`` or in a related location. *Do not* specify + run-time configuration through command line options. -.. code-block:: cfg + For each MongoDB instance, use the following configuration, with values + set appropriately for your systems: - port = 27017 + ..
code-block:: cfg - bind_ip = 10.8.0.10 + port = 27017 - dbpath = /srv/mongodb/ + bind_ip = 10.8.0.10 - fork = true + dbpath = /srv/mongodb/ - replSet = rs0/mongodb0.example.net,mongodb1.example.net,mongodb2.example.net,mongodb3.example.net + fork = true -Modify the :setting:`bind_ip` to reflect a secure interface on your -system that will be able to access all other members of the set *and* -that is accessible to all other members of the replica set. The DNS or -host names need to point and resolve to this IP address. Configure -network rules or a virtual private network (i.e. "VPN") to permit this -access. + replSet = rs0/mongodb0.example.net,mongodb1.example.net,mongodb2.example.net,mongodb3.example.net -.. note:: + Modify :setting:`bind_ip` to reflect a secure interface on your + system that is able to access all other members of the set *and* + that is accessible to all other members of the replica set. The DNS or + host names need to point and resolve to this IP address. Configure + network rules or a virtual private network (i.e. "VPN") to permit this + access. - The portion of the :setting:`replSet` following the ``/`` provides - a "seed list" of known members of the replica - set. :program:`mongod` uses this list to fetch configuration - changes following restarts. It is acceptable to omit this section - entirely, and have the :setting:`replSet` option resemble: + .. note:: - .. code-block:: cfg + The portion of the :setting:`replSet` following the ``/`` provides + a "seed list" of known members of the replica set. + :program:`mongod` uses this list to fetch configuration changes + following restarts. It is acceptable to omit this section entirely, + and have the :setting:`replSet` option resemble: - replSet = rs0 + .. code-block:: cfg -Store this file on each system, located at ``/etc/mongodb.conf`` on -the file system. See the documentation of the configuration options -used above: :setting:`dbpath`, :setting:`port`, :setting:`replSet`, -:setting:`bind_ip`, and :setting:`fork`. Also consider any additional -:doc:`configuration options ` that -your deployment requires. + replSet = rs0 -On each system issue the following command to start the -:program:`mongod` process: + For more documentation on the above run-time configuration settings, as well + as additional configuration options, see + :doc:`/reference/configuration-options`. -.. code-block:: bash +To deploy a geographically distributed four-member set: - mongod --config /etc/mongodb.conf +1. On each system start the :program:`mongod` process by issuing a + command similar to the following: -.. note:: + .. code-block:: sh - In production deployments you likely want to use and configure a - :term:`control script` to manage this process based on this - command. Control scripts are beyond the scope of this document. + mongod --config /etc/mongodb.conf -Log in with the :program:`mongo` shell to this host using the -:program:`mongo` command at the system prompt. Call the following -shell helper to initiate a replica set consisting of the current -instance using the default configuration: + .. note:: -.. code-block:: javascript + In production deployments you likely want to use and configure a + :term:`control script` to manage this process based on this + command. Control scripts are beyond the scope of this document. - rs.initiate() +#. Open a :program:`mongo` shell connected to this host: -Use the following shell function to display the current :doc:`replica -configuration `: + .. code-block:: sh -..
code-block:: javascript + mongo - rs.config() +#. Use :method:`rs.initiate()` to initiate a replica set consisting of + the current member and using the default configuration: -Now, issue the following sequence of commands to add the remaining -instances to the replica set. The following example assumes that the -current primary is ``mongodb0.example.net``. + .. code-block:: javascript -.. code-block:: javascript + rs.initiate() - rs.add("mongodb1.example.net") - rs.add("mongodb2.example.net") - rs.add("mongodb3.example.net") +#. Display the current :doc:`replica configuration `: -In the same shell session, issue the following command to add the -arbiter (i.e. ``mongodb4.example.net``): + .. code-block:: javascript -.. code-block:: javascript + rs.conf() - rs.addArb("mongodb4.example.net") +#. Add the remaining members to the replica set by issuing a sequence of + commands similar to the following. The example commands assume the + current :term:`primary` is ``mongodb0.example.net``: -Make sure that you have configured the member located in Site B -(i.e. ``mongodb3.example.net``) as a :ref:`secondary-only member -`. First, issue the following -command determine the :data:`members[n]._id` value for -``mongodb3.example.net``: + .. code-block:: javascript -.. code-block:: javascript + rs.add("mongodb1.example.net") + rs.add("mongodb2.example.net") + rs.add("mongodb3.example.net") - rs.config() +#. In the same shell session, issue the following command to add the + arbiter (e.g. ``mongodb4.example.net``): -In the :data:`member array ` for this host, save -the :data:`members[n]._id` value. The next example assumes that -this value is ``2``. Next, in the shell connected to the replica set's -:term:`primary`, issue the following command sequence: + .. code-block:: javascript -.. code-block:: javascript + rs.addArb("mongodb4.example.net") - cfg = rs.conf() - cfg.members[2].priority = 0 - rs.reconfig(cfg) +#. Make sure that you have configured each member located in Site B + (e.g. ``mongodb3.example.net``) as a :ref:`secondary-only member + `: -.. note:: + a. Issue the following command to determine the + :data:`members[n]._id` value for the member: - The :method:`rs.reconfig()` shell command can force the current - primary to step down and causes an election in some - situations. When the primary steps down, all clients will - disconnect. This is the intended behavior. While, this typically - takes 10-20 seconds, attempt to make these changes during scheduled - maintenance periods. + .. code-block:: javascript -Congratulations! You have now deployed a geographically distributed -four-member replica set. + rs.config() -Larger Replica Set Considerations -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + #. In the :data:`member array `, save the + :data:`members[n]._id` value. The example in the next step assumes + this value is ``2``. -The procedure for deploying a geographically distributed set with more -than three or four members resembles the above procedures. However, consider -the following: + #. In the :program:`mongo` shell connected to the replica set's + primary, issue a command sequence similar to the following: -- Never deploy more than seven voting members. + .. code-block:: javascript + + cfg = rs.conf() + cfg.members[2].priority = 0 + rs.reconfig(cfg) + + .. note:: + + In some situations, the :method:`rs.reconfig()` shell command + can force the current primary to step down and causes an + election. When the primary steps down, all clients will + disconnect. This is the intended behavior. 
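+
+          To confirm that the reconfiguration took effect, you can
+          re-read the configuration. This is a minimal check that
+          assumes, as in the example above, that the Site B member is at
+          array position ``2``:
+
+          .. code-block:: javascript
+
+             // should return 0 for the secondary-only member
+             rs.conf().members[2].priority
+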
-- Use the procedure for a four member replica set if you have an even - number of members. Ensure that Site A always has a majority of - the members by deploying the :term:`arbiter` within Site A. + After these commands return you have a geographically distributed + four-member replica set. - For six member sets, deploy at least three voting members in - addition to the arbiter in Site A, the remaining members in - alternate sites. +#. To check the status of your replica set, issue :method:`rs.status()`. + +.. seealso:: For more information, see the documentation of the + following shell functions: + + - :method:`rs.initiate()` + - :method:`rs.conf()` + - :method:`rs.reconfig()` + - :method:`rs.add()` + +Deploy a Distributed Set with More than Four Members +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The procedure for deploying a geographically distributed set with more +than four members is similar to the above procedures, with the following +differences: + +- Never deploy more than seven voting members. -- Use the procedure for a three member replica set if you have an odd - number of members. Ensure that Site A always has a majority of the - members of the set. For example, if a set has five members, deploy - three remember within the primary facility and two remember in other - facilities. +- Use the procedure for a four-member set if you have an even number of + members (see :ref:`replica-set-deploy-distributed-four-member`). + Ensure that Site A always has a majority of the members by deploying + the :term:`arbiter` within Site A. For six-member sets, deploy at + least three voting members in addition to the arbiter in Site A, and + deploy the remaining members in alternate sites. + +- Use the procedure for a three-member set if you have an odd number of + members (see :ref:`replica-set-deploy-distributed-three-member`). + Ensure that Site A always has a majority of the members of the set. + For example, if a set has five members, deploy three members within + the primary facility and two members in other facilities. - If you have a majority of the members of the set *outside* of Site A and the network partitions to prevent communication between sites,