diff --git a/source/includes/fact-2.0-fsync-lock-change.rst b/source/includes/fact-2.0-fsync-lock-change.rst new file mode 100644 index 00000000000..918f7732038 --- /dev/null +++ b/source/includes/fact-2.0-fsync-lock-change.rst @@ -0,0 +1,10 @@ +.. versionchanged:: 2.0 + MongoDB 2.0 added :method:`db.fsyncLock()` and + :method:`db.fsyncUnlock()` helpers to the :program:`mongo` + shell. Prior to this version, use the :dbcommand:`fsync` + command with the ``lock`` option, as follows: + +.. code-block:: javascript + + db.runCommand( { fsync: 1, lock: true } ); + db.runCommand( { fsync: 1, lock: false } ); diff --git a/source/includes/steps-backup-sharded-cluster-with-snapshots.yaml b/source/includes/steps-backup-sharded-cluster-with-snapshots.yaml new file mode 100644 index 00000000000..58b04405b8d --- /dev/null +++ b/source/includes/steps-backup-sharded-cluster-with-snapshots.yaml @@ -0,0 +1,86 @@ +title: Disable the balancer. +stepnum: 1 +ref: disable-balancer +action: + pre: | + Disable the :term:`balancer` process that equalizes the distribution + of data among the :term:`shards `. To disable the balancer, use + the :method:`sh.stopBalancer()` method in the :program:`mongo` shell. + For example: + language: sh + code: | + use config + sh.stopBalancer() +post: | + For more information, see the + :ref:`sharding-balancing-disable-temporarily` procedure. +--- +title: Lock one secondary member of each replica set in each shard. +stepnum: 2 +ref: lock +content: | + Lock one secondary member of each replica set in each shard so that your + backups reflect the state of your database at the nearest possible + approximation of a single moment in time. Lock these :program:`mongod` + instances in as short an interval as possible. + + To lock a secondary, connect through the :program:`mongo` shell to the + secondary member's :program:`mongod` instance and issue the + :method:`db.fsyncLock()` method. +--- +title: Back up one of the config servers. +stepnum: 3 +ref: backup-config-server +pre: | + Backing up a :ref:`config server ` backs up the + sharded cluster's metadata. You need to back up only one config server, as + they all hold the same data. Do one of the following to back up one of + the config servers: +action: + - heading: "Create a file-system snapshot of the config server." + pre: | + Do this **only if** the config server has :term:`journaling + ` enabled. Use the procedure in + :doc:`/tutorial/backup-with-filesystem-snapshots`. **Never** use + :method:`db.fsyncLock()` on config databases. + - heading: "Create a database dump to back up the config server." + pre: | + Issue :program:`mongodump` against one of the config + :program:`mongod` instances or via the :program:`mongos`. If you are + running MongoDB 2.4 or later with the :option:`--configsvr ` option, then include the :option:`--oplog ` option to ensure that the dump includes a partial oplog + containing operations from the duration of the mongodump operation. + For example: + language: sh + code: | + mongodump --oplog --db config +--- +title: Back up the replica set members of the shards that you locked. +stepnum: 4 +ref: backup-locked-shards +content: | + You may back up the shards in parallel. For each shard, create a + snapshot. Use the procedure in + :doc:`/tutorial/backup-with-filesystem-snapshots`. +--- +title: Unlock locked replica set members. +stepnum: 5 +ref: unlock +content: | + Unlock all locked replica set members of each shard using the + :method:`db.fsyncUnlock()` method in the :program:`mongo` shell.
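+
+   For example, in a :program:`mongo` shell connected to each locked
+   secondary's :program:`mongod` instance, issue:
+
+   .. code-block:: javascript
+
+      db.fsyncUnlock()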
+--- +title: Enable the balancer. +stepnum: 6 +ref: enable-balancer +action: + pre: | + Re-enable the balancer with the :method:`sh.setBalancerState()` + method. Use the following command sequence when connected to the + :program:`mongos` with the :program:`mongo` shell: + language: javascript + code: | + use config + sh.setBalancerState(true) +... diff --git a/source/includes/steps-backup-without-journaling.yaml b/source/includes/steps-backup-without-journaling.yaml new file mode 100644 index 00000000000..37c4b3b715b --- /dev/null +++ b/source/includes/steps-backup-without-journaling.yaml @@ -0,0 +1,30 @@ +title: Flush writes to disk and lock the database to prevent further writes. +stepnum: 1 +ref: flush +action: + pre: | + To flush writes to disk and to "lock" the database, issue the + :method:`db.fsyncLock()` method in the :program:`mongo` shell: + language: javascript + code: | + db.fsyncLock(); +--- +title: "Perform the backup operation described in :ref:`lvm-backup-operation`." +stepnum: 2 +ref: backup +--- +title: "After the snapshot completes, unlock the database." +stepnum: 3 +ref: unlock +action: + - pre: | + To unlock the database after the snapshot has completed, use the + following command in the :program:`mongo` shell: + language: javascript + code: | + db.fsyncUnlock(); +post: | + .. include:: /includes/warning-fsync-lock-mongodump.rst + + .. include:: /includes/fact-2.0-fsync-lock-change.rst +... diff --git a/source/includes/steps-recover-data-with-repairpath.yaml b/source/includes/steps-recover-data-with-repairpath.yaml new file mode 100644 index 00000000000..d99f0abd61d --- /dev/null +++ b/source/includes/steps-recover-data-with-repairpath.yaml @@ -0,0 +1,31 @@ +title: "Start :program:`mongod` using the options to write the repaired files to a new directory." +stepnum: 1 +ref: start-with-repair +action: + pre: | + Start the :program:`mongod` instance using the :option:`--repair + ` option **and** the :option:`--repairpath ` option. Issue a command similar to the following: + language: sh + code: | + mongod --dbpath /data/db --repair --repairpath /data/db0 +post: | + When this completes, the new repaired data files will be in the + ``/data/db0`` directory. The original data files remain unmodified. +--- +title: "Start :program:`mongod` with the new data directory." +stepnum: 2 +ref: start-with-new-path +action: + pre: | + Start :program:`mongod` using the following invocation to point the + :setting:`~storage.dbPath` at ``/data/db0``: + language: sh + code: | + mongod --dbpath /data/db0 +post: | + Once you confirm that the data files are operational, you may delete or + archive the old data files in the ``/data/db`` directory. You may also + wish to move the repaired files to the old database location or update + the :setting:`~storage.dbPath` to indicate the new location. +... diff --git a/source/includes/steps-recover-data-without-repairpath.yaml b/source/includes/steps-recover-data-without-repairpath.yaml new file mode 100644 index 00000000000..1f2a28f206e --- /dev/null +++ b/source/includes/steps-recover-data-without-repairpath.yaml @@ -0,0 +1,39 @@ +title: Remove the stale lock file. +stepnum: 1 +ref: remove-stale-lock-file +action: + pre: | + For example: + language: sh + code: | + rm /data/db/mongod.lock +post: | + Replace ``/data/db`` with your :setting:`~storage.dbPath` where your + MongoDB instance's data files reside. +--- +title: "Start :program:`mongod` using the option to replace the original files with the repaired files."
+stepnum: 2 +ref: start-with-repair +action: + pre: | + Start the :program:`mongod` instance using the :option:`--repair + ` option, which replaces the original data files with + the repaired data files. Issue a command similar to the following: + language: sh + code: | + mongod --dbpath /data/db --repair +post: | + When this completes, the repaired data files will replace the original + data files in the ``/data/db`` directory. +--- +title: "Start :program:`mongod` as usual." +stepnum: 3 +ref: start-as-usual +action: + pre: | + Start :program:`mongod` using the following invocation to point the + :setting:`~storage.dbPath` at ``/data/db``: + language: sh + code: | + mongod --dbpath /data/db +... diff --git a/source/includes/steps-restore-primary-from-backup.yaml b/source/includes/steps-restore-primary-from-backup.yaml new file mode 100644 index 00000000000..ab1a47e56e8 --- /dev/null +++ b/source/includes/steps-restore-primary-from-backup.yaml @@ -0,0 +1,63 @@ +title: Obtain backup MongoDB database files. +stepnum: 1 +ref: obtain-backup-files +content: | + The backup files may come from a :doc:`file system snapshot + `. The `MongoDB Management + Service (MMS) + `_ + produces MongoDB database files for :mms:`stored snapshots + ` and :mms:`point-in-time + snapshots `. You + can also use :program:`mongorestore` to restore database files using + data created with :program:`mongodump`. See + :doc:`/tutorial/backup-with-mongodump` for more information. +--- +title: "Start a :program:`mongod` using data files from the backup as the data path." +stepnum: 2 +ref: start +action: + pre: | + The following example uses ``/data/db`` as the data path, as specified + in the :setting:`dbpath` setting: + language: sh + code: | + mongod --dbpath /data/db +--- +title: "Convert the standalone :program:`mongod` to a single-node replica set." +stepnum: 3 +ref: convert +action: + pre: | + Convert the standalone :program:`mongod` process to a single-node + replica set by shutting down the :program:`mongod` instance, and + restarting it with the :option:`--replSet ` option, + as in the following example: + language: sh + code: | + mongod --dbpath /data/db --replSet +post: | + Optionally, you can explicitly set a :setting:`~replication.oplogSizeMB` + to control the size of the :term:`oplog` created for this replica set + member. +--- +title: Connect to the :program:`mongod` instance. +stepnum: 4 +ref: connect +action: + pre: | + For example, issue the following to connect: + language: sh + code: | + mongo +--- +title: Initiate the new replica set. +stepnum: 5 +ref: initiate +action: + pre: | + Issue :method:`rs.initiate()` to initiate the new replica set. For example: + language: javascript + code: | + rs.initiate() +... diff --git a/source/includes/steps-restore-secondary-from-backup-directly.yaml b/source/includes/steps-restore-secondary-from-backup-directly.yaml new file mode 100644 index 00000000000..5b74c6846c1 --- /dev/null +++ b/source/includes/steps-restore-secondary-from-backup-directly.yaml @@ -0,0 +1,28 @@ +title: "Shut down the :program:`mongod` instance that you restored." +stepnum: 1 +ref: shutdown +content: | + Use :option:`--shutdown ` or + :method:`db.shutdownServer()` to ensure a clean shutdown. +--- +title: "Copy the primary's data directory to each secondary." +stepnum: 2 +ref: copy-data +content: | + Copy the :term:`primary's ` data directory into the + :setting:`~storage.dbPath` of the other members of the replica set. The + :setting:`~storage.dbPath` is ``/data/db`` by default.
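+
+   For example, you might copy the files with ``rsync`` over SSH. This is
+   only a sketch: the hostname ``mongodb1.example.net`` is a placeholder
+   for a secondary, and the destination :program:`mongod` must not be
+   running while you copy the files.
+
+   .. code-block:: sh
+
+      # copy the primary's data files into the secondary's dbPath
+      rsync -avz /data/db/ mongodb1.example.net:/data/db/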
+--- +title: "Start the :program:`mongod` instance that you restored." +stepnum: 3 +ref: start +--- +title: Add the secondaries to the replica set. +stepnum: 4 +ref: add +content: | + In a :program:`mongo` shell connected to the :term:`primary`, add the + :term:`secondaries ` to the replica set using + :method:`rs.add()`. See :doc:`/tutorial/deploy-replica-set` for more + information about deploying a replica set. +... diff --git a/source/includes/steps-restore-secondary-from-backup-initial-sync.yaml b/source/includes/steps-restore-secondary-from-backup-initial-sync.yaml new file mode 100644 index 00000000000..6938363bc4b --- /dev/null +++ b/source/includes/steps-restore-secondary-from-backup-initial-sync.yaml @@ -0,0 +1,12 @@ +title: Ensure that the data directories on the prospective replica set members are empty. +stepnum: 1 +ref: delete-data +--- +title: Add each prospective member to the replica set. +stepnum: 2 +ref: add-members +content: | + When you add a member to the replica set, :ref:`Initial Sync + ` copies the data from the :term:`primary` to + the new member. +... diff --git a/source/includes/steps-restore-sharded-cluster-database-dump.yaml b/source/includes/steps-restore-sharded-cluster-database-dump.yaml new file mode 100644 index 00000000000..57de5e86aa7 --- /dev/null +++ b/source/includes/steps-restore-sharded-cluster-database-dump.yaml @@ -0,0 +1,47 @@ +stepnum: 1 +source: + file: steps-restore-sharded-cluster-with-snapshots.yaml + ref: shutdown-cluster +--- +title: Restore the data files. +stepnum: 2 +ref: restore +pre: | + On each server, use :program:`mongorestore` to restore the database + dump to the location where the :program:`mongod` instance will access + the data. +action: + pre: | + The following example restores a database dump located at + ``/opt/backup/`` to the ``/data/`` directory. This requires that there + are no active :program:`mongod` instances attached to the ``/data`` + directory. + language: sh + code: | + mongorestore --dbpath /data /opt/backup +--- +stepnum: 3 +source: + file: steps-restore-sharded-cluster-with-snapshots.yaml + ref: restart-config-servers +--- +stepnum: 4 +source: + file: steps-restore-sharded-cluster-with-snapshots.yaml + ref: shard-hostname +--- +stepnum: 5 +source: + file: steps-restore-sharded-cluster-with-snapshots.yaml + ref: restart-shards +--- +stepnum: 6 +source: + file: steps-restore-sharded-cluster-with-snapshots.yaml + ref: restart-mongos-instances +--- +stepnum: 7 +source: + file: steps-restore-sharded-cluster-with-snapshots.yaml + ref: ensure +... diff --git a/source/includes/steps-restore-sharded-cluster-with-snapshots.yaml b/source/includes/steps-restore-sharded-cluster-with-snapshots.yaml new file mode 100644 index 00000000000..8bfe4bea1c3 --- /dev/null +++ b/source/includes/steps-restore-sharded-cluster-with-snapshots.yaml @@ -0,0 +1,79 @@ +title: Shut down the entire cluster. +stepnum: 1 +ref: shutdown-cluster +pre: | + Stop all :program:`mongos` and :program:`mongod` processes, including + all shards *and* all config servers. +action: + pre: | + Connect to each member and issue the following: + language: javascript + code: | + use admin + db.shutdownServer() + post: | + For version 2.4 or earlier, use ``db.shutdownServer({force:true})``. +--- +title: Restore the data files. +stepnum: 2 +ref: restore +pre: | + On each server, extract the data files to the location where the + :program:`mongod` instance will access them. Restore the following: +action: + - heading: "Data files for each server in each shard."
+ pre: | + Because replica sets provide each production :term:`shard`, restore + all the members of the replica set or use the other standard + approaches for restoring a replica set from backup. See the + :ref:`backup-restore-snapshot` and :ref:`backup-restore-dump` + sections for details on these procedures. + - heading: "Data files for each config server." +--- +title: Restart the config servers. +stepnum: 3 +ref: restart-config-servers +action: + pre: | + Restart each :ref:`config server ` + :program:`mongod` instance by issuing a command similar to the + following for each, using values appropriate to your configuration: + language: sh + code: | + mongod --configsvr --dbpath /data/configdb --port 27019 +--- +title: "If shard hostnames have changed, update the config string and config database." +stepnum: 4 +ref: shard-hostname +content: | + If shard hostnames **have changed**, start **one** :program:`mongos` + instance using the updated config string with the new ``configdb`` + hostnames and ports. + + Then update the ``shards`` collection in the :ref:`config-database` to + reflect the new hostnames. Then stop the :program:`mongos` instance. +--- +title: "Restart all the shard :program:`mongod` instances." +stepnum: 5 +ref: restart-shards +--- +title: "Restart all the :program:`mongos` instances." +stepnum: 6 +ref: restart-mongos-instances +content: | + If shard hostnames **have changed**, make sure to use the updated config + string. +--- +title: "Connect to a :program:`mongos` to ensure the cluster is operational." +stepnum: 7 +ref: ensure +action: + pre: | + Connect to a :program:`mongos` instance from a :program:`mongo` shell + and use the :method:`db.printShardingStatus()` method to ensure that + the cluster is operational, as follows: + language: javascript + code: | + db.printShardingStatus() + show collections +... diff --git a/source/includes/steps-restore-single-shard.yaml b/source/includes/steps-restore-single-shard.yaml new file mode 100644 index 00000000000..d1a59cd8d22 --- /dev/null +++ b/source/includes/steps-restore-single-shard.yaml @@ -0,0 +1,21 @@ +title: "Restore the shard as you would any other :program:`mongod` instance." +stepnum: 1 +ref: restore +content: | + See :doc:`/core/backups` for overviews of these procedures. +--- +title: Manage the chunks. +stepnum: 2 +ref: chunks +content: | + For all chunks that migrate away from this shard, you do not need to do + anything at this time. You do not need to delete these documents from + the shard because the chunks are automatically filtered out from queries + by :program:`mongos`. You can remove these documents from the shard, if + you like, at your leisure. + + For chunks that migrate to this shard after the most recent backup, you + must manually recover the chunks using backups of other shards, or some + other source. To determine what chunks have moved, view the + ``changelog`` collection in the :ref:`config-database`. +... diff --git a/source/tutorial/backup-sharded-cluster-with-filesystem-snapshots.txt b/source/tutorial/backup-sharded-cluster-with-filesystem-snapshots.txt index 51938bd899a..2c118d909fe 100644 --- a/source/tutorial/backup-sharded-cluster-with-filesystem-snapshots.txt +++ b/source/tutorial/backup-sharded-cluster-with-filesystem-snapshots.txt @@ -37,78 +37,11 @@ of the backup while minimizing impact on the cluster by taking the backup from a secondary member of the replica set that provides each shard. -1. 
Disable the :term:`balancer` process that equalizes the - distribution of data among the :term:`shards `. To disable - the balancer, use the :method:`sh.stopBalancer()` method in the - :program:`mongo` shell. For example: +.. warning:: - .. code-block:: sh + It is essential that you stop the balancer before creating + backups. If the balancer remains active, your resulting backups + could have duplicate data or miss some data, as :term:`chunks + ` may migrate while recording backups. - use config - sh.stopBalancer() - - For more information, see the - :ref:`sharding-balancing-disable-temporarily` procedure. - - .. warning:: - - It is essential that you stop the balancer before creating - backups. If the balancer remains active, your resulting backups - could have duplicate data or miss some data, as :term:`chunks - ` may migrate while recording backups. - -#. Lock one secondary member of each replica set in each shard so that your - backups reflect the state of your database at the nearest possible - approximation of a single moment in time. Lock these - :program:`mongod` instances in as short of an interval as possible. - - To lock a secondary, connect through the :program:`mongo` shell to the - secondary member's :program:`mongod` instance and issue the - :method:`db.fsyncLock()` method. - -#. Back up one of the :ref:`config servers `. - Backing up a config server backs up the sharded cluster's metadata. You - need back up only one config server, as they all hold the same data - - Do one of the following to back up one of the config servers: - - - Create a file-system snapshot of the config server. Use the procedure in - :doc:`/tutorial/backup-with-filesystem-snapshots`. - - .. important:: This is only available if the config server has - :term:`journaling ` enabled. *Never* - use :method:`db.fsyncLock()` on config databases. - - - Use :program:`mongodump` to backup the config server. Issue - :program:`mongodump` against one of the config :program:`mongod` - instances or via the :program:`mongos`. - - If you are running MongoDB 2.4 or later with the - :option:`--configsvr ` option, then include the - :option:`--oplog ` option when running - :program:`mongodump` to ensure that the dump includes a partial oplog - containing operations from the duration of the mongodump operation. - For example: - - .. code-block:: sh - - mongodump --oplog --db config - -#. Back up the replica set members of the shards that you locked. You - may back up the shards in parallel. For each shard, create a - snapshot. Use the procedure in - :doc:`/tutorial/backup-with-filesystem-snapshots`. - -#. Unlock all locked replica set members of each shard using the - :method:`db.fsyncUnlock()` method in the :program:`mongo` shell. - -#. Re-enable the balancer with the :method:`sh.setBalancerState()` - method. - - Use the following command sequence when connected to the - :program:`mongos` with the :program:`mongo` shell: - - .. code-block:: javascript - - use config - sh.setBalancerState(true) +.. include:: /includes/steps/backup-sharded-cluster-with-snapshots.rst diff --git a/source/tutorial/backup-with-filesystem-snapshots.txt b/source/tutorial/backup-with-filesystem-snapshots.txt index ed27626da1d..e19dc0d3c22 100644 --- a/source/tutorial/backup-with-filesystem-snapshots.txt +++ b/source/tutorial/backup-with-filesystem-snapshots.txt @@ -288,34 +288,4 @@ then for your backup use a :term:`secondary` which is not receiving reads (i.e. :term:`hidden member`). -1. 
To flush writes to disk and to "lock" the database (to prevent - further writes), issue the :method:`db.fsyncLock()` method in the - :program:`mongo` shell: - - .. code-block:: javascript - - db.fsyncLock(); - -#. Perform the backup operation described in :ref:`lvm-backup-operation`. - -#. To unlock the database after the snapshot has completed, use the - following command in the :program:`mongo` shell: - - .. code-block:: javascript - - db.fsyncUnlock(); - - .. note:: - - .. versionchanged:: 2.0 - MongoDB 2.0 added :method:`db.fsyncLock()` and - :method:`db.fsyncUnlock()` helpers to the :program:`mongo` - shell. Prior to this version, use the :dbcommand:`fsync` - command with the ``lock`` option, as follows: - - .. code-block:: javascript - - db.runCommand( { fsync: 1, lock: true } ); - db.runCommand( { fsync: 1, lock: false } ); - - .. include:: /includes/warning-fsync-lock-mongodump.rst +.. include:: /includes/steps/backup-without-journaling.rst diff --git a/source/tutorial/recover-data-following-unexpected-shutdown.txt b/source/tutorial/recover-data-following-unexpected-shutdown.txt index ef906b3936a..912c91d72e9 100644 --- a/source/tutorial/recover-data-following-unexpected-shutdown.txt +++ b/source/tutorial/recover-data-following-unexpected-shutdown.txt @@ -9,8 +9,8 @@ representation of the data files will likely reflect an inconsistent state which could lead to data corruption. [#validation]_ To prevent data inconsistency and corruption, always shut down the -database cleanly and use the :ref:`durability journaling -`. MongoDB writes data to the journal, by default, +database cleanly and use the :term:`durability journaling +`. MongoDB writes data to the journal, by default, every 100 milliseconds, such that MongoDB can always recover to a consistent state even in the case of an unclean shutdown due to power loss or other system failure. @@ -91,22 +91,22 @@ Overview There are two processes to repair data files that result from an unexpected shutdown: -#. Use the :option:`--repair ` option in - conjunction with the :option:`--repairpath ` - option. :program:`mongod` will read the existing data files, and - write the existing data to new data files. This does not modify or - alter the existing data files. +- Use the :option:`--repair ` option in + conjunction with the :option:`--repairpath ` + option. :program:`mongod` will read the existing data files, and + write the existing data to new data files. This does not modify or + alter the existing data files. - You do not need to remove the ``mongod.lock`` file before using - this procedure. + You do not need to remove the ``mongod.lock`` file before using + this procedure. -#. Use the :option:`--repair ` option. - :program:`mongod` will read the existing data files, write the - existing data to new files and replace the existing, possibly - corrupt, files with new files. +- Use the :option:`--repair ` option. + :program:`mongod` will read the existing data files, write the + existing data to new files and replace the existing, possibly + corrupt, files with new files. - You must remove the ``mongod.lock`` file before using this - procedure. + You must remove the ``mongod.lock`` file before using this + procedure. .. note:: @@ -122,66 +122,28 @@ Procedures .. important:: Always Run :program:`mongod` as the same user to avoid changing the permissions of the MongoDB data files. 
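+
+For example, you can check which user owns the existing data files and run
+the repair as that user. This is only a sketch: the ``mongodb`` user name
+and the ``/data/db`` path are placeholders for whatever your installation
+uses.
+
+.. code-block:: sh
+
+   # check ownership of the existing data files
+   ls -l /data/db
+
+   # run the repair as the user that owns the data files
+   sudo -u mongodb mongod --dbpath /data/db --repair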
+Repair Data Files and Preserve Original Files ++++++++++++++++++++++++++++++++++++++++++++++ + To repair your data files using the :option:`--repairpath ` option to preserve the original data files unmodified. -#. Start :program:`mongod` using :option:`--repair ` - to read the existing data files. - - .. code-block:: sh - - mongod --dbpath /data/db --repair --repairpath /data/db0 - - When this completes, the new repaired data files will be in the - ``/data/db0`` directory. - -#. Start :program:`mongod` using the following invocation to point the - :setting:`~storage.dbPath` at ``/data/db0``: +.. include:: /includes/steps/recover-data-with-repairpath.rst - .. code-block:: sh - - mongod --dbpath /data/db0 - - Once you confirm that the data files are operational you may delete - or archive the old data files in the ``/data/db`` directory. You may - also wish to move the repaired files to the old database location - or update the :setting:`~storage.dbPath` to indicate the new location. +Repair Data Files without Preserving Original Files ++++++++++++++++++++++++++++++++++++++++++++++++++++ To repair your data files without preserving the original files, do not use the :option:`--repairpath ` option, as in the following procedure: -#. Remove the stale lock file: - - .. code-block:: sh - - rm /data/db/mongod.lock - - Replace ``/data/db`` with your :setting:`~storage.dbPath` where your MongoDB - instance's data files reside. - - .. warning:: - - After you remove the ``mongod.lock`` file you *must* run the - :option:`--repair ` process before using your - database. - -#. Start :program:`mongod` using :option:`--repair ` - to read the existing data files. - - .. code-block:: sh - - mongod --dbpath /data/db --repair - - When this completes, the repaired data files will replace the - original data files in the ``/data/db`` directory. - -#. Start :program:`mongod` using the following invocation to point the - :setting:`~storage.dbPath` at ``/data/db``: +.. warning:: - .. code-block:: sh + After you remove the ``mongod.lock`` file you *must* run the + :option:`--repair ` process before using your + database. - mongod --dbpath /data/db +.. include:: /includes/steps/recover-data-without-repairpath.rst ``mongod.lock`` --------------- diff --git a/source/tutorial/restore-replica-set-from-backup.txt b/source/tutorial/restore-replica-set-from-backup.txt index 2fc5e493dde..eca95e71f5e 100644 --- a/source/tutorial/restore-replica-set-from-backup.txt +++ b/source/tutorial/restore-replica-set-from-backup.txt @@ -18,46 +18,7 @@ efficient ways to deploy a replica set. Restore Database into a Single Node Replica Set ----------------------------------------------- -#. Obtain backup MongoDB Database files. These files may come from a - :doc:`file system snapshot - `. The - `MongoDB Management Service (MMS) - `_ - produces MongoDB database files for :mms:`stored snapshots - ` and :mms:`point and time - snapshots `. - You can also use :program:`mongorestore` to restore database files - using data created with :program:`mongodump`. See - :doc:`/tutorial/backup-with-mongodump` for - more information. - - -#. Start a :program:`mongod` using data files from the backup as the - ``dbpath``. In the following example, ``/data/db`` is the ``dbpath`` to - the data files: - - .. code-block:: sh - - mongod --dbpath /data/db - -#. 
Convert your standalone :program:`mongod` process to a single node - replica set by shutting down the :program:`mongod` instance, and - restarting it with the :option:`--replSet ` - option, as in the following example: - - .. code-block:: sh - - mongod --dbpath /data/db --replSet - - .. optional:: - - Consider explicitly setting a :setting:`~replication.oplogSizeMB` to control - the size of the :term:`oplog` created for this replica set - member. - -#. Connect to the :program:`mongod` instance. - -#. Use :method:`rs.initiate()` to initiate the new replica set. +.. include:: /includes/steps/restore-primary-from-backup.rst Add Members to the Replica Set ------------------------------ @@ -65,10 +26,10 @@ Add Members to the Replica Set MongoDB provides two options for restoring secondary members of a replica set: -1. Manually copy the database files to each data directory. +- Manually copy the database files to each data directory. -2. Allow :ref:`initial sync ` to distribute - data automatically. +- Allow :ref:`initial sync ` to distribute + data automatically. The following sections outlines both approaches. @@ -85,20 +46,7 @@ Use the following sequence of operations to "seed" additional members of the replica set with the restored data by copying MongoDB data files directly. -#. Shut down the :program:`mongod` instance that you restored. - Using :option:`--shutdown ` or - :method:`db.shutdownServer()` to ensure a clean shut down. - -#. Copy the :term:`primary's ` data directory into the - :setting:`~storage.dbPath` of the other members of the replica set. The - :setting:`~storage.dbPath` is ``/data/db`` by default. - -#. Start the :program:`mongod` instance that you restored. - -#. In a :program:`mongo` shell connected to the :term:`primary`, add - the :term:`secondaries ` to the replica set using - :method:`rs.add()`. See :doc:`/tutorial/deploy-replica-set` for - more information about deploying a replica set. +.. include:: /includes/steps/restore-secondary-from-backup-directly.rst Update Secondaries using Initial Sync ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -107,9 +55,4 @@ Use the following sequence of operations to "seed" additional members of the replica set with the restored data using the default *initial sync* operation. -#. Ensure that the data directories on the prospective replica set - members are empty. - -#. Add each prospective member to the replica set. :ref:`Initial Sync - ` will copy the data from the - :term:`primary` to the other members of the replica set. +.. include:: /includes/steps/restore-secondary-from-backup-initial-sync.rst diff --git a/source/tutorial/restore-sharded-cluster.txt b/source/tutorial/restore-sharded-cluster.txt index 4585ab28d7c..f32d98dfb54 100644 --- a/source/tutorial/restore-sharded-cluster.txt +++ b/source/tutorial/restore-sharded-cluster.txt @@ -7,66 +7,43 @@ Restore a Sharded Cluster Overview -------- -The procedure outlined in this document addresses how to restore an -entire sharded cluster. For information on related backup procedures -consider the following tutorials which describe backup procedures in -greater detail: +You can restore a sharded cluster either from :doc:`snapshots +` or from :term:`BSON` +:doc:`database dumps +` created by the +:program:`mongodump` tool. 
This document provides procedures for both: -- :doc:`/tutorial/backup-sharded-cluster-with-filesystem-snapshots` -- :doc:`/tutorial/backup-sharded-cluster-with-database-dumps` - -The exact procedure used to restore a database depends on the method -used to capture the backup. See the :doc:`/core/backups` -document for an overview of backups with MongoDB and -:doc:`/administration/backup-sharded-clusters` for a complete -information on backups in MongoDB and backups of sharded clusters in -particular. - -Procedure ---------- - -#. Stop all :program:`mongos` and :program:`mongod` processes, - including all shards *and* all config servers. +- :ref:`restore-sharded-cluster-with-snapshots` -#. Restore the following: +- :ref:`restore-sh-cl-dmp` - - Data files for each server in each :term:`shard`. Because replica - sets provide each production shard, restore all the members of - the replica set or use the other standard approaches for - restoring a replica set from backup. See the - :ref:`backup-restore-snapshot` and :ref:`backup-restore-dump` - sections for details on these procedures. +Related Documents +----------------- - - Data files for each :ref:`config server `. +For an overview of backups in MongoDB, see :doc:`/core/backups`. For +complete information on backups and backups of sharded clusters in +particular, see :doc:`/administration/backup-sharded-clusters`. -#. Restart all the :ref:`config servers ` - :program:`mongod` instances by issuing command similar to the - following, using values appropriate - to your configuration: +For backup procedures, see: - .. code-block:: sh - - mongod --configsvr --dbpath /data/configdb --port 27019 - -#. If shard hostnames **have changed**: +- :doc:`/tutorial/backup-sharded-cluster-with-filesystem-snapshots` +- :doc:`/tutorial/backup-sharded-cluster-with-database-dumps` - a. Start one :program:`mongos` instance, using the updated - config string with the new ``configdb`` hostnames and ports. +Procedures +---------- - #. Update the ``shards`` collection in the :ref:`config-database` to - reflect the new hostnames. +Use the procedure for the type of backup files to restore. - #. Stop the :program:`mongos` instance. +.. _restore-sharded-cluster-with-snapshots: -#. Restart all the shard :program:`mongod` instances. +Restore a Sharded Cluster with Filesystem Snapshots +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -#. Restart all the :program:`mongos` instances, making sure to use the updated config string. +.. include:: /includes/steps/restore-sharded-cluster-with-snapshots.rst -#. Connect to a :program:`mongos` instance from a :program:`mongo` shell - and use the :method:`db.printShardingStatus()` method to ensure - that the cluster is operational, as follows: +.. _restore-sh-cl-dmp: - .. code-block:: javascript +Restore a Sharded Cluster with Database Dumps +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - db.printShardingStatus() - show collections +.. include:: /includes/steps/restore-sharded-cluster-database-dump.rst diff --git a/source/tutorial/restore-single-shard.txt b/source/tutorial/restore-single-shard.txt index 0c39e164079..976ab27bbb4 100644 --- a/source/tutorial/restore-single-shard.txt +++ b/source/tutorial/restore-single-shard.txt @@ -28,17 +28,4 @@ as a whole. When you restore a single shard, keep in mind that the from this shard since the last backup. If that's the case, you must manually move those chunks, as described in this procedure. -1. Restore the shard as you would any other :program:`mongod` - instance. 
See :doc:`/core/backups` for overviews of these - procedures. - -#. For all chunks that migrate away from this shard, you do not need - to do anything at this time. You do not need to delete these - documents from the shard because the chunks are automatically - filtered out from queries by :program:`mongos`. You can remove - these documents from the shard, if you like, at your leisure. - -#. For chunks that migrate to this shard after the most recent backup, - you must manually recover the chunks using backups of other shards, - or some other source. To determine what chunks have moved, view the - ``changelog`` collection in the :ref:`config-database`. +.. include:: /includes/steps/restore-single-shard.rst
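+
+For example, to review recent chunk migrations, you can query the
+``changelog`` collection in the :ref:`config-database` from a
+:program:`mongos`. This is only a sketch; adjust the filter and limit as
+needed:
+
+.. code-block:: javascript
+
+   use config
+   db.changelog.find( { what: /moveChunk/ } ).sort( { time: -1 } ).limit( 10 )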