Commit 8a3983b

Author: Andrew Leung

DOCS-220 sharding documents reviewed.
1 parent 3f784ae commit 8a3983b

3 files changed: 85 additions, 136 deletions

source/administration/sharding-architectures.txt

Lines changed: 2 additions & 7 deletions
@@ -38,8 +38,6 @@ When deploying a shard cluster to production, you must ensure that the data
 is redundant and that your individual nodes are highly available. To that end,
 a production-level shard cluster should have the following:
 
-.. TODO - 'highly available' ? would 'highly accessible' make more sense?
-
 - 3 :ref:`config servers <sharding-config-server>`, each residing on a separate node.
 
 - For each shard, a three member :term:`replica set <replica set>` consisting of:
@@ -92,7 +90,7 @@ instance or replica set (i.e. a :term:`shard`.)
 ensure that all queries and operations use the :term:`mongos`
 router to access the data cluster. Use the :program:`mongos` even
 for operations that do not impact the sharded data.
-
+
 Every database has a "primary" [#overloaded-primary-term]_ shard that
 holds all un-sharded collections in that database. All collections
 that *are not* sharded reside on the primary for their database. Use
@@ -105,12 +103,9 @@ cluster.
 .. warning::
 
    The :dbcommand:`moveprimary` command can be expensive because
-   it copies all non-sharded data between shards, during which
+   it copies all non-sharded data to the new shard, during which
    that data will be unavailable for other operations.
 
-.. TODO - isn't this expensive because data is moving from old
-.. primary to new primary?
-
 When you deploy a new :term:`shard cluster`, the "first shard" becomes
 the primary for all databases before enabling sharding. Databases
 created subsequently, may reside on any shard in the cluster.
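For context on the :dbcommand:`moveprimary` warning in the hunk above, the command is issued against the ``admin`` database through a :program:`mongos`. A minimal sketch, in which the database name ``test`` and shard name ``shard0001`` are hypothetical:

```javascript
// Sketch only: "test" and "shard0001" are hypothetical names.
// moveprimary copies all un-sharded data in the database to the
// new primary shard; run it through a mongos.
db.adminCommand( { moveprimary : "test", to : "shard0001" } )
```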

source/administration/sharding.txt

Lines changed: 21 additions & 41 deletions
@@ -14,14 +14,11 @@ For a full introduction to sharding in MongoDB see
 ":doc:`/core/sharding`," and for a complete overview of all sharding
 documentation in the MongoDB Manual, see ":doc:`/sharding`." The
 ":doc:`/administration/sharding-architectures`" document provides an
-overview of deployment possibilities that you may find helpful as you
-plan to deploy a shard cluster. Finally, the
-":doc:`/core/sharding-internals`" document provides a more detailed
-introduction to sharding that you may find useful when troubleshooting
+overview of deployment possibilities to help deploy a shard
+cluster. Finally, the ":doc:`/core/sharding-internals`" document
+provides a more detailed introduction to sharding when troubleshooting
 issues or understanding your cluster's behavior.
 
-.. TODO - revise to use less 'you may'
-
 .. contents:: Sharding Procedures:
    :backlinks: none
    :local:
@@ -37,9 +34,6 @@ tutorial as a guide. If you're deploying a :term:`shard cluster` from
 scratch, see the ":doc:`/tutorial/deploy-shard-cluster`" tutorial for
 more detail or use the following procedure as a quick starting point:
 
-Sharding Quick Start
---------------------
-
 #. Provision the required hardware.
 
    The ":ref:`sharding-requirements`" section describes what you'll
@@ -137,10 +131,6 @@ Sharding Quick Start
 
       sh.addShard( "repl0/mongodb0.example.net:27027,mongodb1.example.net:27017,mongodb2.example.net:27017" )
 
-
-Enable Sharding
----------------
-
 #. Enable sharding for any database that you want to shard.
 
    MongoDB enables sharding on a per-database basis. This is only a
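The enable-sharding step above is typically performed with the ``sh`` shell helpers. A minimal sketch, assuming a hypothetical ``records`` database whose ``people`` collection has a ``zipcode`` field suitable as a shard key:

```javascript
// Sketch only: the database, collection, and key are hypothetical.
sh.enableSharding( "records" )                            // per-database opt-in
sh.shardCollection( "records.people", { zipcode : 1 } )   // set the shard key
```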
@@ -204,11 +194,10 @@ Enable Sharding
    the distribution of data. Furthermore, you cannot change a
    collection's shard key once it has been set.
 
-   See the ":ref:`Shard Key Overview <sharding-shard-key>`" and
-   ":ref:`Shard Internals <sharding-internals-shard-keys>`" to help you
-   choose a better shard key.
-
-   .. TODO - find internal ref for `shard key`
+   See the ":ref:`Shard Key Overview <sharding-shard-key>`" and the
+   more in depth documentation of ":ref:`Shard Key Qualities
+   <sharding-internals-shard-keys>`" to help you select better shard
+   keys.
 
    If you do not specify a shard key, MongoDB will shard the
    collection using the ``_id`` field.
@@ -496,7 +485,7 @@ MB. This default chunk size works well for most deployments. However, if you
 notice that automatic migrations are incurring a level of I/O that
 your hardware cannot handle, you may want to reduce the chunk
 size. For the automatic splits and migrations, a small chunk size
-leads to more rapid chunk migrations, at the cost of more frequent migrations.
+leads to more rapid and frequent migrations.
 
 To modify the chunk size, use the following procedure:

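The chunk-size procedure referenced above modifies the ``settings`` collection in the ``config`` database. A minimal sketch, assuming you want to reduce the size from the default to a hypothetical 32 MB:

```javascript
// Sketch only: connect to a mongos and switch to the config
// database first; the value is expressed in megabytes.
db.settings.save( { _id : "chunksize", value : 32 } )
```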
@@ -552,12 +541,11 @@ migrate :term:`chunks <chunk>` between :term:`shards <shard>`.
 However, you may want to migrate chunks manually in a few cases:
 
 - If you create chunks by presplitting the data in your collection,
-  you will have to migrate chunks manually, to distribute chunks
+  you will have to migrate chunks manually to distribute chunks
   evenly across the shards.
 
-- If you find an active cluster is out of balance and the balancer
-  cannot migrate chunks fast enough, then you will have to migrate
-  chunks manually.
+- If the balancer in an active cluster cannot distribute chunks within
+  the balancing window, then you will have to migrate chunks manually.
 
 To migrate chunks, use the :dbcommand:`moveChunk` command.

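A minimal sketch of a manual migration with :dbcommand:`moveChunk`, using a hypothetical namespace, shard key value, and destination shard:

```javascript
// Sketch only: all names are hypothetical. The "find" document
// identifies the chunk that contains that shard key value.
db.adminCommand( { moveChunk : "records.people",
                   find : { zipcode : "53187" },
                   to : "shard0001" } )
```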
@@ -632,9 +620,6 @@ cluster`, do the following:
 
 When this command returns, you will see output like the following:
 
-.. TODO - split up multi-line output to make it easier to read & describe
-.. OR maybe specify the original command using .pretty option
-
 .. code-block:: javascript
 
    { "_id" : "balancer",
@@ -761,7 +746,7 @@ run this operation from a driver that does not have helper functions:
 
    db.settings.update( { _id: "balancer" }, { $set : { stopped: true } } , true );
 
-#. To enable the balancer again, alter the value of 'stopped' as follows:
+#. To enable the balancer again, alter the value of "stopped" as follows:
 
    .. code-block:: javascript

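The re-enable step touched in this hunk mirrors the disable command shown just above it; the corresponding update flips ``stopped`` back to ``false``:

```javascript
// Run against the config database through a mongos; the final
// "true" is the upsert flag, as in the disable example.
db.settings.update( { _id: "balancer" }, { $set : { stopped: false } } , true );
```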
@@ -804,7 +789,7 @@ three config servers.
 #. Copy the entire :setting:`dbpath` file system tree from the
    existing config server to the two machines that will provide the
    additional config servers. These commands, issued on the system
-   with the existing config database, `mongo-config0.example.net` may
+   with the existing config database, ``mongo-config0.example.net`` may
    look like the following:
 
    .. code-block:: sh
@@ -832,9 +817,8 @@ name.
 
 #. Shut down the config server that you are moving.
 
-   This will render all config data for your cluster "read only:"
-
-   .. TODO - make link to config servers :doc:`read only<core/sharding/#config-servers>`
+   This will render all config data for your cluster :ref:`read only
+   <sharding-config-server>`.
 
 #. Change the DNS entry that points to the system that provided the old
    config server, so that the *same* hostname points to the new
@@ -1025,12 +1009,10 @@ almost all cases this is the result of a shard key that does not
 effectively allow :ref:`write scaling
 <sharding-shard-key-write-scaling>`.
 
-It's also possible that you have some "hot chunks". In this case, you may
+It's also possible that you have some "hot chunks." In this case, you may
 be able to solve the problem by splitting and then migrating parts of
 these chunks.
 
-.. TODO - is 'hot chunks' a well defined term?
-
 In the worst case, you may have to consider re-sharding your data
 and :ref:`choosing a different shard key <sharding-internals-choose-shard-key>`
 to correct this pattern.
@@ -1100,16 +1082,14 @@ better :ref:`write scaling <sharding-shard-key-write-scaling>`.
 Disable Balancing During Backups
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-If MongoDB migrates a chunk while you're taking a :doc:`backup
+If MongoDB migrates a chunk during a :doc:`backup
 </administration/backups>`, you can end with an inconsistent snapshot
-of your shard cluster. You should never run a backup unless you're
-certain that you have disabled the balancer. There are two ways to
-ensure this:
+of your shard cluster. Never run a backup unless the balancer is
+disabled. There are two ways to ensure this:
 
 - Set the :ref:`balancing window <sharding-schedule-balancing-window>`
-  so that the balancer is inactive while you're creating the
-  backup. Ensure that the backup process can complete while you have
-  the balancer disabled.
+  so that the balancer is inactive during the backup. Ensure that the
+  backup can complete while you have the balancer disabled.
 
 - :ref:`manually disable the balancer <sharding-balancing-disable-temporally>`
   for the duration of the backup procedure.
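The balancing window mentioned in this hunk is also stored in the ``settings`` collection of the ``config`` database. A minimal sketch, assuming a hypothetical nightly window expressed as 24-hour ``HH:MM`` strings:

```javascript
// Sketch only: the window times are hypothetical. The final
// "true" is the upsert flag.
db.settings.update( { _id : "balancer" },
                    { $set : { activeWindow : { start : "23:00", stop : "6:00" } } },
                    true )
```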
