@@ -794,7 +794,7 @@ bulk insert operations:
shards. To avoid this performance cost, you can pre-split the
collection, as described in :ref:`sharding-administration-pre-splitting`.

- - You can parallels import by sending insert operations to more than
+ - You can parallelize import by sending insert operations to more than
  one :program:`mongos` instance. If the collection is empty,
  pre-split first, as described in
  :ref:`sharding-administration-pre-splitting`.
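The pre-splitting advice in this hunk can be illustrated with a small sketch. The helper below is hypothetical (not a MongoDB API); it just computes evenly spaced split points for a numeric shard key, which you would then feed to a split command through :program:`mongos` (for example ``sh.splitAt()`` in the :program:`mongo` shell):

```python
# Sketch: compute evenly spaced split points for a numeric shard key.
# compute_split_points and its arguments are illustrative only.
def compute_split_points(min_key, max_key, num_chunks):
    """Return num_chunks - 1 split points dividing [min_key, max_key)."""
    if num_chunks < 2:
        return []
    step = (max_key - min_key) / num_chunks
    return [round(min_key + step * i) for i in range(1, num_chunks)]

# Pre-splitting an empty collection into 4 chunks over keys 0..1000:
points = compute_split_points(0, 1000, 4)
print(points)  # [250, 500, 750]
```

Each resulting point would become one split operation, issued before the parallel import begins so the chunks can be distributed across shards up front.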
@@ -810,7 +810,7 @@ bulk insert operations:
increasing shard key, then consider the following modifications to
your application:

- - Reverse all the bits of the shard key to preserves the information
+ - Reverse all the bits of the shard key to preserve the information
  while avoiding the correlation of insertion order and increasing
  sequence of values.

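The bit-reversal trick above can be sketched as follows (a minimal illustration assuming a fixed-width integer shard key; the function name is hypothetical):

```python
def reverse_bits(value, width=64):
    """Reverse the bit order of a fixed-width integer shard key.

    Successive values (n, n+1, n+2, ...) differ only in their low bits,
    so reversing moves that variation into the high bits and spreads
    inserts across the key space instead of clustering them all at the
    maximum chunk. The mapping is its own inverse, so no information
    is lost.
    """
    result = 0
    for _ in range(width):
        result = (result << 1) | (value & 1)
        value >>= 1
    return result

# Consecutive keys no longer sort consecutively (8-bit example):
print(reverse_bits(1, 8))  # 128
print(reverse_bits(2, 8))  # 64
print(reverse_bits(3, 8))  # 192
```

Because the transform is reversible, the original insertion-order value can always be recovered by applying the same function again.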
@@ -994,8 +994,8 @@ all migration, use the following procedure:

.. note::

- If a migration is in progress progress , the system will complete
- the in progress migration. After disabling, you can use the
+ If a migration is in progress, the system will complete
+ the in-progress migration. After disabling, you can use the
  following operation in the :program:`mongo` shell to determine if
  there are no migrations in progress:

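The "no migrations in progress" check amounts to inspecting the cluster's lock documents. As a rough sketch, the predicate below runs over hypothetical documents shaped like those in the ``config.locks`` collection, assuming the convention that a nonzero ``state`` marks a held lock:

```python
def migrations_in_progress(lock_docs):
    """Return True if any migration/balancer lock appears to be held.

    lock_docs stands in for documents read from config.locks; by
    convention a `state` greater than zero means the lock is taken,
    i.e. a migration may still be running.
    """
    return any(doc.get("state", 0) > 0 for doc in lock_docs)

# No held locks -> safe to proceed after disabling the balancer:
print(migrations_in_progress([{"_id": "balancer", "state": 0}]))  # False
print(migrations_in_progress([{"_id": "balancer", "state": 2}]))  # True
```

In practice you would run the equivalent query from the :program:`mongo` shell against the config database rather than in application code.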
@@ -1233,7 +1233,7 @@ of the cluster metadata from the config database is straight forward:

.. seealso:: :doc:`backups`.

- .. [#read-only] While one of the three config servers unavailable, no
+ .. [#read-only] While one of the three config servers is unavailable,
  the cluster cannot split any chunks nor can it migrate chunks
  between shards. Your application will be able to write data to the
  cluster. The :ref:`sharding-config-server` section of the
@@ -1291,7 +1291,7 @@ Finally, if your shard key has a low :ref:`cardinality
<sharding-shard-key-cardinality>`, MongoDB may not be able to create
sufficient splits among the data.

- One Shard Receives too much Traffic
+ One Shard Receives Too Much Traffic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some situations, a single shard or a subset of the cluster will
@@ -1307,7 +1307,7 @@ In the worst case, you may have to consider re-sharding your data
and :ref:`choosing a different shard key <sharding-internals-choose-shard-key>`
to correct this pattern.

- The Cluster does not Balance
+ The Cluster Does Not Balance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you have just deployed your sharded cluster, you may want to
@@ -1362,7 +1362,7 @@ consider the following options, depending on the nature of the impact:
:ref:`add one or two shards <sharding-procedure-add-shard>` to
the cluster to distribute load.

- It's also possible, that your shard key causes your
+ It's also possible that your shard key causes your
  application to direct all writes to a single shard. This kind of
  activity pattern can require the balancer to migrate most data soon after writing
  it. Consider redeploying your cluster with a shard key that provides
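To see why a monotonically increasing shard key directs every write to one shard, consider this toy routing sketch (the chunk ranges and the ``route`` helper are hypothetical, not MongoDB internals):

```python
# Toy router: each chunk owns a half-open key range [low, high) and
# lives on one shard, mimicking how chunk ranges map keys to shards.
chunks = [(0, 100, "shard0"), (100, 200, "shard1"), (200, float("inf"), "shard2")]

def route(key):
    """Return the shard owning the chunk that covers `key`."""
    for low, high, shard in chunks:
        if low <= key < high:
            return shard
    raise ValueError("no chunk covers key")

# Monotonically increasing keys (e.g. timestamps or ObjectIds) all
# fall into the open-ended last chunk, so one shard absorbs every
# insert until the balancer migrates data away:
writes = [route(k) for k in range(200, 210)]
print(set(writes))  # {'shard2'}
```

This is the activity pattern described above: the balancer must then move most of the newly written data, which is why a shard key that distributes inserts is preferable.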