@@ -7,14 +7,17 @@ Configure Behavior of Balancer Process in Sharded Clusters

.. default-domain:: mongodb

- This section describes the settings you can configure on the balancer.
+ The balancer runs on a single :program:`mongos` instance and distributes
+ chunks evenly throughout a sharded cluster. In most deployments, you do
+ not need to configure the balancer. The balancer automatically
+ distributes chunks in an optimal manner. However, administrators might
+ need to modify balancer behavior depending on application or operational
+ requirements. When such a situation arises, this page describes the
+ settings you can change.
+
For conceptual information about the balancer, see
:ref:`sharding-balancing` and :ref:`sharding-balancing-internals`.

- You configure balancer settings through parameters in database commands
- or through fields in the :data:`~config.settings` collection in the
- :ref:`config database <config-database>`.
-
.. index:: balancing; secondary throttle
.. index:: secondary throttle
.. _sharded-cluster-config-secondary-throttle:
@@ -25,146 +28,114 @@ Require Replication before Chunk Migration (Secondary Throttle)
.. versionadded:: 2.2.1

You can configure the balancer to wait for replication to secondaries
- before migrating chunks. You do so by enabling the balancer's
- ``_secondaryThrottle`` parameter.
-
- Secondary throttle can speed performance in cases where you have
- migration-caused I/O peaks that do not cooperate with other workloads.
-
- .. above para is paraphrased from SERVER-7686
+ during migrations. You do so by enabling the balancer's
+ ``_secondaryThrottle`` parameter, which reduces throughput (i.e.,
+ "throttles") in order to decrease the load on secondaries. You might do
+ this, for example, if you have migration-caused I/O peaks that impact
+ other workloads.

When enabled, secondary throttle puts a ``{ w : 2 }`` write concern on
- deletes and on bulk clones , which means the balancer waits for those
+ deletes and on copies, which means the balancer waits for those
operations to replicate to at least one secondary before migrating
chunks.

.. BACKGROUND NOTES
Specifically, secondary throttle affects the first and fourth
phases (informal phases) of chunk migration. Migration can happen during
the second and third phases (the "steady state"):
- 1) bulk clone data from shardA to shardB in the chunk range
- 2) continue to copy over ongoing changes that occurred during the initial clone step
+ 1) copies the documents in the chunk from shardA to shardB
+ 2) continues to copy over ongoing changes that occurred during the initial copy step,
as well as current changes to that chunk range
3) Stop writes, allow shardB to get final changes, commit migration to config server
4) cleanup now-inactive data on shardA in chunk range (once all cursors are done)

- To enable secondary throttle, set ``_secondaryThrottle``
- to ``true`` by doing either of the following:
-
- - Issue the :dbcommand:`moveChunk` command with the
- ``_secondaryThrottle`` parameter set to ``true``.
-
- - Enable the ``_secondaryThrottle`` setting directly in the
- :data:`~config.settings` collection in the :ref:`config database
- <config-database>`. To do so, run the following commands from the
- :program:`mongo` shell:
-
- .. code-block:: javascript
-
- use config
- db.settings.update( { "_id" : "balancer" } , { $set : { "_secondaryThrottle" : true } } )
-
- .. _sharded-cluster-config-no-auto-split:
-
- Prevent Auto-Splitting of Chunks
- --------------------------------
-
- .. versionadded:: 2.0.7
-
- By default, :program:`mongos` instances automatically split chunks
- during inserts or updates if the chunks exceed the default chunk size.
- When chunk distribution becomes uneven, the balancer automatically
- migrates chunks among shards. Automatic chunk migrations are crucial for
- distributing data, but for deployments with large numbers of
- :program:`mongos` instances, the automatic migration might affect the
- performance of the cluster.
+ You enable ``_secondaryThrottle`` directly in the
+ :data:`settings <config.settings>` collection in the :ref:`config database
+ <config-database>` by running the following commands from the
+ :program:`mongo` shell:

- You can turn off the auto-splitting of chunks by enabling
- :setting:`noAutoSplit` for individual :program:`mongos` instances.
-
- .. note:: Turning off auto-splitting can lead to an imbalanced
- distribution of data in the sharded cluster.
-
- To turn off auto-splitting, do one of the following:
-
- When staring a :program:`mongos`, include the :option:`--noAutoSplit
- <mongos>` command-line option.
-
- In the configuration file for a given :program:`mongos`, include the
- :setting:`noAutoSplit` setting.
-
- Because any :program:`mongos` in a cluster can create a split, to
- totally disable splitting in a cluster you must set
- :setting:`noAutoSplit` on all :program:`mongos`.
+ .. code-block:: javascript

- .. warning::
+ use config
+ db.settings.update( { "_id" : "balancer" } , { $set : { "_secondaryThrottle" : true } } )

- With :setting:`noAutoSplit` enabled, the data in your sharded cluster
- may become imbalanced over time. Enable with caution.
+ You can also enable secondary throttle when issuing the
+ :dbcommand:`moveChunk` command by setting ``_secondaryThrottle`` to
+ ``true``. For more information, see :dbcommand:`moveChunk`.
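+
+ As a sketch of that alternative, the following :dbcommand:`moveChunk`
+ invocation in the :program:`mongo` shell enables the throttle for a
+ single migration. The ``records.people`` namespace, ``zipcode`` shard
+ key, and ``shard0001`` destination are placeholders for illustration,
+ not values from this page:
+
+ .. code-block:: javascript
+
+ use admin
+ db.runCommand( { moveChunk : "records.people",
+ find : { zipcode : "53187" },
+ to : "shard0001",
+ _secondaryThrottle : true } )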

.. _sharded-cluster-config-balancing-window:

Schedule a Window of Time for Balancing to Occur
------------------------------------------------

You can schedule a window of time during which the balancer is allowed
- to migrate chunks. See :ref:`sharding-schedule-balancing-window` and
- :ref:`sharding-balancing-remove-window`.
+ to migrate chunks, as described in the following procedures:
+
+ - :ref:`sharding-schedule-balancing-window`
+
+ - :ref:`sharding-balancing-remove-window`
+
+ The configured time is evaluated relative to the time zone of each
+ individual :program:`mongos` instance in the sharded cluster.
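+
+ As a sketch of what the first procedure does, the following commands in
+ the :program:`mongo` shell set an example window of 11:00 PM to 6:00 AM.
+ The times shown are placeholders; see the procedure for details:
+
+ .. code-block:: javascript
+
+ use config
+ db.settings.update( { "_id" : "balancer" }, { $set : { activeWindow : { start : "23:00", stop : "6:00" } } }, true )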

.. _sharded-cluster-config-default-chunk-size:

- Change the Default Chunk Size
- -----------------------------
+ Configure Default Chunk Size
+ ----------------------------

- The default chunk size for a sharded cluster affects how often chunks
- are split and migrated. For details, see :ref:`sharding-chunk-size`.
+ The default chunk size for a sharded cluster is 64 megabytes. In most
+ situations, the default size is optimal for splitting and migrating
+ chunks. For information on how chunk size affects deployments, see
+ :ref:`sharding-chunk-size`.

- To modify the default chunk size for a sharded cluster, see
- :ref:`sharding-balancing-modify-chunk-size`.
+ Changing the default chunk size affects chunks that are processed during
+ migrations and auto-splits but does not retroactively affect all chunks.
+
+ To configure default chunk size, see :ref:`sharding-balancing-modify-chunk-size`.
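+
+ As a brief sketch of that procedure, the following commands in the
+ :program:`mongo` shell set the cluster-wide chunk size, in megabytes.
+ The value shown is the default; substitute your own:
+
+ .. code-block:: javascript
+
+ use config
+ db.settings.save( { _id : "chunksize", value : 64 } )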

.. _sharded-cluster-config-max-shard-size:

- Change the Maximum Size for a Given Shard
- -----------------------------------------
+ Change the Maximum Storage Size for a Given Shard
+ -------------------------------------------------

The ``maxSize`` field in the :data:`~config.shards` collection in the
:ref:`config database <config-database>` sets the maximum size for a
- shard, allowing you to control disk use and affect whether the balancer
- will migrate chunks to a shard. By default, ``maxSize`` is not
- specified, allowing shards to consume the total amount of available
- space on their machines if necessary. You can set ``maxSize`` both when
- adding a shard and once a shard is running .
+ shard, allowing you to control whether the balancer will migrate chunks
+ to a shard. If :data:`dataSize <dbStats.dataSize>` is above a shard's
+ ``maxSize``, the balancer will not move chunks to the shard. The
+ balancer also will not move chunks off the shard. The ``maxSize`` value
+ only affects the balancer's selection of destination shards.

- .. seealso:: :ref:`sharding-shard-size`
+ By default, ``maxSize`` is not specified, allowing shards to consume the
+ total amount of available space on their machines if necessary.

- Set Maximum Size When Adding a Shard
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ You can set ``maxSize`` both when adding a shard and once a shard is
+ running.

- When adding a shard using the :dbcommand:`addShard` command, set the
- ``maxSize`` parameter to the maximum size in megabytes. For example, the
- following command run in the :program:`mongo` shell adds a shard with a
- maximum size of 125 megabytes:
+ To set ``maxSize`` when adding a shard, set the :dbcommand:`addShard`
+ command's ``maxSize`` parameter to the maximum size in megabytes. For
+ example, the following command run in the :program:`mongo` shell adds a
+ shard with a maximum size of 125 megabytes:

.. code-block:: javascript

db.runCommand( { addshard : "example.net:34008", maxSize : 125 } )

- Set Maximum Size on a Running Shard
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
To set ``maxSize`` on an existing shard, insert or update the
``maxSize`` field in the :data:`~config.shards` collection in the
:ref:`config database <config-database>`. Set the ``maxSize`` in
megabytes.

- .. example:: Assume you have the following shard without a ``maxSize`` field:
+ .. example::
+
+ Assume you have the following shard without a ``maxSize`` field:

.. code-block:: javascript

{ "_id" : "shard0000", "host" : "example.net:34001" }

Run the following sequence of commands in the :program:`mongo` shell
- to insert a ``maxSize`` of 125 megabytes::
+ to insert a ``maxSize`` of 125 megabytes:

.. code-block:: javascript
