doc/how-to/vshard_quick.rst
13 additions & 13 deletions
@@ -6,10 +6,10 @@ Creating a sharded cluster
 **Example on GitHub**: `sharded_cluster_crud <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud>`_

 In this tutorial, you get a sharded cluster up and running on your local machine and learn how to manage the cluster using the tt utility.
-In this tutorial, the following external modules are used:
+This cluster uses the following external modules:

-- :ref:`vshard <vshard>` enables sharding in the cluster.
-- `crud <https://github.com/tarantool/crud>`__ allows you to perform CRUD operations in the sharded cluster.
+- :ref:`vshard <vshard>` enables sharding in a cluster.
+- `crud <https://github.com/tarantool/crud>`__ allows you to manipulate data in a sharded cluster.

 The cluster created in this tutorial includes 5 instances: one router and 4 storages, which constitute two replica sets.
@@ -51,7 +51,7 @@ In this tutorial, the application layout is prepared manually:
 3. Inside ``instances.enabled/sharded_cluster_crud``, create the following files:

    - ``instances.yml`` specifies instances to run in the current environment.
-   - ``config.yaml`` specifies the cluster's :ref:`configuration <configuration_overview>`.
+   - ``config.yaml`` specifies the cluster :ref:`configuration <configuration_overview>`.
    - ``storage.lua`` contains code specific for :ref:`storages <vshard-architecture-storage>`.
    - ``router.lua`` contains code specific for a :ref:`router <vshard-architecture-router>`.
    - ``sharded_cluster_crud-scm-1.rockspec`` specifies external dependencies required by the application.
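For orientation, the sketch below shows what a minimal ``instances.yml`` for this layout might look like: one top-level key per instance to run. Only ``router-a-001`` and the ``storage-a`` replica set are named elsewhere in this diff; the remaining instance names are assumptions that follow the same naming pattern.

.. code-block:: yaml

   # Hypothetical sketch of instances.yml: each key names an instance
   # of the sharded_cluster_crud application to run in this environment.
   storage-a-001:
   storage-a-002:
   storage-b-001:
   storage-b-002:
   router-a-001:

Listing the instances here is what lets ``tt start`` bring up all five processes of the application at once.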
@@ -133,7 +133,7 @@ In this section, the following options are configured:

 The cluster topology defined in the :ref:`following section <vshard-quick-start-configuring-cluster-topology>` also specifies the ``iproto.advertise.client`` option for each instance.
 This option accepts a URI used to advertise the instance to clients.
-For example, |tcm_full_name| uses these URIs to :ref:`connect <tcm_connect_clusters>` to cluster instances.
+For example, :ref:`Tarantool Cluster Manager <tcm>` uses these URIs to :ref:`connect <tcm_connect_clusters>` to cluster instances.
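As an illustration, a hedged sketch of how ``iproto.advertise.client`` could be set for the router instance is shown below. The address ``127.0.0.1:3301`` is a placeholder assumption, not a value taken from this diff; the point is only that the advertised URI sits next to the listen URI in the instance's ``iproto`` section.

.. code-block:: yaml

   # Hypothetical sketch: advertise the same address the instance listens on
   router-a-001:
     iproto:
       listen:
       - uri: '127.0.0.1:3301'
       advertise:
         client: '127.0.0.1:3301'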
@@ -155,13 +155,13 @@ Specify the total number of :ref:`buckets <vshard-vbuckets>` in a sharded cluste
 Step 4: Defining the cluster topology
 *************************************

-Define the cluster's topology inside the :ref:`groups <configuration_reference_groups>` section.
+Define the cluster topology inside the :ref:`groups <configuration_reference_groups>` section.
 The cluster includes two groups:

 * ``storages`` includes two replica sets. Each replica set contains two instances.
 * ``routers`` includes one router instance.

-Here is a schematic view of the cluster's topology:
+Here is a schematic view of the cluster topology:

 .. code-block:: yaml
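The contents of the schematic YAML block above fall outside this diff's context. As a rough, hedged outline based only on the description in this hunk, the ``groups`` section and the bucket count might be structured as follows; the replica set names other than ``storage-a``, the bucket count value, and the empty per-instance bodies are assumptions for illustration.

.. code-block:: yaml

   # Hypothetical outline of config.yaml topology and sharding settings
   sharding:
     bucket_count: 1000        # assumed value; total number of buckets in the cluster
   groups:
     storages:
       sharding:
         roles: [storage]      # every instance in this group acts as a storage
       replicasets:
         storage-a:
           instances:
             storage-a-001: {} # per-instance settings (iproto, etc.) omitted
             storage-a-002: {}
         storage-b:
           instances:
             storage-b-001: {}
             storage-b-002: {}
     routers:
       sharding:
         roles: [router]       # the single router instance lives here
       replicasets:
         router-a:
           instances:
             router-a-001: {}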
@@ -326,7 +326,7 @@ After starting instances, you need to bootstrap the cluster as follows:
       • Connecting to the instance...
       • Connected to sharded_cluster_crud:router-a-001

-2. Call :ref:`vshard.router.bootstrap() <router_api-bootstrap>` to perform the initial cluster bootstrap:
+2. Call :ref:`vshard.router.bootstrap() <router_api-bootstrap>` to perform the initial cluster bootstrap and distribute all buckets across the replica sets:

    .. code-block:: tarantoolsession
@@ -338,10 +338,10 @@ After starting instances, you need to bootstrap the cluster as follows:

 .. _vshard-quick-start-working-status:

-Checking the cluster's status
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Checking the cluster status
+~~~~~~~~~~~~~~~~~~~~~~~~~~~

-To check the cluster's status, execute :ref:`vshard.router.info() <router_api-info>` on the router:
+To check the cluster status, execute :ref:`vshard.router.info() <router_api-info>` on the router:

 .. code-block:: tarantoolsession
@@ -416,7 +416,7 @@ Writing and selecting data
            { 10, box.NULL, 'Queen', 1970 }
        })

-   Calling this function :ref:`distributes data <vshard-quick-start-working-adding-data>` evenly across the cluster's nodes.
+   Calling this function :ref:`distributes data <vshard-quick-start-working-adding-data>` evenly across the cluster nodes.

 2. To get a tuple by the specified ID, call the ``crud.get()`` function:
@@ -464,7 +464,7 @@ Writing and selecting data
 Checking data distribution
 ~~~~~~~~~~~~~~~~~~~~~~~~~~

-To check how data is distributed across the cluster's nodes, follow the steps below:
+To check how data is distributed across the replica sets, follow the steps below:

 1. Connect to any storage in the ``storage-a`` replica set:
0 commit comments