reorg of read concern (for merge on Monday) #3281
Merged 2 commits on Apr 9, 2018
1 change: 1 addition & 0 deletions config/sphinx_local.yaml
@@ -68,6 +68,7 @@ theme:
- /reference/operator/query-modifier
- /reference/operator/update
- /reference/replication
- /reference/read-concern
- /reference/security
- /reference/sharding
- /reference/write-concern
90 changes: 90 additions & 0 deletions source/images/read-concern-write-timeline.svg
105 changes: 105 additions & 0 deletions source/includes/fact-read-concern-write-timeline.rst
@@ -0,0 +1,105 @@
Consider the following timeline of a write operation Write\ :sub:`0` to
a three member replica set:

.. note::

For simplicity, this example assumes:

- All writes prior to Write\ :sub:`0` have been successfully
replicated to all members.

- Write\ :sub:`prev` is the write immediately preceding Write\ :sub:`0`.

- No other writes have occurred after Write\ :sub:`0`.

.. figure:: /images/read-concern-write-timeline.svg
:alt: Timeline of a write operation to a three member replica set.
:figwidth: 330px

.. list-table::
:header-rows: 1
:widths: 10 45 25 25

* - Time
- Event
- Most Recent Write
- Most Recent w: "majority" write

* - t\ :sub:`0`
- Primary applies Write\ :sub:`0`

- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`prev`
| **Secondary**\ :sub:`2`: Write\ :sub:`prev`

- | **Primary**: Write\ :sub:`prev`
| **Secondary**\ :sub:`1`: Write\ :sub:`prev`
| **Secondary**\ :sub:`2`: Write\ :sub:`prev`



* - t\ :sub:`1`
- Secondary\ :sub:`1` applies Write\ :sub:`0`

- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`0`
| **Secondary**\ :sub:`2`: Write\ :sub:`prev`


- | **Primary**: Write\ :sub:`prev`
| **Secondary**\ :sub:`1`: Write\ :sub:`prev`
| **Secondary**\ :sub:`2`: Write\ :sub:`prev`

* - t\ :sub:`2`
- Secondary\ :sub:`2` applies Write\ :sub:`0`
- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`0`
| **Secondary**\ :sub:`2`: Write\ :sub:`0`


- | **Primary**: Write\ :sub:`prev`
| **Secondary**\ :sub:`1`: Write\ :sub:`prev`
| **Secondary**\ :sub:`2`: Write\ :sub:`prev`

* - t\ :sub:`3`
- Primary is aware of successful replication to Secondary\ :sub:`1` and sends acknowledgement to client
- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`0`
| **Secondary**\ :sub:`2`: Write\ :sub:`0`

- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`prev`
| **Secondary**\ :sub:`2`: Write\ :sub:`prev`

* - t\ :sub:`4`
- Primary is aware of successful replication to Secondary\ :sub:`2`

- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`0`
| **Secondary**\ :sub:`2`: Write\ :sub:`0`

- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`prev`
| **Secondary**\ :sub:`2`: Write\ :sub:`prev`

* - t\ :sub:`5`
- Secondary\ :sub:`1` receives notice (through the regular replication mechanism) to update its snapshot of the most recent w: "majority" write

- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`0`
| **Secondary**\ :sub:`2`: Write\ :sub:`0`

- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`0`
| **Secondary**\ :sub:`2`: Write\ :sub:`prev`

* - t\ :sub:`6`
- Secondary\ :sub:`2` receives notice (through the regular replication mechanism) to update its snapshot of the most recent w: "majority" write

- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`0`
| **Secondary**\ :sub:`2`: Write\ :sub:`0`

- | **Primary**: Write\ :sub:`0`
| **Secondary**\ :sub:`1`: Write\ :sub:`0`
| **Secondary**\ :sub:`2`: Write\ :sub:`0`
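
The snapshot bookkeeping that this timeline describes can be sketched in plain JavaScript. This is an illustrative model only, not MongoDB internals; the class and method names are hypothetical:

```javascript
// Illustrative sketch -- not MongoDB code. Models how each replica set
// member tracks its most recent write vs. its most recent
// w: "majority" write across the timeline t0..t6 above.
class Member {
  constructor(name) {
    this.name = name;
    this.latestWrite = "Write_prev";    // most recent write applied locally
    this.majorityWrite = "Write_prev";  // most recent write known to be majority-committed
  }
  applyWrite(w) { this.latestWrite = w; }
  advanceMajorityPoint(w) { this.majorityWrite = w; }
}

const primary = new Member("Primary");
const secondary1 = new Member("Secondary_1");
const secondary2 = new Member("Secondary_2");

primary.applyWrite("Write_0");              // t0: primary applies Write_0
secondary1.applyWrite("Write_0");           // t1: Secondary_1 replicates it
secondary2.applyWrite("Write_0");           // t2: Secondary_2 replicates it
primary.advanceMajorityPoint("Write_0");    // t3: primary sees majority replication
// t4: primary also learns of Secondary_2's replication (majority point unchanged)
secondary1.advanceMajorityPoint("Write_0"); // t5: notice via replication mechanism
secondary2.advanceMajorityPoint("Write_0"); // t6: notice via replication mechanism

// After t6, every member's majority snapshot has caught up to Write_0.
```

Note the lag between `latestWrite` and `majorityWrite` on the secondaries between t1 and t5/t6: that window is exactly where the read concern level determines which write a query observes.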
9 changes: 1 addition & 8 deletions source/includes/fact-readConcern-syntax.rst
@@ -4,11 +4,4 @@ The readConcern option has the following syntax:

.. code-block:: javascript

readConcern: { level: <value>,
afterClusterTime: <Timestamp> }

.. important::

Do not manually set the ``afterClusterTime``. MongoDB drivers set
this value automatically for operations associated with
:ref:`causally consistent sessions <causal-consistency>`.
readConcern: { level: <value> }
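
As a usage sketch, the option might appear in a command document like the following; the collection name and filter are hypothetical:

```javascript
// Illustrative only: a find command document carrying the readConcern
// option in the syntax shown above.
const findCommand = {
  find: "restaurants",                 // hypothetical collection
  filter: { rating: { $gte: 4 } },     // hypothetical filter
  readConcern: { level: "majority" }
};
```

In the mongo shell, such a document would be passed to ``db.runCommand()``.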
89 changes: 89 additions & 0 deletions source/reference/read-concern-available.txt
@@ -0,0 +1,89 @@
.. default-domain:: mongodb

.. class:: hidden

.. readconcern:: "available"

============================
Read Concern ``"available"``
============================

.. meta::
:description: read concern, available read concern, read isolation
:keywords: read concern, available read concern, read isolation

.. versionadded:: 3.6

A query with read concern :readconcern:`"available"` returns the
instance's most recent data. Read concern :readconcern:`"available"`
provides no guarantee that the data has been written to a majority of
the replica set members; that is, the data may be rolled back.

Read concern :readconcern:`"available"` is the default for reads
against secondaries if the reads are not associated with
:ref:`causally consistent sessions <sessions>`.

For a sharded cluster, :readconcern:`"available"` read concern provides
greater tolerance for partitions since it does not wait to ensure
consistency guarantees. However, a query with
:readconcern:`"available"` read concern may return orphaned documents
if the shard is undergoing chunk migrations: unlike
:readconcern:`"local"` read concern, :readconcern:`"available"` does
not contact the shard's primary or the config servers for updated
:doc:`metadata </core/sharded-cluster-config-servers>`.

For unsharded collections (including collections in a standalone
deployment or a replica set deployment), :readconcern:`"local"` and
:readconcern:`"available"` read concerns behave identically.

.. include:: /includes/fact-readConcern-most-recent-data-in-node.rst

.. seealso:: :parameter:`orphanCleanupDelaySecs`
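
A read can opt into this read concern explicitly by adding the option to its command document. This is an illustrative sketch only; the collection name and filter are hypothetical:

```javascript
// Illustrative only: a find command document requesting "available"
// read concern on a secondary read.
const cmd = {
  find: "orders",                       // hypothetical collection
  filter: { status: "shipped" },        // hypothetical filter
  readConcern: { level: "available" }
};
```

In the mongo shell, such a document would be passed to ``db.runCommand()``.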

Causally Consistent Sessions
----------------------------

Read concern :readconcern:`"available"` cannot be used with causally
consistent sessions.

Example
-------

.. include:: /includes/fact-read-concern-write-timeline.rst

The following table summarizes the state of the data that a read
operation with :readconcern:`"available"` read concern sees at time
``T``.

.. figure:: /images/read-concern-write-timeline.svg
:alt: Timeline of a write operation to a three member replica set.
:figwidth: 330px

.. list-table::
:header-rows: 1
:widths: 40 30 30

* - Read Target
- Time ``T``
- State of Data

* - Primary
- After t\ :sub:`0`
- Data reflects Write\ :sub:`0`.

* - Secondary\ :sub:`1`
- Before t\ :sub:`1`
- Data reflects Write\ :sub:`prev`.

* - Secondary\ :sub:`1`
- After t\ :sub:`1`
- Data reflects Write\ :sub:`0`.

* - Secondary\ :sub:`2`
- Before t\ :sub:`2`
- Data reflects Write\ :sub:`prev`.

* - Secondary\ :sub:`2`
- After t\ :sub:`2`
- Data reflects Write\ :sub:`0`.
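
The read-target table above can also be expressed as a small predicate. This is an illustrative JavaScript sketch (the function name and the numeric times are hypothetical, keyed to t\ :sub:`0`..t\ :sub:`2` above):

```javascript
// Illustrative only: what an "available" read observes, per the table
// above. An "available" read simply returns the target member's most
// recent local write, with no majority guarantee.
function availableReadSees(target, time) {
  // When each member applied Write_0 in the timeline (t0, t1, t2).
  const applyTime = { Primary: 0, Secondary_1: 1, Secondary_2: 2 };
  return time > applyTime[target] ? "Write_0" : "Write_prev";
}
```

For example, a read from Secondary\ :sub:`1` before t\ :sub:`1` still observes Write\ :sub:`prev`, even though the primary has already applied Write\ :sub:`0`.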