V3.6 update function docs #3310


Closed
wants to merge 41 commits

Commits
f59f596
update v3.6 sitemap
kay-kim Apr 11, 2018
efa833e
DOCS-11573: csrs data center
kay-kim Apr 10, 2018
fd8b6a1
DOCSP-2133: Clarify time required for failover/election
ravindk89 Apr 12, 2018
e7e575f
DOCS-11554: 3.6.4 upcoming
kay-kim Apr 13, 2018
bda1282
DOCS-11408: deprecate getPrevError command
kay-kim Apr 13, 2018
11fc503
Update 3.6-downgrade-path.rst
joannac- Apr 13, 2018
ab1048a
DOCS-11574 - Fix typo on db.collection.watch() parameters.
jdestefano-mongo Apr 13, 2018
fe99f00
DOCS-11615: Correct example using collMod to add/update validation fo…
stennie Apr 16, 2018
e3c393b
DOCS-11322, DOCS-11531: remove SLES11 and Ubuntu 12.04 support
kay-kim Apr 12, 2018
a7910f6
DOCS-11322, DOCS-11531: typo fix
kay-kim Apr 12, 2018
849a47f
DOCS-11322,DOCS-11531 - update platform compatibility table for v3.4 …
kay-kim Apr 12, 2018
282b2b0
DOCS-11573: rs/data centers + csrs especially
kay-kim Apr 16, 2018
5148225
trivial: lower subheading level
kay-kim Apr 17, 2018
6e63d8f
DOCS-11554: 3.6.4 release
kay-kim Apr 16, 2018
6dafd01
DOCS-11590: Update unattended installation for Windows to include new…
steveren Apr 13, 2018
c6b7256
Revert "trivial: lower subheading level" -- commit was made in mistake
kay-kim Apr 17, 2018
7f25e96
DOCS-11628 Add missing word and clarify phrasing
kallimachos Apr 17, 2018
054ff87
update Makefile for the node examples
kay-kim Apr 17, 2018
9f8455d
DOCS-11322,DOCS-11531: tweak wording for SLES11 and Ubuntu 12.04
kay-kim Apr 18, 2018
279d687
Added SHARD_FILTER info
skerschb Apr 6, 2018
52df4d2
there is no such thing as conflicting zones
asya999 Mar 14, 2018
e6a9110
DOCS-10721 Document locked memory ulimit
kallimachos Feb 28, 2018
a8f6dba
tweak
kay-kim Apr 18, 2018
c660cf7
DOCS-11573: additional tweaks to csrs/rs data centers
kay-kim Apr 19, 2018
a0ec1fa
adds ref for linking to supported platforms
schmalliso Apr 19, 2018
2870693
DOCS-6295: remove old version admonitions
kay-kim Apr 20, 2018
f354c7d
DOCS-6295: fix accidental checkin of a different file
kay-kim Apr 20, 2018
882ad5d
update old agg 2.6 phrases
kay-kim Apr 20, 2018
54a32bc
DOCS-11640: remove arbiter no-journal tip until SERVER-33747
kay-kim Apr 23, 2018
4aa5a30
Fix typo in isMaster
i80and Apr 23, 2018
93451e0
DOCS-11645: typo on currentOp
kay-kim Apr 25, 2018
995667c
add subheading ref link
kay-kim Apr 25, 2018
f2a7373
DOCS-11659: clean up failIndexKeyTooLong
kay-kim Apr 26, 2018
4421f47
remove outdated no join statement
kay-kim Apr 26, 2018
cd99580
rm outdated additional resources in upgrade-revision page
kay-kim Apr 29, 2018
805983d
minor
schmalliso Apr 30, 2018
8835902
DOCSP-2294: correct param name for cursor.tailable method
schmalliso Apr 25, 2018
e8aa918
correct --bind_ip_all for mongod/mongos
kay-kim May 2, 2018
d1d21a9
DOCS-11667: 3.2.20 release - upcoming
kay-kim May 2, 2018
39b77dd
incorrect fields included in delete example
asya999 May 7, 2018
08fc97d
Update change-events.txt
asya999 May 7, 2018
2 changes: 1 addition & 1 deletion Makefile
@@ -97,7 +97,7 @@ examples:
curl -SfL https://raw.githubusercontent.com/mongodb/mongo-ruby-driver/master/spec/mongo/change_stream_examples_spec.rb -o ${DRIVERS_PATH}/change_stream_examples_spec.rb
curl -SfL https://raw.githubusercontent.com/mongodb/mongo-scala-driver/master/driver/src/it/scala/org/mongodb/scala/DocumentationExampleSpec.scala -o ${DRIVERS_PATH}/DocumentationExampleSpec.scala
curl -SfL https://raw.githubusercontent.com/mongodb/mongo-csharp-driver/master/tests/MongoDB.Driver.Examples/DocumentationExamples.cs -o ${DRIVERS_PATH}/DocumentationExamples.cs
curl -SfL https://raw.githubusercontent.com/mongodb/node-mongodb-native/3.0.0/test/functional/operation_changestream_example_tests.js -o ${DRIVERS_PATH}/ChangeStreamNodeExamples.js
curl -SfL https://raw.githubusercontent.com/mongodb/node-mongodb-native/3.0/test/functional/operation_changestream_example_tests.js -o ${DRIVERS_PATH}/ChangeStreamNodeExamples.js
curl -SfL https://raw.githubusercontent.com/mongodb/mongo-csharp-driver/master/tests/MongoDB.Driver.Examples/ChangeStreamExamples.cs -o ${DRIVERS_PATH}/ChangeStreamExamples.cs
curl -SfL https://raw.githubusercontent.com/mongodb/mongo-c-driver/master/tests/test-mongoc-sample-commands.c -o ${DRIVERS_PATH}/test-mongoc-sample-commands.c
curl -SfL https://raw.githubusercontent.com/mongodb/mongo-java-driver-reactivestreams/master/examples/documentation/src/DocumentationSamples.java -o ${DRIVERS_PATH}/AsyncDocumentationSamples.java
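The files fetched above include the drivers' change stream examples. For orientation, a minimal mongo shell sketch of the corresponding shell API (the ``inventory`` collection is hypothetical):

    // Open a change stream on a collection and print each change document.
    // db.collection.watch() is available starting in MongoDB 3.6.
    var stream = db.inventory.watch();
    while (stream.hasNext()) {
        printjson(stream.next());
    }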
10 changes: 5 additions & 5 deletions conf-sitemap.xml
@@ -1,13 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<site
base_url="http://docs.mongodb.com/manual/"
store_into="build/master/sitemap.xml.gz"
base_url="http://docs.mongodb.com/v3.6/"
store_into="build/v3.6/sitemap.xml.gz"
verbose="1"
>

<directory
path="build/public/master/"
url="http://docs.mongodb.com/manual/"
path="build/public/v3.6/"
url="http://docs.mongodb.com/v3.6/"
default_file="index.html"
/>
<filter action="drop" type="wildcard" pattern="*~" />
@@ -19,7 +19,7 @@
<filter action="drop" type="wildcard" pattern="*.gif" />
<filter action="drop" type="wildcard" pattern="*.png" />
<filter action="drop" type="regexp" pattern="/\.[^/]*" />
<filter action="drop" type="wildcard" pattern="*/master/single*"/>
<filter action="drop" type="wildcard" pattern="*/v3.6/single*"/>
<filter action="drop" type="wildcard" pattern="*/meta/*"/>

<filter action="drop" type="wildcard" pattern="*/_static/*" />
2 changes: 1 addition & 1 deletion config/build_conf.yaml
@@ -10,7 +10,7 @@ project:
branched: true
siteroot: true
version:
release: '3.6.3'
release: '3.6.4'
branch: '3.6'
system:
files:
3 changes: 1 addition & 2 deletions source/administration/production-notes.txt
@@ -64,8 +64,7 @@ systems are recommended for production use:
- Amazon Linux
- Debian 7.1
- RHEL / CentOS 6.2+
- SLES 11+
- Ubuntu LTS 14.04
- SLES 12
- Ubuntu LTS 16.04
- Windows Server 2012 & 2012 R2

2 changes: 0 additions & 2 deletions source/core/2dsphere.txt
@@ -10,8 +10,6 @@
:depth: 1
:class: singlecol

.. versionadded:: 2.4

Overview
--------

12 changes: 2 additions & 10 deletions source/core/aggregation-pipeline-limits.txt
@@ -21,23 +21,15 @@ Result Size Restrictions
MongoDB 3.6 removes the option for the :dbcommand:`aggregate`
command to return its results as a single document.

.. versionchanged:: 2.6

Starting in MongoDB 2.6, the :dbcommand:`aggregate` command can return
The :dbcommand:`aggregate` command can return
either a cursor or store the results in a collection. When returning a
cursor or storing the results in a collection, each document in the
result set is subject to the :limit:`BSON Document Size` limit,
currently 16 megabytes; if any single document exceeds the
:limit:`BSON Document Size` limit, the command will produce an error.
The limit only applies to the returned documents; during the pipeline
processing, the documents may exceed this size. The
:method:`db.collection.aggregate()` method returns a cursor by default
starting in MongoDB 2.6.

Before MongoDB 2.6, the :dbcommand:`aggregate` command can only return
a single BSON document that contains the result set and will produce an
error if the total size of the result set exceeds the :limit:`BSON
Document Size` limit.
:method:`db.collection.aggregate()` method returns a cursor by default.
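As a hedged illustration of the two result-handling modes described above (collection, field, and output names are invented):

    // Default: aggregate() returns a cursor; each returned document must
    // fit within the 16 megabyte BSON Document Size limit.
    var cursor = db.orders.aggregate([ { $match: { status: "A" } } ]);

    // Alternative: store the results in a collection using $out.
    db.orders.aggregate([
        { $match: { status: "A" } },
        { $out: "matchingOrders" }
    ]);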

.. _agg-memory-restrictions:

9 changes: 4 additions & 5 deletions source/core/aggregation-pipeline.txt
@@ -97,11 +97,10 @@ The :pipeline:`$match` and :pipeline:`$sort` pipeline operators can
take advantage of an index when they occur at the **beginning** of the
pipeline.

.. versionadded:: 2.4
The :pipeline:`$geoNear` pipeline operator takes advantage of a
geospatial index. When using :pipeline:`$geoNear`, the
:pipeline:`$geoNear` pipeline operation must appear as the first
stage in an aggregation pipeline.
The :pipeline:`$geoNear` pipeline operator takes advantage of a
geospatial index. When using :pipeline:`$geoNear`, the
:pipeline:`$geoNear` pipeline operation must appear as the first stage
in an aggregation pipeline.
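For example, a sketch of stage placement that allows index use (collections, fields, and the supporting indexes are assumed to exist):

    // $match at the beginning of the pipeline can use an index on "status".
    db.orders.aggregate([
        { $match: { status: "A" } },
        { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
    ]);

    // $geoNear must be the first stage and uses a geospatial index.
    db.places.aggregate([
        { $geoNear: {
            near: { type: "Point", coordinates: [ -73.99, 40.73 ] },
            distanceField: "dist.calculated",
            spherical: true
        } }
    ]);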

.. versionchanged:: 3.2

23 changes: 9 additions & 14 deletions source/core/gridfs.txt
@@ -18,7 +18,7 @@ that exceed the :term:`BSON`\-document :ref:`size limit

Instead of storing a file in a single document, GridFS divides the file
into parts, or chunks [#chunk-disambiguation]_, and stores each chunk as
a separate document. By default, GridFS uses a chunk size of 255 kB;
a separate document. By default, GridFS uses a default chunk size of 255 kB;
that is, GridFS divides a file into chunks of 255 kB with the exception
of the last chunk. The last chunk is only as large as necessary.
Similarly, files that are no larger than the chunk size only have a
@@ -39,9 +39,6 @@ for storing any files for which you want access without having to load
the entire file into memory. See also
:ref:`faq-developers-when-to-use-gridfs`.

.. versionchanged:: 2.4.10
The default chunk size changed from 256 kB to 255 kB.

.. _faq-developers-when-to-use-gridfs:

When to Use GridFS
@@ -74,11 +74,11 @@ can update the metadata field that indicates "latest" status in an
atomic update after uploading the new version of the file, and later
remove previous versions if needed.

Furthermore, if your files are all smaller the 16 MB :limit:`BSON
Document Size` limit, consider storing the file manually within a
single document instead of using GridFS. You may use the BinData data
type to store the binary data. See your :doc:`drivers
</applications/drivers>` documentation for details on using BinData.
Furthermore, if your files are all smaller than the 16 MB :limit:`BSON
Document Size` limit, consider storing each file in a single document instead
of using GridFS. You may use the BinData data type to store the binary data.
See your :doc:`drivers </applications/drivers>` documentation for details on
using BinData.

.. index:: GridFS; initialize
.. _gridfs-use:
@@ -201,8 +198,6 @@ following fields:
chunks of size ``chunkSize``, except for the last, which is only as
large as needed. The default size is 255 kilobytes (kB).

.. versionchanged:: 2.4.10
The default chunk size changed from 256 kB to 255 kB.

.. data:: files.uploadDate

@@ -316,12 +311,12 @@ index.
``files_id`` is an :term:`objectid` and changes
:ref:`monotonically<shard-key-monotonic>`.

You cannot use :doc:`/core/hashed-sharding` when sharding the ``chunks``
You cannot use :doc:`/core/hashed-sharding` when sharding the ``chunks``
collection.

The ``files`` collection is small and only contains metadata. None of the
required keys for GridFS lend themselves to an even distribution in a
sharded environment. If you *must* shard the ``files`` collection, use the
required keys for GridFS lend themselves to an even distribution in a
sharded environment. If you *must* shard the ``files`` collection, use the
``_id`` field, possibly in combination with an application field.

Leaving ``files`` unsharded allows all the file metadata documents to live
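A hedged sketch of sharding the ``chunks`` collection on the commonly documented ``{ files_id: 1, n: 1 }`` key (the database name is invented, and sharding is assumed to be enabled for it):

    // Shard the GridFS chunks collection; a hashed shard key is not allowed here.
    db.fs.chunks.createIndex({ files_id: 1, n: 1 });
    sh.shardCollection("mydb.fs.chunks", { files_id: 1, n: 1 });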
2 changes: 0 additions & 2 deletions source/core/hashed-sharding.txt
@@ -7,8 +7,6 @@ Hashed Sharding

.. default-domain:: mongodb

.. versionadded:: 2.4

Hashed sharding uses a :ref:`hashed index <index-hashed-index>` of a
single field as the :term:`shard key` to partition data across your
sharded cluster.
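A minimal sketch, assuming a ``mydb.users`` collection with a ``username`` field:

    // Enable sharding for the database, then shard on a hashed single field.
    sh.enableSharding("mydb");
    sh.shardCollection("mydb.users", { username: "hashed" });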
2 changes: 0 additions & 2 deletions source/core/index-hashed.txt
@@ -14,8 +14,6 @@ Hashed Indexes
:depth: 1
:class: singlecol

.. versionadded:: 2.4

Hashed indexes maintain entries with hashes of the values of the
indexed field.
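For instance, on a hypothetical ``users`` collection:

    // Create a hashed index on a single field.
    db.users.createIndex({ username: "hashed" });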

12 changes: 5 additions & 7 deletions source/core/index-multikey.txt
@@ -141,13 +141,11 @@ Shard Keys

You **cannot** specify a multikey index as the shard key index.

.. versionchanged:: 2.6

However, if the shard key index is a :ref:`prefix
<compound-index-prefix>` of a compound index, the compound index is
allowed to become a compound *multikey* index if one of the other
keys (i.e. keys that are not part of the shard key) indexes an
array. Compound multikey indexes can have an impact on performance.
However, if the shard key index is a :ref:`prefix
<compound-index-prefix>` of a compound index, the compound index is
allowed to become a compound *multikey* index if one of the other keys
(i.e. keys that are not part of the shard key) indexes an array.
Compound multikey indexes can have an impact on performance.
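A sketch of this allowance (database, collection, and field names are hypothetical):

    // The shard key { userId: 1 } is a prefix of the compound index below.
    sh.shardCollection("mydb.posts", { userId: 1 });
    db.posts.createIndex({ userId: 1, tags: 1 });

    // Inserting an array into "tags" makes the index a compound multikey
    // index, which is permitted because "tags" is not part of the shard key.
    db.posts.insert({ userId: 1, tags: [ "mongodb", "indexes" ] });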

Hashed Indexes
~~~~~~~~~~~~~~
1 change: 0 additions & 1 deletion source/core/kerberos.txt
@@ -12,7 +12,6 @@ Kerberos Authentication
:depth: 1
:class: singlecol

.. versionadded:: 2.4

Overview
--------
17 changes: 0 additions & 17 deletions source/core/map-reduce-sharded-collections.txt
@@ -53,20 +53,3 @@ To output to a sharded collection:

- Balancing of chunks for the output collection is automatically
prevented during post-processing to avoid concurrency issues.

In MongoDB 2.0:

- :binary:`~bin.mongos` retrieves the results from each shard,
performs a merge sort to order the results, and proceeds to the reduce/finalize phase as
needed. :binary:`~bin.mongos` then writes the result to the output
collection in sharded mode.

- This model requires only a small amount of memory, even for large data sets.

- Shard chunks are not automatically split during insertion. This
requires manual intervention until the chunks are granular and
balanced.

.. important::
For best results, only use the sharded output options for
:dbcommand:`mapReduce` in version 2.2 or later.
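For reference, a hedged mongo shell sketch of the sharded output option discussed on this page (collection and field names are invented):

    // mapReduce writing to a sharded output collection.
    db.events.mapReduce(
        function() { emit(this.userId, 1); },
        function(key, values) { return Array.sum(values); },
        { out: { reduce: "eventCounts", sharded: true } }
    );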
@@ -54,16 +54,15 @@ distributions of members include:
- If Data Center 2 goes down, the replica set remains writeable as
the members in Data Center 1 can hold an election.

.. note::

.. include:: /includes/extracts/arbiters-and-pvs-with-reference.rst

- Three data centers: one member to Data Center 1, one member to Data
Center 2, and one member to Data Center 3.

- If any Data Center goes down, the replica set remains writeable as
the remaining members can hold an election.

.. include:: /includes/note-csrs-distribution.rst

Five-member Replica Set
```````````````````````

@@ -78,12 +77,15 @@
- If Data Center 2 goes down, the replica set remains writeable as
the members in Data Center 1 can create a majority.


- Three data centers: two members to Data Center 1, two members to Data
Center 2, and one member to Data Center 3.

- If any Data Center goes down, the replica set remains writeable as
the remaining members can hold an election.

.. include:: /includes/note-csrs-distribution.rst

For example, the following 5 member replica set distributes its members
across three data centers.

5 changes: 5 additions & 0 deletions source/core/replica-set-architectures.txt
@@ -125,6 +125,8 @@ throughput by distributing reads to secondary members. As your
deployment grows, add or move members to alternate data centers to
improve redundancy and availability.

.. include:: /includes/note-csrs-distribution.rst

Always ensure that the main facility is able to elect a primary.

Add Capacity Ahead of Demand
@@ -146,6 +148,9 @@ the likelihood that even with a loss of a data center, the remaining
replica set members can form a majority or at minimum, provide a copy
of your data.

.. include:: /includes/note-csrs-distribution.rst


To ensure that the members in your main data center be elected primary
before the members in the alternate data center, set the
:rsconf:`members[n].priority` of the members in the alternate data
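The priority adjustment described above might look like the following in the mongo shell (the member index and value are hypothetical):

    // Lower the priority of a member in the alternate data center so that
    // members in the main data center are preferred in elections.
    cfg = rs.conf();
    cfg.members[2].priority = 0.5;
    rs.reconfig(cfg);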
39 changes: 21 additions & 18 deletions source/core/replica-set-elections.txt
@@ -17,28 +17,31 @@ Replica Set Elections
:class: singlecol

:term:`Replica sets <replica set>` use elections to determine which
set member will become :term:`primary`. Elections occur after
initiating a replica set, and also any time the primary becomes
unavailable. The primary is the only member in the set that can accept
write operations. If a primary becomes unavailable, elections allow
the set to recover normal operations without manual
intervention.
In the following three-member replica set, the primary is unavailable.
One of the remaining secondaries holds an election to elect itself as a
new primary.
set member will become :term:`primary`. Replica sets can trigger an
election in response to a variety of events, such as:

- Adding a new node to the replica set,
- :method:`initiating a replica set <rs.initiate()>`,
- performing replica set maintenance using methods such as :method:`rs.stepDown()` or :method:`rs.reconfig()`, and
- the :term:`secondary` members losing connectivity to the primary for more than the configured :rsconf:`timeout <settings.electionTimeoutMillis>` (10 seconds by default).

In the following diagram, the primary node was unavailable for longer
than the :rsconf:`configured timeout <settings.electionTimeoutMillis>`
and triggers the :ref:`automatic failover <replication-auto-failover>`
process. One of the remaining secondaries calls for an election to
select a new primary and automatically resume normal operations.

.. include:: /images/replica-set-trigger-election.rst

Elections are essential for independent operation of a
replica set; however, elections take time to complete. While an
election is in process, the replica set has no primary and cannot
accept writes and all remaining members become read-only.
The replica set cannot process write operations until the
election completes successfully. The replica set can continue to serve
read queries if such queries are configured to
:ref:`run on secondaries <replica-set-read-preference>`.

.. include:: /includes/fact-election-latency.rst

.. include:: /includes/fact-retryable-writes-failover-election.rst

If a majority of the replica set is inaccessible or unavailable to the
current primary, the primary will step down and become a secondary. The
replica set cannot accept writes after this occurs, but remaining
members can continue to serve read queries if such queries are
configured to run on secondaries.

Factors and Conditions that Affect Elections
--------------------------------------------
27 changes: 3 additions & 24 deletions source/core/replica-set-high-availability.txt
@@ -15,29 +15,8 @@ Replica Set High Availability
:depth: 1
:class: singlecol

:term:`Replica sets <replica set>` provide high availability using
automatic :term:`failover`. Failover allows a :term:`secondary` member
to become :term:`primary` if the current primary becomes unavailable.
Replica sets use elections to support high availability.

.. versionchanged:: 3.2
.. include:: /includes/toc/dfn-list-replica-set-high-availability.rst

MongoDB introduces a version 1 of the replication protocol
(:rsconf:`protocolVersion: 1 <protocolVersion>`) to reduce replica set
failover time and accelerates the detection of multiple simultaneous
primaries. New replica sets will, by default, use
:rsconf:`protocolVersion: 1 <protocolVersion>`. Previous versions of
MongoDB use version 0 of the protocol.

Replica set members keep the same data set but are otherwise
independent. If the primary becomes unavailable, an eligible secondary
holds an :doc:`election </core/replica-set-elections>` to elect itself
as a new primary. In some situations, the failover process may undertake
a :doc:`rollback </core/replica-set-rollbacks>`. [#rollback-automatic]_

.. class:: hidden

.. include:: /includes/toc/replica-set-high-availability.rst

.. [#rollback-automatic] Replica sets remove "rollback" data when
needed without intervention. Administrators must apply or discard
rollback data manually.
.. include:: /includes/toc/replica-set-high-availability.rst
11 changes: 11 additions & 0 deletions source/core/sharded-cluster-components.txt
@@ -43,13 +43,24 @@ for a production sharded cluster deployment:
- Deploy each Shard as a 3 member :term:`replica set`
- Deploy one or more :binary:`~bin.mongos` routers

Replica Set Distribution
~~~~~~~~~~~~~~~~~~~~~~~~

Where possible, consider deploying one member of each replica set
in a site suitable for being a disaster recovery location.

.. include:: /includes/note-csrs-distribution.rst

Number of Shards
~~~~~~~~~~~~~~~~

Sharding requires at least two shards to distribute sharded data. Single
shard sharded clusters may be useful if you plan on enabling sharding in
the near future, but do not need to at the time of deployment.

Number of ``mongos`` and Distribution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Deploying multiple :binary:`~bin.mongos` routers supports high
availability and scalability. A common pattern is to place a
:binary:`~bin.mongos` on each application server. Deploying one
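A minimal sketch of assembling these components from a mongo shell connected to a ``mongos`` (host and replica set names are invented):

    // Add each shard replica set to the cluster, then verify the layout.
    sh.addShard("shardA/shard-a-1.example.net:27018");
    sh.addShard("shardB/shard-b-1.example.net:27018");
    sh.status();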
2 changes: 1 addition & 1 deletion source/includes/3.6-downgrade-path.rst
@@ -1,2 +1,2 @@
Once upgraded to |newversion|, we recommend |downgrading| to the latest patch release of 3.4.
Once upgraded to |newversion|, if you need to downgrade, we recommend |downgrading| to the latest patch release of 3.4.
