From 9ace9cfb89d959ab1886a7162ff7e159f4b0cb3e Mon Sep 17 00:00:00 2001 From: Andrew Aldridge Date: Mon, 13 Apr 2015 18:32:21 -0400 Subject: [PATCH 1/2] DOCS-1560: Document the function of $query --- source/reference/operator/meta/query.txt | 22 +++++++++++++++------- 1 file changed, 15 insertions(+), 7 deletions(-) diff --git a/source/reference/operator/meta/query.txt b/source/reference/operator/meta/query.txt index c5845d9408c..4c32976e8e5 100644 --- a/source/reference/operator/meta/query.txt +++ b/source/reference/operator/meta/query.txt @@ -6,22 +6,30 @@ $query .. operator:: $query - The :operator:`$query` operator provides an interface to describe - queries. Consider the following operation: + The :operator:`$query` operator forces MongoDB to interpret an expression + as a query. + + The following example :program:`mongo` commands are equivalent, and + return only those documents in the collection named ``collection`` where the + ``age`` field equals ``25``. .. code-block:: javascript db.collection.find( { $query: { age : 25 } } ) + db.collection.find( { age : 25 } ) - This is equivalent to the more familiar - :method:`db.collection.find()` method: + This operator is necessary to work with documents containing a subdocument + field called ``query``. For example, the following sequence of operations + will return no results: .. code-block:: javascript - db.collection.find( { age : 25 } ) + db.documents.save( { "query": { a: 1, b: 2 } } ) + db.documents.find( { query: { a: 1, b: 2 } } ) + + To obtain the document, you will need to use the following query: - These operations return only those documents in the collection named - ``collection`` where the ``age`` field equals ``25``. + db.documents.find( { "$query": { query: { a: 1, b: 2 } } } ) .. note:: From e7a527a3d5774d1ff676c6a20c1a53789998e5c8 Mon Sep 17 00:00:00 2001 From: Andrew Aldridge Date: Tue, 14 Apr 2015 18:13:13 -0400 Subject: [PATCH 2/2] DOCS-5211: Split out Analyzing MongoDB performance article --- config/redirects.yaml | 11 +- .../analyzing-mongodb-performance.txt | 249 ++++++++++++++++++ source/administration/monitoring.txt | 224 +--------------- source/applications/design-notes.txt | 232 +--------------- source/includes/intro-performance.rst | 4 + .../toc-administration-optimization.yaml | 3 + source/reference/command/serverStatus.txt | 6 +- .../reference/method/db.collection.find.txt | 5 +- source/reference/operator/meta/query.txt | 34 ++- 9 files changed, 297 insertions(+), 471 deletions(-) create mode 100644 source/administration/analyzing-mongodb-performance.txt create mode 100644 source/includes/intro-performance.rst diff --git a/config/redirects.yaml b/config/redirects.yaml index 599deb71d19..08fec766c1d 100644 --- a/config/redirects.yaml +++ b/config/redirects.yaml @@ -211,8 +211,15 @@ code: 301 outputs: - 'before-v2.6' --- +from: '/administration/analyzing-mongodb-performance' +to: '/administration/optimization' +type: 'redirect' +code: 301 +outputs: + - 'before-v2.6' +--- # redirected in 3.0 to getting started -# temp -- we should fix giza so that +# temp -- we should fix giza so that # we can use the external field to redirect with # the after-xxx outputs from: '/tutorial/getting-started' @@ -225,7 +232,7 @@ outputs: - { 'v3.0': "http://docs.mongodb.org/getting-started" } --- # redirected in 3.0 to getting started -# temp -- we should fix giza so that +# temp -- we should fix giza so that # we can use the external field to redirect with # the after-xxx outputs from: '/tutorial/generate-test-data' diff --git 
a/source/administration/analyzing-mongodb-performance.txt b/source/administration/analyzing-mongodb-performance.txt
new file mode 100644
index 00000000000..d13197a4c16
--- /dev/null
+++ b/source/administration/analyzing-mongodb-performance.txt
@@ -0,0 +1,249 @@
+=============================
+Analyzing MongoDB Performance
+=============================
+
+.. default-domain:: mongodb
+
+As you develop and operate applications with MongoDB, you may need to
+analyze the performance of the application and its database.
+When you encounter degraded performance, it is often a function of database
+access strategies, hardware availability, and the number of open database
+connections.
+
+Some users may experience performance limitations as a result of inadequate
+or inappropriate indexing strategies, or as a consequence of poor schema
+design patterns. :ref:`analyzing-performance-locks` discusses how these can
+impact MongoDB's internal locking.
+
+Performance issues may indicate that the database is operating at capacity
+and that it is time to add additional capacity to the database. In particular,
+the application's :term:`working set` should fit in the available physical
+memory. See :ref:`analyzing-memory-mmapv1` for more information on the working
+set.
+
+In some cases performance issues may be temporary and related to
+abnormal traffic load. As discussed in :ref:`number-of-connections`, scaling
+can help relieve excessive traffic.
+
+:ref:`database-profiling` can help you understand which operations are causing
+degradation.
+
+.. _analyzing-performance-locks:
+
+Locking Performance
+~~~~~~~~~~~~~~~~~~~
+
+MongoDB uses a locking system to ensure data set consistency. If
+certain operations are long-running or a queue forms, performance
+will degrade as requests and operations wait for the lock.
+
+Lock-related slowdowns can be intermittent. To see if the lock has been
+affecting your performance, look to the data in the
+:ref:`globalLock` section of the :dbcommand:`serverStatus` output.
+
+If :data:`globalLock.currentQueue.total
+` is consistently high,
+then there is a chance that a large number of requests are waiting for
+a lock. This indicates a possible concurrency issue that may be affecting
+performance.
+
+If :data:`globalLock.totalTime ` is
+high relative to :data:`~serverStatus.uptime`, the database has
+existed in a lock state for a significant amount of time.
+
+If :data:`globalLock.ratio ` is also high,
+MongoDB has likely been processing a large number of long-running
+queries.
+
+Long queries can result from ineffective use of indexes,
+non-optimal schema design, poor query structure, system architecture issues, or
+insufficient RAM resulting in
+:ref:`page faults ` and disk reads.
+
+.. _analyzing-memory-mmapv1:
+
+Memory and the MMAPv1 Storage Engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Memory Use
+``````````
+
+With the :ref:`MMAPv1 ` storage engine, MongoDB uses
+memory-mapped files to store data. Given a data set of sufficient size,
+the :program:`mongod` process will allocate all available memory on the system
+for its use.
+
+While this is intentional and aids performance, the memory-mapped files make it
+difficult to determine if the amount of RAM is sufficient for the data set.
+
+The :ref:`memory usage statuses ` metrics of the
+:dbcommand:`serverStatus` output can provide insight into MongoDB's
+memory use.
+
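+For example, you can retrieve these metrics from the :program:`mongo` shell
+with the :method:`db.serverStatus()` helper. The following is an illustrative
+session; the values returned will differ for your deployment:
+
+.. code-block:: javascript
+
+   // Retrieve only the memory section of the serverStatus output.
+   var mem = db.serverStatus().mem
+
+   // Resident and mapped memory, reported in megabytes.
+   mem.resident
+   mem.mapped
+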
+The :data:`mem.resident ` field provides the
+amount of resident memory in use. If this exceeds the amount of system
+memory *and* there is a significant amount of data on disk that isn't in RAM,
+you may have exceeded the capacity of your system.
+
+You can inspect :data:`mem.mapped ` to check the
+amount of mapped memory that :program:`mongod` is using. If this value is
+greater than the amount of system memory, some operations will require
+:term:`page faults ` to read data from disk.
+
+
+.. _administration-monitoring-page-faults:
+
+Page Faults
+```````````
+
+.. include:: /includes/fact-page-fault.rst
+
+MongoDB reports its triggered page faults as the total number of
+:term:`page faults ` in one second. To check for page faults, see
+the :data:`extra_info.page_faults ` value
+in the :dbcommand:`serverStatus` output.
+
+Rapid increases in the MongoDB page fault counter may indicate that the server
+has too little physical memory. Page faults can also occur while accessing
+large data sets or scanning an entire collection.
+
+A single page fault completes quickly and is not problematic. However, in
+aggregate, large volumes of page faults typically indicate that MongoDB
+is reading too much data from disk.
+
+MongoDB can often "yield" read locks after a page fault, allowing other database
+processes to read while :program:`mongod` loads the next page into memory.
+Yielding the read lock following a page fault improves concurrency, and also
+improves overall throughput in high-volume systems.
+
+Increasing the amount of RAM accessible to MongoDB may help reduce the
+frequency of page faults. If this is not possible, you may want to consider
+deploying a :term:`sharded cluster` or adding :term:`shards `
+to your deployment to distribute load among :program:`mongod` instances.
+
+See :ref:`faq-storage-page-faults` for more information.
+
+.. _number-of-connections:
+
+Number of Connections
+~~~~~~~~~~~~~~~~~~~~~
+
+In some cases, the number of connections between the applications and the
+database can overwhelm the ability of the server to handle requests. The
+following fields in the :dbcommand:`serverStatus` document can provide insight:
+
+- :data:`globalLock.activeClients
+  ` contains a counter of the
+  total number of clients with active operations in progress or
+  queued.
+
+- :data:`~serverStatus.connections` is a container for the following
+  two fields:
+
+  - :data:`~serverStatus.connections.current` the total number of
+    current clients that connect to the database instance.
+
+  - :data:`~serverStatus.connections.available` the total number of
+    unused connections available for new clients.
+
+If there are numerous concurrent application requests, the database may have
+trouble keeping up with demand. If this is the case, then you will need to
+increase the capacity of your deployment.
+
+For read-heavy applications, increase the size of your :term:`replica set` and
+distribute read operations to :term:`secondary` members.
+
+For write-heavy applications, deploy :term:`sharding` and add one or more
+:term:`shards ` to a :term:`sharded cluster` to distribute load among
+:program:`mongod` instances.
+
+Spikes in the number of connections can also be the result of
+application or driver errors. All of the officially supported MongoDB
+drivers implement connection pooling, which allows clients to use and
+reuse connections more efficiently. Extremely high numbers of
+connections, particularly without a corresponding workload, are often
+indicative of a driver or other configuration error.
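+
+For example, the following :program:`mongo` shell operations read these
+counters from the :dbcommand:`serverStatus` output. This is an illustrative
+check; interpret the values against the number of connections your
+application and drivers are expected to open:
+
+.. code-block:: javascript
+
+   var status = db.serverStatus()
+
+   // Total clients with operations in progress or queued.
+   status.globalLock.activeClients.total
+
+   // Connections currently open, and connections still available.
+   status.connections.current
+   status.connections.available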
+ +Unless constrained by system-wide limits, MongoDB has no limit on +incoming connections. On Unix-based systems, you can modify system limits +using the ``ulimit`` command, or by editing your system's +``/etc/sysctl`` file. See :doc:`/reference/ulimit` for more +information. + +.. _database-profiling: + +Database Profiling +~~~~~~~~~~~~~~~~~~ + +MongoDB's "Profiler" is a database profiling system that can help identify +inefficient queries and operations. + +The following profiling levels are available: + +.. list-table:: + :header-rows: 1 + + * - **Level** + + - **Setting** + + * - 0 + + - Off. No profiling + + * - 1 + + - On. Only includes *"slow"* operations + + * - 2 + + - On. Includes *all* operations + +Enable the profiler by setting the +:dbcommand:`profile` value using the following command in the +:program:`mongo` shell: + +.. code-block:: javascript + + db.setProfilingLevel(1) + +The :setting:`~operationProfiling.slowOpThresholdMs` setting defines what constitutes a "slow" +operation. To set the threshold above which the profiler considers +operations "slow" (and thus, included in the level ``1`` profiling +data), you can configure :setting:`~operationProfiling.slowOpThresholdMs` at runtime as an argument to +the :method:`db.setProfilingLevel()` operation. + +.. see:: The documentation of :method:`db.setProfilingLevel()` for more + information. + +By default, :program:`mongod` records all "slow" queries to its +:setting:`log `, as defined by :setting:`~operationProfiling.slowOpThresholdMs`. + +.. note:: + + Because the database profiler can negatively impact + performance, only enable profiling for strategic intervals and as + minimally as possible on production systems. + + You may enable profiling on a per-:program:`mongod` basis. This + setting will not propagate across a :term:`replica set` or + :term:`sharded cluster`. + +You can view the output of the profiler in the ``system.profile`` +collection of your database by issuing the ``show profile`` command in +the :program:`mongo` shell, or with the following operation: + +.. code-block:: javascript + + db.system.profile.find( { millis : { $gt : 100 } } ) + +This returns all operations that lasted longer than 100 milliseconds. +Ensure that the value specified here (``100``, in this example) is above the +:setting:`~operationProfiling.slowOpThresholdMs` threshold. + +You must use the :operator:`$query` operator to access the ``query`` +field of documents within ``system.profile``. + +.. seealso:: :doc:`/administration/optimization` addresses strategies + that may improve the performance of your database queries and + operations. diff --git a/source/administration/monitoring.txt b/source/administration/monitoring.txt index 075b580bfe4..0874a3f9586 100644 --- a/source/administration/monitoring.txt +++ b/source/administration/monitoring.txt @@ -337,229 +337,7 @@ affect logging: Diagnosing Performance Issues ----------------------------- -Degraded performance in MongoDB -is typically a function of the relationship between the -quantity of data stored in the database, the amount of system RAM, the -number of connections to the database, and the amount of time the -database spends in a locked state. - -In some cases performance issues may be transient and related to -traffic load, data access patterns, or the availability of hardware on -the host system for virtualized environments. 
Some users also -experience performance limitations as a result of inadequate or -inappropriate indexing strategies, or as a consequence of poor schema -design patterns. In other situations, performance issues may indicate -that the database may be operating at capacity and that it is time to -add additional capacity to the database. - -The following are some causes of degraded performance in MongoDB. - -Locks -~~~~~ - -MongoDB uses a locking system to ensure data set consistency. However, if -certain operations are long-running, or a queue forms, performance -will degrade as requests and operations wait for the lock. Lock-related -slowdowns can be intermittent. To see if the lock has been affecting -your performance, look to the data in the -:ref:`globalLock` section of the :dbcommand:`serverStatus` output. If -:data:`globalLock.currentQueue.total -` is consistently high, -then there is a chance that a large number of requests are waiting for -a lock. This indicates a possible concurrency issue that may be affecting -performance. - -If :data:`globalLock.totalTime ` is -high relative to :data:`~serverStatus.uptime`, the database has -existed in a lock state for a significant amount of time. If -:data:`globalLock.ratio ` is also high, -MongoDB has likely been processing a large number of long running -queries. Long queries are often the result of a number of factors: -ineffective use of indexes, non-optimal schema design, poor query -structure, system architecture issues, or insufficient RAM resulting -in :ref:`page faults ` and disk -reads. - -Memory Usage -~~~~~~~~~~~~ - -MongoDB uses memory mapped files to store data. Given a data -set of sufficient size, the MongoDB process will allocate all -available memory on the system for its use. -While this is part of the design, and affords MongoDB superior -performance, the memory mapped files make it difficult to determine if -the amount of RAM is sufficient for the data set. - -The :ref:`memory usage statuses ` metrics of the -:dbcommand:`serverStatus` output can provide insight into MongoDB's -memory use. Check the resident memory use -(i.e. :data:`mem.resident `): if this -exceeds the amount of system memory *and* there is a significant amount -of data on disk that isn't in RAM, you may have exceeded the capacity -of your system. - -You should also check the amount of mapped memory (i.e. :data:`mem.mapped -`.) If this value is greater than the amount -of system memory, some operations will require disk access :term:`page -faults ` to read data from virtual memory and negatively -affect performance. - -.. _administration-monitoring-page-faults: - -Page Faults -~~~~~~~~~~~ - -.. include:: /includes/fact-page-fault.rst - -Page faults triggered by MongoDB are reported as the total number of page -faults in one second. To check for page faults, see the -:data:`extra_info.page_faults ` value -in the :dbcommand:`serverStatus` output. - -MongoDB on Windows counts both hard and soft page faults. - -The MongoDB page fault counter may increase dramatically in moments of -poor performance and may correlate with limited physical memory -environments. Page faults also can increase while accessing much larger -data sets, for example, scanning an entire collection. Limited and -sporadic MongoDB page faults do not necessarily indicate a problem or a -need to tune the database. - -A single page fault completes quickly and is not problematic. 
However, in -aggregate, large volumes of page faults typically indicate that MongoDB -is reading too much data from disk. In many situations, MongoDB's -read locks will "yield" after a page fault to allow other processes to -read and avoid blocking while waiting for the next page to read into -memory. This approach improves concurrency, and also improves overall -throughput in high volume systems. - -Increasing the amount of RAM accessible to MongoDB may help reduce the -frequency of page faults. If this is not possible, you may want to consider -deploying a :term:`sharded cluster` or adding :term:`shards ` -to your deployment to distribute load among :program:`mongod` instances. - -See :ref:`faq-storage-page-faults` for more information. - -Number of Connections -~~~~~~~~~~~~~~~~~~~~~ - -In some cases, the number of connections between the application layer -(i.e. clients) and the database can overwhelm the ability of the -server to handle requests. This can produce performance -irregularities. The following fields in the :dbcommand:`serverStatus` -document can provide insight: - -- :data:`globalLock.activeClients - ` contains a counter of the - total number of clients with active operations in progress or - queued. - -- :data:`~serverStatus.connections` is a container for the following - two fields: - - - :data:`~serverStatus.connections.current` the total number of - current clients that connect to the database instance. - - - :data:`~serverStatus.connections.available` the total number of - unused connections available for new clients. - -If requests are high because there are numerous concurrent application -requests, the database may have trouble keeping up with demand. If -this is the case, then you will need to increase the capacity of your -deployment. For read-heavy applications increase the size of your -:term:`replica set` and distribute read operations to -:term:`secondary` members. For write heavy applications, deploy -:term:`sharding` and add one or more :term:`shards ` to a -:term:`sharded cluster` to distribute load among :program:`mongod` -instances. - -Spikes in the number of connections can also be the result of -application or driver errors. All of the officially supported MongoDB -drivers implement connection pooling, which allows clients to use and -reuse connections more efficiently. Extremely high numbers of -connections, particularly without corresponding workload is often -indicative of a driver or other configuration error. - -Unless constrained by system-wide limits MongoDB has no limit on -incoming connections. You can modify system limits -using the ``ulimit`` command, or by editing your system's -``/etc/sysctl`` file. See :doc:`/reference/ulimit` for more -information. - -.. _database-profiling: - -Database Profiling -~~~~~~~~~~~~~~~~~~ - -MongoDB's "Profiler" is a database profiling system that can help identify -inefficient queries and operations. - -The following profiling levels are available: - -.. list-table:: - :header-rows: 1 - - * - **Level** - - - **Setting** - - * - 0 - - - Off. No profiling - - * - 1 - - - On. Only includes *"slow"* operations - - * - 2 - - - On. Includes *all* operations - -Enable the profiler by setting the -:dbcommand:`profile` value using the following command in the -:program:`mongo` shell: - -.. code-block:: javascript - - db.setProfilingLevel(1) - -The :setting:`~operationProfiling.slowOpThresholdMs` setting defines what constitutes a "slow" -operation. 
To set the threshold above which the profiler considers -operations "slow" (and thus, included in the level ``1`` profiling -data), you can configure :setting:`~operationProfiling.slowOpThresholdMs` at runtime as an argument to -the :method:`db.setProfilingLevel()` operation. - -.. see:: The documentation of :method:`db.setProfilingLevel()` for more - information about this command. - -By default, :program:`mongod` records all "slow" queries to its -:setting:`log `, as defined by :setting:`~operationProfiling.slowOpThresholdMs`. - -.. note:: - - Because the database profiler can negatively impact - performance, only enable profiling for strategic intervals and as - minimally as possible on production systems. - - You may enable profiling on a per-:program:`mongod` basis. This - setting will not propagate across a :term:`replica set` or - :term:`sharded cluster`. - -You can view the output of the profiler in the ``system.profile`` -collection of your database by issuing the ``show profile`` command in -the :program:`mongo` shell, or with the following operation: - -.. code-block:: javascript - - db.system.profile.find( { millis : { $gt : 100 } } ) - -This returns all operations that lasted longer than 100 milliseconds. -Ensure that the value specified here (``100``, in this example) is above the -:setting:`~operationProfiling.slowOpThresholdMs` threshold. - -.. seealso:: :doc:`/administration/optimization` addresses strategies - that may improve the performance of your database queries and - operations. +.. include:: /includes/intro-performance.rst .. _replica-set-monitoring: diff --git a/source/applications/design-notes.txt b/source/applications/design-notes.txt index cb56011730c..6eda04329cf 100644 --- a/source/applications/design-notes.txt +++ b/source/applications/design-notes.txt @@ -165,234 +165,4 @@ Sharding Considerations Analyze Performance ------------------- -As you develop and operate applications with MongoDB, you may want to -analyze the performance of the database as the application. Consider -the following as you begin to investigate the performance of -MongoDB. - -Overview -~~~~~~~~ - -Degraded performance in MongoDB -is typically a function of the relationship between the -quantity of data stored in the database, the amount of system RAM, the -number of connections to the database, and the amount of time the -database spends in a locked state. - -In some cases performance issues may be transient and related to -traffic load, data access patterns, or the availability of hardware on -the host system for virtualized environments. Some users also -experience performance limitations as a result of inadequate or -inappropriate indexing strategies, or as a consequence of poor schema -design patterns. In other situations, performance issues may indicate -that the database may be operating at capacity and that it is time to -add additional capacity to the database. - -The following are some causes of degraded performance in MongoDB. - -Locks -~~~~~ - -MongoDB uses a locking system to ensure data set consistency. However, if -certain operations are long-running, or a queue forms, performance -will slow as requests and operations wait for the lock. Lock-related -slowdowns can be intermittent. To see if the lock has been affecting -your performance, look to the data in the -:ref:`globalLock` section of the :dbcommand:`serverStatus` output. If -:data:`globalLock.currentQueue.total -` is consistently high, -then there is a chance that a large number of requests are waiting for -a lock. 
This indicates a possible concurrency issue that may be affecting -performance. - -If :data:`globalLock.totalTime ` is -high relative to :data:`~serverStatus.uptime`, the database has -existed in a lock state for a significant amount of time. If -:data:`globalLock.ratio ` is also high, -MongoDB has likely been processing a large number of long running -queries. Long queries are often the result of a number of factors: -ineffective use of indexes, non-optimal schema design, poor query -structure, system architecture issues, or insufficient RAM resulting -in :ref:`page faults ` and disk -reads. - -Memory Use for MMAPv1 Storage Engine -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -MongoDB uses memory mapped files to store data. Given a data -set of sufficient size, the MongoDB process will allocate all -available memory on the system for its use. -While this is part of the design, and affords MongoDB superior -performance, the memory mapped files make it difficult to determine if -the amount of RAM is sufficient for the data set. - -The :ref:`memory usage statuses ` metrics of the -:dbcommand:`serverStatus` output can provide insight into MongoDB's -memory use. Check the resident memory use -(i.e. :data:`mem.resident `): if this -exceeds the amount of system memory *and* there is a significant amount -of data on disk that isn't in RAM, you may have exceeded the capacity -of your system. - -You should also check the amount of mapped memory (i.e. :data:`mem.mapped -`.) If this value is greater than the amount -of system memory, some operations will require disk access :term:`page -faults ` to read data from virtual memory and negatively -affect performance. - -.. _administration-monitoring-page-faults: - -Page Faults for MMAPv1 Storage Engine -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. include:: /includes/fact-page-fault.rst - -Page faults triggered by MongoDB are reported as the total number of page -faults in one second. To check for page faults, see the -:data:`extra_info.page_faults ` value -in the :dbcommand:`serverStatus` output. - -MongoDB on Windows counts both hard and soft page faults. - -The MongoDB page fault counter may increase dramatically in moments of -poor performance and may correlate with limited physical memory -environments. Page faults also can increase while accessing much larger -data sets, for example, scanning an entire collection. Limited and -sporadic MongoDB page faults do not necessarily indicate a problem or a -need to tune the database. - -A single page fault completes quickly and is not problematic. However, in -aggregate, large volumes of page faults typically indicate that MongoDB -is reading too much data from disk. In many situations, MongoDB's -read locks will "yield" after a page fault to allow other processes to -read and avoid blocking while waiting for the next page to read into -memory. This approach improves concurrency, and also improves overall -throughput in high volume systems. - -Increasing the amount of RAM accessible to MongoDB may help reduce the -frequency of page faults. If this is not possible, you may want to consider -deploying a :term:`sharded cluster` or adding :term:`shards ` -to your deployment to distribute load among :program:`mongod` instances. - -See :ref:`faq-storage-page-faults` for more information. - -Number of Connections -~~~~~~~~~~~~~~~~~~~~~ - -In some cases, the number of connections between the application layer -(i.e. clients) and the database can overwhelm the ability of the -server to handle requests. 
This can produce performance -irregularities. The following fields in the :dbcommand:`serverStatus` -document can provide insight: - -- :data:`globalLock.activeClients - ` contains a counter of the - total number of clients with active operations in progress or - queued. - -- :data:`~serverStatus.connections` is a container for the following - two fields: - - - :data:`~serverStatus.connections.current` the total number of - current clients that connect to the database instance. - - - :data:`~serverStatus.connections.available` the total number of - unused collections available for new clients. - -If requests are high because there are numerous concurrent application -requests, the database may have trouble keeping up with demand. If -this is the case, then you will need to increase the capacity of your -deployment. For read-heavy applications increase the size of your -:term:`replica set` and distribute read operations to -:term:`secondary` members. For write heavy applications, deploy -:term:`sharding` and add one or more :term:`shards ` to a -:term:`sharded cluster` to distribute load among :program:`mongod` -instances. - -Spikes in the number of connections can also be the result of -application or driver errors. All of the officially supported MongoDB -drivers implement connection pooling, which allows clients to use and -reuse connections more efficiently. Extremely high numbers of -connections, particularly without corresponding workload is often -indicative of a driver or other configuration error. - -Unless constrained by system-wide limits MongoDB has no limit on -incoming connections. You can modify system limits -using the ``ulimit`` command, or by editing your system's -``/etc/sysctl`` file. See :doc:`/reference/ulimit` for more -information. - -.. _database-profiling: - -Database Profiling -~~~~~~~~~~~~~~~~~~ - -MongoDB's "Profiler" is a database profiling system that can help identify -inefficient queries and operations. - -The following profiling levels are available: - -.. list-table:: - :header-rows: 1 - - * - **Level** - - - **Setting** - - * - 0 - - - Off. No profiling - - * - 1 - - - On. Only includes *"slow"* operations - - * - 2 - - - On. Includes *all* operations - -Enable the profiler by setting the -:dbcommand:`profile` value using the following command in the -:program:`mongo` shell: - -.. code-block:: javascript - - db.setProfilingLevel(1) - -The :setting:`~operationProfiling.slowOpThresholdMs` setting defines what constitutes a "slow" -operation. To set the threshold above which the profiler considers -operations "slow" (and thus, included in the level ``1`` profiling -data), you can configure :setting:`~operationProfiling.slowOpThresholdMs` at runtime as an argument to -the :method:`db.setProfilingLevel()` operation. - -.. see:: The documentation of :method:`db.setProfilingLevel()` for more - information about this command. - -By default, :program:`mongod` records all "slow" queries to its -:setting:`log `, as defined by :setting:`~operationProfiling.slowOpThresholdMs`. - -.. note:: - - Because the database profiler can negatively impact - performance, only enable profiling for strategic intervals and as - minimally as possible on production systems. - - You may enable profiling on a per-:program:`mongod` basis. This - setting will not propagate across a :term:`replica set` or - :term:`sharded cluster`. 
-
-You can view the output of the profiler in the ``system.profile``
-collection of your database by issuing the ``show profile`` command in
-the :program:`mongo` shell, or with the following operation:
-
-.. code-block:: javascript
-
-   db.system.profile.find( { millis : { $gt : 100 } } )
-
-This returns all operations that lasted longer than 100 milliseconds.
-Ensure that the value specified here (``100``, in this example) is above the
-:setting:`~operationProfiling.slowOpThresholdMs` threshold.
-
-.. seealso:: :doc:`/administration/optimization` addresses strategies
-   that may improve the performance of your database queries and
-   operations.
+.. include:: /includes/intro-performance.rst
diff --git a/source/includes/intro-performance.rst b/source/includes/intro-performance.rst
new file mode 100644
index 00000000000..a871518d1c1
--- /dev/null
+++ b/source/includes/intro-performance.rst
@@ -0,0 +1,4 @@
+As you develop and operate applications with MongoDB, you may want to
+analyze the performance of the application and its database.
+:doc:`/administration/analyzing-mongodb-performance` discusses some of the
+operational factors that can influence performance.
diff --git a/source/includes/toc-administration-optimization.yaml b/source/includes/toc-administration-optimization.yaml
index ef9d4d98cdd..514875d2875 100644
--- a/source/includes/toc-administration-optimization.yaml
+++ b/source/includes/toc-administration-optimization.yaml
@@ -1,3 +1,6 @@
+file: /administration/analyzing-mongodb-performance
+description: "Discusses some of the factors that can influence MongoDB's performance."
+---
 file: /tutorial/evaluate-operation-performance
 description: "MongoDB provides introspection tools that describe the query execution process, to allow users to test queries and build more efficient queries."
 ---
diff --git a/source/reference/command/serverStatus.txt b/source/reference/command/serverStatus.txt
index 824e07795bc..b14b8faa32b 100644
--- a/source/reference/command/serverStatus.txt
+++ b/source/reference/command/serverStatus.txt
@@ -397,6 +397,10 @@ section ` of the
    memory environments and larger data sets. Limited and sporadic
    page faults do not necessarily indicate an issue.
 
+   Windows draws a distinction between "hard" page faults involving disk IO, and
+   "soft" page faults that only require moving pages in memory. MongoDB counts
+   both hard and soft page faults in this statistic.
+
 .. _server-status-backgroundflushing:
 .. _server-status-background-flushing:
 
@@ -1690,6 +1694,6 @@ section ` of the
    the WiredTiger storage engine. These settings are MongoDB-specific.
 
    To change the settings for concurrent
-   reads and write transactions, see 
+   reads and write transactions, see
    :parameter:`wiredTigerConcurrentReadTransactions` and
    :parameter:`wiredTigerConcurrentWriteTransactions`.
diff --git a/source/reference/method/db.collection.find.txt b/source/reference/method/db.collection.find.txt
index 3b7a0d969f3..5a21cea7749 100644
--- a/source/reference/method/db.collection.find.txt
+++ b/source/reference/method/db.collection.find.txt
@@ -22,7 +22,7 @@ Definition
    "returns documents," the method is actually returning a cursor to the
    documents.
 
-   If the ``projection`` argument is specified, the
+   If :method:`~db.collection.find()` receives a ``projection`` argument, the
    matching documents contain only the ``projection`` fields and the
    ``_id`` field. You can optionally exclude the ``_id`` field.
 
@@ -58,7 +58,8 @@ only field that you can *explicitly exclude*.
 
 .. [#formal-query-structure] :method:`db.collection.find()` is a
    wrapper for the more formal query structure that uses the
-   :operator:`$query` operator.
+   :operator:`$query` operator. This operator is necessary to query documents
+   that contain a field named ``query`` whose value is an embedded document.
 
 Examples
 --------
diff --git a/source/reference/operator/meta/query.txt b/source/reference/operator/meta/query.txt
index 4c32976e8e5..b33a9402c41 100644
--- a/source/reference/operator/meta/query.txt
+++ b/source/reference/operator/meta/query.txt
@@ -4,12 +4,15 @@ $query
 
 .. default-domain:: mongodb
 
+Definition
+----------
+
 .. operator:: $query
 
    The :operator:`$query` operator forces MongoDB to interpret an expression
    as a query.
 
-   The following example :program:`mongo` commands are equivalent, and
+   The following :program:`mongo` commands are equivalent, and
    return only those documents in the collection named ``collection`` where the
    ``age`` field equals ``25``.
 
@@ -18,18 +21,30 @@ $query
       db.collection.find( { $query: { age : 25 } } )
       db.collection.find( { age : 25 } )
 
-   This operator is necessary to work with documents containing a subdocument
-   field called ``query``. For example, the following sequence of operations
-   will return no results:
+   :operator:`$query` is necessary to query documents that contain a field
+   named ``query`` whose value is an embedded document, such as the following:
+
+   .. code-block:: javascript
+
+      { _id: 1, age: 25, query: { a: 1 } }
+
+   The following find operation, which does not use the :operator:`$query`
+   operator, returns no results:
 
    .. code-block:: javascript
 
-      db.documents.save( { "query": { a: 1, b: 2 } } )
-      db.documents.find( { query: { a: 1, b: 2 } } )
+      db.documents.find( { query: { a: 1 } } )
 
    To obtain the document, you will need to use the following query:
 
-      db.documents.find( { "$query": { query: { a: 1, b: 2 } } } )
+   .. code-block:: javascript
+
+      db.documents.find( { "$query": { query: { a: 1 } } } )
+
+   .. seealso:: For more information about queries in MongoDB see
+      :doc:`/core/read-operations`,
+      :method:`db.collection.find()`, and `Getting Started with MongoDB
+      `_.
 
    .. note::
 
@@ -46,8 +61,3 @@ $query
 
       db.collection.find( { $query: { age : 25 }, $explain: true } )
      db.collection.find( { age : 25 } ).explain()
-
-   .. seealso:: For more information about queries in MongoDB see
-      :doc:`/core/read-operations`,
-      :method:`db.collection.find()`, and `Getting Started with MongoDB
-      `_.
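+
+   Documents in the ``system.profile`` collection store each profiled
+   operation's predicate in a field named ``query``, so filters on that field
+   must also be wrapped in :operator:`$query`. The exact shape of the recorded
+   predicate depends on the operation; the following is an illustrative
+   sketch:
+
+   .. code-block:: javascript
+
+      db.system.profile.find( { "$query": { query: { age: 25 } } } )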