DOCS-6736: fix unit typography #2475

Closed
wants to merge 1 commit into from
4 changes: 2 additions & 2 deletions source/administration/production-checklist-operations.txt
@@ -235,8 +235,8 @@ Monitoring

 In the absence of disk space monitoring, or as a precaution:

-- Create a dummy 4GB file on the :setting:`storage.dbPath` drive to
-  ensure available space if the disk becomes full.
+- Create a dummy 4 GB file on the :setting:`storage.dbPath` drive
+  to ensure available space if the disk becomes full.

 - A combination of ``cron+df`` can alert when disk space hits a
   high-water mark, if no other monitoring tool is available.
2 changes: 1 addition & 1 deletion source/administration/production-notes.txt
@@ -518,7 +518,7 @@ For the MMAPv1 storage engine:

 - Ensure that readahead settings for the block devices that store the
   database files are appropriate. For random access use patterns, set
-  low readahead values. A readahead of 32 (16kb) often works well.
+  low readahead values. A readahead of 32 (16 KB) often works well.

   For a standard block device, you can run ``sudo blockdev --report``
   to get the readahead settings and ``sudo blockdev --setra <value>
4 changes: 2 additions & 2 deletions source/applications/design-notes.txt
@@ -89,8 +89,8 @@ options. See: the :method:`update` method reference documentation.
 BSON Document Size Limit
 ~~~~~~~~~~~~~~~~~~~~~~~~

-The :limit:`BSON Document Size` limit is currently
-set at 16MB per document. If you require larger documents, use :doc:`GridFS
+The :limit:`BSON Document Size` limit is currently set at 16 MB per
+document. If you require larger documents, use :doc:`GridFS
 </core/gridfs/>`.

 No Fully Generalized Transactions
4 changes: 2 additions & 2 deletions source/core/data-model-operations.txt
@@ -110,7 +110,7 @@ sorted results. MongoDB automatically creates a unique index on the

 As you create indexes, consider the following behaviors of indexes:

-- Each index requires at least 8KB of data space.
+- Each index requires at least 8 KB of data space.

 - Adding an index has some negative performance impact for write
   operations. For collections with high write-to-read ratio, indexes
@@ -160,7 +160,7 @@ the following behaviors:

 - Each collection has a certain minimum overhead of a few kilobytes.

-- Each index, including the index on ``_id``, requires at least 8KB of
+- Each index, including the index on ``_id``, requires at least 8 KB of
   data space.

 - For each :term:`database`, a single namespace file (i.e.
8 changes: 4 additions & 4 deletions source/core/gridfs.txt
@@ -8,12 +8,12 @@ GridFS

 :term:`GridFS` is a specification for storing and retrieving files
 that exceed the :term:`BSON`\-document :ref:`size limit
-<limit-bson-document-size>` of 16MB.
+<limit-bson-document-size>` of 16 MB.

 Instead of storing a file in a single document, GridFS divides a file
 into parts, or chunks, [#chunk-disambiguation]_ and stores each of
 those chunks as a separate document. By default GridFS limits chunk
-size to 255k. GridFS uses two collections to store files. One
+size to 255 KB. GridFS uses two collections to store files. One
 collection stores the file chunks, and the other stores file metadata.

 When you query a GridFS store for a file, the driver or client will
@@ -22,7 +22,7 @@ files stored through GridFS. You also can access information from
 arbitrary sections of files, which allows you to "skip" into the
 middle of a video or audio file.

-GridFS is useful not only for storing files that exceed 16MB but also
+GridFS is useful not only for storing files that exceed 16 MB but also
 for storing any files for which you want access without having to load
 the entire file into memory. For more information on the indications
 of GridFS, see :ref:`faq-developers-when-to-use-gridfs`.
@@ -32,7 +32,7 @@ of GridFS, see :ref:`faq-developers-when-to-use-gridfs`.
 the context of sharding.

 .. versionchanged:: 2.4.10
-   The default chunk size changed from 256k to 255k.
+   The default chunk size changed from 256 KB to 255 KB.

 .. index:: GridFS; initialize
 .. _gridfs-implement:
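As context for the hunks above (not part of this diff): a minimal mongo-shell sketch, assuming a hypothetical 100 MB file, of how the default 255 KB chunk size determines the number of chunk documents GridFS stores.

.. code-block:: javascript

   // Assumes the default 255 KB chunk size described above.
   var chunkSizeBytes = 255 * 1024;
   // Hypothetical 100 MB file, used only for illustration.
   var fileSizeBytes = 100 * 1024 * 1024;
   // Each chunk becomes a separate document; the final chunk may be
   // smaller than chunkSizeBytes.
   var numChunks = Math.ceil(fileSizeBytes / chunkSizeBytes); // 402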
2 changes: 1 addition & 1 deletion source/core/journaling.txt
@@ -60,7 +60,7 @@ following intervals or conditions:
   WiredTiger forces a sync of the WiredTiger log files.

 - Because MongoDB uses a log file size limit of 100 MB, WiredTiger
-  creates a new journal file approximately every 100MB of data. When
+  creates a new journal file approximately every 100 MB of data. When
   WiredTiger creates a new journal file, WiredTiger syncs the previous
   journal file.

2 changes: 1 addition & 1 deletion source/core/master-slave.txt
@@ -124,7 +124,7 @@ you start the ``master`` instance, by adding the
 :program:`mongod`. If you do not specify
 :option:`--oplogSize <mongod --oplogSize>`, :program:`mongod` will
 allocate 5% of available disk space on start up to the oplog, with a
-minimum of 1GB for 64bit machines and 50MB for 32bit machines.
+minimum of 1 GB for 64-bit machines and 50 MB for 32-bit machines.

 Run time Master-Slave Configuration
 -----------------------------------
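As context for the hunk above (not part of this diff): a minimal sketch of the default sizing rule the paragraph describes; the 5% figure and the 1 GB / 50 MB floors come from the text, while the function name and inputs are hypothetical.

.. code-block:: javascript

   // Default oplog size: 5% of available disk space, with a floor of
   // 1 GB on 64-bit machines and 50 MB on 32-bit machines.
   function defaultOplogSizeMB(availableDiskMB, is64Bit) {
     var floorMB = is64Bit ? 1024 : 50;
     return Math.max(0.05 * availableDiskMB, floorMB);
   }

   defaultOplogSizeMB(100 * 1024, true); // 100 GB free -> 5120 MB oplog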
4 changes: 2 additions & 2 deletions source/core/mmapv1.txt
@@ -78,8 +78,8 @@ Power of 2 Sized Allocations
 MongoDB 3.0 uses the power of 2 sizes allocation as the default record
 allocation strategy for MMAPv1. With the power of 2 sizes allocation
 strategy, each record has a size in bytes that is a power of 2 (e.g.
-32, 64, 128, 256, 512 ... 2MB). For documents larger than 2MB, the
-allocation is rounded up to the nearest multiple of 2MB.
+32, 64, 128, 256, 512 ... 2 MB). For documents larger than 2 MB, the
+allocation is rounded up to the nearest multiple of 2 MB.

 The power of 2 sizes allocation strategy has the following key
 properties:
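As context for the hunk above (not part of this diff): a minimal sketch of the allocation rule just described; the function name is hypothetical, and the 32-byte starting point simply mirrors the example series in the text.

.. code-block:: javascript

   // Power of 2 sizes allocation: round a document's size up to the
   // next power of 2 (32, 64, 128, ... 2 MB); documents larger than
   // 2 MB round up to the nearest multiple of 2 MB.
   function recordAllocationSize(docBytes) {
     var twoMB = 2 * 1024 * 1024;
     if (docBytes > twoMB) {
       return Math.ceil(docBytes / twoMB) * twoMB;
     }
     var size = 32;
     while (size < docBytes) {
       size *= 2;
     }
     return size;
   }

   recordAllocationSize(300);             // 512 bytes
   recordAllocationSize(5 * 1024 * 1024); // 6 MB (3 x 2 MB)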
4 changes: 2 additions & 2 deletions source/core/sharding-introduction.txt
@@ -53,8 +53,8 @@ and large data sets:
   Each shard stores less data as the cluster grows.

   For example, if a database has a 1 terabyte data set, and there are
-  4 shards, then each shard might hold only 256GB of data. If there
-  are 40 shards, then each shard might hold only 25GB of data.
+  4 shards, then each shard might hold only 256 GB of data. If there
+  are 40 shards, then each shard might hold only 25 GB of data.

 Sharding in MongoDB
 -------------------
2 changes: 1 addition & 1 deletion source/faq/diagnostics.txt
@@ -163,7 +163,7 @@ To calculate how much RAM you need, you must calculate your working set
 size, or the portion of your data that clients use most often. This
 depends on your access patterns, what indexes you have, and the size of
 your documents. Because MongoDB uses a thread per connection model, each
-database connection also will need up to 1MB of RAM, whether active or idle.
+database connection also will need up to 1 MB of RAM, whether active or idle.

 If page faults are infrequent, your
 working set fits in RAM. If fault rates rise higher than that, you risk
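Aside, not part of this diff: a rough sketch combining a working-set estimate with the up-to-1-MB-per-connection figure from the paragraph above; both input values are hypothetical placeholders.

.. code-block:: javascript

   // Rough RAM estimate: working set plus up to 1 MB per connection.
   var workingSetMB = 8 * 1024; // e.g. an assumed 8 GB working set
   var connections = 2000;      // assumed connection count
   var estimatedRamMB = workingSetMB + connections * 1; // 10240 MB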
4 changes: 2 additions & 2 deletions source/includes/extracts-wired-tiger.yaml
@@ -30,7 +30,7 @@ content: |
   seconds or 2 GB of data, depending on which occurs first.

   - Because MongoDB uses a log file size limit of 100 MB, WiredTiger
-    creates a new journal file approximately every 100MB of data. When
+    creates a new journal file approximately every 100 MB of data. When
     WiredTiger creates a new journal file, WiredTiger syncs the previous
     journal file.

@@ -149,6 +149,6 @@ content: |
   more than 10 GB of RAM, the configuration is greater than the 3.0
   setting.

-  In MongoDB 3.0, the WiredTiger cache, by default, uses either 1GB
+  In MongoDB 3.0, the WiredTiger cache, by default, uses either 1 GB
   or half of the installed physical RAM, whichever is larger.
 ...
6 changes: 3 additions & 3 deletions source/includes/note-suse-ulimit.rst
@@ -1,8 +1,8 @@
 .. note::

-   SUSE Linux Enterprise Server 11 and potentially other versions of SLES
-   and other SUSE distributions ship with virtual memory address space limited
-   to 8GB by default. This *must* be adjusted in order to prevent virtual memory
+   SUSE Linux Enterprise Server 11 and potentially other versions of SLES and
+   other SUSE distributions ship with virtual memory address space limited to 8
+   GB by default. This *must* be adjusted in order to prevent virtual memory
    allocation failures as the database grows.

 The SLES packages for MongoDB adjust these limits in the default scripts,
2 changes: 1 addition & 1 deletion source/includes/toc-data-modeling-concepts.yaml
@@ -12,5 +12,5 @@ description: |
 file: /core/gridfs
 description: |
   GridFS is a specification for storing documents that exceeds the
-  :term:`BSON`\-document size limit of 16MB.
+  :term:`BSON`\-document size limit of 16 MB.
 ...
2 changes: 1 addition & 1 deletion source/reference/command/compact.txt
@@ -89,7 +89,7 @@ Specifying ``paddingBytes`` can be useful if your documents start small
 but then increase in size significantly.

 For example, if your documents
-are initially 40 bytes long and you grow them by 1KB, using
+are initially 40 bytes long and you grow them by 1 KB, using
 ``paddingBytes: 1024`` might be reasonable since using ``paddingFactor:
 4.0`` would specify a record size of 160 bytes (``4.0`` times the
 initial document size), which would only provide a padding of 120 bytes
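As context for the hunk above (not part of this diff): the worked example as a concrete mongo-shell command; the collection name ``events`` is hypothetical.

.. code-block:: javascript

   // Reserve a fixed 1024 bytes of padding per document, rather than
   // the 120 bytes that paddingFactor: 4.0 would leave for 40-byte
   // documents (160 - 40 = 120).
   db.runCommand({ compact: "events", paddingBytes: 1024 })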
2 changes: 1 addition & 1 deletion source/reference/gridfs.txt
@@ -115,7 +115,7 @@ following fields. Applications may create additional arbitrary fields:
 the size specified here. The default size is 255 kilobytes.

 .. versionchanged:: 2.4.10
-   The default chunk size changed from 256k to 255k.
+   The default chunk size changed from 256 KB to 255 KB.

 .. data:: files.uploadDate

2 changes: 1 addition & 1 deletion source/reference/method/db.setProfilingLevel.txt
@@ -22,7 +22,7 @@ Definition

 Configure the :setting:`~operationProfiling.slowOpThresholdMs` option to set the threshold
 for the profiler to consider a query "slow." Specify this value in
-milliseconds to override the default, 100ms.
+milliseconds to override the default, 100 ms.

 :program:`mongod` writes the output of the database profiler to the
 ``system.profile`` collection.
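As context for the hunk above (not part of this diff): a short mongo-shell example of overriding the default threshold; the 200 ms value is arbitrary.

.. code-block:: javascript

   // Profile operations slower than 200 ms instead of the default 100 ms.
   db.setProfilingLevel(1, 200)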
2 changes: 1 addition & 1 deletion source/tutorial/troubleshoot-replica-sets.txt
@@ -223,7 +223,7 @@ connect to the member in a :program:`mongo` shell and run the

 The output displays the size of the oplog and the date ranges of the
 operations contained in the oplog. In the following example, the oplog
-is about 10MB and is able to fit about 26 hours (94400 seconds) of
+is about 10 MB and is able to fit about 26 hours (94400 seconds) of
 operations:

 .. code-block:: javascript