
Commit cae1489

docs: minor spelling tweaks
1 parent aeaa6b2 commit cae1489


8 files changed: +14, -14 lines changed


docs/source/accelerators.rst

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ To link up arbitrary hardware, implement your own Accelerator subclass
     class MyAccelerator(Accelerator):
         def __init__(self, trainer, cluster_environment=None):
             super().__init__(trainer, cluster_environment)
-            self.nickname = 'my_accelator'
+            self.nickname = 'my_accelerator'

         def setup(self):
             # find local rank, etc, custom things to implement
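For context, the snippet corrected above comes from the guide on hooking up arbitrary hardware through a custom Accelerator subclass. Below is a hedged sketch of how such a subclass might be instantiated and handed to the Trainer; the import path and the ability to pass an Accelerator instance via the accelerator argument are assumptions about this release, not something stated in the diff.

    from pytorch_lightning import Trainer
    from pytorch_lightning.accelerators import Accelerator  # import path assumed for this release

    class MyAccelerator(Accelerator):
        def __init__(self, trainer=None, cluster_environment=None):
            super().__init__(trainer, cluster_environment)
            self.nickname = 'my_accelerator'

        def setup(self):
            # find local rank, etc., custom things to implement
            pass

    # assumption: the Trainer of this release accepts a custom Accelerator instance directly
    trainer = Trainer(accelerator=MyAccelerator())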

docs/source/asr_nlp_tts.rst

Lines changed: 4 additions & 4 deletions
@@ -324,13 +324,13 @@ that are included with NeMo:
 - `Language Modeling (BERT Pretraining) <https://github.com/NVIDIA/NeMo/blob/v1.0.0b1/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb>`_
 - `Question Answering <https://github.com/NVIDIA/NeMo/blob/v1.0.0b1/tutorials/nlp/Question_Answering_Squad.ipynb>`_
 - `Text Classification <https://github.com/NVIDIA/NeMo/tree/v1.0.0b1/examples/nlp/text_classification>`_ (including Sentiment Analysis)
-- `Token Classifcation <https://github.com/NVIDIA/NeMo/tree/v1.0.0b1/examples/nlp/token_classification>`_ (including Named Entity Recognition)
+- `Token Classification <https://github.com/NVIDIA/NeMo/tree/v1.0.0b1/examples/nlp/token_classification>`_ (including Named Entity Recognition)
 - `Punctuation and Capitalization <https://github.com/NVIDIA/NeMo/blob/v1.0.0b1/tutorials/nlp/Punctuation_and_Capitalization.ipynb>`_

 Named Entity Recognition (NER)
 ------------------------------

-NER (or more generally token classifcation) is the NLP task of detecting and classifying key information (entities) in text.
+NER (or more generally token classification) is the NLP task of detecting and classifying key information (entities) in text.
 This task is very popular in Healthcare and Finance. In finance, for example, it can be important to identify
 geographical, geopolitical, organizational, persons, events, and natural phenomenon entities.
 See this `NER notebook <https://github.com/NVIDIA/NeMo/blob/v1.0.0b1/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb>`_

@@ -435,7 +435,7 @@ Hydra makes every aspect of the NeMo model, including the PyTorch Lightning Trai
 Tokenizers
 ----------

-Tokenization is the process of converting natural langauge text into integer arrays
+Tokenization is the process of converting natural language text into integer arrays
 which can be used for machine learning.
 For NLP tasks, tokenization is an essential part of data preprocessing.
 NeMo supports all BERT-like model tokenizers from

@@ -462,7 +462,7 @@ Much of the state-of-the-art in natural language processing is achieved
 by fine-tuning pretrained language models on the downstream task.

 With NeMo, you can either `pretrain <https://github.com/NVIDIA/NeMo/blob/v1.0.0b1/examples/nlp/language_modeling/bert_pretraining.py>`_
-a BERT model on your data or use a pretrained lanugage model from `HuggingFace Transformers <https://github.com/huggingface/transformers>`_
+a BERT model on your data or use a pretrained language model from `HuggingFace Transformers <https://github.com/huggingface/transformers>`_
 or `NVIDIA Megatron-LM <https://github.com/NVIDIA/Megatron-LM>`_.

 To see the list of language models available in NeMo:
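The tokenizer passage fixed above describes converting natural language text into integer arrays. As a quick illustration of that idea, here is a minimal sketch using a HuggingFace tokenizer of the kind NeMo wraps; the model name is only an example, not something taken from the NeMo docs.

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    text = "NeMo converts text into integer arrays."
    ids = tokenizer.encode(text)                   # list of integer token ids, e.g. [101, ..., 102]
    tokens = tokenizer.convert_ids_to_tokens(ids)  # the corresponding sub-word tokens
    print(ids)
    print(tokens)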

docs/source/bolts.rst

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ Example 1: Pretrained, prebuilt models
 Example 2: Extend for faster research
 -------------------------------------
 Bolts are contributed with benchmarks and continuous-integration tests. This means
-you can trust the implementations and use them to bootstrap your resarch much faster.
+you can trust the implementations and use them to bootstrap your research much faster.

 .. code-block:: python

docs/source/loggers.rst

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ Loggers
 *******

 Lightning supports the most popular logging frameworks (TensorBoard, Comet, etc...). TensorBoard is used by default,
-but you can pass to the :class:`~pytorch_lightning.trainer.trainer.Trainer` any combintation of the following loggers.
+but you can pass to the :class:`~pytorch_lightning.trainer.trainer.Trainer` any combination of the following loggers.

 .. note::

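The loggers passage fixed above says any combination of loggers can be handed to the Trainer. A short sketch of what that looks like; the save directories and experiment name are made up, and CSVLogger is used here only as a second example logger.

    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import TensorBoardLogger, CSVLogger

    tb_logger = TensorBoardLogger("tb_logs", name="my_model")
    csv_logger = CSVLogger("csv_logs", name="my_model")

    # a single logger or a list of loggers is accepted; TensorBoard alone is the default
    trainer = Trainer(logger=[tb_logger, csv_logger])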
docs/source/lr_finder.rst

Lines changed: 1 addition & 1 deletion
@@ -102,7 +102,7 @@ method of the trainer. A typical example of this would look like
     trainer.fit(model)

 The figure produced by ``lr_finder.plot()`` should look something like the figure
-below. It is recommended to not pick the learning rate that achives the lowest
+below. It is recommended to not pick the learning rate that achieves the lowest
 loss, but instead something in the middle of the sharpest downward slope (red point).
 This is the point returned py ``lr_finder.suggestion()``.

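For reference, the lr_finder object mentioned in the corrected sentence comes from the learning-rate finder described earlier in that document. A hedged sketch of the surrounding workflow, assuming the tuner API of this release and a LightningModule called model defined elsewhere:

    from pytorch_lightning import Trainer

    trainer = Trainer()
    lr_finder = trainer.tuner.lr_find(model)  # assumption: the finder is exposed via trainer.tuner

    fig = lr_finder.plot(suggest=True)        # the figure discussed in the passage above
    new_lr = lr_finder.suggestion()           # a point on the sharpest downward slope, not the minimum loss
    model.hparams.lr = new_lr
    trainer.fit(model)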
docs/source/metrics.rst

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ common metric implementations.

 The metrics API provides ``update()``, ``compute()``, ``reset()`` functions to the user. The metric base class inherits
 ``nn.Module`` which allows us to call ``metric(...)`` directly. The ``forward()`` method of the base ``Metric`` class
-serves the dual purpose of calling ``update()`` on its input and simultanously returning the value of the metric over the
+serves the dual purpose of calling ``update()`` on its input and simultaneously returning the value of the metric over the
 provided input.

 These metrics work with DDP in PyTorch and PyTorch Lightning by default. When ``.compute()`` is called in
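The metrics passage fixed above explains that forward() both calls update() and returns the metric value for its input. A minimal sketch with the built-in Accuracy metric of that API generation:

    import torch
    from pytorch_lightning.metrics import Accuracy

    accuracy = Accuracy()
    preds = torch.tensor([0, 1, 1, 0])
    target = torch.tensor([0, 1, 0, 0])

    batch_acc = accuracy(preds, target)  # forward(): calls update() and returns the value for this batch
    total_acc = accuracy.compute()       # value accumulated over every update() so far
    accuracy.reset()                     # clear the accumulated state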
docs/source/trainer.rst

Lines changed: 2 additions & 2 deletions
@@ -224,7 +224,7 @@ The accelerator backend to use (previously known as distributed_backend).
 - (```ddp```) is DistributedDataParallel (each gpu on each node trains, and syncs grads)
 - (```ddp_cpu```) is DistributedDataParallel on CPU (same as `ddp`, but does not use GPUs.
   Useful for multi-node CPU training or single-node debugging. Note that this will **not** give
-  a speedup on a single node, since Torch already makes effient use of multiple CPUs on a single
+  a speedup on a single node, since Torch already makes efficient use of multiple CPUs on a single
   machine.)
 - (```ddp2```) dp on node, ddp across nodes. Useful for things like increasing
   the number of negative samples

@@ -971,7 +971,7 @@ Number of processes to train with. Automatically set to the number of GPUs
 when using ``accelerator="ddp"``. Set to a number greater than 1 when
 using ``accelerator="ddp_cpu"`` to mimic distributed training on a
 machine without GPUs. This is useful for debugging, but **will not** provide
-any speedup, since single-process Torch already makes effient use of multiple
+any speedup, since single-process Torch already makes efficient use of multiple
 CPUs.

 .. testcode::
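Both trainer.rst passages corrected above concern ddp_cpu. A one-line sketch of the flag combination they describe, useful only for debugging distributed logic on a machine without GPUs:

    from pytorch_lightning import Trainer

    # no speedup is expected on a single machine; Torch already uses its CPUs efficiently
    trainer = Trainer(accelerator="ddp_cpu", num_processes=2)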

docs/source/training_tricks.rst

Lines changed: 3 additions & 3 deletions
@@ -110,11 +110,11 @@ The algorithm in short works by:
 2. Iteratively until convergence or maximum number of tries `max_trials` (default 25) has been reached:
     - Call `fit()` method of trainer. This evaluates `steps_per_trial` (default 3) number of
       training steps. Each training step can trigger an OOM error if the tensors
-      (training batch, weights, gradients ect.) allocated during the steps have a
+      (training batch, weights, gradients, etc.) allocated during the steps have a
       too large memory footprint.
     - If an OOM error is encountered, decrease batch size else increase it.
-      How much the batch size is increased/decreased is determined by the choosen
-      stratrgy.
+      How much the batch size is increased/decreased is determined by the chosen
+      strategy.
 3. The found batch size is saved to either `model.batch_size` or `model.hparams.batch_size`
 4. Restore the initial state of model and trainer
