From 39121044f0ee935824fa6327efb29984c0f82f98 Mon Sep 17 00:00:00 2001
From: awaelchli
Date: Thu, 21 Jul 2022 04:52:17 +0200
Subject: [PATCH 1/2] Fix PyTorch spelling errors

---
 .github/CONTRIBUTING.md                                     | 2 +-
 .github/ISSUE_TEMPLATE/documentation.md                     | 2 +-
 .github/ISSUE_TEMPLATE/feature_request.md                   | 2 +-
 .github/ISSUE_TEMPLATE/refactor.md                          | 2 +-
 .github/stale.yml                                           | 2 +-
 dockers/base-xla/Dockerfile                                 | 2 +-
 docs/source-app/get_started/training_with_apps.rst          | 4 ++--
 docs/source-pytorch/deploy/production_advanced_2.rst        | 2 +-
 docs/source-pytorch/ecosystem/asr_nlp_tts.rst               | 2 +-
 examples/pl_domain_templates/computer_vision_fine_tuning.py | 2 +-
 src/lightning_app/components/python/tracer.py               | 2 +-
 src/pytorch_lightning/overrides/distributed.py              | 2 +-
 src/pytorch_lightning/strategies/fully_sharded_native.py    | 2 +-
 src/pytorch_lightning/trainer/trainer.py                    | 2 +-
 tests/tests_pytorch/helpers/datasets.py                     | 2 +-
 15 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 1d47028bfef89..7bec2d8763afd 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -225,7 +225,7 @@ git push -f
 
 #### How to add new tests?
 
-We are using [pytest](https://docs.pytest.org/en/stable/) in Pytorch Lightning.
+We are using [pytest](https://docs.pytest.org/en/stable/) in PyTorch Lightning.
 
 Here are tutorials:
 
diff --git a/.github/ISSUE_TEMPLATE/documentation.md b/.github/ISSUE_TEMPLATE/documentation.md
index 9336d4bd35415..8f94ee921e7ee 100644
--- a/.github/ISSUE_TEMPLATE/documentation.md
+++ b/.github/ISSUE_TEMPLATE/documentation.md
@@ -30,4 +30,4 @@ ______________________________________________________________________
 
 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
 
-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md
index 77f5bac403d72..0d506dd923087 100644
--- a/.github/ISSUE_TEMPLATE/feature_request.md
+++ b/.github/ISSUE_TEMPLATE/feature_request.md
@@ -38,4 +38,4 @@ ______________________________________________________________________
 
 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
 
-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
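The CONTRIBUTING hunk above points contributors at pytest. For orientation, here is a minimal sketch of a pytest-style test in the spirit of the ones under `tests/`; the test name, model, and shapes are illustrative, not taken from the repository:

```python
import pytest
import torch


@pytest.mark.parametrize("batch_size", [1, 4])
def test_linear_output_shape(batch_size):
    # Illustrative check only: a plain linear layer maps
    # (batch_size, 32) inputs to (batch_size, 2) outputs.
    model = torch.nn.Linear(32, 2)
    out = model(torch.randn(batch_size, 32))
    assert out.shape == (batch_size, 2)
```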
diff --git a/.github/ISSUE_TEMPLATE/refactor.md b/.github/ISSUE_TEMPLATE/refactor.md
index 7df1c3002665e..159a4ce8d651b 100644
--- a/.github/ISSUE_TEMPLATE/refactor.md
+++ b/.github/ISSUE_TEMPLATE/refactor.md
@@ -34,4 +34,4 @@ ______________________________________________________________________
 
 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
 
-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
diff --git a/.github/stale.yml b/.github/stale.yml
index 51b57c079879d..a1fb9abfc9257 100644
--- a/.github/stale.yml
+++ b/.github/stale.yml
@@ -14,7 +14,7 @@ issues:
   markComment: >
     This issue has been automatically marked as stale because it hasn't had any recent activity.
     This issue will be closed in 7 days if no further activity occurs.
-    Thank you for your contributions, Pytorch Lightning Team!
+    Thank you for your contributions, PyTorch Lightning Team!
 
   # Comment to post when closing a stale issue. Set to `false` to disable
   closeComment: false
diff --git a/dockers/base-xla/Dockerfile b/dockers/base-xla/Dockerfile
index 13da7c22086d8..977aee878ffcd 100644
--- a/dockers/base-xla/Dockerfile
+++ b/dockers/base-xla/Dockerfile
@@ -77,7 +77,7 @@ ENV \
 RUN pip --version && \
     pip config set global.cache-dir false && \
     conda remove pytorch torchvision && \
-    # Install Pytorch XLA
+    # Install PyTorch XLA
     py_version=${PYTHON_VERSION/./} && \
     gsutil cp "gs://tpu-pytorch/wheels/torch-${XLA_VERSION}-cp${py_version}-cp${py_version}m-linux_x86_64.whl" . && \
     gsutil cp "gs://tpu-pytorch/wheels/torch_xla-${XLA_VERSION}-cp${py_version}-cp${py_version}m-linux_x86_64.whl" . && \
diff --git a/docs/source-app/get_started/training_with_apps.rst b/docs/source-app/get_started/training_with_apps.rst
index a7061cae562fb..f509ba4cf0267 100644
--- a/docs/source-app/get_started/training_with_apps.rst
+++ b/docs/source-app/get_started/training_with_apps.rst
@@ -8,7 +8,7 @@ Evolve a model into an ML system
 
 **Required background:** Basic Python familiarity and complete the :ref:`build_model` guide.
 
-**Goal:** We'll walk you through the two key steps to build your first Lightning App from your existing Pytorch Lightning scripts.
+**Goal:** We'll walk you through the two key steps to build your first Lightning App from your existing PyTorch Lightning scripts.
 
 .. join_slack::
    :align: left
@@ -50,7 +50,7 @@ Inside the ``app.py`` file, add the following code.
 
 .. literalinclude:: ../code_samples/convert_pl_to_app/app.py
 
-This App runs the Pytorch Lightning script contained in the ``train.py`` file using the powerful :class:`~lightning_app.components.python.tracer.TracerPythonScript` component. This is really worth checking out!
+This App runs the PyTorch Lightning script contained in the ``train.py`` file using the powerful :class:`~lightning_app.components.python.tracer.TracerPythonScript` component. This is really worth checking out!
 
 ----
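For context on the ``training_with_apps.rst`` hunk above: the ``app.py`` it describes wraps an existing training script in a ``TracerPythonScript``. A minimal sketch, assuming the ``lightning_app`` API of this release; the flow class and attribute names are illustrative:

```python
from lightning_app import LightningApp, LightningFlow
from lightning_app.components.python import TracerPythonScript


class RootFlow(LightningFlow):
    def __init__(self):
        super().__init__()
        # Wrap the existing PyTorch Lightning script; the component
        # traces and executes it when the App runs.
        self.script_runner = TracerPythonScript("train.py")

    def run(self):
        self.script_runner.run()


app = LightningApp(RootFlow())
```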
diff --git a/docs/source-pytorch/deploy/production_advanced_2.rst b/docs/source-pytorch/deploy/production_advanced_2.rst
index 5f6fe58d6ef72..ea5ca9fd24a8b 100644
--- a/docs/source-pytorch/deploy/production_advanced_2.rst
+++ b/docs/source-pytorch/deploy/production_advanced_2.rst
@@ -34,7 +34,7 @@ can save or directly use.
 It is recommended that you install the latest supported version of PyTorch
 to use this feature without limitations.
 
-Once you have the exported model, you can run it in Pytorch or C++ runtime:
+Once you have the exported model, you can run it in PyTorch or C++ runtime:
 
 .. code-block:: python
 
diff --git a/docs/source-pytorch/ecosystem/asr_nlp_tts.rst b/docs/source-pytorch/ecosystem/asr_nlp_tts.rst
index b624696886c73..abec585df6ff7 100644
--- a/docs/source-pytorch/ecosystem/asr_nlp_tts.rst
+++ b/docs/source-pytorch/ecosystem/asr_nlp_tts.rst
@@ -48,7 +48,7 @@ so that each can be configured from .yaml or the Hydra CLI.
 
 .. note:: Every NeMo model has an example configuration file and a corresponding script that contains all configurations needed for training.
 
-The end result of using NeMo, Pytorch Lightning, and Hydra is that
+The end result of using NeMo, PyTorch Lightning, and Hydra is that
 NeMo models all have the same look and feel. This makes it easy to do Conversational AI research
 across multiple domains. NeMo models are also fully compatible with the PyTorch ecosystem.
diff --git a/examples/pl_domain_templates/computer_vision_fine_tuning.py b/examples/pl_domain_templates/computer_vision_fine_tuning.py
index dc31d79ab0032..8556a77a110f7 100644
--- a/examples/pl_domain_templates/computer_vision_fine_tuning.py
+++ b/examples/pl_domain_templates/computer_vision_fine_tuning.py
@@ -150,7 +150,7 @@ def val_dataloader(self):
         return self.__dataloader(train=False)
 
 
-# --- Pytorch-lightning module ---
+# --- PyTorch Lightning module ---
 
 
 class TransferLearningModel(LightningModule):
diff --git a/src/lightning_app/components/python/tracer.py b/src/lightning_app/components/python/tracer.py
index 5605eee6b6d47..ed692c7f3ed27 100644
--- a/src/lightning_app/components/python/tracer.py
+++ b/src/lightning_app/components/python/tracer.py
@@ -79,7 +79,7 @@ def __init__(
         This callback has a reference to the work and on every batch end, we are capturing the
         trainer ``global_step`` and ``best_model_path``.
 
-        Even more interesting, this component works for ANY Pytorch Lightning script and
+        Even more interesting, this component works for ANY PyTorch Lightning script and
         its state can be used in real time in a UI.
 
         .. literalinclude:: ../../../../examples/app_components/python/component_tracer.py
diff --git a/src/pytorch_lightning/overrides/distributed.py b/src/pytorch_lightning/overrides/distributed.py
index 8048d83252af7..f09a7b9e3ae08 100644
--- a/src/pytorch_lightning/overrides/distributed.py
+++ b/src/pytorch_lightning/overrides/distributed.py
@@ -41,7 +41,7 @@ def _find_tensors(
 
 
 # In manual_optimization, we need to call reducer prepare_for_backward.
-# Note: Keep track of Pytorch DDP and update if there is a change
+# Note: Keep track of PyTorch DDP and update if there is a change
 # https://github.com/pytorch/pytorch/blob/v1.7.1/torch/nn/parallel/distributed.py#L626-L638
 def prepare_for_backward(model: DistributedDataParallel, output: Any) -> None:
     # `prepare_for_backward` is `DistributedDataParallel` specific.
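Regarding the ``production_advanced_2.rst`` hunk above, which mentions running an exported model in the PyTorch or C++ runtime: on the Python side this comes down to ``torch.jit.load``. A minimal sketch; the file name and input shape are illustrative:

```python
import torch

# Load a TorchScript model exported earlier, e.g. via
# LightningModule.to_torchscript(), and run it without Lightning.
model = torch.jit.load("model.pt")
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 64))
```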
diff --git a/src/pytorch_lightning/strategies/fully_sharded_native.py b/src/pytorch_lightning/strategies/fully_sharded_native.py
index 7528d5b95903e..d70187cbdbb1f 100644
--- a/src/pytorch_lightning/strategies/fully_sharded_native.py
+++ b/src/pytorch_lightning/strategies/fully_sharded_native.py
@@ -85,7 +85,7 @@ def __init__(
         `For more information: https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/`.
 
         .. warning:: ``DDPFullyShardedNativeStrategy`` is in beta and subject to change. The interface can
-            bring breaking changes and new features with the next release of Pytorch.
+            bring breaking changes and new features with the next release of PyTorch.
 
         Defaults have been set and options have been exposed, but may require configuration
         based on your level of memory/speed efficiency. We suggest having a look at this tutorial for
diff --git a/src/pytorch_lightning/trainer/trainer.py b/src/pytorch_lightning/trainer/trainer.py
index 46e991d1bbbab..cca7272b214dd 100644
--- a/src/pytorch_lightning/trainer/trainer.py
+++ b/src/pytorch_lightning/trainer/trainer.py
@@ -256,7 +256,7 @@ def __init__(
 
             deterministic: If ``True``, sets whether PyTorch operations must use deterministic algorithms.
                 Set to ``"warn"`` to use deterministic algorithms whenever possible, throwing warnings on operations
-                that don't support deterministic mode (requires Pytorch 1.11+). If not set, defaults to ``False``.
+                that don't support deterministic mode (requires PyTorch 1.11+). If not set, defaults to ``False``.
                 Default: ``None``.
 
             devices: Will be mapped to either `gpus`, `tpu_cores`, `num_processes` or `ipus`,
diff --git a/tests/tests_pytorch/helpers/datasets.py b/tests/tests_pytorch/helpers/datasets.py
index 2366145004c6d..3443020d4528f 100644
--- a/tests/tests_pytorch/helpers/datasets.py
+++ b/tests/tests_pytorch/helpers/datasets.py
@@ -23,7 +23,7 @@
 
 
 class MNIST(Dataset):
-    """Customized `MNIST `_ dataset for testing Pytorch Lightning without the
+    """Customized `MNIST `_ dataset for testing PyTorch Lightning without the
     torchvision dependency.
 
     Part of the code was copied from

From 5a685b09cc160409ef1bbaa5612633d4fb41be36 Mon Sep 17 00:00:00 2001
From: awaelchli
Date: Fri, 22 Jul 2022 12:41:51 +0200
Subject: [PATCH 2/2] more

---
 src/pytorch_lightning/CHANGELOG.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/pytorch_lightning/CHANGELOG.md b/src/pytorch_lightning/CHANGELOG.md
index 66c249db1456a..5b2120427ca1d 100644
--- a/src/pytorch_lightning/CHANGELOG.md
+++ b/src/pytorch_lightning/CHANGELOG.md
@@ -1772,7 +1772,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - `period` has been deprecated in favor of `every_n_val_epochs` in the `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))
 - Deprecated `trainer.running_sanity_check` in favor of `trainer.sanity_checking` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
 - Deprecated `Profiler(output_filename)` in favor of `dirpath` and `filename` ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))
-- Deprecated `PytorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
+- Deprecated `PyTorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
 - Deprecated `@auto_move_data` in favor of `trainer.predict` ([#6993](https://github.com/PyTorchLightning/pytorch-lightning/pull/6993))
 - Deprecated `Callback.on_load_checkpoint(checkpoint)` in favor of `Callback.on_load_checkpoint(trainer, pl_module, checkpoint)` ([#7253](https://github.com/PyTorchLightning/pytorch-lightning/pull/7253))
 - Deprecated metrics in favor of `torchmetrics` (
@@ -2358,7 +2358,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
     [#4737](https://github.com/PyTorchLightning/pytorch-lightning/pull/4737),
     [#4773](https://github.com/PyTorchLightning/pytorch-lightning/pull/4773))
 - Added `experiment_id` to the NeptuneLogger ([#3462](https://github.com/PyTorchLightning/pytorch-lightning/pull/3462))
-- Added `Pytorch Geometric` integration example with Lightning ([#4568](https://github.com/PyTorchLightning/pytorch-lightning/pull/4568))
+- Added `PyTorch Geometric` integration example with Lightning ([#4568](https://github.com/PyTorchLightning/pytorch-lightning/pull/4568))
 - Added `all_gather` method to `LightningModule` which allows gradient based tensor synchronizations for use-cases such as negative sampling. ([#5012](https://github.com/PyTorchLightning/pytorch-lightning/pull/5012))
 - Enabled `self.log` in most functions ([#4969](https://github.com/PyTorchLightning/pytorch-lightning/pull/4969))
 - Added changeable extension variable for `ModelCheckpoint` ([#4977](https://github.com/PyTorchLightning/pytorch-lightning/pull/4977))
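For readers tracking the ``PyTorchProfiler`` deprecation recorded above, the rename in #6349 amounts to passing ``record_functions`` instead of ``profiled_functions``. A minimal sketch, assuming the profiler API of that release; the function name in the set is illustrative:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.profiler import PyTorchProfiler

# Before (deprecated): PyTorchProfiler(profiled_functions=["training_step"])
# After the rename:
profiler = PyTorchProfiler(record_functions={"training_step"})
trainer = Trainer(profiler=profiler)
```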