Merged
Changes from all commits
2 changes: 1 addition & 1 deletion .github/CONTRIBUTING.md
@@ -225,7 +225,7 @@ git push -f

 #### How to add new tests?

-We are using [pytest](https://docs.pytest.org/en/stable/) in Pytorch Lightning.
+We are using [pytest](https://docs.pytest.org/en/stable/) in PyTorch Lightning.

 Here are tutorials:
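The CONTRIBUTING hunk above points new contributors at pytest. As a minimal, hedged sketch of the testing style it refers to — plain functions prefixed with `test_` that pytest discovers and runs — with a hypothetical helper standing in for real repository code:

```python
# Hypothetical helper, standing in for any small utility under test;
# it is not part of the Lightning repository.
def clip_grad_norms(norms, max_norm):
    """Scale a list of gradient norms so that none exceeds max_norm."""
    scale = min(1.0, max_norm / max(norms))
    return [n * scale for n in norms]


def test_clip_grad_norms_scales_down():
    clipped = clip_grad_norms([2.0, 4.0], max_norm=2.0)
    assert max(clipped) <= 2.0


def test_clip_grad_norms_noop_when_under_limit():
    assert clip_grad_norms([0.5, 1.0], max_norm=2.0) == [0.5, 1.0]
```

Saved as `test_something.py`, these run with a bare `pytest` invocation from the repository root.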
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/documentation.md
@@ -30,4 +30,4 @@ ______________________________________________________________________

 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.

-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/feature_request.md
@@ -38,4 +38,4 @@ ______________________________________________________________________

 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.

-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/refactor.md
@@ -34,4 +34,4 @@ ______________________________________________________________________

 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.

-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
2 changes: 1 addition & 1 deletion .github/stale.yml
@@ -14,7 +14,7 @@ issues:
   markComment: >
     This issue has been automatically marked as stale because it hasn't had any recent activity.
     This issue will be closed in 7 days if no further activity occurs.
-    Thank you for your contributions, Pytorch Lightning Team!
+    Thank you for your contributions, PyTorch Lightning Team!
   # Comment to post when closing a stale issue. Set to `false` to disable
   closeComment: false
2 changes: 1 addition & 1 deletion dockers/base-xla/Dockerfile
@@ -77,7 +77,7 @@ ENV \
 RUN pip --version && \
     pip config set global.cache-dir false && \
     conda remove pytorch torchvision && \
-    # Install Pytorch XLA
+    # Install PyTorch XLA
     py_version=${PYTHON_VERSION/./} && \
     gsutil cp "gs://tpu-pytorch/wheels/torch-${XLA_VERSION}-cp${py_version}-cp${py_version}m-linux_x86_64.whl" . && \
     gsutil cp "gs://tpu-pytorch/wheels/torch_xla-${XLA_VERSION}-cp${py_version}-cp${py_version}m-linux_x86_64.whl" . && \
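The `py_version=${PYTHON_VERSION/./}` line in the hunk above uses bash pattern substitution to drop the dot from the Python version so it matches the `cpXX` tags in the wheel filenames. A standalone sketch of that expansion (bash-specific, not POSIX sh; the version value here is just an example):

```shell
# Bash pattern substitution: ${var/pattern/replacement} replaces the
# first match of pattern; an empty replacement deletes it.
PYTHON_VERSION="3.8"
py_version=${PYTHON_VERSION/./}              # "3.8" -> "38"
echo "cp${py_version}-cp${py_version}m"      # cp38-cp38m, as in the wheel names
```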
4 changes: 2 additions & 2 deletions docs/source-app/get_started/training_with_apps.rst
@@ -8,7 +8,7 @@ Evolve a model into an ML system

 **Required background:** Basic Python familiarity and complete the :ref:`build_model` guide.

-**Goal:** We'll walk you through the two key steps to build your first Lightning App from your existing Pytorch Lightning scripts.
+**Goal:** We'll walk you through the two key steps to build your first Lightning App from your existing PyTorch Lightning scripts.

 .. join_slack::
    :align: left
@@ -50,7 +50,7 @@ Inside the ``app.py`` file, add the following code.

 .. literalinclude:: ../code_samples/convert_pl_to_app/app.py

-This App runs the Pytorch Lightning script contained in the ``train.py`` file using the powerful :class:`~lightning_app.components.python.tracer.TracerPythonScript` component. This is really worth checking out!
+This App runs the PyTorch Lightning script contained in the ``train.py`` file using the powerful :class:`~lightning_app.components.python.tracer.TracerPythonScript` component. This is really worth checking out!

 ----
2 changes: 1 addition & 1 deletion docs/source-pytorch/deploy/production_advanced_2.rst
@@ -34,7 +34,7 @@ can save or directly use.

 It is recommended that you install the latest supported version of PyTorch to use this feature without limitations.

-Once you have the exported model, you can run it in Pytorch or C++ runtime:
+Once you have the exported model, you can run it in PyTorch or C++ runtime:

 .. code-block:: python
2 changes: 1 addition & 1 deletion docs/source-pytorch/ecosystem/asr_nlp_tts.rst
@@ -48,7 +48,7 @@ so that each can be configured from .yaml or the Hydra CLI.

 .. note:: Every NeMo model has an example configuration file and a corresponding script that contains all configurations needed for training.

-The end result of using NeMo, Pytorch Lightning, and Hydra is that
+The end result of using NeMo, PyTorch Lightning, and Hydra is that
 NeMo models all have the same look and feel. This makes it easy to do Conversational AI research
 across multiple domains. NeMo models are also fully compatible with the PyTorch ecosystem.
@@ -150,7 +150,7 @@ def val_dataloader(self):
         return self.__dataloader(train=False)


-# --- Pytorch-lightning module ---
+# --- PyTorch Lightning module ---


 class TransferLearningModel(LightningModule):
2 changes: 1 addition & 1 deletion src/lightning_app/components/python/tracer.py
@@ -79,7 +79,7 @@ def __init__(
         This callback has a reference to the work and on every batch end, we are capturing the
         trainer ``global_step`` and ``best_model_path``.

-        Even more interesting, this component works for ANY Pytorch Lightning script and
+        Even more interesting, this component works for ANY PyTorch Lightning script and
         its state can be used in real time in a UI.

         .. literalinclude:: ../../../../examples/app_components/python/component_tracer.py
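The docstring in the hunk above describes capturing a running script's state (``global_step``, ``best_model_path``) while it executes. A framework-free sketch of the underlying idea — run code in a namespace you control, then read values back out — where the script body and variable names are illustrative, not the ``TracerPythonScript`` API:

```python
# Minimal "tracer" sketch: execute a script body in a namespace we own,
# then inspect the variables it produced. The real TracerPythonScript is
# far more sophisticated (it instruments an actual Lightning Trainer run).
script = """
global_step = 0
for batch in range(5):
    global_step += 1
best_model_path = f"checkpoints/step_{global_step}.ckpt"
"""

namespace = {}
exec(script, namespace)

print(namespace["global_step"])       # 5
print(namespace["best_model_path"])   # checkpoints/step_5.ckpt
```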
4 changes: 2 additions & 2 deletions src/pytorch_lightning/CHANGELOG.md
@@ -1772,7 +1772,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - `period` has been deprecated in favor of `every_n_val_epochs` in the `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))
 - Deprecated `trainer.running_sanity_check` in favor of `trainer.sanity_checking` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
 - Deprecated `Profiler(output_filename)` in favor of `dirpath` and `filename` ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))
-- Deprecated `PytorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
+- Deprecated `PyTorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
 - Deprecated `@auto_move_data` in favor of `trainer.predict` ([#6993](https://github.com/PyTorchLightning/pytorch-lightning/pull/6993))
 - Deprecated `Callback.on_load_checkpoint(checkpoint)` in favor of `Callback.on_load_checkpoint(trainer, pl_module, checkpoint)` ([#7253](https://github.com/PyTorchLightning/pytorch-lightning/pull/7253))
 - Deprecated metrics in favor of `torchmetrics` (
@@ -2358,7 +2358,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
   [#4737](https://github.com/PyTorchLightning/pytorch-lightning/pull/4737),
   [#4773](https://github.com/PyTorchLightning/pytorch-lightning/pull/4773))
 - Added `experiment_id` to the NeptuneLogger ([#3462](https://github.com/PyTorchLightning/pytorch-lightning/pull/3462))
-- Added `Pytorch Geometric` integration example with Lightning ([#4568](https://github.com/PyTorchLightning/pytorch-lightning/pull/4568))
+- Added `PyTorch Geometric` integration example with Lightning ([#4568](https://github.com/PyTorchLightning/pytorch-lightning/pull/4568))
 - Added `all_gather` method to `LightningModule` which allows gradient based tensor synchronizations for use-cases such as negative sampling. ([#5012](https://github.com/PyTorchLightning/pytorch-lightning/pull/5012))
 - Enabled `self.log` in most functions ([#4969](https://github.com/PyTorchLightning/pytorch-lightning/pull/4969))
 - Added changeable extension variable for `ModelCheckpoint` ([#4977](https://github.com/PyTorchLightning/pytorch-lightning/pull/4977))
2 changes: 1 addition & 1 deletion src/pytorch_lightning/overrides/distributed.py
@@ -41,7 +41,7 @@ def _find_tensors(


 # In manual_optimization, we need to call reducer prepare_for_backward.
-# Note: Keep track of Pytorch DDP and update if there is a change
+# Note: Keep track of PyTorch DDP and update if there is a change
 # https://github.com/pytorch/pytorch/blob/v1.7.1/torch/nn/parallel/distributed.py#L626-L638
 def prepare_for_backward(model: DistributedDataParallel, output: Any) -> None:
     # `prepare_for_backward` is `DistributedDataParallel` specific.
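The hunk above touches the helper that mirrors PyTorch DDP's ``_find_tensors``, which recursively collects tensors out of arbitrarily nested model outputs before ``reducer.prepare_for_backward`` is called. A framework-free sketch of that traversal pattern, with a plain predicate standing in for ``torch.is_tensor``:

```python
def find_matching(obj, predicate):
    """Recursively collect values satisfying `predicate` from nested
    lists/tuples/dicts, mimicking the shape of DDP's _find_tensors."""
    if predicate(obj):
        return [obj]
    if isinstance(obj, (list, tuple)):
        return [x for item in obj for x in find_matching(item, predicate)]
    if isinstance(obj, dict):
        return [x for v in obj.values() for x in find_matching(v, predicate)]
    return []


# Stand-in for torch.is_tensor: here we simply look for floats inside a
# made-up "training step output" structure.
output = {"loss": 0.25, "logs": [{"acc": 0.9}, "done"], "step": 3}
floats = find_matching(output, lambda x: isinstance(x, float))
print(sorted(floats))  # [0.25, 0.9]
```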
2 changes: 1 addition & 1 deletion src/pytorch_lightning/strategies/fully_sharded_native.py
@@ -85,7 +85,7 @@ def __init__(
         `For more information: https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/`.

         .. warning:: ``DDPFullyShardedNativeStrategy`` is in beta and subject to change. The interface can
-            bring breaking changes and new features with the next release of Pytorch.
+            bring breaking changes and new features with the next release of PyTorch.

         Defaults have been set and options have been exposed, but may require configuration
         based on your level of memory/speed efficiency. We suggest having a look at this tutorial for
2 changes: 1 addition & 1 deletion src/pytorch_lightning/trainer/trainer.py
@@ -256,7 +256,7 @@ def __init__(

         deterministic: If ``True``, sets whether PyTorch operations must use deterministic algorithms.
             Set to ``"warn"`` to use deterministic algorithms whenever possible, throwing warnings on operations
-            that don't support deterministic mode (requires Pytorch 1.11+). If not set, defaults to ``False``.
+            that don't support deterministic mode (requires PyTorch 1.11+). If not set, defaults to ``False``.
             Default: ``None``.

         devices: Will be mapped to either `gpus`, `tpu_cores`, `num_processes` or `ipus`,
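The docstring in the hunk above describes a tri-state ``deterministic`` flag (``True``, ``"warn"``, ``False``/unset). A hedged, torch-free sketch of those documented semantics as a mapping onto the two arguments PyTorch's ``torch.use_deterministic_algorithms(mode, warn_only=...)`` accepts — the function name here is ours, not Lightning's actual implementation:

```python
def resolve_deterministic(deterministic):
    """Map the Trainer-style `deterministic` flag onto a
    (use_deterministic_algorithms, warn_only) pair.
    A sketch of the documented semantics, not Lightning's code."""
    if deterministic == "warn":
        # Deterministic where possible; warn on unsupported ops.
        return True, True
    if deterministic is None:
        # Per the docstring, unset defaults to non-deterministic.
        return False, False
    return bool(deterministic), False


print(resolve_deterministic("warn"))  # (True, True)
print(resolve_deterministic(None))    # (False, False)
print(resolve_deterministic(True))    # (True, False)
```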
2 changes: 1 addition & 1 deletion tests/tests_pytorch/helpers/datasets.py
@@ -23,7 +23,7 @@

 class MNIST(Dataset):
-    """Customized `MNIST <http://yann.lecun.com/exdb/mnist/>`_ dataset for testing Pytorch Lightning without the
+    """Customized `MNIST <http://yann.lecun.com/exdb/mnist/>`_ dataset for testing PyTorch Lightning without the
     torchvision dependency.

     Part of the code was copied from
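The docstring in the final hunk describes a torchvision-free MNIST stand-in used by the test suite. The essential idea is just the map-style dataset protocol — ``__len__`` plus ``__getitem__`` returning ``(sample, label)`` pairs — which can be sketched without any torch dependency (the class name, shapes, and values below are made up for illustration):

```python
class TinyDataset:
    """Map-style dataset protocol: __len__ + __getitem__ returning
    (sample, label) pairs, the shape a DataLoader consumes."""

    def __init__(self, num_samples=8):
        # Fabricated data standing in for MNIST images and labels.
        self.data = [[float(i)] * 4 for i in range(num_samples)]
        self.targets = [i % 2 for i in range(num_samples)]

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], self.targets[idx]


ds = TinyDataset()
sample, label = ds[3]
print(len(ds), label)  # 8 1
```

Any object implementing these two methods can be wrapped by ``torch.utils.data.DataLoader``, which is what makes the torchvision-free test helper possible.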