Commit 3fac4c6

Merge branch 'master' into bugfix/no_ret_warn
2 parents a3310cf + e7298b5

82 files changed (+904 −981 lines)


.github/ISSUE_TEMPLATE/config.yml

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+blank_issues_enabled: false
+contact_links:
+  - name: Ask a Question
+    url: https://github.com/PyTorchLightning/pytorch-lightning/discussions/new
+    about: Ask and answer Lightning related questions
+  - name: 💬 Slack
+    url: https://app.slack.com/client/TR9DVT48M/CQXV8BRH9/thread/CQXV8BRH9-1591382895.254600
+    about: Chat with our community

.github/ISSUE_TEMPLATE/how-to-question.md

Lines changed: 0 additions & 31 deletions
This file was deleted.

CHANGELOG.md

Lines changed: 36 additions & 9 deletions
@@ -9,6 +9,11 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Added
 
+- Added a way to print to terminal without breaking up the progress bar ([#5470](https://github.com/PyTorchLightning/pytorch-lightning/pull/5470))
+
+
+- Added `checkpoint` parameter to callback's `on_save_checkpoint` hook ([#6072](https://github.com/PyTorchLightning/pytorch-lightning/pull/6072))
+
 
 ### Changed
 
@@ -18,19 +23,32 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Removed
 
+- Removed support for passing a bool value to `profiler` argument of Trainer ([#6164](https://github.com/PyTorchLightning/pytorch-lightning/pull/6164))
+
 
 - Removed no return warning from val/test step ([#6139](https://github.com/PyTorchLightning/pytorch-lightning/pull/6139))
 
 
-### Fixed
+- Removed passing a `ModelCheckpoint` instance to `Trainer(checkpoint_callback)` ([#6166](https://github.com/PyTorchLightning/pytorch-lightning/pull/6166))
 
-- Fixed incorrect yield logic for the amp autocast context manager ([#6080](https://github.com/PyTorchLightning/pytorch-lightning/pull/6080))
 
+- Removed deprecated Trainer argument `enable_pl_optimizer` and `automatic_optimization` ([#6163](https://github.com/PyTorchLightning/pytorch-lightning/pull/6163))
 
-- Made the `Plugin.reduce` method more consistent across all Plugins to reflect a mean-reduction by default ([#6011](https://github.com/PyTorchLightning/pytorch-lightning/pull/6011))
 
+- Removed deprecated metrics ([#6161](https://github.com/PyTorchLightning/pytorch-lightning/pull/6161))
+    * from `pytorch_lightning.metrics.functional.classification` removed `to_onehot`, `to_categorical`, `get_num_classes`, `roc`, `multiclass_roc`, `average_precision`, `precision_recall_curve`, `multiclass_precision_recall_curve`
+    * from `pytorch_lightning.metrics.functional.reduction` removed `reduce`, `class_reduce`
 
-- Fixed priority of plugin/accelerator when setting distributed mode ([#6089](https://github.com/PyTorchLightning/pytorch-lightning/pull/6089))
+
+- Removed deprecated `ModelCheckpoint` arguments `prefix`, `mode="auto"` ([#6162](https://github.com/PyTorchLightning/pytorch-lightning/pull/6162))
+
+
+- Removed `mode='auto'` from `EarlyStopping` ([#6167](https://github.com/PyTorchLightning/pytorch-lightning/pull/6167))
+
+
+### Fixed
+
+- Made the `Plugin.reduce` method more consistent across all Plugins to reflect a mean-reduction by default ([#6011](https://github.com/PyTorchLightning/pytorch-lightning/pull/6011))
 
 
 - Move lightning module to correct device type when using LightningDistributedWrapper ([#6070](https://github.com/PyTorchLightning/pytorch-lightning/pull/6070))
@@ -39,10 +57,22 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Do not print top-k verbose log with `ModelCheckpoint(monitor=None)` ([#6109](https://github.com/PyTorchLightning/pytorch-lightning/pull/6109))
 
 
-- Fixed error message for AMP + CPU incompatibility ([#6107](https://github.com/PyTorchLightning/pytorch-lightning/pull/6107))
+- Expose DeepSpeed loss parameters to allow users to fix loss instability ([#6115](https://github.com/PyTorchLightning/pytorch-lightning/pull/6115))
+
+
+- Fixed epoch level schedulers not being called when `val_check_interval < 1.0` ([#6075](https://github.com/PyTorchLightning/pytorch-lightning/pull/6075))
 
 
-- Expose DeepSpeed loss parameters to allow users to fix loss instability ([#6115](https://github.com/PyTorchLightning/pytorch-lightning/pull/6115)
+- Fixed multiple early stopping callbacks ([#6197](https://github.com/PyTorchLightning/pytorch-lightning/pull/6197))
+
+
+## [1.2.1] - 2021-02-23
+
+### Fixed
+
+- Fixed incorrect yield logic for the amp autocast context manager ([#6080](https://github.com/PyTorchLightning/pytorch-lightning/pull/6080))
+- Fixed priority of plugin/accelerator when setting distributed mode ([#6089](https://github.com/PyTorchLightning/pytorch-lightning/pull/6089))
+- Fixed error message for AMP + CPU incompatibility ([#6107](https://github.com/PyTorchLightning/pytorch-lightning/pull/6107))
 
 
 ## [1.2.0] - 2021-02-18
@@ -92,9 +122,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Added DeepSpeed integration ([#5954](https://github.com/PyTorchLightning/pytorch-lightning/pull/5954),
     [#6042](https://github.com/PyTorchLightning/pytorch-lightning/pull/6042))
 
-- Added a way to print to terminal without breaking up the progress bar ([#5470](https://github.com/PyTorchLightning/pytorch-lightning/pull/5470))
-
-
 ### Changed
 
 - Changed `stat_scores` metric now calculates stat scores over all classes and gains new parameters, in line with the new `StatScores` metric ([#4839](https://github.com/PyTorchLightning/pytorch-lightning/pull/4839))
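One of the additions above gives callbacks direct access to the checkpoint dictionary via the new `checkpoint` parameter of `on_save_checkpoint` (#6072). A minimal sketch of a callback using it — the callback class and the key it writes are hypothetical, while the `(trainer, pl_module, checkpoint)` signature is the one introduced by that PR:

```python
import pytorch_lightning as pl


class CheckpointAuditCallback(pl.Callback):
    """Hypothetical callback illustrating the new `checkpoint` argument."""

    def on_save_checkpoint(self, trainer, pl_module, checkpoint):
        # `checkpoint` is the checkpoint dict about to be written, so a
        # callback can inspect or annotate it before it hits disk.
        checkpoint["audit/global_step"] = trainer.global_step


# Usage sketch:
# trainer = pl.Trainer(callbacks=[CheckpointAuditCallback()])
```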

README.md

Lines changed: 2 additions & 5 deletions
@@ -29,7 +29,6 @@ Scale your models, not the boilerplate.**
 
 [![ReadTheDocs](https://readthedocs.org/projects/pytorch-lightning/badge/?version=stable)](https://pytorch-lightning.readthedocs.io/en/stable/)
 [![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)
-[![Discourse status](https://img.shields.io/discourse/status?server=https%3A%2F%2Fforums.pytorchlightning.ai)](https://forums.pytorchlightning.ai/)
 [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE)
 
 <!--
@@ -391,10 +390,8 @@ Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/
 ### Asking for help
 If you have any questions please:
 1. [Read the docs](https://pytorch-lightning.rtfd.io/en/latest).
-2. [Search through the Discussions](https://github.com/PyTorchLightning/pytorch-lightning/discussions).
-3. [Look it up in our forum (or add a new question)](https://forums.pytorchlightning.ai)
-4. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A).
-
+2. [Search through existing Discussions](https://github.com/PyTorchLightning/pytorch-lightning/discussions), or [add a new question](https://github.com/PyTorchLightning/pytorch-lightning/discussions/new)
+3. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A).
 ### Funding
 [We're venture funded](https://techcrunch.com/2020/10/08/grid-ai-raises-18-6m-series-a-to-help-ai-researchers-and-engineers-bring-their-models-to-production/) to make sure we can provide around the clock support, hire a full-time staff, attend conferences, and move faster through implementing features you request.

docs/source/advanced/multi_gpu.rst

Lines changed: 4 additions & 4 deletions
@@ -690,9 +690,9 @@ DeepSpeed
 .. note::
     The DeepSpeed plugin is in beta and the API is subject to change. Please create an `issue <https://github.com/PyTorchLightning/pytorch-lightning/issues>`_ if you run into any issues.
 
-`DeepSpeed <https://github.com/microsoft/DeepSpeed>`_ offers additional CUDA deep learning training optimizations, similar to `FairScale <https://github.com/facebookresearch/fairscale>`_. DeepSpeed offers lower level training optimizations, and useful efficient optimizers such as `1-bit Adam <https://www.deepspeed.ai/tutorials/onebit-adam/>`_.
-Using the plugin, we were able to **train model sizes of 10 Billion parameters and above**, with a lot of useful information in this `benchmark <https://github.com/huggingface/transformers/issues/9996>`_ and the DeepSpeed `docs <https://www.deepspeed.ai/tutorials/megatron/>`_.
-We recommend using DeepSpeed in environments where speed and memory optimizations are important (such as training large billion parameter models). In addition, we recommend trying :ref:`sharded` first before trying DeepSpeed's further optimizations, primarily due to FairScale Sharded ease of use in scenarios such as multiple optimizers/schedulers.
+`DeepSpeed <https://github.com/microsoft/DeepSpeed>`_ is a deep learning training optimization library, providing the means to train massive billion parameter models at scale.
+Using the DeepSpeed plugin, we were able to **train model sizes of 10 Billion parameters and above**, with a lot of useful information in this `benchmark <https://github.com/huggingface/transformers/issues/9996>`_ and the DeepSpeed `docs <https://www.deepspeed.ai/tutorials/megatron/>`_.
+DeepSpeed also offers lower level training optimizations, and efficient optimizers such as `1-bit Adam <https://www.deepspeed.ai/tutorials/onebit-adam/>`_. We recommend using DeepSpeed in environments where speed and memory optimizations are important (such as training large billion parameter models).
 
 To use DeepSpeed, you first need to install DeepSpeed using the commands below.
 
@@ -706,7 +706,7 @@ Additionally if you run into any issues installing m4py, ensure you have openmpi
 .. note::
     Currently ``resume_from_checkpoint`` and manual optimization are not supported.
 
-    DeepSpeed only supports single optimizer, single scheduler.
+    DeepSpeed currently only supports single optimizer, single scheduler within the training loop.
 
 ZeRO-Offload
 """"""""""""

docs/source/common/hyperparameters.rst

Lines changed: 0 additions & 3 deletions
@@ -167,9 +167,6 @@ improve readability and reproducibility.
     def train_dataloader(self):
         return DataLoader(mnist_train, batch_size=self.hparams.batch_size)
 
-.. warning:: Deprecated since v1.1.0. This method of assigning hyperparameters to the LightningModule
-    will no longer be supported from v1.3.0. Use the ``self.save_hyperparameters()`` method from above instead.
-
 
 4. You can also save full objects such as `dict` or `Namespace` to the checkpoint.
 
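The `self.save_hyperparameters()` pattern that the deleted warning pointed to looks roughly like this; a sketch with illustrative argument names, where `mnist_train` is assumed to be a `Dataset` defined elsewhere (as in the surrounding docs):

```python
from torch.utils.data import DataLoader
import pytorch_lightning as pl


class LitMNIST(pl.LightningModule):
    def __init__(self, layer_1_dim=128, batch_size=32):
        super().__init__()
        # Stores the init arguments under self.hparams and saves them
        # to the checkpoint automatically.
        self.save_hyperparameters()

    def train_dataloader(self):
        # Arguments are then readable back off self.hparams.
        return DataLoader(mnist_train, batch_size=self.hparams.batch_size)
```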

docs/source/common/optimizers.rst

Lines changed: 0 additions & 2 deletions
@@ -300,8 +300,6 @@ override the :meth:`optimizer_step` function.
 
 For example, here step optimizer A every 2 batches and optimizer B every 4 batches
 
-.. note:: When using Trainer(enable_pl_optimizer=True), there is no need to call `.zero_grad()`.
-
 .. testcode::
 
     def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
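For reference, the "optimizer A every 2 batches, optimizer B every 4 batches" pattern this section describes would be an `optimizer_step` override along these lines; a sketch assuming the hook signature used by this version of Lightning:

```python
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
    # Step optimizer A (index 0) every 2 batches.
    if optimizer_idx == 0 and batch_idx % 2 == 0:
        optimizer.step(closure=optimizer_closure)

    # Step optimizer B (index 1) every 4 batches.
    if optimizer_idx == 1 and batch_idx % 4 == 0:
        optimizer.step(closure=optimizer_closure)
```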

docs/source/starter/new-project.rst

Lines changed: 3 additions & 2 deletions
@@ -737,7 +737,7 @@ Lightning has many tools for debugging. Here is an example of just a few of them
 .. testcode::
 
     # Profile your code to find speed/memory bottlenecks
-    Trainer(profiler=True)
+    Trainer(profiler="simple")
 
 ---------------
 
@@ -773,7 +773,8 @@ Community
 **********
 Our community of core maintainers and thousands of expert researchers is active on our
 `Slack <https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A>`_
-and `Forum <https://forums.pytorchlightning.ai/>`_. Drop by to hang out, ask Lightning questions or even discuss research!
+and `GitHub Discussions <https://github.com/PyTorchLightning/pytorch-lightning/discussions>`_. Drop by
+to hang out, ask Lightning questions or even discuss research!
 
 
 -------------
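On the `profiler` change in the first hunk: with the boolean form gone (see the Removed entry in the CHANGELOG above), the argument takes a string or a profiler instance. A short sketch of the two built-in string options; `MyModel` is a placeholder LightningModule:

```python
import pytorch_lightning as pl

# "simple" reports wall-clock time per training hook; "advanced"
# uses cProfile for per-function detail.
trainer = pl.Trainer(profiler="simple")
# trainer = pl.Trainer(profiler="advanced")
trainer.fit(MyModel())
```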

notebooks/04-transformers-text-classification.ipynb

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@
     "---\n",
     " - Give us a ⭐ [on Github](https://www.github.com/PytorchLightning/pytorch-lightning/)\n",
     " - Check out [the documentation](https://pytorch-lightning.readthedocs.io/en/latest/)\n",
-    " - Ask a question on [the forum](https://forums.pytorchlightning.ai/)\n",
+    " - Ask a question on [GitHub Discussions](https://github.com/PyTorchLightning/pytorch-lightning/discussions/)\n",
     " - Join us [on Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)\n",
     "\n",
     " - [HuggingFace datasets](https://github.com/huggingface/datasets)\n",

notebooks/06-mnist-tpu-training.ipynb

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@
     " - Give us a ⭐ [on Github](https://www.github.com/PytorchLightning/pytorch-lightning/)\n",
     " - Check out [the documentation](https://pytorch-lightning.readthedocs.io/en/latest/)\n",
     " - Join us [on Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)\n",
-    " - Ask a question on our [official forum](https://forums.pytorchlightning.ai/)"
+    " - Ask a question on our [GitHub Discussions](https://github.com/PyTorchLightning/pytorch-lightning/discussions/)"
 ]
 },
 {
