.github/CONTRIBUTING.md (3 additions, 11 deletions)
@@ -237,7 +237,7 @@ We welcome any useful contribution! For your convenience here's a recommended workflow
 
 #### How can I help/contribute?
 
-All types of contributions are welcome - reporting bugs, fixing documentation, adding test cases, solving issues, and preparing bug fixes.
+All types of contributions are welcome - reporting bugs, fixing documentation, adding test cases, solving issues, and preparing bug fixes.
 To get started with code contributions, look for issues marked with the label [good first issue](https://github.com/PyTorchLightning/pytorch-lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) or choose something close to your domain with the label [help wanted](https://github.com/PyTorchLightning/pytorch-lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22). Before coding, make sure that the issue description is clear and comment on the issue so that we can assign it to you (or simply self-assign if you can).

[...]

-Currently we have separate streams/branches for bugfixes/features and release from the default branch (`master`).
-Bugfixes should land in this `master` branch and features should land in `release/X.y-dev`.
-This means that when starting your contribution and creating a branch according to question 2) you should start this new branch from master or future release dev branch.
-Later in PR creation also pay attention to properly set the target branch, usually the starting (base) and target branch are the same.
-
-_Note, that this flow may change after the 1.2 release as we will adjust releasing strategy._
 
 #### How to fix PR with mixed base and target branches?
 
@@ -339,7 +331,7 @@ Do not panic, the solution is very straightforward and quite simple.
 All you need to do are these two steps in arbitrary order:
 - Ask someone from Core to change the base/target branch to the correct one
 - Rebase or cherry-pick your commits onto the correct base branch...
-
+
 Let's show how to deal with the git...
 the sample case is moving a PR from `master` to `release/1.2-dev` assuming my branch name is `my-branch`
 and the last true master commit is `ccc111` and your first commit is `mmm222`.
@@ -354,7 +346,7 @@ and the last true master commit is `ccc111` and your first commit is `mmm222`.
 # so open one and cherry-pick your last commits from `my-branch-backup`
 # resolve all eventual conflict as the new base may contain different code
 # when all done, push back to the open PR
-git push -f
+git push -f
 ```
 * **Rebasing way**, see more about [rebase onto usage](https://womanonrails.com/git-rebase-onto)
CHANGELOG.md (39 additions, 1 deletion)
@@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Added
 
+- Added a way to print to terminal without breaking up the progress bar ([#5470](https://github.com/PyTorchLightning/pytorch-lightning/pull/5470))
+
 - Added `checkpoint` parameter to callback's `on_save_checkpoint` hook ([#6072](https://github.com/PyTorchLightning/pytorch-lightning/pull/6072))
 
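For orientation, here is a minimal sketch (not taken from the commit) of a callback using the new `checkpoint` argument from #6072; the class name and printed message are illustrative, and the hook signature assumed is the 1.2-era `Callback` API.

```python
import pytorch_lightning as pl


class CheckpointInspector(pl.Callback):
    def on_save_checkpoint(self, trainer, pl_module, checkpoint):
        # `checkpoint` is the checkpoint dictionary about to be written to disk,
        # so a callback can inspect metadata such as the current epoch.
        pl_module.print(f"Saving checkpoint at epoch {checkpoint.get('epoch')}")
```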
@@ -21,15 +23,51 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Removed
 
+- Removed support for passing a bool value to `profiler` argument of Trainer ([#6164](https://github.com/PyTorchLightning/pytorch-lightning/pull/6164))
+
+- Removed deprecated Trainer argument `enable_pl_optimizer` and `automatic_optimization` ([#6163](https://github.com/PyTorchLightning/pytorch-lightning/pull/6163))

[...]

+- Made the `Plugin.reduce` method more consistent across all Plugins to reflect a mean-reduction by default ([#6011](https://github.com/PyTorchLightning/pytorch-lightning/pull/6011))
+
+- Move lightning module to correct device type when using LightningDistributedWrapper ([#6070](https://github.com/PyTorchLightning/pytorch-lightning/pull/6070))
+
+- Do not print top-k verbose log with `ModelCheckpoint(monitor=None)` ([#6109](https://github.com/PyTorchLightning/pytorch-lightning/pull/6109))
+
+- Expose DeepSpeed loss parameters to allow users to fix loss instability ([#6115](https://github.com/PyTorchLightning/pytorch-lightning/pull/6115))
+
+- Fixed epoch level schedulers not being called when `val_check_interval < 1.0` ([#6075](https://github.com/PyTorchLightning/pytorch-lightning/pull/6075))
+
+## [1.2.1] - 2021-02-23
 
 ### Fixed
 
+- Fixed incorrect yield logic for the amp autocast context manager ([#6080](https://github.com/PyTorchLightning/pytorch-lightning/pull/6080))
+- Fixed priority of plugin/accelerator when setting distributed mode ([#6089](https://github.com/PyTorchLightning/pytorch-lightning/pull/6089))
+- Fixed error message for AMP + CPU incompatibility ([#6107](https://github.com/PyTorchLightning/pytorch-lightning/pull/6107))
+
 
 ## [1.2.0] - 2021-02-18
 
 ### Added
 
-- Added `DataType`, `AverageMethod` and `MDMCAverageMethod` enum in metrics ([#5657](https://github.com/PyTorchLightning/pytorch-lightning/pull/5689)
+- Added `DataType`, `AverageMethod` and `MDMCAverageMethod` enum in metrics ([#5657](https://github.com/PyTorchLightning/pytorch-lightning/pull/5689))
 - Added support for summarized model total params size in megabytes ([#5590](https://github.com/PyTorchLightning/pytorch-lightning/pull/5590))
 - Added support for multiple train loaders ([#1959](https://github.com/PyTorchLightning/pytorch-lightning/pull/1959))
 - Added `Accuracy` metric now generalizes to Top-k accuracy for (multi-dimensional) multi-class inputs using the `top_k` parameter ([#4838](https://github.com/PyTorchLightning/pytorch-lightning/pull/4838))
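To make the Trainer-facing entries above concrete, here is a small hedged sketch of post-change usage, assuming a 1.2-era PyTorch Lightning API; it is illustrative rather than code from this commit.

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# After #6164, `profiler=True` is no longer accepted; pass a profiler name
# (or a profiler object) instead.
# After #6109, `monitor=None` saves the latest checkpoint without printing
# the top-k verbose log.
checkpoint_cb = ModelCheckpoint(monitor=None)
trainer = Trainer(profiler="simple", callbacks=[checkpoint_cb])
```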
docs/source/advanced/multi_gpu.rst (4 additions, 4 deletions)
@@ -690,9 +690,9 @@ DeepSpeed
 .. note::
     The DeepSpeed plugin is in beta and the API is subject to change. Please create an `issue <https://github.com/PyTorchLightning/pytorch-lightning/issues>`_ if you run into any issues.
 
-`DeepSpeed <https://github.com/microsoft/DeepSpeed>`_ offers additional CUDA deep learning training optimizations, similar to `FairScale <https://github.com/facebookresearch/fairscale>`_. DeepSpeed offers lower level training optimizations, and useful efficient optimizers such as `1-bit Adam <https://www.deepspeed.ai/tutorials/onebit-adam/>`_.
-Using the plugin, we were able to **train model sizes of 10 Billion parameters and above**, with a lot of useful information in this `benchmark <https://github.com/huggingface/transformers/issues/9996>`_ and the DeepSpeed `docs <https://www.deepspeed.ai/tutorials/megatron/>`_.
-We recommend using DeepSpeed in environments where speed and memory optimizations are important (such as training large billion parameter models). In addition, we recommend trying :ref:`sharded` first before trying DeepSpeed's further optimizations, primarily due to FairScale Sharded ease of use in scenarios such as multiple optimizers/schedulers.
+`DeepSpeed <https://github.com/microsoft/DeepSpeed>`_ is a deep learning training optimization library, providing the means to train massive billion parameter models at scale.
+Using the DeepSpeed plugin, we were able to **train model sizes of 10 Billion parameters and above**, with a lot of useful information in this `benchmark <https://github.com/huggingface/transformers/issues/9996>`_ and the DeepSpeed `docs <https://www.deepspeed.ai/tutorials/megatron/>`_.
+DeepSpeed also offers lower level training optimizations, and efficient optimizers such as `1-bit Adam <https://www.deepspeed.ai/tutorials/onebit-adam/>`_. We recommend using DeepSpeed in environments where speed and memory optimizations are important (such as training large billion parameter models).
 
 To use DeepSpeed, you first need to install DeepSpeed using the commands below.
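The installation commands themselves are outside this hunk. As a rough illustration of the workflow the new text describes, here is a hedged sketch of enabling the plugin once DeepSpeed is installed, assuming the 1.2-era `plugins="deepspeed"` Trainer flag; the toy model and data are placeholders.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class ToyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


train_loader = DataLoader(
    TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))), batch_size=8
)
# `plugins="deepspeed"` selects the DeepSpeed plugin with its default settings.
trainer = pl.Trainer(gpus=4, plugins="deepspeed", precision=16)
trainer.fit(ToyModel(), train_loader)
```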
@@ -706,7 +706,7 @@ Additionally if you run into any issues installing mpi4py, ensure you have openmpi
 .. note::
     Currently ``resume_from_checkpoint`` and manual optimization are not supported.
 
-    DeepSpeed only supports single optimizer, single scheduler.
+    DeepSpeed currently only supports single optimizer, single scheduler within the training loop.
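As a hedged illustration of staying within that constraint, a `configure_optimizers` that returns exactly one optimizer and one scheduler; the model, optimizer choice, and hyperparameters are made up for the sketch.

```python
import torch
import pytorch_lightning as pl


class SingleOptimizerModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        # A single optimizer paired with a single scheduler, matching the
        # constraint noted above for the DeepSpeed integration.
        optimizer = torch.optim.AdamW(self.parameters(), lr=3e-4)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
        return [optimizer], [scheduler]
```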