.azure-pipelines/gpu-tests.yml (1 addition, 1 deletion)
```diff
@@ -50,7 +50,7 @@ jobs:
   - bash: |
       python -c "fname = 'requirements/extra.txt' ; lines = [line for line in open(fname).readlines() if 'horovod' not in line] ; open(fname, 'w').writelines(lines)"
```
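The CI one-liner above is hard to read inline. A minimal sketch of the same logic as a named function (the function name and the standalone-script form are illustrative, not part of the actual pipeline):

```python
def strip_requirement(fname: str, keyword: str) -> None:
    """Rewrite a requirements file, dropping any line that mentions keyword.

    Equivalent to the CI one-liner that removes 'horovod' from
    requirements/extra.txt before installing dependencies.
    """
    with open(fname) as fh:
        lines = [line for line in fh if keyword not in line]
    with open(fname, "w") as fh:
        fh.writelines(lines)
```

The one-liner and this function are behaviourally identical: both read all lines, filter by substring, and rewrite the file in place.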
```diff
 * Added optional `model` argument to the `optimizer_step` methods in accelerators and plugins ([#10023](https://github.com/PyTorchLightning/pytorch-lightning/pull/10023))
+* Updated precision attributes in `DeepSpeedPlugin` ([#10164](https://github.com/PyTorchLightning/pytorch-lightning/pull/10164))
+* Added the ability to return a result from rank 0 in `DDPSpawnPlugin.spawn` ([#10162](https://github.com/PyTorchLightning/pytorch-lightning/pull/10162))
```
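The `DDPSpawnPlugin.spawn` entry above describes returning a result from rank 0 back to the launching process. The general pattern can be sketched with the standard library, using threads standing in for spawned processes to keep the example portable; the names `_worker` and `spawn_and_collect` are illustrative and are not Lightning's API:

```python
import queue
import threading

def _worker(rank: int, results: "queue.Queue") -> None:
    # Every rank does its work; only rank 0 reports a result back.
    outcome = rank + 100
    if rank == 0:
        results.put(outcome)

def spawn_and_collect(world_size: int = 3) -> int:
    results: "queue.Queue" = queue.Queue()
    workers = [
        threading.Thread(target=_worker, args=(rank, results))
        for rank in range(world_size)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Only rank 0 ever enqueued, so this is the rank-0 result.
    return results.get()
```

The design point is that the launcher blocks until all workers finish, then reads a single value that only one rank produced, which is exactly what "return a result from rank 0" requires.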
```diff
@@ -343,6 +344,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Moved the `optimizer_step` and `clip_gradients` hook from the `Accelerator` and `TrainingTypePlugin` into the `PrecisionPlugin` ([#10143](https://github.com/PyTorchLightning/pytorch-lightning/pull/10143), [#10029](https://github.com/PyTorchLightning/pytorch-lightning/pull/10029))
+- `NativeMixedPrecisionPlugin` and its subclasses now take an optional `GradScaler` instance ([#10055](https://github.com/PyTorchLightning/pytorch-lightning/pull/10055))
 - Updated several places in the loops and trainer to access `training_type_plugin` directly instead of `accelerator` ([#9901](https://github.com/PyTorchLightning/pytorch-lightning/pull/9901))
```
```diff
@@ -444,10 +448,12 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Deprecated `ClusterEnvironment.creates_children()` in favor of `ClusterEnvironment.creates_processes_externally` (property) ([#10106](https://github.com/PyTorchLightning/pytorch-lightning/pull/10106))
 - Deprecated `PrecisionPlugin.master_params()` in favor of `PrecisionPlugin.main_params()` ([#10105](https://github.com/PyTorchLightning/pytorch-lightning/pull/10105))
+- Deprecated `lr_sch_names` from `LearningRateMonitor` ([#10066](https://github.com/PyTorchLightning/pytorch-lightning/pull/10066))
```
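Several of these deprecations replace a method with a property (for example, `creates_children()` giving way to `creates_processes_externally`). A common way to stage such a transition, sketched here with an illustrative class name and placeholder return value rather than Lightning's actual code, is to keep the old method as a thin wrapper that warns and delegates:

```python
import warnings

class ClusterEnv:
    """Illustrative stand-in for a class deprecating a method for a property."""

    @property
    def creates_processes_externally(self) -> bool:
        # New-style property; the value here is just a placeholder.
        return False

    def creates_children(self) -> bool:
        # Old-style method kept for backward compatibility: it warns,
        # then delegates to the replacement property.
        warnings.warn(
            "`creates_children()` is deprecated; use the "
            "`creates_processes_externally` property instead.",
            DeprecationWarning,
        )
        return self.creates_processes_externally
```

Callers of the old method keep working for a release cycle while seeing a `DeprecationWarning` that points them at the replacement.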
```diff
@@ -656,9 +662,16 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed undesired side effects being caused by `Trainer` patching dataloader methods on the `LightningModule` ([#9764](https://github.com/PyTorchLightning/pytorch-lightning/pull/9764))
+- Fixed monitor value in `ModelCheckpoint` getting moved to the wrong device in a special case where it becomes NaN ([#10118](https://github.com/PyTorchLightning/pytorch-lightning/pull/10118))
 - Fixed creation of `dirpath` in `BaseProfiler` if it doesn't exist ([#10073](https://github.com/PyTorchLightning/pytorch-lightning/pull/10073))
+- Fixed an issue with `pl.utilities.seed.reset_seed` converting the `PL_SEED_WORKERS` environment variable to `bool` ([#10099](https://github.com/PyTorchLightning/pytorch-lightning/pull/10099))

 ## [1.4.9] - 2021-09-30

 - Fixed `lr_find` to generate same results on multiple calls ([#9704](https://github.com/PyTorchLightning/pytorch-lightning/pull/9704))
```
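The `reset_seed` fix above concerns converting the `PL_SEED_WORKERS` environment variable to `bool`. Environment variables are always strings, and `bool("0")` is `True` because any non-empty string is truthy, so a naive conversion treats `PL_SEED_WORKERS=0` as enabled. A minimal sketch of the pitfall and a correct integer-based parse (the helper name is illustrative, not the actual code from #10099):

```python
import os

def seed_workers_enabled(env=os.environ) -> bool:
    # Pitfall: bool(env.get("PL_SEED_WORKERS")) is True for "0",
    # since any non-empty string is truthy. Parsing through int()
    # first gives the intended on/off behaviour.
    return bool(int(env.get("PL_SEED_WORKERS", "0")))
```

Passing a plain dict in place of `os.environ` makes the helper easy to exercise without mutating the real environment.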
```diff
@@ -321,7 +323,7 @@ You can also add a forward method to do predictions however you want.
     autoencoder = LitAutoEncoder()
-    autoencoder = autoencoder(torch.rand(1, 28 * 28))
+    embedding = autoencoder(torch.rand(1, 28 * 28))

 .. code-block:: python
```
```diff
@@ -371,9 +373,9 @@ a forward method or trace only the sub-models you need.
 --------------------

-Using CPUs/GPUs/TPUs
-====================
-It's trivial to use CPUs, GPUs or TPUs in Lightning. There's **NO NEED** to change your code, simply change the :class:`~pytorch_lightning.trainer.Trainer` options.
+Using CPUs/GPUs/TPUs/IPUs
+=========================
+It's trivial to use CPUs, GPUs, TPUs or IPUs in Lightning. There's **NO NEED** to change your code, simply change the :class:`~pytorch_lightning.trainer.Trainer` options.

 .. testcode::
```
```diff
@@ -423,6 +425,11 @@ Without changing a SINGLE line of your code, you can now do the following with t
     # using only half the training data and checking validation every quarter of a training epoch
```