Commit b3ebc18

Authored by justusschock, awaelchli, Borda, and tchaton
Hardware specific parts of Accelerator Refactoring (#5719)
* add basic accelerator class (Co-Authored with @awaelchi)
* pep8
* add cpu accelerator
* add gpu accelerator
* add tpu accelerator
* add accelerator connector
* add single device training
* add single tpu
* add tpu spawn
* make on_colab_kaggle utility func
* fixes
* move
* yapf
* flake8
* sync accelerator connector changes from dev1.2
* changelog
* fix tpu handling
* tpu
* aval
* yapf
* Update pytorch_lightning/plugins/training_type/tpu_spawn.py
* Update pytorch_lightning/accelerators/accelerator_connector.py
* Update tpu_spawn.py
* indentation

Co-authored-by: Adrian Wälchli <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: chaton <[email protected]>
1 parent 963c17b commit b3ebc18
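The commit message above describes adding hardware-specific accelerator classes (CPU, GPU, TPU) plus a connector that selects between them. As a rough illustration of that shape, here is a toy sketch; every name and signature below is a simplified stand-in, not pytorch_lightning's actual API.

```python
# Hypothetical, simplified sketch of the hierarchy this commit introduces:
# a base Accelerator plus hardware-specific subclasses that validate their
# device requirements, and a connector-style selection function.

class Accelerator:
    """Base class: holds a device identifier and runs setup checks."""

    def __init__(self, device: str) -> None:
        self.device = device

    def setup(self) -> str:
        """Validate the environment, then report the chosen device."""
        return self.device


class CPUAccelerator(Accelerator):
    def __init__(self) -> None:
        super().__init__("cpu")


class GPUAccelerator(Accelerator):
    def __init__(self, gpu_available: bool) -> None:
        # Fail fast when the requested hardware is missing.
        if not gpu_available:
            raise RuntimeError("GPUAccelerator requires a CUDA device")
        super().__init__("cuda")


def select_accelerator(gpu_available: bool) -> Accelerator:
    """Toy stand-in for the accelerator connector: picks the hardware class."""
    if gpu_available:
        return GPUAccelerator(gpu_available=True)
    return CPUAccelerator()
```

The design point mirrored here is that hardware checks live inside each accelerator subclass, while the connector only decides which subclass to instantiate.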


42 files changed: +904 −62 lines

CHANGELOG.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -110,7 +110,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Refactored Accelerators and Plugins
     * Added base classes for plugins ([#5715](https://github.com/PyTorchLightning/pytorch-lightning/pull/5715))
     * Added parallel plugins for DP, DDP, DDPSpawn, DDP2 and Horovod ([#5714](https://github.com/PyTorchLightning/pytorch-lightning/pull/5714))
-
+    * Added new Accelerators for CPU, GPU and TPU ([#5719](https://github.com/PyTorchLightning/pytorch-lightning/pull/5719))
+    * Added Plugins for TPU training ([#5719](https://github.com/PyTorchLightning/pytorch-lightning/pull/5719))
 
 ### Deprecated
```

docs/source/common/trainer.rst

Lines changed: 1 addition & 1 deletion
```diff
@@ -1121,7 +1121,7 @@ To define your own behavior, subclass the relevant class and pass it in. Here's
 
 .. code-block:: python
 
-    from pytorch_lightning.cluster_environments import cluster_environment
+    from pytorch_lightning.plugins.environments import cluster_environment
 
     class MyCluster(ClusterEnvironment):
```
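This docs hunk only moves the import from `pytorch_lightning.cluster_environments` to `pytorch_lightning.plugins.environments`. To make the subclassing pattern it documents concrete, here is a minimal self-contained sketch; the base class below is a local stub standing in for the real one, and the method names are illustrative rather than PL's exact hooks.

```python
import os


# Local stub standing in for the real base class, which (per the diff)
# now lives under pytorch_lightning.plugins.environments.
class ClusterEnvironment:
    def master_address(self) -> str:
        raise NotImplementedError

    def master_port(self) -> int:
        raise NotImplementedError


class MyCluster(ClusterEnvironment):
    """Reads rendezvous info from environment variables, with fallbacks.

    Method names here are illustrative assumptions, not necessarily the
    exact hooks PL expects.
    """

    def master_address(self) -> str:
        return os.environ.get("MASTER_ADDR", "127.0.0.1")

    def master_port(self) -> int:
        return int(os.environ.get("MASTER_PORT", "12910"))
```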

docs/source/extensions/accelerators.rst

Lines changed: 1 addition & 1 deletion
```diff
@@ -70,7 +70,7 @@ First, implement your own ClusterEnvironment. Here is the torch elastic implementation
     import os
     from pytorch_lightning import _logger as log
     from pytorch_lightning.utilities import rank_zero_warn
-    from pytorch_lightning.cluster_environments.cluster_environment import ClusterEnvironment
+    from pytorch_lightning.plugins.environments.cluster_environment import ClusterEnvironment
 
 
 class TorchElasticEnvironment(ClusterEnvironment):
```
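The torch elastic environment referenced in this hunk works by reading the rendezvous settings that torch elastic exports as environment variables. A simplified, self-contained sketch of that idea follows; the base class is again a local stub, and the variable names follow `torch.distributed` conventions rather than quoting PL's actual implementation.

```python
import os


# Local stub for the real ClusterEnvironment base class, which (per the
# diff) lives at pytorch_lightning.plugins.environments.cluster_environment.
class ClusterEnvironment:
    pass


class TorchElasticEnvironment(ClusterEnvironment):
    """Simplified sketch: torch elastic exports rendezvous settings as
    environment variables, so the environment class simply reads them.
    Variable names follow torch.distributed conventions (MASTER_ADDR,
    WORLD_SIZE, LOCAL_RANK); error handling is omitted for brevity.
    """

    def master_address(self) -> str:
        return os.environ["MASTER_ADDR"]

    def world_size(self) -> int:
        return int(os.environ["WORLD_SIZE"])

    def local_rank(self) -> int:
        return int(os.environ["LOCAL_RANK"])
```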

pytorch_lightning/accelerators/accelerator.py

Lines changed: 1 addition & 2 deletions
```diff
@@ -347,8 +347,7 @@ def amp_backend(self) -> Optional[LightningEnum]:
             return AMPType.APEX
         elif isinstance(self.precision_plugin, NativeMixedPrecisionPlugin):
             return AMPType.NATIVE
-        else:
-            return None
+        return None
 
     @property
     def precision(self) -> int:
```
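This hunk is a pure style refactor: because every earlier branch returns, the trailing `else: return None` can become a plain `return None` with identical behavior. A self-contained illustration of the pattern, using stand-in names rather than PL's real `precision_plugin` check:

```python
from enum import Enum


# Illustrative stand-in for PL's AMPType; the real property inspects
# self.precision_plugin, which is simplified here to a string argument.
class AMPType(Enum):
    APEX = "apex"
    NATIVE = "native"


def amp_backend(plugin_name: str):
    if plugin_name == "apex":
        return AMPType.APEX
    elif plugin_name == "native":
        return AMPType.NATIVE
    # No `else` needed: reaching this line means no branch above returned.
    return None
```

Dropping the redundant `else` after a chain of returns reduces nesting without changing any code path.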
