29 commits
911e921
Update versions in pre-commit config
akihironitta Oct 26, 2022
f0adefd
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 26, 2022
9ac4a8f
Update versions further
akihironitta Oct 26, 2022
0c60e1b
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 26, 2022
4b5ccec
Update black version for blacken-docs
akihironitta Oct 26, 2022
52af133
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 26, 2022
ea4b583
Adjust formatting
akihironitta Oct 27, 2022
668c22c
Adjust formatting
akihironitta Oct 27, 2022
3333d48
Fix flake8
akihironitta Oct 27, 2022
0a871cf
Apply the hack
akihironitta Oct 27, 2022
03c32a3
pls pass
akihironitta Oct 27, 2022
c0609c1
increase verbosity of docformatter
akihironitta Oct 27, 2022
5257763
Dump docformatter error
akihironitta Oct 27, 2022
7fc54f5
Merge branch 'master' into ci/update-pre-commit
akihironitta Oct 27, 2022
13ea360
Fix docformatter manually
akihironitta Oct 27, 2022
c158a64
Revert "Dump docformatter error"
akihironitta Oct 27, 2022
fdb1185
Merge branch 'master' into ci/update-pre-commit
akihironitta Oct 27, 2022
cdaf85a
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 27, 2022
9cfed7c
Apply suggestions from code review
Borda Oct 28, 2022
7513eab
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 28, 2022
76d44d7
Revert "Apply suggestions from code review"
Borda Oct 28, 2022
53820d2
Merge branch 'master' into ci/update-pre-commit
akihironitta Oct 30, 2022
6da8f9b
Merge branch 'master' into ci/update-pre-commit
akihironitta Nov 2, 2022
4edf830
Merge branch 'master' into ci/update-pre-commit
Borda Nov 8, 2022
33213d2
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Nov 8, 2022
ee172f7
Merge branch 'master' into ci/update-pre-commit
akihironitta Nov 9, 2022
2c45ac9
Merge branch 'master' into ci/update-pre-commit
akihironitta Nov 9, 2022
55b45bb
Merge branch 'master' into ci/update-pre-commit
akihironitta Nov 11, 2022
0251d75
Merge branch 'master' into ci/update-pre-commit
Borda Nov 22, 2022
17 changes: 9 additions & 8 deletions .pre-commit-config.yaml
@@ -49,20 +49,21 @@ repos:
- id: detect-private-key

- repo: https://github.com/asottile/pyupgrade
rev: v2.34.0
rev: v3.1.0
hooks:
- id: pyupgrade
args: [--py37-plus]
name: Upgrade code

- repo: https://github.com/myint/docformatter
rev: v1.4
- repo: https://github.com/PyCQA/docformatter
rev: v1.5.0
hooks:
- id: docformatter
args: [--in-place, --wrap-summaries=115, --wrap-descriptions=120]
verbose: true

- repo: https://github.com/asottile/yesqa
rev: v1.3.0
rev: v1.4.0
hooks:
- id: yesqa
name: Unused noqa
@@ -75,7 +76,7 @@ repos:
exclude: docs/source-app

- repo: https://github.com/psf/black
rev: 22.6.0
rev: 22.10.0
hooks:
- id: black
name: Format code
@@ -86,11 +87,11 @@
hooks:
- id: blacken-docs
args: [--line-length=120]
additional_dependencies: [black==21.12b0]
additional_dependencies: [black==22.10.0]
exclude: docs/source-app

- repo: https://github.com/executablebooks/mdformat
rev: 0.7.14
rev: 0.7.16
hooks:
- id: mdformat
additional_dependencies:
@@ -105,7 +106,7 @@
)$

- repo: https://github.com/PyCQA/flake8
rev: 4.0.1
rev: 5.0.4
hooks:
- id: flake8
name: Check PEP8
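
Note: black is pinned in two places in this config, the `black` hook's `rev` and the `black==...` entry under blacken-docs' `additional_dependencies`, and this PR bumps both to 22.10.0. A small, hypothetical consistency check (not part of the PR; assumes PyYAML is installed and the config sits at the repo root) could look like:

```python
# Hypothetical helper: warn if the blacken-docs black pin drifts from the black hook's rev.
import yaml  # PyYAML

with open(".pre-commit-config.yaml") as f:
    config = yaml.safe_load(f)

black_rev = None
blacken_docs_pin = None
for repo in config["repos"]:
    for hook in repo.get("hooks", []):
        if hook["id"] == "black":
            black_rev = repo.get("rev", "").lstrip("v")
        elif hook["id"] == "blacken-docs":
            for dep in hook.get("additional_dependencies", []):
                if dep.startswith("black=="):
                    blacken_docs_pin = dep.split("==", 1)[1]

if black_rev != blacken_docs_pin:
    print(f"black rev {black_rev!r} does not match blacken-docs pin {blacken_docs_pin!r}")
```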
2 changes: 1 addition & 1 deletion docs/source-app/examples/file_server/app.py
@@ -90,7 +90,7 @@ def upload_file(self, file):
"size": full_size,
"drive_path": uploaded_file,
}
with open(self.get_filepath(meta_file), "wt") as f:
with open(self.get_filepath(meta_file), "w") as f:
json.dump(meta, f)

# 5: Put the file to the drive.
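
For context on the `"wt"` to `"w"` change above: text mode is Python's default when writing, so the two modes behave identically and this is purely a style normalization. A minimal sketch of the same pattern (hypothetical file name and metadata):

```python
import json

meta = {"name": "example.txt", "size": 123}

# "w" already opens the file in text mode; the explicit "t" in "wt" is redundant.
with open("example.txt.meta", "w") as f:
    json.dump(meta, f)
```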
2 changes: 1 addition & 1 deletion docs/source-pytorch/debug/debugging_basic.rst
@@ -36,7 +36,7 @@ A breakpoint stops your code execution so you can inspect variables, etc... and
import pdb

pdb.set_trace()
y = x ** 2
y = x**2

In this example, the code will stop before executing the ``y = x**2`` line.

2 changes: 1 addition & 1 deletion docs/source-pytorch/strategies/hivemind_expert.rst
@@ -48,7 +48,7 @@ Size Adaptive Compression has been used in a variety of Hivemind applications an

# compresses values above threshold with 8bit Quantization, lower with Float16
compression = SizeAdaptiveCompression(
threshold=2 ** 16 + 1, less=Float16Compression(), greater_equal=Uniform8BitQuantization()
threshold=2**16 + 1, less=Float16Compression(), greater_equal=Uniform8BitQuantization()
)
trainer = pl.Trainer(
strategy=HivemindStrategy(
1 change: 0 additions & 1 deletion examples/lite/image_classifier_1_pytorch.py
@@ -28,7 +28,6 @@
# Credit to the PyTorch team
# Taken from https://github.com/pytorch/examples/blob/master/mnist/main.py and slightly adapted.
def run(hparams):

torch.manual_seed(hparams.seed)

use_cuda = torch.cuda.is_available()
1 change: 0 additions & 1 deletion src/lightning_app/cli/cmd_init.py
@@ -9,7 +9,6 @@


def app(app_name: str) -> None:

if app_name is None:
app_name = _capture_valid_app_component_name(resource_type="app")

3 changes: 0 additions & 3 deletions src/lightning_app/cli/cmd_install.py
@@ -32,7 +32,6 @@ def gallery_component(name: str, yes_arg: bool, version_arg: str, cwd: str = Non


def non_gallery_component(gh_url: str, yes_arg: bool, cwd: str = None) -> None:

# give the user the chance to do a manual install
git_url = _show_non_gallery_install_component_prompt(gh_url, yes_arg)

@@ -41,7 +40,6 @@ def non_gallery_component(gh_url: str, yes_arg: bool, cwd: str = None) -> None:


def gallery_app(name: str, yes_arg: bool, version_arg: str, cwd: str = None, overwrite: bool = False) -> None:

# make sure org/app-name syntax is correct
org, app = _validate_name(name, resource_type="app", example="lightning/quick-start")

@@ -61,7 +59,6 @@ def gallery_app(name: str, yes_arg: bool, version_arg: str, cwd: str = None, ove


def non_gallery_app(gh_url: str, yes_arg: bool, cwd: str = None, overwrite: bool = False) -> None:

# give the user the chance to do a manual install
repo_url, folder_name = _show_non_gallery_install_app_prompt(gh_url, yes_arg)

1 change: 0 additions & 1 deletion src/lightning_app/cli/commands/logs.py
@@ -39,7 +39,6 @@ def logs(app_name: str, components: List[str], follow: bool) -> None:


def _show_logs(app_name: str, components: List[str], follow: bool) -> None:

client = LightningClient()
project = _get_project(client)

@@ -1,5 +1,4 @@
r"""
To test a lightning component:
r"""To test a lightning component:

1. Init the component.
2. call .run()
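
Many of the docstring hunks in this PR apply the same docformatter normalization: the summary sentence moves onto the line with the opening quotes. A schematic before/after sketch (hypothetical function names, not from the diff):

```python
def before():
    r"""
    Do the thing.
    """


def after():
    r"""Do the thing."""
```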
1 change: 0 additions & 1 deletion src/lightning_app/utilities/imports.py
@@ -20,7 +20,6 @@


def requires(module_paths: Union[str, List]):

if not isinstance(module_paths, list):
module_paths = [module_paths]

3 changes: 1 addition & 2 deletions src/lightning_lite/lite.py
@@ -396,8 +396,7 @@ def barrier(self, name: Optional[str] = None) -> None:
def all_gather(
self, data: Union[Tensor, Dict, List, Tuple], group: Optional[Any] = None, sync_grads: bool = False
) -> Union[Tensor, Dict, List, Tuple]:
r"""
Gather tensors or collections of tensors from multiple processes.
r"""Gather tensors or collections of tensors from multiple processes.

Args:
data: int, float, tensor of shape (batch, ...), or a (possibly nested) collection thereof.
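
As a usage illustration for the `all_gather` signature shown above, a minimal, hypothetical LightningLite sketch (two CPU processes with DDP; class name and values are made up):

```python
import torch
from lightning_lite import LightningLite


class GatherDemo(LightningLite):
    def run(self):
        # each process contributes one value; all_gather collects them across processes
        local = torch.tensor([float(self.global_rank)], device=self.device)
        gathered = self.all_gather(local)
        print(gathered)


if __name__ == "__main__":
    GatherDemo(accelerator="cpu", devices=2, strategy="ddp").run()
```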
3 changes: 1 addition & 2 deletions src/lightning_lite/strategies/launchers/base.py
@@ -16,8 +16,7 @@


class _Launcher(ABC):
r"""
Abstract base class for all Launchers.
r"""Abstract base class for all Launchers.

Launchers are responsible for the creation and instrumentation of new processes so that the
:class:`~lightning_lite.strategies.strategy.Strategy` can set up communication between all them.
@@ -25,8 +25,7 @@


class _SubprocessScriptLauncher(_Launcher):
r"""
A process laucher that invokes the current script as many times as desired in a single node.
r"""A process laucher that invokes the current script as many times as desired in a single node.

This launcher needs to be invoked on each node.
In its default behavior, the main process in each node then spawns N-1 child processes via :func:`subprocess.Popen`,
4 changes: 2 additions & 2 deletions src/lightning_lite/strategies/launchers/xla.py
@@ -26,8 +26,8 @@


class _XLALauncher(_MultiProcessingLauncher):
r"""Launches processes that run a given function in parallel on XLA supported hardware, and joins them all at the
end.
r"""Launches processes that run a given function in parallel on XLA supported hardware, and joins them all at
the end.

The main process in which this launcher is invoked creates N so-called worker processes (using the
`torch_xla` :func:`xmp.spawn`) that run the given function.
8 changes: 6 additions & 2 deletions src/pytorch_lightning/accelerators/accelerator.py
@@ -29,9 +29,13 @@ class Accelerator(_Accelerator, ABC):
"""

def setup_environment(self, root_device: torch.device) -> None:
"""
"""Create and prepare the device for the current process.

Note that this is deprecated.

.. deprecated:: v1.8.0
This hook was deprecated in v1.8.0 and will be removed in v1.10.0. Please use ``setup_device()`` instead.
This hook was deprecated in v1.8.0 and will be removed in v1.10.0. Please use
``setup_device()`` instead.
"""
rank_zero_deprecation(
"`Accelerator.setup_environment` has been deprecated in deprecated in v1.8.0 and will be removed in"
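
For readers following the deprecation note above, a hypothetical migration sketch: override `setup_device()` on a custom accelerator instead of the deprecated `setup_environment()` hook (class name is made up; other abstract methods of `Accelerator` are omitted for brevity):

```python
import torch
from pytorch_lightning.accelerators import Accelerator


class MyAccelerator(Accelerator):
    def setup_device(self, device: torch.device) -> None:
        # device-specific initialization that previously lived in setup_environment()
        if device.type == "cuda":
            torch.cuda.set_device(device)
```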
14 changes: 4 additions & 10 deletions src/pytorch_lightning/callbacks/callback.py
@@ -11,10 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
Base class used to build new callbacks.

"""
r"""Base class used to build new callbacks."""

from typing import Any, Dict, List, Optional, Type

@@ -26,8 +23,7 @@


class Callback:
r"""
Abstract base class used to build new callbacks.
r"""Abstract base class used to build new callbacks.

Subclass this class and override any of the relevant hooks
"""
@@ -213,8 +209,7 @@ def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
def on_save_checkpoint(
self, trainer: "pl.Trainer", pl_module: "pl.LightningModule", checkpoint: Dict[str, Any]
) -> None:
r"""
Called when saving a checkpoint to give you a chance to store anything else you might want to save.
r"""Called when saving a checkpoint to give you a chance to store anything else you might want to save.

Args:
trainer: the current :class:`~pytorch_lightning.trainer.Trainer` instance.
@@ -225,8 +220,7 @@ def on_save_checkpoint(
def on_load_checkpoint(
self, trainer: "pl.Trainer", pl_module: "pl.LightningModule", checkpoint: Dict[str, Any]
) -> None:
r"""
Called when loading a model checkpoint, use to reload state.
r"""Called when loading a model checkpoint, use to reload state.

Args:
trainer: the current :class:`~pytorch_lightning.trainer.Trainer` instance.
8 changes: 4 additions & 4 deletions src/pytorch_lightning/callbacks/checkpoint.py
@@ -2,8 +2,8 @@


class Checkpoint(Callback):
r"""
This is the base class for model checkpointing. Expert users may want to subclass it in case of writing
custom :class:`~pytorch_lightning.callbacksCheckpoint` callback, so that
the trainer recognizes the custom class as a checkpointing callback.
r"""This is the base class for model checkpointing.

Expert users may want to subclass it in case of writing custom :class:`~pytorch_lightning.callbacksCheckpoint`
callback, so that the trainer recognizes the custom class as a checkpointing callback.
"""
5 changes: 2 additions & 3 deletions src/pytorch_lightning/callbacks/device_stats_monitor.py
@@ -29,9 +29,8 @@


class DeviceStatsMonitor(Callback):
r"""
Automatically monitors and logs device stats during training stage. ``DeviceStatsMonitor``
is a special callback as it requires a ``logger`` to passed as argument to the ``Trainer``.
r"""Automatically monitors and logs device stats during training stage. ``DeviceStatsMonitor`` is a special
callback as it requires a ``logger`` to passed as argument to the ``Trainer``.

Args:
cpu_stats: if ``None``, it will log CPU stats only if the accelerator is CPU.
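
A minimal usage sketch for the callback above; as the docstring notes, it assumes a logger is attached to the Trainer (the CSVLogger directory is arbitrary):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import DeviceStatsMonitor
from pytorch_lightning.loggers import CSVLogger

trainer = Trainer(callbacks=[DeviceStatsMonitor()], logger=CSVLogger("logs"))
```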
3 changes: 1 addition & 2 deletions src/pytorch_lightning/callbacks/early_stopping.py
@@ -36,8 +36,7 @@


class EarlyStopping(Callback):
r"""
Monitor a metric and stop training when it stops improving.
r"""Monitor a metric and stop training when it stops improving.

Args:
monitor: quantity to be monitored.
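
A minimal usage sketch for the callback above (assumes the LightningModule logs a ``val_loss`` metric):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

# stop training after 3 consecutive epochs without improvement in val_loss
early_stopping = EarlyStopping(monitor="val_loss", mode="min", patience=3)
trainer = Trainer(callbacks=[early_stopping])
```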
4 changes: 1 addition & 3 deletions src/pytorch_lightning/callbacks/finetuning.py
@@ -37,8 +37,7 @@ def multiplicative(epoch: int) -> float:


class BaseFinetuning(Callback):
r"""
This class implements the base logic for writing your own Finetuning Callback.
r"""This class implements the base logic for writing your own Finetuning Callback.

Override ``freeze_before_training`` and ``finetune_function`` methods with your own logic.

@@ -338,7 +337,6 @@ class BackboneFinetuning(BaseFinetuning):
>>> multiplicative = lambda epoch: 1.5
>>> backbone_finetuning = BackboneFinetuning(200, multiplicative)
>>> trainer = Trainer(callbacks=[backbone_finetuning])

"""

def __init__(
@@ -28,8 +28,7 @@


class GradientAccumulationScheduler(Callback):
r"""
Change gradient accumulation factor according to scheduling.
r"""Change gradient accumulation factor according to scheduling.

Args:
scheduling: scheduling in format {epoch: accumulation_factor}
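
A minimal usage sketch matching the ``{epoch: accumulation_factor}`` format noted above:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import GradientAccumulationScheduler

# no accumulation until epoch 4, then accumulate gradients over 2 batches
accumulator = GradientAccumulationScheduler(scheduling={4: 2})
trainer = Trainer(callbacks=[accumulator])
```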
3 changes: 1 addition & 2 deletions src/pytorch_lightning/callbacks/lambda_function.py
@@ -25,8 +25,7 @@


class LambdaCallback(Callback):
r"""
Create a simple callback on the fly using lambda functions.
r"""Create a simple callback on the fly using lambda functions.

Args:
**kwargs: hooks supported by :class:`~pytorch_lightning.callbacks.callback.Callback`
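
A minimal usage sketch for the callback above, handy for quick hooks without writing a full Callback subclass:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import LambdaCallback

# print a message when training starts
hello = LambdaCallback(on_train_start=lambda trainer, pl_module: print("training started"))
trainer = Trainer(callbacks=[hello])
```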
4 changes: 1 addition & 3 deletions src/pytorch_lightning/callbacks/lr_monitor.py
@@ -33,8 +33,7 @@


class LearningRateMonitor(Callback):
r"""
Automatically monitor and logs learning rate for learning rate schedulers during training.
r"""Automatically monitor and logs learning rate for learning rate schedulers during training.

Args:
logging_interval: set to ``'epoch'`` or ``'step'`` to log ``lr`` of all optimizers
@@ -84,7 +83,6 @@ def configure_optimizer(self):
)
lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, ...)
return [optimizer], [lr_scheduler]

"""

def __init__(self, logging_interval: Optional[str] = None, log_momentum: bool = False) -> None:
3 changes: 1 addition & 2 deletions src/pytorch_lightning/callbacks/model_summary.py
@@ -35,8 +35,7 @@


class ModelSummary(Callback):
r"""
Generates a summary of all layers in a :class:`~pytorch_lightning.core.module.LightningModule`.
r"""Generates a summary of all layers in a :class:`~pytorch_lightning.core.module.LightningModule`.

Args:
max_depth: The maximum depth of layer nesting that the summary will include. A value of 0 turns the
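
A minimal usage sketch for the callback above, limiting the summary to top-level modules:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelSummary

trainer = Trainer(callbacks=[ModelSummary(max_depth=1)])
```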
18 changes: 7 additions & 11 deletions src/pytorch_lightning/callbacks/progress/base.py
@@ -20,10 +20,9 @@


class ProgressBarBase(Callback):
r"""
The base class for progress bars in Lightning. It is a :class:`~pytorch_lightning.callbacks.Callback`
that keeps track of the batch progress in the :class:`~pytorch_lightning.trainer.trainer.Trainer`.
You should implement your highly custom progress bars with this as the base class.
r"""The base class for progress bars in Lightning. It is a :class:`~pytorch_lightning.callbacks.Callback` that
keeps track of the batch progress in the :class:`~pytorch_lightning.trainer.trainer.Trainer`. You should
implement your highly custom progress bars with this as the base class.

Example::

@@ -44,7 +43,6 @@ def on_train_batch_end(self, trainer, pl_module, outputs, batch_idx):

bar = LitProgressBar()
trainer = Trainer(callbacks=[bar])

"""

def __init__(self) -> None:
@@ -225,9 +223,8 @@ def setup(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule", stage: s
def get_metrics(
self, trainer: "pl.Trainer", pl_module: "pl.LightningModule"
) -> Dict[str, Union[int, str, float, Dict[str, float]]]:
r"""
Combines progress bar metrics collected from the trainer with standard metrics from get_standard_metrics.
Implement this to override the items displayed in the progress bar.
r"""Combines progress bar metrics collected from the trainer with standard metrics from
get_standard_metrics. Implement this to override the items displayed in the progress bar.

Here is an example of how to override the defaults:

@@ -256,9 +253,8 @@ def get_metrics(self, trainer, model):


def get_standard_metrics(trainer: "pl.Trainer", pl_module: "pl.LightningModule") -> Dict[str, Union[int, str]]:
r"""
Returns several standard metrics displayed in the progress bar, including the average loss value,
split index of BPTT (if used) and the version of the experiment when using a logger.
r"""Returns several standard metrics displayed in the progress bar, including the average loss value, split
index of BPTT (if used) and the version of the experiment when using a logger.

.. code-block::
