
Deprecate trainer.num_processes/trainer.num_gpus and remove incorrect tests  #11624


Description

@four4fish

Proposed refactor

trainer.num_processes and trainer.num_gpus are not used in the code base and only exist in tests. Propose deprecating and removing these properties and removing the incorrect/unnecessary tests.

Motivation

  1. Simplify the code and reduce confusion. Strategy.num_processes != Trainer.num_processes, and Trainer.num_processes is only called in tests (confusion raised in Lazy initialize Strategy.parallel_devices #11572).
  2. Simplify the accelerator_connector rewrite (Rewrite Accelerator_connector and follow up tasks #11449). The current accelerator_connector contains a lot of num_processes-related logic that is unnecessary and confusing; removing it first will simplify the refactor.

Pitch

Steps:

  1. Deprecate trainer.num_processes (a rough deprecation sketch follows this list):
     https://github.com/PyTorchLightning/pytorch-lightning/blob/fe34bf2a653ebd50e6a3a00be829e3611f820c3c/pytorch_lightning/trainer/trainer.py#L1969-L1971

  2. Remove the trainer.num_processes tests:
     https://github.com/PyTorchLightning/pytorch-lightning/blob/fe34bf2a653ebd50e6a3a00be829e3611f820c3c/tests/trainer/test_trainer.py#L2093-L2111

  3. Do not carry the self.num_processes logic over to the accelerator_connector rewrite.
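
For step 1, a minimal sketch of what the deprecation could look like. The use of rank_zero_deprecation and the exact property body are assumptions about the current Trainer code, not a confirmed implementation:

```python
from pytorch_lightning.utilities import rank_zero_deprecation


class Trainer:
    ...

    @property
    def num_processes(self) -> int:
        # Warn once on rank zero, then keep the existing behaviour
        # (assumed here to delegate to the accelerator connector)
        # until the property is removed in a later release.
        rank_zero_deprecation(
            "`Trainer.num_processes` is deprecated and will be removed in a future release."
        )
        return self._accelerator_connector.num_processes
```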

Notes: this won't impact Strategy.num_processes, which is unrelated. Strategy.num_processes is calculated based on parallel_devices, which is neither equal nor related to trainer.num_processes.

The same applies to trainer.num_gpus.
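
For reference, a rough sketch of how a parallel strategy derives its process count from parallel_devices. The class and names are illustrative stand-ins, not copied from the code base:

```python
from typing import List, Optional

import torch


class ParallelStrategySketch:
    """Illustrative stand-in for a parallel strategy; not the actual Lightning class."""

    def __init__(self, parallel_devices: Optional[List[torch.device]] = None):
        self.parallel_devices = parallel_devices

    @property
    def num_processes(self) -> int:
        # Derived purely from the devices assigned to the strategy,
        # so it is unaffected by deprecating Trainer.num_processes.
        return len(self.parallel_devices) if self.parallel_devices is not None else 0


# e.g. two CUDA devices -> num_processes == 2
strategy = ParallelStrategySketch([torch.device("cuda", 0), torch.device("cuda", 1)])
assert strategy.num_processes == 2
```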

Additional context


If you enjoy Lightning, check out our other projects! ⚡

  • Metrics: Machine learning metrics for distributed, scalable PyTorch applications.

  • Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.

  • Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.

  • Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.

  • Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.

cc @justusschock @awaelchli @akihironitta @rohitgr7 @kaushikb11 @Borda @ananthsub @ninginthecloud @jjenniferdai
