
Deprecate DeepSpeedPrecisionPlugin #10686

@awaelchli


Proposed refactor

Motivation

After #10596, the optimizer step and backward methods reside in the TTP (previously in the Accelerator). Since DeepSpeed handles the optimizer step, backward, and precision internally, we can encapsulate all of this logic directly in the TTP, and the DeepSpeedPrecisionPlugin is no longer required. It should not be possible to use custom precision plugins with DeepSpeed; error handling should be added to prevent that.

Pitch

  1. Move DeepSpeedPrecisionPlugin.backward to DeepSpeedPlugin.backward
  2. Move DeepSpeedPrecisionPlugin.optimizer_step to DeepSpeedPlugin.optimizer_step
  3. Raise a MisconfigurationException when a precision plugin is passed to the DeepSpeed plugin

This will also simplify an awkward pattern in the code where the model is accessed via model.trainer.model (see the sketch below).
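
A minimal sketch of what the consolidated plugin could look like after steps 1–3. The DeepSpeed engine calls (`engine.backward(loss)`, `engine.step()`) follow DeepSpeed's public API; the class layout, hook signatures, and the `precision_plugin` argument check are illustrative assumptions about the post-#10596 TTP interface, not the actual implementation:

```python
# Illustrative sketch only -- assumes the post-#10596 TTP hooks and that
# `self.model` holds the wrapped deepspeed.DeepSpeedEngine.
from pytorch_lightning.plugins.training_type.ddp import DDPPlugin
from pytorch_lightning.utilities.exceptions import MisconfigurationException


class DeepSpeedPlugin(DDPPlugin):
    def __init__(self, *args, precision_plugin=None, **kwargs):
        # 3. DeepSpeed manages precision internally, so reject custom
        #    precision plugins up front.
        if precision_plugin is not None:
            raise MisconfigurationException(
                "DeepSpeed handles precision internally; custom precision"
                " plugins are not supported with the DeepSpeed plugin."
            )
        super().__init__(*args, **kwargs)

    def backward(self, closure_loss, *args, **kwargs):
        # 1. Formerly DeepSpeedPrecisionPlugin.backward: the DeepSpeed
        #    engine runs its own (loss-scaled) backward pass.
        self.model.backward(closure_loss)
        return closure_loss

    def optimizer_step(self, optimizer, *args, **kwargs):
        # 2. Formerly DeepSpeedPrecisionPlugin.optimizer_step: the engine
        #    steps the optimizer (and LR scheduler) itself, bypassing the
        #    vanilla `optimizer.step()` path.
        self.model.step()
```

With something along these lines, backward, optimizer step, and precision handling would all live in the training type plugin, so no separate DeepSpeedPrecisionPlugin instance would need to be constructed.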

Additional context

This will also unblock #10657



cc @justusschock @awaelchli @akihironitta @tchaton @SeanNaren
