📚 Documentation
In the Trainer class API, it says not to use the `accelerator` parameter and that it will be removed in 1.7.0 (the current stable version is 1.7.1), BUT the `tpu_cores` docstring says to use `accelerator`.

In Accelerator: TPU Training (also the stable docs), all the examples use `accelerator='tpu'`.
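
For reference, here is roughly what the two pages tell me to write (a minimal sketch, assuming an otherwise-default Trainer; the model and data don't matter here):

```python
import pytorch_lightning as pl

# Style the Trainer API page still points to from the `tpu_cores` docstring,
# even though `accelerator` itself is flagged for removal there.
trainer_old = pl.Trainer(tpu_cores=8)

# Style the Accelerator: TPU Training page (stable) uses in every example.
trainer_new = pl.Trainer(accelerator="tpu", devices=8)
```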

I don't really know which one to follow. And when I head to What is a Strategy?,

I don't really understand why we need `strategy="ddp_spawn"` there when, without it, the model is still trained on 8 cores.
Moreover, the "Learn more" link for TPUs points to examples that don't use any TPU strategy.
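
If I'm reading that page correctly, this is the comparison I'm confused about (sketch only; on an 8-core TPU the second variant still seems to use all 8 cores):

```python
import pytorch_lightning as pl

# With the explicit strategy, as shown on the "What is a Strategy?" page:
trainer_with_strategy = pl.Trainer(accelerator="tpu", devices=8, strategy="ddp_spawn")

# Without it, training still appears to run on all 8 cores:
trainer_without_strategy = pl.Trainer(accelerator="tpu", devices=8)
```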

This example should use `precision='bf16'`.
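
Something along these lines is what I'd expect there (just a sketch; the other flags are placeholders):

```python
import pytorch_lightning as pl

# bf16 mixed precision on TPU, as suggested above
trainer = pl.Trainer(accelerator="tpu", devices=8, precision="bf16")
```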

If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @Borda @carmocca @JackCaoG @steventk-g @Liyang90 @rohitgr7 @kaushikb11