🐛 Bug
I am using a DGX machine (so there are no TPUs), but when I instantiate `Trainer`, it logs `TPU available: True`. The run then fails with a `Missing XLA configuration` error when I execute my script.
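For context, the banner would report `TPU available: False` on this machine if the availability check only returned `True` when an XLA device can actually be acquired. A minimal sketch of such a probe, assuming the standard `torch_xla` API (this is not PyTorch Lightning's actual detection code, and the function name is hypothetical):

```python
# Hypothetical guard: only report a TPU when torch_xla is importable AND an
# XLA device can actually be acquired. Illustrative sketch, not PL's code.
import importlib.util


def tpu_available() -> bool:
    # No torch_xla installed -> definitely no TPU.
    if importlib.util.find_spec("torch_xla") is None:
        return False
    try:
        import torch_xla.core.xla_model as xm
        # Acquiring a device raises RuntimeError("Missing XLA configuration")
        # on machines where torch_xla is importable but no TPU is configured.
        xm.xla_device()
        return True
    except RuntimeError:
        return False
```

With a probe like this, merely having `torch_xla` importable in the environment (as it presumably is on this DGX image) would no longer be enough to flip the banner to `TPU available: True`.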
To Reproduce
Code sample
Simply running the following lines on my machine:
```
>>> import pytorch_lightning as pl
>>> trainer = pl.Trainer(gpus=[0])
GPU available: True, used: True
TPU available: True, using: 0 TPU cores
```

Expected behavior
```
>>> trainer = pl.Trainer(gpus=[0])
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
```

Environment
* CUDA:
- GPU:
- Tesla V100-SXM2-32GB
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.2
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0
- tensorboard: 2.2.0
- tqdm: 4.45.0
* System:
- OS: Linux
- architecture: 64bit
- processor: x86_64
- python: 3.6.9
- version: #168-Ubuntu SMP Wed Jan 16 21:00:45 UTC 2019