
TPU available: true when there are no TPUs #3104

@dalmia

Description


🐛 Bug

I am using a DGX machine (so there are no TPUs), but on initializing the Trainer, it logs TPU available: True. This then leads to a Missing XLA configuration error when I run my script.

To Reproduce

Code sample

Simply running the following lines on my machine:

>>> import pytorch_lightning as pl
>>> trainer = pl.Trainer(gpus=[0])
GPU available: True, used: True
TPU available: True, using: 0 TPU cores

Expected behavior

>>> trainer = pl.Trainer(gpus=[0])
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
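
For reference, here is a small diagnostic that separates "torch_xla is importable" from "an XLA device is actually usable". This is only a sketch of my guess at the trigger: torch_xla appears to be installed in this environment (which is presumably what makes Lightning print TPU available: True), while no XLA device is configured, which is where the Missing XLA configuration error would come from.

    # Diagnostic sketch: importing torch_xla succeeding does not mean a TPU exists.
    try:
        import torch_xla.core.xla_model as xm
        print("torch_xla importable: True")
        try:
            device = xm.xla_device()  # raises RuntimeError if no XLA device is configured
            print("XLA device usable:", device)
        except RuntimeError as err:
            print("XLA device NOT usable:", err)  # e.g. "Missing XLA configuration"
    except ImportError:
        print("torch_xla importable: False")

If that is indeed the trigger, removing torch_xla from the environment would be a stopgap, but the detection in the Trainer still seems wrong: it should report TPU available: False when no XLA device can be created.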

Environment

* CUDA:
        - GPU:
                - Tesla V100-SXM2-32GB
        - available:         True
        - version:           10.2
* Packages:
        - numpy:             1.18.2
        - pyTorch_debug:     False
        - pyTorch_version:   1.6.0
        - pytorch-lightning: 0.9.0
        - tensorboard:       2.2.0
        - tqdm:              4.45.0
* System:
        - OS:                Linux
        - architecture:
                - 64bit
        - processor:         x86_64
        - python:            3.6.9
        - version:           #168-Ubuntu SMP Wed Jan 16 21:00:45 UTC 2019


Labels

accelerator: tpu, bug
