Labels: accelerator: cuda, bug
Description
🐛 Bug
I am trying to update `test_trainer_with_gpus_options_combination_at_available_gpus_env` in #12589 in preparation for #11040, but the test fails with the following stack trace:
tests/trainer/properties/test_auto_gpu_select.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
pytorch_lightning/utilities/argparse.py:339: in insert_env_defaults
    return fn(self, **kwargs)
pytorch_lightning/trainer/trainer.py:486: in __init__
    self._accelerator_connector = AcceleratorConnector(
pytorch_lightning/trainer/connectors/accelerator_connector.py:194: in __init__
    self._set_parallel_devices_and_init_accelerator()
pytorch_lightning/trainer/connectors/accelerator_connector.py:512: in _set_parallel_devices_and_init_accelerator
    self._parallel_devices = self.accelerator.get_parallel_devices(self._devices_flag)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
devices = None

    @staticmethod
    def get_parallel_devices(devices: List[int]) -> List[torch.device]:
        """Gets parallel devices for the Accelerator."""
>       return [torch.device("cuda", i) for i in devices]
E       TypeError: 'NoneType' object is not iterable

pytorch_lightning/accelerators/gpu.py:82: TypeError
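For context, the failure itself is just an iteration over `None`: `AcceleratorConnector` ends up passing `self._devices_flag = None` into `GPUAccelerator.get_parallel_devices`, which then tries to build one `torch.device` per entry. Below is a minimal sketch outside of Lightning that reproduces the error; the standalone function name and the `None` guard are illustrative only, not the actual Lightning code path or the adopted fix.

```python
from typing import List, Optional

import torch


def get_parallel_devices(devices: Optional[List[int]]) -> List[torch.device]:
    """Illustrative stand-in for GPUAccelerator.get_parallel_devices.

    Iterating over ``None`` reproduces the TypeError in the traceback above;
    the ``None`` guard sketches one hypothetical way to avoid it.
    """
    if devices is None:
        # Hypothetical guard: treat a missing devices flag as "no parallel devices".
        return []
    # Constructing torch.device objects does not require a GPU to be present.
    return [torch.device("cuda", i) for i in devices]


print(get_parallel_devices([0, 1]))  # [device(type='cuda', index=0), device(type='cuda', index=1)]
print(get_parallel_devices(None))    # [] -- without the guard this raises
                                     # TypeError: 'NoneType' object is not iterable
```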
cc @justusschock @kaushikb11 @awaelchli @akihironitta @rohitgr7