Can't override Trainer defaults via env variables for LightningCLI #18874

@awaelchli

Description

Bug description

Using the PL_TRAINER_... environment variables to override Trainer defaults doesn't work when using the LightningCLI. This is because the CLI handles the defaults differently and prioritizes them over the environment variables.

I need to make the logic that determines these settings consistent with how we do it in Fabric.
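
For reference, this is the behavior the environment variables provide when constructing a plain Trainer. The snippet below is a minimal sketch, assuming the `lightning.pytorch` import path of v2.x and using `max_epochs` only as an example argument; any `PL_TRAINER_<ARG>` variable follows the same pattern.

```python
import os

from lightning.pytorch import Trainer

# The env var acts as the default for the corresponding Trainer argument.
os.environ["PL_TRAINER_MAX_EPOCHS"] = "3"

trainer = Trainer()
print(trainer.max_epochs)  # expected: 3, taken from the env var instead of the built-in default
```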

What version are you seeing the problem on?

v2.1

How to reproduce the bug

I will start working on this immediately and already know how to reproduce and fix it.
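
No reproduction script was attached; a minimal sketch of the reported behavior could look roughly like the following. `DemoModel` is a hypothetical placeholder, `LightningCLI` needs the optional `jsonargparse` dependency, and the script is meant to be run without any command-line arguments.

```python
import os

import torch

from lightning.pytorch import LightningModule
from lightning.pytorch.cli import LightningCLI


class DemoModel(LightningModule):
    """Placeholder model so the CLI has something to instantiate."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)


if __name__ == "__main__":
    # The env var should act as the default for Trainer(max_epochs=...) ...
    os.environ["PL_TRAINER_MAX_EPOCHS"] = "3"

    # ... but LightningCLI resolves its own defaults and passes them to Trainer()
    # explicitly, so the env var is ignored (the behavior reported in this issue).
    cli = LightningCLI(DemoModel, run=False)  # run=False: only parse config and instantiate
    print(cli.trainer.max_epochs)  # expected 3, but the CLI default is used instead
```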

Error messages and logs

No response

Environment

Current environment

- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
- PyTorch Lightning Version (e.g., 1.5.0):
- Lightning App Version (e.g., 0.5.2):
- PyTorch Version (e.g., 2.0):
- Python version (e.g., 3.9):
- OS (e.g., Linux):
- CUDA/cuDNN version:
- GPU models and configuration:
- How you installed Lightning (`conda`, `pip`, source):
- Running environment of LightningApp (e.g. local, cloud):

More info

No response

cc @justusschock @awaelchli
