Status: Closed
Labels: bug (Something isn't working), feature (Is an improvement or enhancement), let's do it! (approved to implement)
Description
❓ Questions and Help
What is your question?
I'm not sure whether this is a bug, so I'm asking it as a question.
The problem: when I want to add the Trainer arguments to my custom ArgumentParser object, I call the add_argparse_args Trainer classmethod. However, this method doesn't cast the Trainer arguments to their required types.
This forces me to cast the arguments myself, like so:
```python
trainer_args.update(
    {
        'accumulate_grad_batches': int(trainer_args['accumulate_grad_batches']),
        'train_percent_check': float(trainer_args['train_percent_check']),
        'val_percent_check': float(trainer_args['val_percent_check']),
        'val_check_interval': int(trainer_args['val_check_interval']),
        'track_grad_norm': int(trainer_args['track_grad_norm']),
        'max_epochs': int(trainer_args['max_epochs']),
        'precision': int(trainer_args['precision']),
        'gradient_clip_val': float(trainer_args['gradient_clip_val']),
    }
)
```

After that, I can pass the updated arguments to the Trainer:

```python
trainer = pytorch_lightning.Trainer(**trainer_args)
```

I also can't find a central place where the Trainer handles the types of these automatically generated arguments.
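A more generic workaround than hand-listing every cast would be to infer each target type from the argument's default value. The sketch below assumes a hypothetical `TRAINER_DEFAULTS` dict standing in for the real Trainer defaults (which live in the Trainer signature); it is an illustration of the idea, not Lightning's API:

```python
# Hypothetical subset of Trainer argument defaults, used here only to
# illustrate type inference; the real defaults come from the Trainer signature.
TRAINER_DEFAULTS = {
    'accumulate_grad_batches': 1,
    'max_epochs': 1000,
    'gradient_clip_val': 0.0,
    'train_percent_check': 1.0,
}

def cast_trainer_args(args):
    """Cast string-valued entries to the type of the matching default."""
    casted = dict(args)
    for name, default in TRAINER_DEFAULTS.items():
        if name in casted and isinstance(casted[name], str):
            casted[name] = type(default)(casted[name])
    return casted

raw = {'accumulate_grad_batches': '4', 'gradient_clip_val': '0.5'}
print(cast_trainer_args(raw))
# → {'accumulate_grad_batches': 4, 'gradient_clip_val': 0.5}
```

This keeps the casting in one place, but it still duplicates knowledge that ideally would live inside add_argparse_args itself.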
What have you tried?
I've tried passing the arguments to the Trainer without handling their types. For instance, if I don't cast accumulate_grad_batches to int, the following exception is raised:

```
TypeError: Gradient accumulation supports only int and dict types
```
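To show where an error like this comes from, here is a simplified sketch (not Lightning's actual code) of a type check that rejects the string values argparse produces when no type= is set; the function name and schedule structure are assumptions for illustration:

```python
def set_accumulate_grad_batches(accumulate_grad_batches):
    # Simplified sketch of a validation step: accept either a single int
    # factor or an {epoch: factor} dict, and reject anything else,
    # including the str values an untyped argparse argument yields.
    if isinstance(accumulate_grad_batches, dict):
        schedule = accumulate_grad_batches           # per-epoch schedule
    elif isinstance(accumulate_grad_batches, int):
        schedule = {0: accumulate_grad_batches}      # same factor from epoch 0
    else:
        raise TypeError("Gradient accumulation supports only int and dict types")
    return schedule

# A string value, as produced by an untyped argparse argument, triggers it:
# set_accumulate_grad_batches('4')  -> TypeError
```

So the failure only surfaces deep inside the Trainer, well after argument parsing, which is why casting at the argparse layer would be the cleaner fix.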
What's your environment?
- OS: Linux
- Packaging: pip
- Version: 0.7.1