on_train_batch_end receives batch on CPU #7377

@awaelchli

Description

🐛 Bug

The callback method `on_train_batch_end` receives the batch still on the CPU instead of the batch that the accelerator moved to the device.
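The ordering problem can be illustrated with a schematic, pure-Python sketch. This is not Lightning's actual internals, and the class and method names below are hypothetical: the point is that the device transfer returns a new object, and the loop then calls the hook with the stale pre-transfer reference.

```python
class Batch:
    """Stand-in for a tensor that tracks which device it lives on."""
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        # Like torch.Tensor.to: returns a *new* object on the target device,
        # leaving the original untouched.
        return Batch(device)


class RecordingCallback:
    """Records the device of the batch each hook receives."""
    def __init__(self):
        self.seen = {}

    def on_train_batch_end(self, batch):
        self.seen["on_train_batch_end"] = batch.device


class BuggyLoop:
    """Schematic of the buggy control flow: the training step gets the
    moved batch, but the hook is called with the original CPU reference."""
    def run_batch(self, batch, callback):
        moved = batch.to("cuda")             # accelerator moves the batch
        self.training_step(moved)            # training step sees the device batch
        callback.on_train_batch_end(batch)   # bug: stale CPU reference

    def training_step(self, batch):
        assert batch.device == "cuda"


cb = RecordingCallback()
BuggyLoop().run_batch(Batch("cpu"), cb)
print(cb.seen["on_train_batch_end"])  # prints "cpu", not "cuda"
```

The fix implied by the expected behavior is to pass `moved` (the post-transfer batch) to the hook instead of the original reference.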

To Reproduce

https://colab.research.google.com/drive/1axSx8lmpAhllnlcga6iE2eNgC1fjGK1b?usp=sharing

Expected behavior

The `on_train_batch_end` hook receives the batch on the device.

Environment

- CUDA:
  - GPU:
    - GeForce RTX 3090
    - GeForce RTX 3090
    - GeForce RTX 3090
    - GeForce RTX 3090
    - GeForce RTX 3090
    - GeForce RTX 3090
    - GeForce RTX 3090
    - GeForce RTX 3090
  - available: True
  - version: 11.1
- Packages:
  - numpy: 1.20.2
  - pyTorch_debug: False
  - pyTorch_version: 1.8.1+cu111
  - pytorch-lightning: 1.3.0rc2
  - tqdm: 4.60.0
- System:
  - OS: Linux
  - architecture: 64bit, ELF
  - processor: x86_64
  - python: 3.8.8
  - version: #64-Ubuntu SMP Wed Dec 9 08:16:25 UTC 2020

Additional context

Asked in the discussion:

Metadata

Labels

- bug (Something isn't working)
- help wanted (Open to be worked on)
- priority: 1 (Medium priority task)
