
[question] How to override how self.train_transform is applied to batches in LightningDataModule. #3087

@akashpalrecha

Description

❓ Questions and Help

What is your question?

I want to apply the same random transformation to both X and Y (both are batches of RGB images) of a training batch using Kornia, after the batch has been moved to the GPU (or TPU).

I cannot figure out what part of PL I need to override to make this work.

As far as Kornia is concerned, I know how to capture the state of a random transform and reuse it on another batch. But I cannot figure out where inside LightningDataModule I am supposed to write that code so that it works properly even for multi-GPU training.
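For concreteness, the Kornia part I already have working looks roughly like this (a minimal sketch; I'm relying on Kornia augmentations storing their last sampled parameters in `_params` and accepting a `params=` argument to replay them):

```python
import torch
import kornia.augmentation as K

aug = K.RandomAffine(degrees=30.0, p=1.0)

x = torch.rand(8, 3, 64, 64)  # input batch
y = torch.rand(8, 3, 64, 64)  # target batch, same spatial layout

x_aug = aug(x)                      # samples random parameters
y_aug = aug(y, params=aug._params)  # replays the same parameters on y
```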

Code

What have you tried?

I can simply include the augmentation step in the LightningModule's training step after receiving the batches, but that would decouple my LightningDataModule from its transforms, which I don't want to do. A sketch of that workaround is below.
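This is roughly what I mean (not my actual model; `LitSegmenter` and the single-conv network are placeholders for illustration):

```python
import kornia.augmentation as K
import pytorch_lightning as pl
import torch
import torch.nn.functional as F


class LitSegmenter(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
        # The augmentation now lives in the model, not the DataModule
        self.aug = K.RandomAffine(degrees=30.0, p=1.0)

    def training_step(self, batch, batch_idx):
        x, y = batch  # batch is already on the GPU/TPU at this point
        x = self.aug(x)
        y = self.aug(y, params=self.aug._params)  # same params on the target
        return F.mse_loss(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```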

Any help would be greatly appreciated! 🙂

On another note, the documentation isn't very clear on how the self.train_transform property of LightningDataModule is applied to batches of data, or how that behavior can be changed (which is essentially what my question is).
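For what it's worth, the kind of hook I'm imagining would look like the sketch below. Newer Lightning releases appear to expose an on_after_batch_transfer hook on LightningDataModule that runs after the batch reaches the device; I don't think it exists in 0.8.5, so this is an assumption about where such code could live, not a confirmed answer:

```python
import kornia.augmentation as K
import pytorch_lightning as pl


class PairedAugDataModule(pl.LightningDataModule):
    def __init__(self):
        super().__init__()
        # Batch-level augmentation meant to run on the device
        self.train_transform = K.RandomAffine(degrees=30.0, p=1.0)

    def on_after_batch_transfer(self, batch, dataloader_idx):
        # Runs after the batch is moved to the GPU/TPU; under DDP each
        # process only sees its own shard, so this stays multi-GPU safe.
        # A real implementation would skip augmentation outside training
        # (e.g. by checking self.trainer.training).
        x, y = batch
        x = self.train_transform(x)
        y = self.train_transform(y, params=self.train_transform._params)
        return x, y
```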

What's your environment?

  • OS: Ubuntu 16.04 LTS
  • Packaging: conda
  • Version: 0.8.5
