Conversation

@rohitgr7
Contributor

@rohitgr7 rohitgr7 commented Sep 26, 2020

What does this PR do?

Fixes #3399
Fixes #3087
Fixes #3461
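
For context, a minimal sketch of how the new hooks might be used (hook names are taken from the PR title; the exact signatures and the LightningDataModule placement shown here are assumptions, not necessarily the merged API):

import torch
from pytorch_lightning import LightningDataModule

class BatchHooksDataModule(LightningDataModule):
    def before_batch_transfer(self, batch):
        # assumed hook: called before the batch is moved to the target device (still on CPU)
        x, y = batch
        x = (x - x.mean()) / (x.std() + 1e-6)  # cheap CPU-side normalization
        return x, y

    def after_batch_transfer(self, batch):
        # assumed hook: called after the batch is on the target device (e.g. GPU)
        x, y = batch
        x = x + 0.01 * torch.randn_like(x)  # illustrative device-side noise augmentation
        return x, y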

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@mergify mergify bot requested a review from a team September 28, 2020 20:09
@mergify
Contributor

mergify bot commented Sep 29, 2020

This pull request is now in conflict... :(

@edenlightning
Contributor

Hi @rohitgr7! Mind rebasing your PR?

@rohitgr7
Contributor Author

Hi @rohitgr7! Mind rebasing your PR?

Will probably complete this PR this week.

@stale

stale bot commented Nov 4, 2020

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. If you need further help see our docs: https://pytorch-lightning.readthedocs.io/en/latest/CONTRIBUTING.html#pull-request or ask the assistance of a core contributor here or on Slack. Thank you for your contributions.

@stale stale bot added the won't fix This will not be worked on label Nov 4, 2020
@rohitgr7 rohitgr7 removed the won't fix This will not be worked on label Nov 5, 2020
@stale

stale bot commented Nov 19, 2020

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. If you need further help see our docs: https://pytorch-lightning.readthedocs.io/en/latest/CONTRIBUTING.html#pull-request or ask the assistance of a core contributor here or on Slack. Thank you for your contributions.

@stale stale bot added the won't fix This will not be worked on label Nov 19, 2020
@rohitgr7 rohitgr7 removed the won't fix This will not be worked on label Nov 19, 2020
Collaborator

@Borda Borda left a comment

LGTM. Is it ready for review (apart from the very last reverting actions needed)?

@Borda Borda added the feature Is an improvement or enhancement label Nov 20, 2020
@Borda Borda added this to the 1.1 milestone Nov 20, 2020
@rohitgr7
Contributor Author

LGTM. Is it ready for review (apart from the very last reverting actions needed)?

Not yet, @Borda. A little busy this week; will try to complete this ASAP.

@rohitgr7 rohitgr7 marked this pull request as draft November 21, 2020 13:05
@rohitgr7 rohitgr7 force-pushed the feature/before_after_batch_hook branch from 329f9d0 to 0f28e24 on November 21, 2020 16:51
@codecov

codecov bot commented Nov 21, 2020

Codecov Report

Merging #3671 (6ba5dc3) into master (5157ba5) will decrease coverage by 2%.
The diff coverage is 85%.

@@           Coverage Diff           @@
##           master   #3671    +/-   ##
=======================================
- Coverage      93%     91%    -2%     
=======================================
  Files         160     160            
  Lines       11340   11341     +1     
=======================================
- Hits        10550   10270   -280     
- Misses        790    1071   +281     

@rohitgr7 rohitgr7 changed the title [WIP] Add before_batch_transfer and after_batch_transfer hooks in LightningDataModule Add before_batch_transfer and after_batch_transfer hooks Nov 21, 2020
@rohitgr7 rohitgr7 marked this pull request as ready for review November 21, 2020 18:00
@rohitgr7 rohitgr7 force-pushed the feature/before_after_batch_hook branch from bc0121c to 8ddc2d7 on February 16, 2021 21:22
@mergify mergify bot removed the has conflicts label Feb 17, 2021
@williamFalcon williamFalcon enabled auto-merge (squash) February 17, 2021 17:04
@rohitgr7 rohitgr7 requested a review from awaelchli February 17, 2021 20:25
@williamFalcon williamFalcon merged commit bcc0004 into master Feb 18, 2021
@williamFalcon williamFalcon deleted the feature/before_after_batch_hook branch February 18, 2021 11:58
@leifdenby

Thanks for adding these hooks! I would like to use them to apply transforms across my entire batch during training. How would I find out whether a batch belongs to the training, validation, or test set? I want to make sure I'm only applying augmentations during training. I had a look to see if I could get that from a state variable within the trainer, but I don't think I can access it within the methods called by these hooks. Thank you again, it's really a joy porting my code to pytorch-lightning.

@SeanNaren
Contributor

Thanks for adding these hooks! I would like to use them to apply transforms across my entire batch during training. How would I find out whether a batch belongs to the training, validation, or test set? I want to make sure I'm only applying augmentations during training. I had a look to see if I could get that from a state variable within the trainer, but I don't think I can access it within the methods called by these hooks. Thank you again, it's really a joy porting my code to pytorch-lightning.

Hey @leifdenby! Do you mind opening a new question issue or asking in the PyTorch Lightning Slack channel? Primarily, I don't want this getting lost as this PR is already closed :)

@rohitgr7
Contributor Author

rohitgr7 commented Mar 3, 2021

Hey @leifdenby, you can use the boolean flags self.trainer.training / self.trainer.validating / self.trainer.testing for this purpose. In case you are using .tune or .predict, you can use self.trainer.tuning / self.trainer.predicting respectively.

def transfer_batch_to_device(self, batch, ...):
    # pick the transforms based on the current trainer state
    if self.trainer.testing:
        return test_transforms(batch)
    if self.trainer.training:
        return train_transforms(batch)
    ...

Will add this to the docs for convenience.
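
For reference, the same state check inside one of the newly added hooks might look like the sketch below (hook name taken from the PR title; the exact signature and hook placement are assumptions, not necessarily the merged API):

import torch
import pytorch_lightning as pl

class AugmentingModel(pl.LightningModule):
    def after_batch_transfer(self, batch):
        # assumed signature; augment only while training
        if self.trainer.training:
            x, y = batch
            return x + 0.01 * torch.randn_like(x), y  # illustrative on-device augmentation
        return batch

Keeping the check inside the hook means validation and test batches pass through untouched.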

