This repository was archived by the owner on Mar 21, 2024. It is now read-only.

Conversation

@dccastro
Member

@dccastro dccastro commented Feb 7, 2022

Create a MONAI-style transform to allow subsampling tensors/arrays/lists to a given maximum length. This is to help enforce max_bag_size in a MIL setting after the full data has already been loaded (e.g. after caching).

Piggybacking on this PR, support has also been added for mean pooling in the DeepMIL pipeline.
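The subsampling idea described above can be sketched as a MONAI-style callable transform. This is a hypothetical minimal version in plain Python; the class name, signature, and the actual tensor handling in the repository may differ.

```python
import random
from typing import List, Sequence, TypeVar

T = TypeVar("T")

class Subsample:
    """MONAI-style transform: randomly subsample a sequence to at most
    ``max_size`` elements (hypothetical sketch of the PR's transform)."""

    def __init__(self, max_size: int) -> None:
        if max_size < 1:
            raise ValueError("max_size must be positive")
        self.max_size = max_size

    def __call__(self, data: Sequence[T]) -> List[T]:
        # If the bag already fits, return it unchanged (as a list copy).
        if len(data) <= self.max_size:
            return list(data)
        # Otherwise draw max_size indices without replacement, keeping
        # the original ordering of the selected elements.
        indices = random.sample(range(len(data)), self.max_size)
        return [data[i] for i in sorted(indices)]
```

Applying `Subsample(max_size=3)` to a bag of 10 tiles would return 3 of them in their original order, which is how `max_bag_size` could be enforced after the full bag has already been loaded or cached.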

@dccastro dccastro changed the title [WIP] Add subsampling transform [WIP] Add subsampling transform and mean pooling Feb 8, 2022
@dccastro dccastro changed the title [WIP] Add subsampling transform and mean pooling Add subsampling transform and mean pooling Feb 18, 2022
@dccastro dccastro marked this pull request as ready for review February 18, 2022 15:17
  instance_features = self.encoder(instances)  # N x L x 1 x 1
  attentions, bag_features = self.aggregation_fn(instance_features)  # K x N | K x L
- bag_features = bag_features.view(-1, self.num_encoding * self.pool_out_dim)
+ bag_features = bag_features.view(1, -1)
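For reference, the mean pooling added in this PR amounts to an aggregation with uniform attention over the N instances in a bag. A minimal numpy sketch (the real `MeanPoolingLayer` operates on PyTorch tensors and its exact interface may differ):

```python
import numpy as np

def mean_pooling(instance_features: np.ndarray):
    """Hypothetical mean-pooling aggregation.

    instance_features: array of shape (N, L), one row per instance.
    Returns (attentions, bag_features) mirroring the K x N | K x L
    pair produced by aggregation_fn in the snippet above, with K = 1.
    """
    n = instance_features.shape[0]
    attentions = np.full((1, n), 1.0 / n)          # uniform weights over instances
    bag_features = attentions @ instance_features  # (1, L): the per-feature mean
    return attentions, bag_features
```

Because every instance gets the same weight, the returned bag feature is simply the mean of the instance features, and any encoder-specific shape hints passed to the layer can be ignored.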
Contributor

why was this changed here?

Member Author

I believe this is a more robust way to reshape the outputs, without relying on num_encoding and pool_out_dim provided by the encoder and pooling components. In particular, MeanPoolingLayer ignores all arguments passed to it, so this operation would have failed if we expected a different shape here.
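The robustness point can be illustrated with numpy (torch's `view` behaves analogously for contiguous tensors). The sizes below are hypothetical; the point is that `reshape(1, -1)` needs no attributes from the encoder or pooling layer:

```python
import numpy as np

# Hypothetical sizes for a single bag's aggregated features.
num_encoding, pool_out_dim = 8, 4
bag_features = np.arange(num_encoding * pool_out_dim, dtype=np.float32)

# Old reshape: relies on num_encoding and pool_out_dim being reported
# correctly by the encoder/pooling components.
old = bag_features.reshape(-1, num_encoding * pool_out_dim)

# New reshape: flattens to a single row regardless of those attributes,
# so it still works when a layer (e.g. mean pooling) ignores them.
new = bag_features.reshape(1, -1)

assert old.shape == new.shape == (1, num_encoding * pool_out_dim)
```

Both produce the same (1, N) row here, but only the second keeps working when the advertised `num_encoding * pool_out_dim` does not match the actual feature size.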

@dccastro dccastro merged commit e2ec5cc into main Feb 21, 2022
@dccastro dccastro deleted the dacoelh/subsample_tiles branch February 21, 2022 11:24

4 participants