
Conversation

@vincentqb (Contributor) commented Jan 2, 2020

Closes #383

waveform, sample_rate = torchaudio.load(self.test_filepath)

# Single then transform then batch
expected = transforms.MelSpectrogram()(waveform).repeat(3, 1, 1, 1)
@cpuhrsch (Contributor) commented Jan 6, 2020

This is repetitive. I'm wondering if there is a way of creating a test generator that explicitly exercises this decorator: a function that accepts a function with some args and kwargs and assumes that the first input is batchable (so it applies various reshapes to args[0], etc.)

def _gen_batchable(self, func, *args, **kwargs):
    self.assertEqual(func(*args, **kwargs),
                     func(*((args[0].reshape(3, -1, -1, -1),) + args[1:]), **kwargs))

(not sure on the reshape behavior, could also be a lambda).

@vincentqb (Contributor, Author) replied

I followed the format that I used for testing jitability and simply introduced common functions to test batching. Thoughts?

@cpuhrsch (Contributor) replied

Yeah I'm mostly just trying to iterate on top of that. Might be one of these non-problems I'm trying to fix.
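
For context, here's a minimal sketch of what such a shared batch-consistency helper could look like (the name assert_batch_consistency and the fixed batch size of 3 are illustrative, not taken from this PR):

import torch

def assert_batch_consistency(transform, waveform):
    # Stack three copies of the (channel, time) input along a new leading batch dim,
    # apply the transform per item and to the whole batch, and compare the results.
    batch = waveform.repeat(3, 1, 1)
    expected = torch.stack([transform(item) for item in batch])
    computed = transform(batch)
    assert torch.allclose(expected, computed)

Usage would be e.g. assert_batch_consistency(transforms.MelSpectrogram(), waveform), mirroring the repeat(3, 1, 1, 1) comparison in the snippet above.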

Args:
waveform (torch.Tensor): Tensor of audio of dimension (channel, time).
waveform (torch.Tensor): Tensor of audio of dimension (..., time).
@cpuhrsch (Contributor) commented

Maybe ([...], channel, time) to indicate that we still follow that convention of channel x time?

@vincentqb (Contributor, Author) replied Jan 6, 2020

The ... is described in the README, so I would not use [...], as we discussed in #337.

We can add channel everywhere to emphasize the convention, but that seems repetitive to me, so I wouldn't go for it. Thoughts?

@cpuhrsch (Contributor) replied

As long as it's documented and standardized so we can apply it consistently, this is fine by me.
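
To illustrate the convention being settled on here, a quick sketch (the shapes are hypothetical; the point is that any leading dimensions before time are allowed and, with the batching added in this PR, preserved by the transform):

import torch
import torchaudio

waveform = torch.randn(3, 2, 16000)  # (batch, channel, time): the leading dims are the "..."
specgram = torchaudio.transforms.Spectrogram()(waveform)
# The transform operates along the trailing time dimension and keeps the
# leading (3, 2) dims, so specgram has shape (3, 2, freq, frames).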

@vincentqb requested a review from cpuhrsch, January 10, 2020 22:57
torch.random.manual_seed(42)
tensor = torch.rand((1, 201, 6))

n_fft = 400
@cpuhrsch (Contributor) commented

Are these parameters special to the batch test?

@vincentqb (Contributor, Author) replied

I'm using the same ones as test_torchscript_spectrogram, but we can change them, or we could set them in the class directly.


stft = torch.tensor([
[[4., 0.], [4., 0.], [4., 0.], [4., 0.], [4., 0.]],
[[0., 0.], [0., 0.], [0., 0.], [0., 0.], [0., 0.]],
@cpuhrsch (Contributor) commented

Why repeat all 0s instead of something more interesting (maybe just [0., 4.])?

@vincentqb (Contributor, Author) replied

I'm reusing the parameters from test_istft_of_ones.


# pack batch
shape = specgram.size()
specgram = specgram.reshape([-1] + list(shape[-2:]))
@cpuhrsch (Contributor) commented

Instead of reshape, I suggest you use view. reshape dispatches to view if it doesn't require a copy. If this does require a copy, I think it'd be good to know about, since it's not something we'd expect to be necessary.

@vincentqb (Contributor, Author) replied

Good idea, but that means I need to change all of the batching. I suggest changing it all at once, explicitly, in a separate PR.
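
For reference, a minimal sketch of the pack/unpack pattern using view instead of reshape (specgram stands for the tensor in the snippet above; view raises an error if a copy would be needed, whereas reshape silently falls back to a copy):

# pack batch: collapse all leading dims into a single batch dim
shape = specgram.size()
specgram = specgram.view(-1, shape[-2], shape[-1])

# ... per-item computation on (batch, freq, time) ...

# unpack batch: restore the original leading dims
specgram = specgram.view(shape[:-2] + specgram.shape[-2:])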

@cpuhrsch (Contributor) left a review comment

Looks fine; I'd just revisit reshape vs. view.

@vincentqb merged commit c456524 into pytorch:master Jan 13, 2020
@vincentqb self-assigned this Jan 13, 2020
vincentqb added a commit to vincentqb/audio that referenced this pull request Jan 13, 2020
* extend batch support

closes pytorch#383

* function for batch test.

* set seed.
@vincentqb mentioned this pull request Jan 13, 2020
vincentqb added a commit that referenced this pull request Jan 14, 2020
* extend batch support

closes #383

* function for batch test.

* set seed.

Development

Successfully merging this pull request may close these issues.

Is batching-for-transform done?

2 participants