torchaudio.transforms.Resample appears to apply only to single-channel input.
```python
tensor.size()
# torch.Size([2, 276858])

torchaudio.transforms.Resample(frequency, new_frequency)(tensor).size()
# RuntimeError: Given groups=1, weight of size 1 1 121, expected input[1, 2, 276972] to have 1 channels, but got 2 channels instead

torchaudio.transforms.Resample(frequency, new_frequency)(tensor[0, :].view(1, -1)).size()
# torch.Size([1, 27686])

torchaudio.transforms.Compose([
    torchaudio.transforms.LC2CL(),
    torchaudio.transforms.DownmixMono(),
    torchaudio.transforms.LC2CL(),
    torchaudio.transforms.Resample(frequency, new_frequency),
])(tensor).size()
# torch.Size([1, 27686])
```
Since we are in the process of standardizing #152, would it make sense to apply the resample on the last dimension, assumed to be time?
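In the meantime, one workaround is to apply the single-channel transform to each channel separately and concatenate the results along the channel dimension. Below is a minimal sketch of that pattern in plain torch; `apply_channelwise` is a hypothetical helper, and the lambda is a naive 10x decimation standing in for `Resample` purely to illustrate the shape handling, not the actual resampling kernel.

```python
import torch

def apply_channelwise(transform, tensor):
    # Hypothetical helper: run a (1, time) -> (1, new_time) transform on
    # each channel of a (channels, time) tensor, then stack the channels back.
    return torch.cat([transform(ch.view(1, -1)) for ch in tensor], dim=0)

# Stand-in for Resample(frequency, new_frequency): naive 10x decimation,
# used here only so the example is self-contained.
downsample = lambda x: x[:, ::10]

waveform = torch.randn(2, 276858)  # stereo signal, sized as in the example above
out = apply_channelwise(downsample, waveform)
print(out.size())  # torch.Size([2, 27686])
```

With a last-dimension-is-time convention, this loop would become unnecessary: the transform could broadcast over any leading (batch, channel) dimensions.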