Description
🐛 Bug
Hi everyone, I am doing some experiments with the lowpass_biquad filter in torchaudio, and I was trying to use the GPU to speed up my code. I found that running the lowpass_biquad filter on the GPU is much slower than running it on the CPU (roughly 200x slower). I am not sure whether this is a bug, but I am curious whether anyone else has seen the same behavior.
To Reproduce
import time
import torch
from torchaudio.functional import lowpass_biquad
gpu_device = torch.device('cuda:0')
cpu_device = torch.device('cpu')
sample_rate = 44100
cutoff_freq = 1000.
Q = .7
# Run in cpu
x = torch.rand(sample_rate * 10, device=cpu_device)
begin = time.time()
y = lowpass_biquad(x, sample_rate, cutoff_freq, Q)
print(f'Run in cpu: {time.time() - begin}')
# Run in gpu
x = torch.rand(sample_rate * 10, device=gpu_device)
begin = time.time()
y = lowpass_biquad(x, sample_rate, cutoff_freq, Q)
print(f'Run in gpu: {time.time() - begin}')
Output:
Run in cpu: 0.01553034782409668
Run in gpu: 21.124146461486816
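For reference, timings like this can be skewed by asynchronous CUDA kernel launches and by one-time CUDA initialization on the first call. Below is a minimal sketch of the same comparison with a warm-up call and torch.cuda.synchronize() added, in case someone wants to rule that out; it uses the same parameters and functions as the snippet above.
import time
import torch
from torchaudio.functional import lowpass_biquad

sample_rate = 44100
cutoff_freq = 1000.
Q = .7

def bench(device):
    # Same input shape and filter parameters as the reproduction above
    x = torch.rand(sample_rate * 10, device=device)
    # Warm-up call so CUDA context creation is not counted in the timing
    lowpass_biquad(x, sample_rate, cutoff_freq, Q)
    if device.type == 'cuda':
        torch.cuda.synchronize()
    begin = time.time()
    lowpass_biquad(x, sample_rate, cutoff_freq, Q)
    if device.type == 'cuda':
        # Wait for all queued GPU work to finish before reading the clock
        torch.cuda.synchronize()
    return time.time() - begin

print(f"Run in cpu: {bench(torch.device('cpu'))}")
print(f"Run in gpu: {bench(torch.device('cuda:0'))}")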
Expected behavior
Running on the GPU should be much faster than running on the CPU.
Environment
- What commands did you use to install torchaudio (conda/pip/build from source)? pip
- If you are building from source, which commit is it? not building from source
- What does torchaudio.__version__ print? (If applicable) 0.8.0
- PyTorch Version (e.g., 1.0): 1.8.0
- OS (e.g., Linux): Linux Ubuntu 18.04.3 LTS (Bionic Beaver)
- How you installed PyTorch (conda, pip, source): pip
- Build command you used (if compiling from source): not compiling from source
- Python version: 3.6.12
- CUDA/cuDNN version: 10.0.130
Versions of relevant libraries:
[pip3] python==3.6.12
[pip3] conda==4.9.2
[pip3] torch==1.8.0
[pip3] torchaudio==0.8.0