
Conversation

@carolineechen
Contributor

issue: #1197


@mthrok mthrok left a comment


Made some suggestions on testing; otherwise it looks good.

``dtype`` parameter is only effective for ``float32`` Tensor.
"""
if src.dtype == torch.float32 and dtype == None:
warnings.warn('`dtype` default value will be changed to `int16` in 0.9 release')

Can you add an instruction for suppressing this warning? Many users want to be able to suppress warnings.
Something like "Provide the `dtype` argument to suppress this warning."
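A minimal sketch of the suggested pattern (the function signature, message wording, and fallback default here are illustrative, not the PR's final code):

```python
import warnings

def save(path, src, sample_rate, dtype=None):
    # Illustrative sketch: warn only when the caller relies on the default
    # that is scheduled to change, and tell them how to opt out.
    if dtype is None:
        warnings.warn(
            '`dtype` default value will be changed to `int16` in 0.9 release. '
            'Specify `dtype` to suppress this warning.'
        )
        dtype = 'float32'  # assumed current default, kept for illustration
    return dtype

# Passing dtype explicitly suppresses the warning:
# save('out.wav', data, 8000, dtype='int16')
```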

throw std::runtime_error("dtype conversion only supported for float32 tensors");
}
const auto tgt_dtype = (tensor.dtype() == torch::kFloat32 && dtype.has_value()) ?
get_dtype_from_str(dtype.value().c_str()) : tensor.dtype();

Is the call to c_str() necessary? It looks like the resulting pointer will be implicitly converted back to std::string anyway.

sox_io_backend.save(path, data, 8000, dtype=dtype)
found = load_wav(path, normalize=False)[0]
self.assertEqual(found.dtype, getattr(torch, dtype))
assert(torch.all(found >= range[0]) and torch.all(found <= range[1]))

This range assertion does not look quite right. In the int32 case, it will still pass even if the conversion does not happen.

How about performing an exact comparison with Tensors that are small enough to check element-wise parity?

    @parameterized.expand([
         ('float32', torch.Tensor([-1.0, -0.5, 0, 0.5, 1.0]).to(torch.float32)),
         ('int32', torch.Tensor([-2147483648, ..., 0, ..., 2147483647]).to(torch.int32)),
        ...
    ])
    def test_dtype_conversion(self, dtype, expected):
        data = torch.Tensor([-1.0, -0.5, 0, 0.5, 1.0]).to(torch.float32)
        ...
        self.assertEqual(found, expected)
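As a plain-Python illustration of why the exact comparison catches a missing conversion (the scaling rule below, multiply by 2**31 and clamp, is an assumption for illustration, not necessarily the library's exact formula):

```python
def float32_to_int32(samples):
    # Hypothetical conversion: scale [-1.0, 1.0] floats into the int32
    # range and clamp at the boundaries.
    lo, hi = -2**31, 2**31 - 1
    return [max(lo, min(hi, int(x * 2**31))) for x in samples]

expected = [-2147483648, -1073741824, 0, 1073741824, 2147483647]
found = float32_to_int32([-1.0, -0.5, 0.0, 0.5, 1.0])
# An exact comparison fails if the conversion silently did not happen,
# whereas a range check (lo <= x <= hi) would still pass on the raw floats.
assert found == expected
```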

``dtype=None`` means no conversion is performed.
``dtype`` parameter is only effective for ``float32`` Tensor.
"""
if src.dtype == torch.float32 and dtype == None:

The save function has to be compatible with the TorchScript compiler. Check out the "Type Refinement" section to resolve the None case.

https://pytorch.org/docs/stable/jit_language_reference.html#optional-type-refinement

I am not sure how the TorchScript compiler processes warnings.warn. If the torchscript_compatibility test fails, you can move this warning to the _save function.
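The type-refinement pattern the comment refers to looks roughly like this (a sketch; `pick_dtype` and its default are hypothetical, but the `is None` check is the mechanism by which TorchScript narrows `Optional[str]` to `str`):

```python
from typing import Optional

def pick_dtype(dtype: Optional[str]) -> str:
    # Inside this branch TorchScript refines the type of `dtype` from
    # Optional[str] to str, so a call like torch.jit.script(pick_dtype)
    # compiles cleanly instead of rejecting the Optional.
    if dtype is None:
        return 'float32'  # assumed current default, for illustration
    return dtype
```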


@mthrok mthrok left a comment


Looks good. Can you fix the warning message?

"""
if src.dtype == torch.float32 and dtype is None:
warnings.warn(
'`dtype` default value will be changed to `int16` in 0.9 release'

Suggested change
'`dtype` default value will be changed to `int16` in 0.9 release'
'`dtype` default value will be changed to `int16` in 0.9 release. '

if src.dtype == torch.float32 and dtype is None:
warnings.warn(
'`dtype` default value will be changed to `int16` in 0.9 release'
'provide `dtype argument to suppress this warning'

Suggested change
'provide `dtype argument to suppress this warning'
'Specify `dtype` to suppress this warning.'
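The suggested changes matter because adjacent Python string literals are concatenated with no separator, so without the trailing space and period the two message parts run together:

```python
# Adjacent string literals fuse into one string at compile time.
without_space = (
    '`dtype` default value will be changed to `int16` in 0.9 release'
    'provide `dtype` argument to suppress this warning'
)
with_space = (
    '`dtype` default value will be changed to `int16` in 0.9 release. '
    'Specify `dtype` to suppress this warning.'
)
assert 'releaseprovide' in without_space   # the words run together
assert 'release. Specify' in with_space    # properly separated
```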

@mthrok mthrok merged commit 674a71d into pytorch:master Jan 28, 2021
@mthrok

mthrok commented Jan 28, 2021

Thanks!

mthrok pushed a commit to mthrok/audio that referenced this pull request Feb 26, 2021
* add new speech tutorial.

* update with a few parameters tuned. model takes less than 10 min to run now.

* feedback.

* improve GPU performance. add interactive demo at the end.

* feedback.