
Conversation

@hubenchang0515

Fix the warning "The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors" raised while running the code in Tutorials > Quickstart.

It happens because a bytes object in Python is read-only, and f.read() returns bytes.

The warning details:

/usr/local/lib/python3.9/dist-packages/torchvision/datasets/mnist.py:498: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /pytorch/torch/csrc/utils/tensor_numpy.cpp:180.)
  return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)
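For context, the warning appears when loading the dataset as in the Quickstart tutorial. A minimal sketch to reproduce it, assuming torchvision 0.10 and a fresh download of the data:

from torchvision import datasets
from torchvision.transforms import ToTensor

# Downloading and parsing FashionMNIST emits the UserWarning above,
# because the raw files are read into an immutable bytes object.
training_data = datasets.FashionMNIST(
    root="data", train=True, download=True, transform=ToTensor()
)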

The call sequence is FashionMNIST -> MNIST -> __init__ -> _load_data -> read_image_file -> read_sn3_pascalvincent_tensor.

The code in read_sn3_pascalvincent_tensor:

def read_sn3_pascalvincent_tensor(path: str, strict: bool = True) -> torch.Tensor:
    """Read a SN3 file in "Pascal Vincent" format (Lush file 'libidx/idx-io.lsh').
       Argument may be a filename, compressed filename, or file object.
    """
    # read
    with open(path, "rb") as f:
        data = f.read()
    # parse
    magic = get_int(data[0:4])
    nd = magic % 256
    ty = magic // 256
    assert 1 <= nd <= 3
    assert 8 <= ty <= 14
    m = SN3_PASCALVINCENT_TYPEMAP[ty]
    s = [get_int(data[4 * (i + 1): 4 * (i + 2)]) for i in range(nd)]
    # data is an immutable bytes object, so this array is read-only
    parsed = np.frombuffer(data, dtype=m[1], offset=(4 * (nd + 1)))
    assert parsed.shape[0] == np.prod(s) or not strict
    # with copy=False, astype may reuse the read-only buffer,
    # and torch.from_numpy then emits the warning above
    return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)
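np.frombuffer over an immutable bytes object yields a read-only array, and torch.from_numpy warns because the resulting tensor would share that non-writeable buffer. Below is a minimal sketch of the mechanism and one possible workaround, copying into a writable buffer first (an illustration only, not necessarily the change made in #4184):

import numpy as np
import torch

data = bytes(range(16))              # immutable, like the result of f.read()

# An array built over a bytes object is read-only ...
readonly = np.frombuffer(data, dtype=np.uint8)
assert not readonly.flags.writeable  # torch.from_numpy warns on arrays like this

# ... so copy into a writable bytearray before parsing
# (calling .copy() on the parsed array works as well).
writable = np.frombuffer(bytearray(data), dtype=np.uint8)
assert writable.flags.writeable
tensor = torch.from_numpy(writable)  # no UserWarning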

@facebook-github-bot
Contributor

Hi @hubenchang0515!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@NicolasHug
Member

Thanks for the PR @hubenchang0515,

Could you check whether the issue is still present on the latest master branch? I believe it should have been fixed already by #4184, which wasn't included in the latest 0.10 release.

@hubenchang0515
Author

It has already been fixed on the latest master branch.

@NicolasHug
Member

Thanks for checking @hubenchang0515. In that case, I think it's safe to close this PR, if you don't mind.

