Description
Across our transformations we sometimes hardcode the value 255. This is justified if we make sure that only `torch.uint8` images are allowed at that point, like
vision/torchvision/transforms/functional_tensor.py, lines 471 to 472 in 788ad12:

```python
if interpolation == "bicubic" and out_dtype == torch.uint8:
    img = img.clamp(min=0, max=255)
```
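For context on why that clamp exists at all, here is a small sketch (my own toy input, calling `torch.nn.functional.interpolate` directly rather than the transform): the bicubic kernel has negative lobes, so interpolating a sharp edge overshoots the input range and must be clamped before casting back to uint8.

```python
import torch
import torch.nn.functional as F

# A sharp 2x2 checkerboard in [0, 255], upsampled with bicubic.
# The negative lobes of the bicubic kernel over- and undershoot
# the original value range, hence the clamp in the transform.
img = torch.tensor([[[[0.0, 255.0], [255.0, 0.0]]]])
out = F.interpolate(img, size=(4, 4), mode="bicubic", align_corners=False)
print(out.min().item(), out.max().item())  # values fall outside [0, 255]
```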
However, there are a few instances where uint8 is implied but never enforced:
vision/torchvision/transforms/functional_tensor.py, lines 266 to 267 in 788ad12:

```python
bound = 1.0 if img1.is_floating_point() else 255.0
return (ratio * img1 + (1.0 - ratio) * img2).clamp(0, bound).to(img1.dtype)
```
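To illustrate why the implied uint8 matters here, a small sketch with my own toy values: for an int16 image, the hardcoded bound of 255 silently clips perfectly valid pixel values. The arithmetic mirrors the snippet above.

```python
import torch

# Toy int16 "images" with values well above 255.
img1 = torch.tensor([1000, 20000], dtype=torch.int16)
img2 = torch.tensor([500, 10000], dtype=torch.int16)
ratio = 0.5

# Same logic as the snippet above: since the inputs are not
# floating point, bound is 255.0, and every blended value of a
# valid int16 image is clipped to 255.
bound = 1.0 if img1.is_floating_point() else 255.0
blended = (ratio * img1 + (1.0 - ratio) * img2).clamp(0, bound).to(img1.dtype)
print(blended)  # tensor([255, 255], dtype=torch.int16)
```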
vision/torchvision/transforms/functional_tensor.py, lines 778 to 779 in 788ad12:

```python
bound = torch.tensor(1 if img.is_floating_point() else 255, dtype=img.dtype, device=img.device)
return bound - img
```

as well as:

```python
bound = 1.0 if img.is_floating_point() else 255.0
```
Instead of hardcoding 255 here, we should either use `_max_value(dtype)` or, if uint8 is actually required, enforce it.