Conversation

@pmeier (Contributor) commented Feb 8, 2023

No description provided.

@pmeier (Contributor, Author) commented Feb 9, 2023

After I freaked out a little last night seeing ~4k failing tests, @vfdev-5 rightfully pointed out that most of v1 does not support float16 either. This is why @NicolasHug only saw failures for ElasticTransform in #7159 rather than for all transforms that use _apply_grid_transform internally. ElasticTransform is an outlier for two reasons:

  1. The computation is simple enough that the only op that doesn't support float16 is grid_sample.

  2. For the reason above, we probably decided to test it explicitly:

    @pytest.mark.parametrize("dt", [None, torch.float32, torch.float64, torch.float16])

    In contrast, the tests for F.affine have this escape hatch:

    if dt == torch.float16 and device == "cpu":
        # skip float16 on CPU case
        return
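The escape hatch above can be sketched as a small predicate over the test parameters. This is an illustrative stand-in, not the actual test helper: `should_skip` is a hypothetical name, and plain strings replace the torch dtype/device objects so the sketch stays self-contained:

```python
def should_skip(dtype: str, device: str) -> bool:
    """Return True for parameter combinations the kernel cannot handle.

    Mirrors the guard in the F.affine tests: float16 is skipped on CPU
    because several CPU kernels (e.g. grid_sample) lack float16 support.
    """
    return dtype == "float16" and device == "cpu"


# Combinations a parametrized test might iterate over
cases = [
    (dt, dev)
    for dt in ("float16", "float32", "float64")
    for dev in ("cpu", "cuda")
]
kept = [case for case in cases if not should_skip(*case)]
print(kept)
```

With this guard, only the float16/CPU combination drops out; float16 on CUDA and the wider float dtypes on both devices still run.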

def sample_inputs_elastic_image_tensor():
-    for image_loader in make_image_loaders(sizes=["random"]):
+    for image_loader in make_image_loaders(
+        sizes=["random"], dtypes=[torch.uint8, torch.float16, torch.float32, torch.float64]
Contributor commented:

Suggested change
-        sizes=["random"], dtypes=[torch.uint8, torch.float16, torch.float32, torch.float64]
+        sizes=["random"], dtypes=[torch.uint8, torch.float32, torch.float64]

Shouldn't we remove float16 from here as well?

@pmeier (Contributor, Author) commented Feb 13, 2023

Handled in #7211.

@pmeier pmeier closed this Feb 13, 2023
@pmeier pmeier deleted the expand-proto-transforms-tests branch February 13, 2023 11:05