Add Float64 support #2497

@richardk53

Description

Float64 precision is currently not supported.
Why not? Adding it seems fairly straightforward: raise an exception when the device or other parts of the configuration are incompatible with double precision.
Are you planning to add support soon?
What is the best workaround in the meantime? Will it work if I do it manually via model.to(torch.float64) and cast the batches in each dataloader accordingly, or are there caveats?
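Concretely, the manual workaround I have in mind looks something like the sketch below, in plain PyTorch with a toy model and data; whether the Trainer internals preserve the dtype end to end is exactly my question:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy model, for illustration only: cast parameters and buffers to float64.
model = nn.Linear(10, 1).to(torch.float64)

# Toy data, created directly in float64 so the dataloader yields float64 batches.
dataset = TensorDataset(
    torch.randn(32, 10, dtype=torch.float64),
    torch.randn(32, 1, dtype=torch.float64),
)
loader = DataLoader(dataset, batch_size=8)

# Alternative: make float64 the default dtype for newly created float tensors.
# torch.set_default_dtype(torch.float64)

for x, y in loader:
    pred = model(x)  # stays in float64 end to end
    loss = nn.functional.mse_loss(pred, y)
    loss.backward()

assert pred.dtype == torch.float64
```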
