Pull Request resolved: #6488
## Changes
Move the following files to the root directory of Vulkan backend:
* `backends/vulkan/partitioner/supported_ops.py` -> `backends/vulkan/op_registry.py`
* `backends/vulkan/_passes/custom_ops_defs.py` -> `backends/vulkan/custom_ops_lib.py`
In the new `op_registry.py` file, the way operator features are specified has been reworked to capture much more detail about each operator's Vulkan implementation; see the new `OpFeatures` class for details. An example of registering a new operator with the export flow:
```python
@update_features(
    [
        exir_ops.edge.aten._log_softmax.default,
        exir_ops.edge.aten._softmax.default,
        exir_ops.edge.aten.mean.dim,
        exir_ops.edge.aten.sum.dim_IntList,
        exir_ops.edge.aten.amax.default,
        exir_ops.edge.aten.amin.default,
    ]
)
def register_reduce_op(features: OpFeatures):
    features.texture_impl = TextureImplFeatures(
        uses_packed_dim=True,
    )
    features.resize_fn = True

    def check_reduce_node(node: torch.fx.Node) -> bool:
        # Only reductions over a single dim with keepdim=True are supported.
        dim_list = node.args[1]
        assert isinstance(dim_list, list)
        if len(dim_list) != 1:
            return False

        keepdim = node.args[2]
        assert isinstance(keepdim, bool)
        if not keepdim:
            return False

        return True

    features.check_node_fn = check_reduce_node
    return features
```
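For context, downstream consumers such as the partitioner can then query the registered features instead of hard-coding per-op logic. Below is a minimal, self-contained sketch of that pattern; `OpFeaturesSketch`, `_registry`, and `node_is_supported` are illustrative names assumed for this example, not the actual API exposed by `op_registry.py`.

```python
# Illustrative sketch only: OpFeaturesSketch, _registry, and node_is_supported
# are hypothetical names, not the real op_registry.py API.
from typing import Callable, Dict, Optional

import torch


class OpFeaturesSketch:
    def __init__(self) -> None:
        self.texture_impl: Optional[object] = None
        self.resize_fn: bool = False
        self.check_node_fn: Optional[Callable[[torch.fx.Node], bool]] = None


# Maps an operator overload (e.g. exir_ops.edge.aten.mean.dim) to its features.
_registry: Dict[object, OpFeaturesSketch] = {}


def node_is_supported(node: torch.fx.Node) -> bool:
    """Single query point for the partitioner, replacing ad-hoc per-op checks."""
    features = _registry.get(node.target)
    if features is None or features.texture_impl is None:
        # The operator has no registered Vulkan texture implementation.
        return False
    if features.check_node_fn is not None:
        # Run the operator-specific node validation, if one was registered.
        return features.check_node_fn(node)
    return True
```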
## Rationale
The purpose of these changes is to centralize operator definitions so that there is a common source of truth about the capabilities of operator implementations in Vulkan. This way, the partitioner does not have to implement ad-hoc functions for specific operators (e.g. `is_valid_to_copy`), and graph transforms do not have to maintain their own operator metadata (e.g. `USES_WEIGHTS` in `insert_prepack_nodes`).
ghstack-source-id: 250279709
@exported-using-ghexport
Differential Revision: [D64915640](https://our.internmc.facebook.com/intern/diff/D64915640/)
Co-authored-by: Stephen Jia <[email protected]>