Speed up the inference of saved_model(s). Fixes #5847 (#5848)
* Speed up the inference of saved_model(s).
Signed-off-by: darth-vader-lg <[email protected]>
* Fixed TensorFlowTransform fitting problem.
- Fixed the exception thrown while fitting data with more than one input tensor, by following the OnnxTransformer schema for creating the data view getters (see the sketch below).
Signed-off-by: darth-vader-lg <[email protected]>
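A minimal sketch of the per-column getter pattern referred to above. The types `ITensorValueGetter`, `InputColumnInfo`, and `TensorGetterFactory` are hypothetical stand-ins, not ML.NET's internal API; the point is only that one getter is created per input tensor, as OnnxTransformer does, instead of assuming a single input.

```csharp
using System;

// Hypothetical stand-in for the per-tensor getter abstraction.
public interface ITensorValueGetter
{
    void BufferTrainingData();   // copies the current row's value into the tensor buffer
}

// Hypothetical description of one declared input tensor / data view column.
public sealed class InputColumnInfo
{
    public string Name { get; set; }
}

public static class TensorGetterFactory
{
    // Build one value getter per input column, so models with several
    // input tensors no longer fail during fitting.
    public static ITensorValueGetter[] CreateGetters(
        InputColumnInfo[] inputs,
        Func<InputColumnInfo, ITensorValueGetter> createGetter)
    {
        var getters = new ITensorValueGetter[inputs.Length];
        for (int i = 0; i < inputs.Length; i++)
            getters[i] = createGetter(inputs[i]);  // one getter per input tensor
        return getters;
    }
}
```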
* Dispose of the cached tensors in the TensorFlowTransformer.
- The cached tensors are now disposed at the end of each inference operation (see the sketch below).
Signed-off-by: darth-vader-lg <[email protected]>
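An illustrative sketch of the disposal pattern described above, assuming a generic `IDisposable` tensor type rather than the actual tensor class used by TensorFlowTransformer: every tensor cached while running the model is released once the inference call has finished, so native memory is not held between calls.

```csharp
using System;
using System.Collections.Generic;

public static class InferenceExample
{
    // runModel is a placeholder for the actual session run; it may add
    // intermediate/output tensors to the supplied cache list.
    public static void RunInference(Func<List<IDisposable>, float[]> runModel)
    {
        var cachedTensors = new List<IDisposable>();
        try
        {
            float[] outputs = runModel(cachedTensors);
            // ... copy outputs into the destination buffers ...
        }
        finally
        {
            // Dispose every cached tensor at the end of the inference operation.
            foreach (var tensor in cachedTensors)
                tensor.Dispose();
            cachedTensors.Clear();
        }
    }
}
```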