/// | Required NuGet in addition to Microsoft.ML | Microsoft.ML.OnnxTransformer (always), either Microsoft.ML.OnnxRuntime 1.3.0 (for CPU processing) or Microsoft.ML.OnnxRuntime.Gpu 1.3.0 (for GPU processing if GPU is available) |
/// | Exportable to ONNX | No |
///
/// To create this estimator, use the following APIs:
/// Supports inferencing of models in ONNX 1.6 format (opset 11), using the [Microsoft.ML.OnnxRuntime](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime/) library.
/// Models are scored on CPU if the project references Microsoft.ML.OnnxRuntime and on the GPU if the project references Microsoft.ML.OnnxRuntime.Gpu.
/// Every project using the OnnxScoringEstimator must reference one of the above two packages.
///
/// To run on a GPU, use the
/// NuGet package [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/) instead of the Microsoft.ML.OnnxRuntime package (which is for CPU processing). Microsoft.ML.OnnxRuntime.Gpu
/// requires a [CUDA supported GPU](https://developer.nvidia.com/cuda-gpus#compute), the [CUDA 10.1 Toolkit](https://developer.nvidia.com/cuda-downloads), and [cuDNN 7.6.5](https://developer.nvidia.com/cudnn) (as indicated on [Onnxruntime's documentation](https://github.com/Microsoft/onnxruntime#default-gpu-cuda)).
/// When creating the estimator through [ApplyOnnxModel](xref:Microsoft.ML.OnnxCatalog.ApplyOnnxModel*), set the parameter 'gpuDeviceId' to a valid non-negative integer. Typical device ID values are 0 or 1. If the GPU device isn't found and 'fallbackToCpu = true', the estimator will run on the CPU; if the GPU device isn't found and 'fallbackToCpu = false', the estimator will throw an exception.
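///
/// A minimal creation sketch (the model path and column names below are hypothetical; 'gpuDeviceId' and 'fallbackToCpu' behave as described above):
///
/// ```csharp
/// using Microsoft.ML;
///
/// var mlContext = new MLContext();
///
/// // Score on GPU device 0 if available; fall back to CPU otherwise.
/// var pipeline = mlContext.Transforms.ApplyOnnxModel(
///     outputColumnNames: new[] { "output" },  // hypothetical output tensor name
///     inputColumnNames: new[] { "input" },    // hypothetical input tensor name
///     modelFile: "model.onnx",                // hypothetical path to an ONNX model
///     gpuDeviceId: 0,
///     fallbackToCpu: true);
/// ```
///
/// Setting 'fallbackToCpu: false' instead makes a missing GPU device a hard error rather than a silent slowdown.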
///
/// The inputs and outputs of the ONNX model must be of Tensor type. Sequence and Map types are not yet supported.
///
/// OnnxRuntime works on 64-bit Windows, macOS, and Ubuntu 16.04 Linux platforms.
/// Visit [ONNX Models](https://github.com/onnx/models) to see a list of readily available models to get started with.
/// Refer to [ONNX](http://onnx.ai) for more information.