docs/commands.md: 9 additions & 1 deletion
@@ -221,7 +221,7 @@ An array of alternating key-value pairs as follows:
1. **MINBATCHSIZE**: The minimum size of any batch of incoming requests.
1. **INPUTS**: array reply with one or more names of the model's input nodes (applicable only for TensorFlow models)
1. **OUTPUTS**: array reply with one or more names of the model's output nodes (applicable only for TensorFlow models)
-1. **BLOB**: a blob containing the serialized model (when called with the `BLOB` argument) as a String
+1. **BLOB**: a blob containing the serialized model (when called with the `BLOB` argument) as a String. If the size of the serialized model exceeds `MODEL_CHUNK_SIZE` (see the `AI.CONFIG` command), an array of chunks is returned instead; the full serialized model can be obtained by concatenating the chunks.
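A client can reassemble a chunked `BLOB` reply by concatenating the chunks in order. A minimal Python sketch (the helper name `assemble_model_blob` is hypothetical; it assumes the `BLOB` value arrives either as a single byte string or as a list of byte-string chunks, as a typical Redis client would deliver it):

```python
def assemble_model_blob(blob_reply):
    # The BLOB field is a single byte string for small models, or an
    # array of byte-string chunks when the model exceeds MODEL_CHUNK_SIZE.
    if isinstance(blob_reply, (list, tuple)):
        return b"".join(blob_reply)  # concatenate chunks in order
    return blob_reply
```

Because the chunks are returned in order, plain concatenation recovers the original serialized model byte-for-byte.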
**Examples**
@@ -721,6 +721,7 @@ _Arguments_
* **TFLITE**: The TensorFlow Lite backend
* **TORCH**: The PyTorch backend
* **ONNX**: The ONNXRuntime backend
+* **MODEL_CHUNK_SIZE**: Sets the size of the chunks (in bytes) into which model payloads are split for serialization, replication, and `MODELGET`. The default is `511 * 1024 * 1024`.
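The number of chunks a serialized model is split into follows directly from the configured chunk size. A minimal Python sketch (the helper `num_chunks` is hypothetical, assuming the default chunk size of `511 * 1024 * 1024` bytes stated above):

```python
import math

# Default MODEL_CHUNK_SIZE in bytes (511 MiB), per the option above.
DEFAULT_MODEL_CHUNK_SIZE = 511 * 1024 * 1024

def num_chunks(model_size_bytes, chunk_size=DEFAULT_MODEL_CHUNK_SIZE):
    # A model of this size is split into ceil(size / chunk_size) pieces;
    # even an empty payload occupies at least one chunk.
    return max(1, math.ceil(model_size_bytes / chunk_size))
```

With the default setting, any model up to 511 MiB is returned as a single blob; larger models come back as an array of chunks.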
_Return_
@@ -748,3 +749,10 @@ This loads the PyTorch backend with a full path: