
AI SDK, Tutorials and guides

Introduction to AI SDK

The AI Software Development Kit, or AI SDK for short, is a set of Python libraries. These libraries provide building blocks for automating the creation, packaging, and testing of inference pipelines for the AI Inference Server.

You can consider AI SDK to be the entry point into Siemens' Industrial AI portfolio. AI SDK supports several steps of a machine learning workflow, such as packaging a model and its dependencies, verifying and testing the package locally, and creating the inference pipeline package that runs your model on AI Inference Server.

  • AI SDK version 2.4.0 and above is available on pypi.org under the name simaticai.
  • Previous versions can be downloaded from

Tutorials and guides

This collection of guidelines, how-to examples, and use-case-specific solutions contains all the information and dependencies you need for a quick and smooth start with AI SDK.

We recommend studying the End-to-end tutorials first.
They guide you through a notebook-based ML workflow, starting with training an example model, and show the recommended way to use AI SDK to package a trained model for deployment and to test the resulting packages.
The End-to-End tutorials cover the following machine learning workflow steps:

  • Training data preparation
  • Training models
  • Packaging models as an inference pipeline
  • Testing of packaged inference pipelines
  • Generating the inference pipeline for AI@Edge
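As a taste of what the packaging steps above wrap, the sketch below shows a minimal inference entrypoint of the kind a packaged pipeline would call. The `process_input` function name, the `input`/`output` payload keys, and the string-uppercasing "model" are all illustrative assumptions, not the AI SDK's actual contract; the tutorials define the real interface.

```python
# entrypoint.py -- minimal inference wrapper (illustrative sketch only;
# the function name and payload shape are assumptions, not the real contract)

def process_input(data: dict) -> dict:
    """Map one input payload to one output payload."""
    value = data["input"]       # assumed input key
    result = value.upper()      # placeholder for real model inference
    return {"output": result}   # assumed output key

print(process_input({"input": "hello"}))  # → {'output': 'HELLO'}
```

A real entrypoint would load a trained model once at startup and run it inside the function; the tutorials show how such a script and its dependencies are bundled into the pipeline package.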

You can also use these tutorials as a starting point for packaging and testing your own models.

Hint
From version 2.6.0 on, we test the tutorials only on Python 3.12 on Linux-type operating systems. All of the presented examples can be tried in other environments, but the best experience is guaranteed only in the environment above.

Hint
If you want to run these end-to-end tutorials, you need to set up your environment with the appropriate dependencies. Each tutorial explains how to set up the environment in its README.md file.
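A minimal environment setup might look like the following, assuming a Python 3 interpreter is on your PATH (the hint above recommends Python 3.12 on Linux; each tutorial's README.md names the exact dependencies to install):

```shell
# Create and activate an isolated environment for a tutorial
python3 -m venv .venv
. .venv/bin/activate
# Then install the dependencies listed in the tutorial's README.md, e.g.:
# pip install -r requirements.txt
```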

Our Howto guides provide tips and tricks on specific aspects of packaging your ML model to run on AI Inference Server.

And lastly, the Howto notebooks provide runnable examples for various steps in the ML workflow that help you master pipeline creation.