diff --git a/doc/quick_start/tutorial_for_yolov5_converted.md b/doc/quick_start/tutorial_for_yolov5_converted.md
index 70ee8421..dfc82ed8 100644
--- a/doc/quick_start/tutorial_for_yolov5_converted.md
+++ b/doc/quick_start/tutorial_for_yolov5_converted.md
@@ -72,6 +72,14 @@ mkdir -p /opt/openvino_toolkit/models/convert/public/yolov5n/FP32/
 sudo cp yolov5n.bin yolov5n.mapping yolov5n.xml /opt/openvino_toolkit/models/convert/public/yolov5n/FP32/
 ```
+# Optimize YOLOv5 to YOLOv5-INT8
+To optimize YOLOv5 to YOLOv5-INT8, refer to the quantization migration notebook:
+
+https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/111-yolov5-quantization-migration
+
+The installation guide:
+https://github.com/openvinotoolkit/openvino_notebooks/blob/main/README.md#-installation-guide
+
 # FAQ

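A note on consuming the converted model's output: YOLOv5-style detectors emit center-format boxes (cx, cy, w, h) plus an objectness score, which deployment code converts to corner format and filters by confidence before drawing results. The sketch below is plain Python for illustration only; the function names and the 0.25 threshold are assumptions, not part of this tutorial's scripts.

```python
# Illustrative sketch of YOLO-style post-processing arithmetic.
# Function names and the 0.25 threshold are assumptions, not tutorial code.

def xywh_to_xyxy(box):
    """Convert a (cx, cy, w, h) center box to (x1, y1, x2, y2) corners."""
    cx, cy, w, h = box
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def filter_by_confidence(detections, conf_threshold=0.25):
    """Keep detections whose score passes the threshold.

    Each detection is a (cx, cy, w, h, confidence) tuple.
    """
    return [d for d in detections if d[4] >= conf_threshold]

if __name__ == "__main__":
    raw = [(320, 240, 100, 50, 0.9), (100, 100, 20, 20, 0.1)]
    kept = filter_by_confidence(raw)
    print(len(kept))                  # 1
    print(xywh_to_xyxy(kept[0][:4]))  # (270.0, 215.0, 370.0, 265.0)
```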
diff --git a/doc/quick_start/tutorial_for_yolov7_converted.md b/doc/quick_start/tutorial_for_yolov7_converted.md
new file mode 100644
index 00000000..9c476634
--- /dev/null
+++ b/doc/quick_start/tutorial_for_yolov7_converted.md
@@ -0,0 +1,103 @@
+# Tutorial_For_yolov7_Converted
+
+# Introduction
+This document describes a method to convert YOLOv7 PyTorch weight files with the .pt extension to ONNX weight files, and a method to convert ONNX weight files to IR
+files using the OpenVINO model optimizer. This method can help OpenVINO users optimize YOLOv7 for deployment in practical applications.
+
+## Reference Phrase
+|Term|Description|
+|---|---|
+|OpenVINO|Open Visual Inference & Neural Network Optimization|
+|ONNX|Open Neural Network Exchange|
+|YOLO|You Only Look Once|
+|IR|Intermediate Representation|
+
+## Reference Document
+|Doc|Link|
+|---|---|
+|OpenVINO|[openvino_2_0_transition_guide](https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html)|
+|YOLOv7|[yolov7](https://github.com/WongKinYiu/yolov7)|
+
+# Convert Weight File to ONNX
+* Clone the YOLOv7 Repository from GitHub
+```
+git clone https://github.com/WongKinYiu/yolov7.git
+```
+
+* Set Up the Environment for Installing YOLOv7
+```
+cd yolov7
+python3 -m venv yolo_env         # Create a virtual python environment
+source yolo_env/bin/activate     # Activate environment
+pip install -r requirements.txt  # Install yolov7 prerequisites
+pip install onnx                 # Install ONNX
+pip install nvidia-pyindex       # Add NVIDIA PIP index
+pip install onnx-graphsurgeon    # Install GraphSurgeon
+```
+
+* Download PyTorch Weights
+```
+mkdir model_convert && cd model_convert
+wget "https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt"
+```
+
+* Convert PyTorch Weights to ONNX Weights
+The YOLOv7 repository provides an export.py script, which can be used to convert PyTorch weights to ONNX weights.
+```
+cd ..
+python3 export.py --weights model_convert/yolov7.pt
+```
+
+# Convert ONNX Files to IR Files
+After obtaining the ONNX weight file from the previous section [Convert Weight File to ONNX](#convert-weight-file-to-onnx), we can use the model optimizer to convert it to an IR file.
+
+* Install the OpenVINO Model Optimizer Environment
+To use the model optimizer, run the following commands to install the necessary components (if you are still in the yolo_env virtual environment, run the **deactivate** command to exit the environment, or start a new terminal).
+
+```
+python3 -m venv ov_env                    # Create OpenVINO virtual environment
+source ov_env/bin/activate                # Activate environment
+python -m pip install --upgrade pip       # Upgrade pip
+pip install openvino[onnx]==2022.3.0      # Install OpenVINO for ONNX
+pip install openvino-dev[onnx]==2022.3.0  # Install OpenVINO Dev Tool for ONNX
+```
+
+* Generate the IR File
+```
+cd model_convert
+mo --input_model yolov7.onnx
+```
+Then we will get three files: yolov7.xml, yolov7.bin, and yolov7.mapping under the model_convert folder.
+
+# Move to the Recommended Model Path
+```
+cd ~/yolov7/model_convert
+mkdir -p /opt/openvino_toolkit/models/convert/public/yolov7/FP32/
+sudo cp yolov7.bin yolov7.mapping yolov7.xml /opt/openvino_toolkit/models/convert/public/yolov7/FP32/
+```
+
+# Optimize YOLOv7 to YOLOv7-INT8
+To optimize YOLOv7 to YOLOv7-INT8, refer to the optimization notebook:
+
+https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/226-yolov7-optimization
+
+The installation guide:
+https://github.com/openvinotoolkit/openvino_notebooks/blob/main/README.md#-installation-guide
+
+
+# FAQ
+

+

+How to install the python3-venv package?
+
+On Debian/Ubuntu systems, install the python3-venv package using the following commands.
+```
+apt-get update
+apt-get install python3-venv
+```
+You may need to use sudo with these commands. After installing, recreate your virtual environment.
+
+

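Raw YOLOv7 outputs typically contain many overlapping candidate boxes, so deployment code applies non-maximum suppression (NMS) after confidence filtering. Below is a minimal plain-Python sketch of IoU and greedy NMS; the (x1, y1, x2, y2) box format and the 0.45 threshold are assumptions for illustration, not values taken from this tutorial.

```python
# Illustrative sketch: IoU and greedy non-maximum suppression in plain Python.
# Box format and the 0.45 threshold are assumptions for illustration.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy NMS: keep highest-scoring boxes, drop heavy overlaps.

    Returns the kept indices in descending score order.
    """
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep
```

For example, `nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)], [0.9, 0.8, 0.7])` drops the second box (IoU with the first is about 0.68) and keeps indices 0 and 2.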
diff --git a/doc/quick_start/tutorial_for_yolov8_converted.md b/doc/quick_start/tutorial_for_yolov8_converted.md
new file mode 100644
index 00000000..5d9793fe
--- /dev/null
+++ b/doc/quick_start/tutorial_for_yolov8_converted.md
@@ -0,0 +1,99 @@
+# Tutorial_For_yolov8_Converted
+
+# Introduction
+Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility.
+YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation,
+image classification, and pose estimation tasks.
+This document describes a method to export YOLOv8 nano PyTorch weight files with the .pt extension to OpenVINO IR
+files using the ultralytics export tool. This method can help OpenVINO users optimize YOLOv8 for deployment in practical applications.
+
+## Documentation
+ +See below for a quickstart installation and usage example, and see the [YOLOv8 Docs](https://docs.ultralytics.com) for full documentation on training, validation, prediction and deployment. + +
+#### Install
+
+Pip install the ultralytics package including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a [**Python>=3.7**](https://www.python.org/) environment with [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
+
+```bash
+mkdir -p yolov8 && cd yolov8
+apt install python3.8-venv
+python3 -m venv openvino_env
+source openvino_env/bin/activate
+pip install ultralytics
+```
+
+#### Train
+Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see the Configuration page.
+YOLOv8 may be used directly in the Command Line Interface (CLI) with a `yolo` command:
+
+```CLI
+# Build a new model from YAML and start training from scratch
+yolo detect train data=coco128.yaml model=yolov8n.yaml epochs=100 imgsz=640
+
+# Start training from a pretrained *.pt model
+yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
+```
+
+#### Val
+
+Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the model retains its training data and arguments as model attributes.
+```CLI
+# val official model
+yolo detect val model=yolov8n.pt
+```
+
+#### Predict
+Use a trained YOLOv8n model to run predictions on images.
+```CLI
+# predict with official model
+yolo detect predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
+```
+
+#### Export
+Export a YOLOv8n model to a different format such as OpenVINO, ONNX, or CoreML.
+```CLI
+# export official model
+yolo export model=yolov8n.pt format=openvino
+```
+
+# Move to the Recommended Model Path
+```
+cd yolov8n_openvino_model
+
+mkdir -p /opt/openvino_toolkit/models/convert/public/FP32/yolov8n
+
+sudo cp yolov8* /opt/openvino_toolkit/models/convert/public/FP32/yolov8n
+```
+
+# Optimize YOLOv8n to YOLOv8n-INT8
+To optimize YOLOv8n to YOLOv8n-INT8, refer to the optimization notebook:
+
+https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/230-yolov8-optimization/230-yolov8-optimization.ipynb
+
+The installation guide:
+https://github.com/openvinotoolkit/openvino_notebooks/blob/main/README.md#-installation-guide
+
+# FAQ
+

+

+Reference links:
+
+https://github.com/ultralytics/ultralytics
+https://docs.ultralytics.com/tasks/detect/#predict
+
+
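YOLO-family exports such as yolov8n expect a fixed square input (e.g. 640x640), so images are usually letterboxed: scaled to fit, then padded to the target size. The sketch below shows only the scale-and-padding arithmetic in plain Python; it mirrors the usual YOLO-style preprocessing math and is not the ultralytics implementation.

```python
# Illustrative sketch of letterbox scale/padding arithmetic for a square
# network input. Not the ultralytics implementation; names are assumptions.

def letterbox_params(src_w, src_h, dst=640):
    """Return (scale, resized_w, resized_h, pad_x, pad_y) to fit the source
    image inside a dst x dst canvas while preserving aspect ratio."""
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) // 2  # left padding, in pixels
    pad_y = (dst - new_h) // 2  # top padding, in pixels
    return scale, new_w, new_h, pad_x, pad_y

if __name__ == "__main__":
    # A 1280x720 frame scaled into 640x640: half size, padded top and bottom.
    print(letterbox_params(1280, 720))  # (0.5, 640, 360, 0, 140)
```

The same scale and padding values are reused after inference to map predicted boxes back to the original image coordinates.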