Merged
34 changes: 29 additions & 5 deletions README.md
@@ -37,7 +37,7 @@
* [x] Age Gender Recognition
* [x] Emotion Recognition
* [x] Head Pose Estimation
* [x] Object Segmentation
* [x] Object Segmentation (Semantic & Instance)
* [x] Person Re-Identification
* [x] Vehicle Attribute Detection
* [x] Vehicle License Plate Detection
@@ -54,6 +54,7 @@

# Introduction
## Design Architecture
<p><details><summary>Architecture Design</summary>
From the view of hierarchical architecture design, the package is divided into different functional components, as shown in the picture below.

![OpenVINO_Architecture](./data/images/design_arch.PNG "OpenVINO RunTime Architecture")
@@ -94,8 +95,10 @@ See more from [here](https://github.com/openvinotoolkit/openvino) for Intel Open
- **Optimized Models** provided by the Model Optimizer component of the Intel® OpenVINO™ toolkit. It imports trained models from various frameworks (Caffe*, Tensorflow*, MxNet*, ONNX*, Kaldi*) and converts them to a unified intermediate representation file. It also optimizes topologies through node merging, horizontal fusion, batch normalization elimination, and quantization, and supports graph freezing and graph summarizing along with dynamic input freezing.
</details>
</p>
</details></p>

## Logic Flow
<p><details><summary>Logic Flow</summary>
From the view of logic implementation, the package introduces the definitions of parameter manager, pipeline, and pipeline manager. The following picture depicts how these entities work together when the corresponding program is launched.

![Logic_Flow](./data/images/impletation_logic.PNG "OpenVINO RunTime Logic Flow")
@@ -119,6 +122,7 @@ The contents in **.yaml config file** should be well structured and follow the s
**Pipeline manager** manages all the created pipelines according to the inference requests or external demands (e.g., a system exception, a resource limitation, or an end user's operation). Because it co-works with resource management and is aware of the whole framework, it can optimize performance by sharing system resources between pipelines and reducing data copies.
</details>
</p>
</details></p>

# Supported Features
## Multiple Input Components
@@ -152,12 +156,13 @@ Currently, the corresponding relation of supported inference features, models us
|Emotion Recognition| Emotion recognition based on detected face image.|[pipeline_image.yaml](./sample/param/pipeline_image.yaml)<br>[pipeline_image_video.yaml](./sample/param/pipeline_image_video.yaml)<br>[pipeline_people.yaml](./sample/param/pipeline_people.yaml)<br>[pipeline_people_ip.yaml](./sample/param/pipeline_people_ip.yaml)|[emotions-recognition-retail-0003](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/emotions-recognition-retail-0003)|
|Age & Gender Recognition| Age and gender recognition based on detected face image.|[pipeline_image.yaml](./sample/param/pipeline_image.yaml)<br>[pipeline_image_video.yaml](./sample/param/pipeline_image_video.yaml)<br>[pipeline_people.yaml](./sample/param/pipeline_people.yaml)<br>[pipeline_people_ip.yaml](./sample/param/pipeline_people_ip.yaml)|[age-gender-recognition-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/age-gender-recognition-retail-0013)|
|Head Pose Estimation| Head pose estimation based on detected face image.|[pipeline_image.yaml](./sample/param/pipeline_image.yaml)<br>[pipeline_image_video.yaml](./sample/param/pipeline_image_video.yaml)<br>[pipeline_people.yaml](./sample/param/pipeline_people.yaml)<br>[pipeline_people_ip.yaml](./sample/param/pipeline_people_ip.yaml)|[head-pose-estimation-adas-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/head-pose-estimation-adas-0001)|
|Object Detection| Object detection based on SSD-based trained models.|[pipeline_object.yaml](./sample/param/pipeline_object.yaml)<br>[pipeline_object_topic.yaml](./sample/param/pipeline_object_topic.yaml)|[mobilenet-ssd](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/public/mobilenet-ssd)|
|Object Detection| Object detection based on SSD-based trained models.|[pipeline_object.yaml](./sample/param/pipeline_object.yaml)<br>[pipeline_object_topic.yaml](./sample/param/pipeline_object_topic.yaml)|[mobilenet-ssd](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/public/mobilenet-ssd)<br>[yolov5](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/111-yolov5-quantization-migration)<br>[yolov7](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/226-yolov7-optimization)<br>[yolov8](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/230-yolov8-optimization)|
|Vehicle and License Detection| Vehicle and license detection based on Intel models.|[pipeline_vehicle_detection.yaml](./sample/param/pipeline_vehicle_detection.yaml)|[vehicle-license-plate-detection-barrier-0106](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/vehicle-license-plate-detection-barrier-0106)<br>[vehicle-attributes-recognition-barrier-0039](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/vehicle-attributes-recognition-barrier-0039)<br>[license-plate-recognition-barrier-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/license-plate-recognition-barrier-0001)|
|Object Segmentation| Object segmentation.|[pipeline_segmentation.yaml](./sample/param/pipeline_segmentation.yaml)<br>[pipeline_segmentation_image.yaml](./sample/param/pipeline_segmentation_image.yaml)<br>[pipeline_video.yaml](./sample/param/pipeline_video.yaml)|[semantic-segmentation-adas-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/semantic-segmentation-adas-0001)<br>[deeplabv3](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/public/deeplabv3)|
|Object Segmentation - Semantic| Semantic segmentation, assigning a class label to each pixel in an image.|[pipeline_segmentation.yaml](./sample/param/pipeline_segmentation.yaml)<br>[pipeline_segmentation_image.yaml](./sample/param/pipeline_segmentation_image.yaml)<br>[pipeline_video.yaml](./sample/param/pipeline_video.yaml)|[semantic-segmentation-adas-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/semantic-segmentation-adas-0001)<br>[deeplabv3](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/public/deeplabv3)|
|Object Segmentation - Instance| Instance segmentation, a combination of semantic segmentation and object detection.|[pipeline_segmentation_instance.yaml](./sample/param/pipeline_segmentation_instance.yaml)|[yolov8-seg](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/230-yolov8-optimization)<br>[mask_rcnn_inception_v2_coco_2018_01_28](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/public/mask_rcnn_inception_resnet_v2_atrous_coco)|
|Person Attributes| Person attributes based on object detection.|[pipeline_person_attributes.yaml](./sample/param/pipeline_person_attributes.yaml)|[person-attributes-recognition-crossroad-0230](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/person-attributes-recognition-crossroad-0230)<br>[person-detection-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/person-detection-retail-0013)|
|Person Reidentification|Person reidentification based on object detection.|[pipeline_person_reidentification.yaml](./sample/param/pipeline_reidentification.yaml)|[person-detection-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/person-detection-retail-0013)<br>[person-reidentification-retail-0277](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/intel/person-reidentification-retail-0277)|
|Object Segmentation Maskrcnn| Object segmentation and detection based on maskrcnn model.|[pipeline_segmentation_maskrcnn.yaml](./sample/param/pipeline_segmentation_maskrcnn.yaml)|[mask_rcnn_inception_v2_coco_2018_01_28](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/public/mask_rcnn_inception_resnet_v2_atrous_coco)|
|Object Segmentation Maskrcnn| Object segmentation and detection based on the Mask R-CNN model. [_Deprecated; `Object Segmentation - Instance` is recommended instead._]|[pipeline_segmentation_maskrcnn.yaml](./sample/param/pipeline_segmentation_maskrcnn.yaml)|[mask_rcnn_inception_v2_coco_2018_01_28](https://github.com/openvinotoolkit/open_model_zoo/tree/releases/2022/3/models/public/mask_rcnn_inception_resnet_v2_atrous_coco)|
</details>
</p>

@@ -212,6 +217,7 @@ OpenCV based image window is natively supported by the package.
To enable the window, the Image Window output should be added to the output choices in the .yaml config file. Refer to [the config file guidance](./doc/quick_start/yaml_configuration_guide.md) for more information about checking/adding this feature in your launch configuration.

## Demo Result Snapshots
<p><details><summary>Demo Snapshots</summary>
For snapshots of demo results, refer to the following pictures.

* Face detection input from standard camera
@@ -225,6 +231,7 @@ For the snapshot of demo results, refer to the following picture.

* Person reidentification input from standard camera
![person_reidentification_demo_video](./data/images/person-reidentification.gif "person reidentification demo video")
</details></p>

# Installation and Launching
## Deploy in Local Environment
@@ -240,11 +247,28 @@ For the snapshot of demo results, refer to the following picture.
* OpenVINO API 2.0: Refer to the [OpenVINO API 2.0 transition guide](https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html) for the latest API 2.0 migration details.

# FAQ
* [How to get the IR file for yolov5?](./doc/quick_start/tutorial_for_yolov5_converted.md)
* How to get the IR file for [yolov5](./doc/quick_start/tutorial_for_yolov5_converted.md) | [yolov7](./doc/quick_start/tutorial_for_yolov7_converted.md) | [yolov8](./doc/quick_start/tutorial_for_yolov8_converted.md) ?
* [How to build OpenVINO by source?](https://github.com/openvinotoolkit/openvino/wiki#how-to-build)
* [How to build RealSense by source?](https://github.com/IntelRealSense/librealsense/blob/master/doc/installation.md)
* [What is the basic command of Docker CLI?](https://docs.docker.com/engine/reference/commandline/docker/)
* [What is the canonical C++ API for interacting with ROS?](https://docs.ros2.org/latest/api/rclcpp/)
<p><details><summary>How to change the logging level?</summary>
This project provides two logging levels: *DEBUG* and *INFO*.<br>
Follow these steps to change the logging level:<br>

- Update ./openvino_wrapper_lib/CMakeLists.txt by uncommenting (for DEBUG level) or commenting out (for INFO level) this line:
```cmake
#add_definitions(-DLOG_LEVEL_DEBUG)
```
- Rebuild project<br>
Refer to the corresponding quick-start documents to rebuild this project, e.g.:<br>
```shell
source /opt/ros/<ros-distro>/setup.bash
colcon build --symlink-install
```
- Launch the OpenVINO node<br>
You will see that the logging level has changed.
</details></p>

# Feedback
* Report questions, issues and suggestions, using: [issue](https://github.com/intel/ros2_openvino_toolkit/issues).
@@ -1,3 +1,4 @@
_background
person
bicycle
car
@@ -87,4 +88,4 @@ vase
scissors
teddy_bear
hair_drier
toothbrush
toothbrush
1 change: 1 addition & 0 deletions openvino_param_lib/src/param_manager.cpp
@@ -191,6 +191,7 @@ void ParamManager::print() const
for (auto & infer : pipeline.infers) {
slog::info << "\t\tName: " << infer.name << slog::endl;
slog::info << "\t\tModel: " << infer.model << slog::endl;
slog::info << "\t\tModel-Type: " << infer.model_type << slog::endl;
slog::info << "\t\tEngine: " << infer.engine << slog::endl;
slog::info << "\t\tLabel: " << infer.label << slog::endl;
slog::info << "\t\tBatch: " << infer.batch << slog::endl;
6 changes: 5 additions & 1 deletion openvino_wrapper_lib/CMakeLists.txt
@@ -30,7 +30,7 @@ set(CMAKE_CXX_FLAGS "-std=c++17 ${CMAKE_CXX_FLAGS}")
####################################
## to get verbose log,
## then, uncomment below line
add_definitions(-DLOG_LEVEL_DEBUG)
#add_definitions(-DLOG_LEVEL_DEBUG)
####################################

# the environment variable OpenVINO_DIR can be used instead of a relative path to specify the location of the configuration file
@@ -189,6 +189,7 @@ add_library(${PROJECT_NAME} SHARED
src/inferences/head_pose_detection.cpp
src/inferences/object_segmentation.cpp
src/inferences/object_segmentation_maskrcnn.cpp
src/inferences/object_segmentation_instance.cpp
src/inferences/person_reidentification.cpp
src/inferences/person_attribs_detection.cpp
#src/inferences/landmarks_detection.cpp
@@ -209,6 +210,8 @@ add_library(${PROJECT_NAME} SHARED
src/models/head_pose_detection_model.cpp
src/models/object_segmentation_model.cpp
src/models/object_segmentation_maskrcnn_model.cpp
src/models/object_segmentation_instance_model.cpp
src/models/object_segmentation_instance_maskrcnn_model.cpp
src/models/person_reidentification_model.cpp
src/models/person_attribs_detection_model.cpp
#src/models/landmarks_detection_model.cpp
@@ -217,6 +220,7 @@ add_library(${PROJECT_NAME} SHARED
src/models/license_plate_detection_model.cpp
src/models/object_detection_ssd_model.cpp
src/models/object_detection_yolov5_model.cpp
src/models/object_detection_yolov8_model.cpp
src/outputs/image_window_output.cpp
src/outputs/ros_topic_output.cpp
src/outputs/rviz_output.cpp
@@ -0,0 +1,151 @@
// Copyright (c) 2023 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#ifndef OPENVINO_WRAPPER_LIB__INFERENCES__OBJECT_SEGMENTATION_INSTANCE_HPP_
#define OPENVINO_WRAPPER_LIB__INFERENCES__OBJECT_SEGMENTATION_INSTANCE_HPP_
#include <object_msgs/msg/object.hpp>
#include <object_msgs/msg/object_in_box.hpp>
#include <object_msgs/msg/objects_in_boxes.hpp>
#include <rclcpp/rclcpp.hpp>
#include <memory>
#include <vector>
#include <string>
#include "openvino_wrapper_lib/models/object_segmentation_instance_model.hpp"
#include "openvino_wrapper_lib/engines/engine.hpp"
#include "openvino_wrapper_lib/inferences/base_inference.hpp"
#include "openvino/openvino.hpp"
#include "opencv2/opencv.hpp"
// namespace
namespace openvino_wrapper_lib
{
/**
* @class ObjectSegmentationInstanceResult
* @brief Class for storing and processing object segmentation result.
*/
class ObjectSegmentationInstanceResult : public Result
{
public:
friend class ObjectSegmentationInstance;
explicit ObjectSegmentationInstanceResult(const cv::Rect & location);
inline std::string getLabel() const
{
return label_;
}
inline void setLabel(const std::string& label)
{
label_ = label;
}
/**
* @brief Get the confidence of the detected instance.
* @return The confidence value.
*/
inline float getConfidence() const
{
return confidence_;
}
inline void setConfidence(float conf)
{
confidence_ = conf;
}
inline cv::Mat getMask() const
{
return mask_;
}
inline void setMask(const cv::Mat& mask)
{
mask_ = mask;
}

private:
std::string label_ = "";
float confidence_ = -1;
cv::Mat mask_;
};
/**
* @class ObjectSegmentationInstance
* @brief Class to load an instance segmentation model and perform instance segmentation.
*/
class ObjectSegmentationInstance : public BaseInference
{
public:
using Result = openvino_wrapper_lib::ObjectSegmentationInstanceResult;
explicit ObjectSegmentationInstance(double);
~ObjectSegmentationInstance() override;
/**
* @brief Load the object segmentation model.
*/
void loadNetwork(std::shared_ptr<Models::ObjectSegmentationInstanceModel>);
/**
* @brief Enqueue a frame to this class.
* The frame will be buffered but not inferred yet.
* @param[in] frame The frame to be enqueued.
* @param[in] input_frame_loc The location of the enqueued frame with respect
* to the frame generated by the input device.
* @return Whether this operation is successful.
*/
bool enqueue(const cv::Mat &, const cv::Rect &) override;

/**
* @brief Start inference for all buffered frames.
* @return Whether this operation is successful.
*/
bool submitRequest() override;
/**
* @brief This function will fetch the results of the previous inference and
* stores the results in a result buffer array. All buffered frames will be
* cleared.
* @return Whether the Inference object fetches a result this time
*/
bool fetchResults() override;
/**
* @brief Get the length of the buffer result array.
* @return The length of the buffer result array.
*/
int getResultsLength() const override;
/**
* @brief Get the location of result with respect
* to the frame generated by the input device.
* @param[in] idx The index of the result.
*/
const openvino_wrapper_lib::Result * getLocationResult(int idx) const override;
/**
* @brief Show the observed detection result either through image window
or ROS topic.
*/
void observeOutput(const std::shared_ptr<Outputs::BaseOutput> & output);
/**
* @brief Get the name of the Inference instance.
* @return The name of the Inference instance.
*/
const std::string getName() const override;
const std::vector<cv::Rect> getFilteredROIs(
const std::string filter_conditions) const override;

private:
std::shared_ptr<Models::ObjectSegmentationInstanceModel> valid_model_;
std::vector<Result> results_;
int width_ = 0;
int height_ = 0;
double show_output_thresh_ = 0;

std::vector<cv::Vec3b> colors_ = {
{128, 64, 128}, {232, 35, 244}, {70, 70, 70}, {156, 102, 102}, {153, 153, 190},
{153, 153, 153}, {30, 170, 250}, {0, 220, 220}, {35, 142, 107}, {152, 251, 152},
{180, 130, 70}, {60, 20, 220}, {0, 0, 255}, {142, 0, 0}, {70, 0, 0},
{100, 60, 0}, {90, 0, 0}, {230, 0, 0}, {32, 11, 119}, {0, 74, 111},
{81, 0, 81}
};
};
} // namespace openvino_wrapper_lib
#endif // OPENVINO_WRAPPER_LIB__INFERENCES__OBJECT_SEGMENTATION_INSTANCE_HPP_