SSDMobileNetv2 implementation for OpenVINO

Saksham Sharma

Chennai, Tamil Nadu

Implementation of SSD-MobileNetv2 in OpenVINO

Project status: Published/In Market

oneAPI, Artificial Intelligence

Intel Technologies
OpenVINO, oneAPI

Code Samples [1]

Overview / Usage

This project demonstrates how to convert a TensorFlow model to run on OpenVINO and verifies that the two frameworks produce matching inference results.

Methodology / Approach

  1. Convert the model: Use the OpenVINO Model Optimizer to convert the SSD-MobileNetv2 model from TensorFlow to the Intermediate Representation (IR) format used by OpenVINO. This will generate two files: an XML file that describes the network topology and a binary file that contains the weights.

  2. Run inference with TensorFlow: Load the original SSD-MobileNetv2 model in TensorFlow and run inference on a test image. Record the results for later comparison.

  3. Run inference with OpenVINO: Load the IR files generated in step 1 into the OpenVINO Inference Engine and run inference on the same test image used in step 2. Record the results and compare them against the TensorFlow output.

Technologies Used

  1. TensorFlow: for loading the original model and running the baseline inference

  2. OpenVINO: for model conversion and inference on the converted model

Repository

https://github.com/AlexFierro9/openvino_notebooks/blob/main/notebooks/236-ssd-mobilenet-v2/236-ssd-mobilenet-v2.ipynb
