OpenVINO Ubuntu Xenial, Virtualbox and Vagrant Install, Intel NCS2 (Neural Compute Stick 2)



Download the latest VirtualBox from https://www.virtualbox.org/wiki/Downloads

Make sure to also download the Oracle VM VirtualBox Extension Pack.

Download and install Vagrant from https://www.vagrantup.com/downloads.html

The first step is to set up the VM:

vagrant up

Vagrant is configured to apply all the USB filters required to access Neural Compute Stick 1 and 2 inside the VirtualBox Ubuntu Xenial VM.

This will create a VM named "OpenVinoVM" on your host machine.

This will also automatically download OpenVINO to /home/vagrant/openvino.
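For reference, the USB passthrough part of such a Vagrantfile might look like the sketch below. This is a hypothetical fragment, not the actual file from the repo: the box name, VM name, and filter indices are assumptions, while the vendor and product IDs match the udev rules installed later in this guide.

```ruby
# Hypothetical Vagrantfile fragment: enable USB in VirtualBox and add one
# filter per Movidius (vendor ID 03e7) product ID so the NCS / NCS 2 is
# passed through to the guest.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"          # assumption: Xenial base box
  config.vm.provider "virtualbox" do |vb|
    vb.name = "OpenVinoVM"
    vb.customize ["modifyvm", :id, "--usb", "on"]
    vb.customize ["modifyvm", :id, "--usbehci", "on"]  # needs the Extension Pack
    # One filter per product ID the stick uses (boot and runtime modes)
    ["2150", "2485", "f63b"].each_with_index do |pid, i|
      vb.customize ["usbfilter", "add", i.to_s, "--target", :id,
                    "--name", "Movidius #{pid}", "--vendorid", "03e7",
                    "--productid", pid]
    end
  end
end
```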


Install OpenVINO Dependencies

cd /home/vagrant/openvino/l_openvino_toolkit_p_2019.1.094/ && sudo ./install_openvino_dependencies.sh

Install OpenVINO

cd /home/vagrant/openvino/l_openvino_toolkit_p_2019.1.094/ && sudo ./install.sh

This involves several manual steps, such as accepting the license and selecting the type of installation.

Keep the default installation path; the download location was already configured in the Vagrantfile.


Setup Vars

source /opt/intel/openvino_2019.1.094/bin/setupvars.sh

echo "source /opt/intel/openvino_2019.1.094/bin/setupvars.sh" >> /home/vagrant/.bashrc

Install UDEV Rules

These are required for the USB device to be activated and used.

sh /opt/intel/openvino_2019.1.094/install_dependencies/install_NCS_udev_rules.sh

Update the udev rules as follows:

sudo vi /etc/udev/rules.d/97-myriad-usbboot.rules

I modified MODE from the OpenVINO default of 660 to 666.

SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"

SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"

SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"

Reload UDEV

sudo udevadm control --reload-rules && sudo udevadm trigger && sudo ldconfig

Configure Model Optimizer

cd /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/install_prerequisites && sudo ./install_prerequisites.sh

If you don't want to install prerequisites for every supported framework, you can run a framework-specific script:

Example: sudo ./install_prerequisites_caffe.sh (for Caffe models)

Example: sudo ./install_prerequisites_tf.sh (for TensorFlow models)

Verify That the USB Device Is Attached

Type lsusb

You should see a USB device with the Movidius vendor ID (03e7), like:

Bus 002 Device 002: ID 03e7:2485

Finally Test

cd /opt/intel/openvino_2019.1.094/deployment_tools/demo && ./demo_squeezenet_download_convert_run.sh

This should print something like this

Image /opt/intel/openvino_2019.1.094/deployment_tools/demo/car.png

817   0.8363345   sports car, sport car
479   0.0419131   car wheel
751   0.0091071   racer, race car, racing car
436   0.0068161   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
586   0.0025741   half track
717   0.0016069   pickup, pickup truck
864   0.0012027   tow truck, tow car, wrecker
581   0.0005882   grille, radiator grille

total inference time: 11.7261708

Average running time of one iteration: 11.7261708 ms

Throughput: 85.2793311 FPS
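As a quick sanity check, the reported throughput is just the reciprocal of the average per-iteration latency:

```python
# Throughput (FPS) is the reciprocal of the average per-iteration latency.
latency_ms = 11.7261708               # average running time of one iteration
throughput_fps = 1000.0 / latency_ms  # iterations per second
print("%.7f FPS" % throughput_fps)    # 85.2793311 FPS, matching the demo output
```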

Downloading a Public Model and Running a Test

cd /opt/intel/openvino_2019.1.094/deployment_tools/tools/model_downloader/

List public models that are known to work with OpenVINO

python3 downloader.py --print_all

Download a specific model, say GoogLeNet V2

Make sure the vagrant user has write access to the folder:

sudo chmod -R 777 /opt/intel/openvino_2019.1.094/deployment_tools/tools/model_downloader/

python3 downloader.py --name googlenet-v2

Convert the Pretrained Model to IR (Intermediate Representation, which can run on multiple types of hardware)

cd /opt/intel/openvino_2019.1.094/deployment_tools/tools/model_downloader/classification/googlenet/v2/caffe

Use the Model Optimizer to convert googlenet-v2.caffemodel to IR (FP16 is used because the MYRIAD plugin requires half-precision models):

/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo.py --data_type FP16 --input_model googlenet-v2.caffemodel --input_proto googlenet-v2.prototxt

Deploy the converted IR model onto the Intel NCS 2 using the toolkit's Inference Engine (IE) API.

cd /opt/intel/openvino_2019.1.094/deployment_tools/inference_engine/samples/python_samples

Download a test image from the internet

sudo wget -N https://upload.wikimedia.org/wikipedia/commons/b/b6/Felis_catus-cat_on_snow.jpg

Run an inference on this image using a built-in sample code

python3 classification_sample/classification_sample.py -m /opt/intel/openvino_2019.1.094/deployment_tools/tools/model_downloader/classification/googlenet/v2/caffe/googlenet-v2.xml -i Felis_catus-cat_on_snow.jpg -d MYRIAD

This should give results like

Image Felis_catus-cat_on_snow.jpg
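Under the hood, the sample does roughly the following with the IE Python API. This is an illustrative sketch rather than the exact sample source: the top_n helper is hypothetical, the file names are the ones produced above, and it assumes OpenVINO's Python bindings and OpenCV are available (i.e., setupvars.sh has been sourced).

```python
# Illustrative sketch of what classification_sample.py does with the
# Inference Engine Python API (OpenVINO 2019 R1). top_n is a hypothetical
# helper; the model and image files are the ones prepared above.

def top_n(probs, n=5):
    """Return the n (class_id, probability) pairs with the highest scores."""
    return sorted(enumerate(probs), key=lambda p: p[1], reverse=True)[:n]

def main():
    # Deferred imports: these need the OpenVINO environment sourced.
    import cv2
    from openvino.inference_engine import IECore, IENetwork

    net = IENetwork(model="googlenet-v2.xml", weights="googlenet-v2.bin")
    input_blob = next(iter(net.inputs))
    out_blob = next(iter(net.outputs))

    # "MYRIAD" targets the Neural Compute Stick / NCS 2
    exec_net = IECore().load_network(network=net, device_name="MYRIAD")

    # Resize and transpose the image to the network's NCHW input shape
    n, c, h, w = net.inputs[input_blob].shape
    image = cv2.imread("Felis_catus-cat_on_snow.jpg")
    image = cv2.resize(image, (w, h)).transpose((2, 0, 1)).reshape(n, c, h, w)

    result = exec_net.infer(inputs={input_blob: image})
    for class_id, prob in top_n(result[out_blob].flatten().tolist()):
        print("{:<8}{:.7f}".format(class_id, prob))

# Call main() on the VM after sourcing setupvars.sh:
# main()
```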


Look at the gist: https://gist.github.com/ashwinrayaprolu1984/7245a37b86e5fd1920f8e4409e276132

The next write-up will be on image classification using OpenCV and OpenVINO.

Published on System Code Geeks with permission by Ashwin Kumar, partner at our SCG program. See the original article here: OpenVINO Ubuntu Xenial, Virtualbox and Vagrant Install, Intel NCS2 (Neural Compute Stick 2)

Opinions expressed by System Code Geeks contributors are their own.
