OpenVINO Ubuntu Xenial, Virtualbox and Vagrant Install, Intel NCS2 (Neural Compute Stick 2)
Prerequisites
Download the latest VirtualBox from https://www.virtualbox.org/wiki/Downloads
Make sure to also download the Oracle VM VirtualBox Extension Pack.
Download and install Vagrant from https://www.vagrantup.com/downloads.html
The first step is to set up the VM.
vagrant up
Vagrant is configured to apply all the USB filters required to access the Neural Compute Stick 1 and 2 inside the VirtualBox Ubuntu Xenial VM.
This will create a VM on your host machine named "OpenVinoVM".
It will also automatically download OpenVINO to /home/vagrant/openvino.
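For reference, the USB passthrough that the Vagrantfile configures is roughly equivalent to the VBoxManage commands sketched below. This is an illustrative sketch, not the actual Vagrantfile contents; the product IDs match the Movidius devices listed in the udev rules later in this post.
# Enable USB on the VM; the xHCI (USB 3.0) controller requires the Extension Pack
VBoxManage modifyvm "OpenVinoVM" --usb on --usbxhci on
# Pass through Movidius devices by vendor/product ID (vendor 03e7)
VBoxManage usbfilter add 0 --target "OpenVinoVM" --name "NCS"      --vendorid 03e7 --productid 2150
VBoxManage usbfilter add 1 --target "OpenVinoVM" --name "NCS2"     --vendorid 03e7 --productid 2485
VBoxManage usbfilter add 2 --target "OpenVinoVM" --name "NCS boot" --vendorid 03e7 --productid f63b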
Setup
Install OpenVINO Dependencies
cd /home/vagrant/openvino/l_openvino_toolkit_p_2019.1.094/ && sudo ./install_openvino_dependencies.sh
Install OpenVINO
cd /home/vagrant/openvino/l_openvino_toolkit_p_2019.1.094/ && sudo ./install.sh
This involves several manual steps, such as accepting the license and selecting the type of installation.
The default installation path, matching the download configured in the Vagrantfile, is:
/opt/intel/openvino_2019.1.094
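As an optional sanity check, verify that the toolkit actually landed under that path:
ls /opt/intel/openvino_2019.1.094
# expect directories such as bin, deployment_tools and install_dependencies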
Setup Vars
source /opt/intel/openvino_2019.1.094/bin/setupvars.sh
echo "source /opt/intel/openvino_2019.1.094/bin/setupvars.sh" >> /home/vagrant/.bashrc
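To confirm the environment was applied in the current shell, a quick check like the one below should show OpenVINO paths. The exact variable names can differ between releases, so grep is used rather than a specific variable:
env | grep -i openvino   # should list PATH/LD_LIBRARY_PATH/PYTHONPATH entries under /opt/intel/openvino_2019.1.094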
Install UDEV Rules
These are required for the USB device to be activated and used.
sh /opt/intel/openvino_2019.1.094/install_dependencies/install_NCS_udev_rules.sh
Update the udev rules as shown below:
sudo vi /etc/udev/rules.d/97-myriad-usbboot.rules
I changed MODE from 660 (the OpenVINO default) to 666.
SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
Reload UDEV
sudo udevadm control --reload-rules && sudo udevadm trigger && sudo ldconfig
Configure Model Optimizer
cd /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/install_prerequisites && sudo ./install_prerequisites.sh
If you don't want to set up the optimizer for every supported framework, you can install the prerequisites for a specific one:
Example: sudo ./install_prerequisites_caffe.sh (for Caffe models)
Example: sudo ./install_prerequisites_tf.sh (for TensorFlow models)
Verify that the USB device is attached.
Type lsusb
You should see a USB device with a vendor ID like:
Bus 002 Device 002: ID 03e7:2485
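Optionally, also confirm that the modified udev rule took effect by checking the device node's permissions. The bus and device numbers below are just an example and will differ on your machine:
lsusb | grep 03e7            # locate the stick by its Movidius vendor ID
ls -l /dev/bus/usb/002/002   # example node from the lsusb output above; mode should be crw-rw-rw- (0666)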
Finally Test
cd /opt/intel/openvino_2019.1.094/deployment_tools/demo && ./demo_squeezenet_download_convert_run.sh
This should print something like the following:
Image /opt/intel/openvino_2019.1.094/deployment_tools/demo/car.png
classid | probability | label |
---|---|---|
817 | 0.8363345 | sports car, sport car |
511 | 0.0946488 | convertible |
479 | 0.0419131 | car wheel |
751 | 0.0091071 | racer, race car, racing car |
436 | 0.0068161 | beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon |
656 | 0.0037564 | minivan |
586 | 0.0025741 | half track |
717 | 0.0016069 | pickup, pickup truck |
864 | 0.0012027 | tow truck, tow car, wrecker |
581 | 0.0005882 | grille, radiator grille |
total inference time: 11.7261708
Average running time of one iteration: 11.7261708 ms
Throughput: 85.2793311 FPS
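By default this demo runs on the CPU plugin. To exercise the NCS2 directly, the demo script in this release accepts a target-device option; treat the exact flag as an assumption and check the script if your version differs:
./demo_squeezenet_download_convert_run.sh -d MYRIAD   # assumed -d flag; targets the Neural Compute Stick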
Downloading Public Model and Running Test
cd /opt/intel/openvino_2019.1.094/deployment_tools/tools/model_downloader/
List public models that are known to work with OpenVINO
python3 downloader.py --print_all
Download a specific model, say GoogLeNet V2
Make sure the vagrant user has write access to the folder:
sudo chmod -R 777 /opt/intel/openvino_2019.1.094/deployment_tools/tools/model_downloader/
python3 downloader.py --name googlenet-v2
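The downloader drops the Caffe files into a framework-specific subdirectory; you can confirm they are in place before converting:
ls /opt/intel/openvino_2019.1.094/deployment_tools/tools/model_downloader/classification/googlenet/v2/caffe/
# expect googlenet-v2.caffemodel and googlenet-v2.prototxt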
Convert the Pretrained Model to IR (Intermediate Representation, which can run on multiple hardware targets)
cd /opt/intel/openvino_2019.1.094/deployment_tools/tools/model_downloader/classification/googlenet/v2/caffe
Use the Model Optimizer to convert googlenet-v2.caffemodel to IR.
/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo.py --data_type FP16 --input_model googlenet-v2.caffemodel --input_proto googlenet-v2.prototxt
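mo.py also takes a few convenience flags if you want to control where the IR is written and how it is named. A sketch, run from the same caffe directory as above; the output directory below is purely illustrative:
python3 /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo.py \
  --input_model googlenet-v2.caffemodel --input_proto googlenet-v2.prototxt \
  --data_type FP16 --model_name googlenet-v2-fp16 --output_dir /home/vagrant/ir   # illustrative output location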
Deploy the converted IR model onto Intel NCS 2 using the toolkit’s IE API
cd /opt/intel/openvino_2019.1.094/deployment_tools/inference_engine/samples/python_samples
Download a test image from the internet
sudo wget -N https://upload.wikimedia.org/wikipedia/commons/b/b6/Felis_catus-cat_on_snow.jpg
Run an inference on this image using a built-in sample code
python3 classification_sample/classification_sample.py -m /opt/intel/openvino_2019.1.094/deployment_tools/tools/model_downloader/classification/googlenet/v2/caffe/googlenet-v2.xml -i Felis_catus-cat_on_snow.jpg -d MYRIAD
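The -d MYRIAD flag is what routes the inference to the NCS2. For a CPU comparison, the CPU plugin in this OpenVINO generation expects an FP32 IR (an assumption worth verifying for your release), so convert a separate FP32 copy and point the sample at it. A sketch, with an illustrative output location:
cd /opt/intel/openvino_2019.1.094/deployment_tools/tools/model_downloader/classification/googlenet/v2/caffe
python3 /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo.py \
  --input_model googlenet-v2.caffemodel --input_proto googlenet-v2.prototxt \
  --data_type FP32 --model_name googlenet-v2-fp32 --output_dir /home/vagrant/ir
cd /opt/intel/openvino_2019.1.094/deployment_tools/inference_engine/samples/python_samples
python3 classification_sample/classification_sample.py -m /home/vagrant/ir/googlenet-v2-fp32.xml -i Felis_catus-cat_on_snow.jpg -d CPU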
This should give results like
Image Felis_catus-cat_on_snow.jpg
classid | probability |
---|---|
173 | 0.4843750 |
54 | 0.2985840 |
7 | 0.1647949 |
200 | 0.0359497 |
66 | 0.0035839 |
10 | 0.0024872 |
473 | 0.0024281 |
84 | 0.0016794 |
198 | 0.0014591 |
152 | 0.0006762 |
Look at the Gist: https://gist.github.com/ashwinrayaprolu1984/7245a37b86e5fd1920f8e4409e276132
The next write-up will be on Image Classification Using OpenCV and OpenVINO.