PLAI

PLAI is a PyTorch-based tool for neural-network quantization. It converts floating-point neural networks to fixed-point implementations (as used by GTI's USB dongle), or trains fixed-point models from scratch. PLAI uses the host system's CPU and/or GPU for training and GTI's USB dongle for inferencing, and is supported on both Windows and Linux.
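To illustrate the general idea of fixed-point quantization, the sketch below maps float weights to signed integers plus a scale factor. This is a generic textbook scheme for illustration only, not GTI's actual quantization method; the function names and the max-absolute-value scale choice are assumptions.

```python
import numpy as np

def quantize_fixed_point(w, num_bits=8):
    """Map a float array to signed fixed-point integers plus a scale factor.

    Generic illustration only -- not GTI's actual scheme.
    """
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8-bit
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / qmax if max_abs > 0 else 1.0  # avoid divide-by-zero
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the fixed-point integers."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.0, 0.25], dtype=np.float32)
q, scale = quantize_fixed_point(weights, num_bits=8)
recovered = dequantize(q, scale)  # each entry within half a quantization step of the original
```

Training a quantized network then amounts to constraining weights and activations to such a grid while still optimizing with ordinary PyTorch machinery.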

PLAI currently supports three models based on VGG-16: GNet1, GNet18, and GNetfc.

Operation requirements and suggestions

PLAI’s operating hardware requirements are as follows:

  • Intel i5 CPU clocked at 3.0 GHz or higher, or a more capable CPU (an Intel i7 is a better choice)
  • 8 GB of memory or more
  • A graphics card with 6 GB of memory or more; a GTX 1060 or better is recommended, and AMD cards are not supported. (The GPU is optional but highly recommended, as it greatly reduces training time)

PLAI ultimately uses the USB dongle for inferencing; if one is present, it can be configured and used.

PLAI now supports the following systems:

  • Ubuntu LTS 16.04
  • Windows 10

Running environment configuration

Environmental dependency

  • Python3
  • PyTorch
  • OpenCV
  • CUDA 9.0 or later (optional)

Ubuntu

Miniconda is used here as an example for environment configuration.

First, download the Python 3.7 version of Miniconda from https://conda.io/miniconda.html; the 64-bit version is used here.

The installation procedure is as follows:

ubuntu16.04:~$ sudo chmod +x Downloads/Miniconda3-latest-Linux-x86_64.sh
ubuntu16.04:~$ ./Downloads/Miniconda3-latest-Linux-x86_64.sh
Welcome to Miniconda3 4.5.11   

In order to continue the installation process, please review the license   
agreement.
Please, press ENTER to continue   
>>> (Enter)   
...
Do you accept the license terms? [yes|no]   
[no] >>> yes (Enter)   
...
  - Press CTRL-C to abort the installation   
  - Or specify a different location below   

[/home/firefly/miniconda3] >>> (Enter)   
... (Installation procedure)   
Do you wish the installer to prepend the Miniconda3 install location   
to PATH in your /home/firefly/.bashrc ? [yes|no]   
[no] >>> yes (Enter)   

This installs Miniconda into the miniconda3 directory under the user's home directory and, at the same time, configures the shell to use Miniconda's programs by default.

Miniconda can be validated and tested by the following operations:

ubuntu16.04:~$ source ~/.bashrc
ubuntu16.04:~$ conda -V
conda 4.5.11

Next, if NVIDIA discrete-graphics acceleration is available, PyTorch and OpenCV can be installed as follows:

ubuntu16.04:~$ conda install pytorch torchvision -c pytorch
ubuntu16.04:~$ pip install opencv-contrib-python

Otherwise, install the CPU-only versions:

ubuntu16.04:~$ conda install pytorch-cpu torchvision-cpu -c pytorch
ubuntu16.04:~$ pip install opencv-contrib-python

If graphics-card acceleration is used, the following CUDA installation commands can serve as a reference; otherwise this step can be skipped.

The CUDA installation commands are as follows:

ubuntu16.04:~$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
ubuntu16.04:~$ sudo dpkg -i cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
ubuntu16.04:~$ sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
ubuntu16.04:~$ sudo apt-get update
ubuntu16.04:~$ sudo apt-get install cuda

Finally, configure the USB dongle (optional). The following commands, run from the PLAI directory, carry out the configuration and verify it:

ubuntu16.04:~/PLAI$ sudo cp lib/python/gtilib/*.rules /etc/udev/rules.d/
ubuntu16.04:~/PLAI$ ls /dev/sg* -l
crw-rw-rw- 1 root disk 21, 0 Nov 20 10:28 /dev/sg0
crw-rw-rw- 1 root disk 21, 1 Nov 20 10:28 /dev/sg1

If no device is found, please refer to the FAQ for troubleshooting.

Windows 10

To be completed… (parts of the Ubuntu configuration process also apply here)

Environment testing

The following commands test the integrity of the environment; if no errors occur, configuration is complete.

ubuntu16.04:~$ python
Python 3.7.0 (default, Jun 28 2018, 13:15:42)   
[GCC 7.2.0] :: Anaconda, Inc. on linux   
Type "help", "copyright", "credits" or "license" for more information.   
>>> import torch,cv2   
>>> torch.cuda.is_available()   
True   
>>>   

Parameter setting

training.json file

  • num_classes - the number of classes in your dataset, i.e. the number of class folders in data/train and data/val
  • max_epoch - the number of times the full training set is used to update the weights
  • learning_rate - determines how fast the weights change
  • train_batch_size - depends on the available GPU memory
  • test_batch_size - depends on the available GPU memory
  • mask_bits - the mask bit width for each main (convolutional) layer
  • act_bits - the activation bit width for each main (convolutional) layer
  • resume - resume training from a known checkpoint
  • finetune - optional; fine-tuning usually yields higher accuracy
  • full - train a full-precision network
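As an illustration, a training.json for a two-class GNetfc run might look like the fragment below. The field names follow the list above, but the values, and the comma-separated string form of mask_bits and act_bits, are assumptions for illustration only:

```json
{
  "num_classes": 2,
  "max_epoch": 30,
  "learning_rate": 0.001,
  "train_batch_size": 32,
  "test_batch_size": 16,
  "mask_bits": "3,3,1,1,1,1",
  "act_bits": "5,5,5,5,5,5",
  "resume": false,
  "finetune": true,
  "full": false
}
```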

Reference values of mask_bits and act_bits for each model are as follows:

  1. GNetfc
    • mask_bits: 3,3,1,1,1,1
    • act_bits: 5,5,5,5,5,5
  2. GNet18
    • mask_bits: 3,3,3,3,1
    • act_bits: 5,5,5,5,5
  3. GNet1
    • mask_bits: 3,3,1,1,1
    • act_bits: 5,5,5,5,5

PLAI.py parameters

PLAI.py does not support command-line parameters, so its source code must be modified directly. The line to modify is at approximately line 142, and its original contents are as follows:

gtiPLAI = PLAI(num_classes=2, data_dir=data_dir, checkpoint_dir=checkpoint_dir, model_type=0, module_type=1, device_type=1)   
  • num_classes - can be set in training.json
  • data_dir - defaults to the data directory
  • checkpoint_dir - defaults to the checkpoint directory
  • model_type - sets the training model; 0: GNetfc, 1: GNet18, 2: GNet1
  • module_type - 0: Conv (w/o bias) + bn + bias, 1: Conv (w/ bias) + bn
  • device_type - the GTI device type used for inferencing; ftdi: 0, emmc: 1
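For example, to switch from the defaults to GNet1 with five classes, the call near line 142 could be edited as sketched below. This is a hypothetical edit (the class count is a placeholder); only the keyword values change:

```python
# Hypothetical edit of the PLAI() call near line 142: selects GNet1 (model_type=2),
# Conv(w/ bias) + bn (module_type=1), and an emmc-type dongle (device_type=1).
gtiPLAI = PLAI(num_classes=5, data_dir=data_dir, checkpoint_dir=checkpoint_dir,
               model_type=2, module_type=1, device_type=1)
```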

Model training

Place the training images, grouped by class, into folders named after each class under the PLAI data directory. Then adjust training.json, modify PLAI.py as needed (the default network model is GNetfc), and run the following from the PLAI root directory:

ubuntu16.04:~$ python PLAI.py
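The directory layout described above might look as follows, assuming two hypothetical classes named cat and dog (the folder names are placeholders; each class folder holds that class's images):

```
data/
├── train/
│   ├── cat/    training images for the "cat" class
│   └── dog/    training images for the "dog" class
└── val/
    ├── cat/    validation images for the "cat" class
    └── dog/    validation images for the "dog" class
```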

Model use

After training, coefDat_2801.dat, coefBin_2801.bin (GNetfc does not produce this file), and data/pic_label.txt are generated. If GNet1 is available, it can be tested with the sample on the AI-data USB flash drive. The corresponding userinput.txt file, such as netConfig_2801_gnet1.txt, can be found under the PLAI nets directory. An example of use is as follows:

liteSample -c coefDat_2801.dat -u netConfig_2801_gnet1.txt -f coefBin_2801.bin -l pic_label.txt   

During testing, please check that the device nodes in the userinput.txt file passed to the -u parameter are correct.