The RK3566 includes an NPU module. To use it, download the RKNN SDK, which provides programming interfaces for RK3566/RK3568 chip platforms with NPU. The SDK helps users deploy RKNN models exported by RKNN-Toolkit2 and accelerates the implementation of AI applications

RKNN Model

RKNN is the model format used by the Rockchip NPU platform; model files end with the suffix .rknn. The RKNN SDK provides a complete Python model conversion tool that lets users convert their self-developed algorithm models into RKNN models

An RKNN model can run directly on the RK3566 platform. Demos are provided under RKNN_API_for_RK356X_v1.1.0*/examples/. Refer to the instructions there to compile the Android or Linux demo (a cross-compile environment is required). You can also simply download the precompiled demo.

Run the demo on the ROC-RK3566-PC as follows:

:/ # cd /data/rknn_ssd_demo_Android/    (Use rknn_ssd_demo_Linux in Linux)
:/data/rknn_ssd_demo_Android # chmod 777 rknn_ssd_demo
:/data/rknn_ssd_demo_Android # export LD_LIBRARY_PATH=./lib
:/data/rknn_ssd_demo_Android # ./rknn_ssd_demo model/ssd_inception_v2.rknn model/road.bmp
Loading model ...
rknn_init ...
model input num: 1, output num: 2
input tensors:
index=0 name=Preprocessor/sub:0 n_dims=4 dims=[3 300 300 1] n_elems=270000 size=270000 fmt=0 type=2 qnt_type=2 fl=0 zp=0 scale=0.007812
output tensors:
index=0 name=concat:0 n_dims=4 dims=[4 1 1917 1] n_elems=7668 size=7668 fmt=0 type=2 qnt_type=2 fl=0 zp=53 scale=0.089455
index=1 name=concat_1:0 n_dims=4 dims=[1 91 1917 1] n_elems=174447 size=174447 fmt=0 type=2 qnt_type=2 fl=0 zp=53 scale=0.143593
ssd - loadLabelName ./model/coco_labels_list.txt
person @ (13 125 59 212) 0.982374
person @ (110 119 152 197) 0.969119
bicycle @ (171 165 278 234) 0.969119
person @ (206 113 256 216) 0.964519
car @ (146 133 216 170) 0.959264
person @ (49 133 58 156) 0.606060
person @ (83 134 92 158) 0.606060
person @ (96 135 106 162) 0.464163
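The tensor attributes in the log above follow the usual asymmetric-quantization relations: n_elems is the product of dims, and a quantized value q maps to a real value via real = scale × (q − zp). A quick pure-Python check against the printed values (a sketch of the standard mapping, not code taken from the SDK):

```python
# Check the tensor attributes printed by rknn_ssd_demo against the usual
# asymmetric-quantization relations: n_elems = product(dims),
# real_value = scale * (quantized_value - zero_point).
from functools import reduce

def n_elems(dims):
    """Total element count of a tensor with the given dims."""
    return reduce(lambda a, b: a * b, dims)

def dequantize(q, scale, zp):
    """Map a quantized integer back to a real value."""
    return scale * (q - zp)

# Input tensor: dims=[3 300 300 1] -> n_elems=270000, as in the log
assert n_elems([3, 300, 300, 1]) == 270000

# Output tensor index=0: zp=53, scale=0.089455
print(dequantize(53, 0.089455, 53))             # a raw value equal to zp maps to 0.0
print(round(dequantize(127, 0.089455, 53), 4))  # 6.6197
```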

Non-RKNN Model

Models of other types, such as Caffe and TensorFlow, must be converted before they can run on the RK3566 platform. Use RKNN-Toolkit2 to convert them into RKNN models.


Introduction of Tool

RKNN-Toolkit2 is a development kit that provides model conversion, inference, and performance evaluation on PC and Rockchip NPU platforms. Through its Python interface, users can easily perform the following functions:

  • Model conversion: supports converting Caffe / TensorFlow / TensorFlow Lite / ONNX / Darknet / PyTorch models to RKNN models, and supports RKNN model import/export for later use on Rockchip NPU platforms

  • Quantization: supports converting float models to quantized models; currently supports asymmetric quantization (asymmetric_quantized-8; asymmetric_quantized-16 is not supported yet) as well as hybrid quantization

  • Model inference: simulates the Rockchip NPU to run an RKNN model on the PC and obtain inference results. The tool can also distribute the RKNN model to a specified NPU device and fetch the inference results from it

  • Performance evaluation: distributes the RKNN model to a specified NPU device and evaluates the model's performance on the actual device

  • Memory evaluation: evaluates the model's memory consumption at runtime. To use this function, distribute the RKNN model to an NPU device to run, then call the relevant interfaces to obtain memory information

  • Quantization accuracy analysis: gives the Euclidean or cosine distance between each layer's inference results before and after quantization. This helps analyze how quantization error arises and provides ideas for improving the accuracy of quantized models
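As a concrete illustration of the conversion, quantization and export functions listed above, here is a minimal sketch of the typical RKNN-Toolkit2 call sequence. API names follow the toolkit's user guide; the mean/std values and file paths are placeholder assumptions, return-code checks are omitted for brevity, and the code only does real work where the rknn_toolkit2 wheel is installed:

```python
# Sketch of the typical RKNN-Toolkit2 workflow: config -> load -> build
# (quantize) -> export. Placeholder paths/values; degrades gracefully when
# the rknn_toolkit2 wheel is not installed.
try:
    from rknn.api import RKNN
except ImportError:
    RKNN = None

def convert_tflite_to_rknn(tflite_path, rknn_path, dataset='./dataset.txt'):
    """Convert a TFLite model to an RKNN model (return-code checks omitted)."""
    if RKNN is None:
        return 'rknn_toolkit2 not installed'
    rknn = RKNN()
    # Preprocessing config: mean/std here are placeholder values for 0..255 input
    rknn.config(mean_values=[[127.5, 127.5, 127.5]],
                std_values=[[127.5, 127.5, 127.5]],
                target_platform='rk3566')
    rknn.load_tflite(model=tflite_path)            # load_caffe/load_tensorflow/load_onnx also exist, dataset=dataset)  # quantize with calibration images
    rknn.export_rknn(rknn_path)                    # writes the .rknn file
    rknn.release()
    return 'ok'

print(convert_tflite_to_rknn('mobilenet_v1_1.0_224.tflite', 'mobilenet_v1.rknn'))
```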

Environment Dependence

  • Operating system: Ubuntu 18.04 (x64) or later. The toolkit can only be installed on a PC; Windows, macOS, and Debian are not supported yet

  • Python version: 3.6

  • Python dependencies: listed in doc/requirements*.txt of the toolkit package


RKNN-Toolkit2 installation

It is recommended to use virtualenv to manage the Python environment, because multiple Python versions may coexist on the system at the same time

# 1) Install virtualenv, Python3 and pip3
sudo apt-get install virtualenv
sudo apt-get install python3 python3-dev python3-pip
# 2)Install dependent libraries
sudo apt-get install libxslt1-dev zlib1g zlib1g-dev libglib2.0-0 libsm6 \
libgl1-mesa-glx libprotobuf-dev gcc
# 3) Use virtualenv and install the Python dependencies, e.g. requirements-1.1.0b0.txt
virtualenv -p /usr/bin/python3 venv
source venv/bin/activate
pip3 install -r doc/requirements*.txt
# 4) Install RKNN-Toolkit2, e.g. rknn_toolkit2-1.1.0b0-cp36-cp36m-linux_x86_64.whl
#    (no sudo here, otherwise pip bypasses the virtualenv)
pip3 install packages/rknn_toolkit2*.whl
# 5) Check whether RKNN-Toolkit2 installed successfully; press Ctrl+D to exit
(venv) firefly@T-chip:~/rknn-toolkit2-1.1.0b0$ python3
>>> from rknn.api import RKNN

The installation is successful if importing the RKNN module does not fail. A typical failure looks like this:

>>> from rknn.api import RKNN
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'rknn'

Model Conversion Demo

Toolkit demos are under rknn-toolkit2-1.1.0*/examples. Here we run a model conversion demo as an example; it shows the whole process of converting a TFLite model to RKNN, exporting the model, running inference, deploying on the NPU and fetching the results. For details of the model conversion, please refer to the demo source code and the documents at the end of this page.

Simulate running the example on the PC

RKNN-Toolkit2 has a built-in simulator; simply run the demo to deploy on the NPU simulator.

(venv) firefly@T-chip:~/rknn-toolkit2-1.1.0b0$ cd examples/tflite/mobilenet_v1
(venv) firefly@T-chip:~/rknn-toolkit2-1.1.0b0/examples/tflite/mobilenet_v1$ ls
dataset.txt  dog_224x224.jpg  mobilenet_v1_1.0_224.tflite
(venv) firefly@T-chip:~/rknn-toolkit2-1.1.0b0/examples/tflite/mobilenet_v1$ python3 
--> config model
--> Loading model
--> Building model
Analysing : 100%|██████████████████████████████████████████████████| 58/58 [00:00<00:00, 235.93it/s]
Quantizating : 100%|███████████████████████████████████████████████| 58/58 [00:00<00:00, 117.84it/s]
I RKNN: librknnc version: 1.1.0b0 (8d7e25ad@2021-06-30T18:33:39)
I RKNN: set log level to 0
--> Export RKNN model
--> Init runtime environment
W init_runtime: target is None, use simulator!
--> Running model
I RKNN: librknnc version: 1.1.0b0 (8d7e25ad@2021-06-30T18:33:39)
I RKNN: set log level to 0
-----TOP 5-----
[156]: 0.84228515625
[155]: 0.08807373046875
[205]: 0.01416015625
[284]: 0.0082550048828125
[194 260]: 0.0028209686279296875
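The TOP-5 printout above pairs class indices with the highest classification scores. Selecting the top entries from a flat score vector takes only a few lines; this is a generic sketch, not the demo's actual post-processing code:

```python
# Pick the top-k (index, score) pairs from a flat list of class scores,
# as in the "-----TOP 5-----" printout above.
def top_k(scores, k=5):
    """Return the k highest-scoring (index, score) pairs in descending order."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(i, scores[i]) for i in order[:k]]

# Toy score vector for illustration (real output has one score per class)
scores = [0.01, 0.84, 0.02, 0.088, 0.005]
for idx, score in top_k(scores, k=3):
    print(f'[{idx}]: {score}')   # prints [1]: 0.84, [3]: 0.088, [2]: 0.02
```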

Run on ROC-RK3566-PC NPU connected to the PC

RKNN-Toolkit2 runs on the PC and connects to the ROC-RK3566-PC through the PC's USB. It transfers the RKNN model to the NPU device of the ROC-RK3566-PC, runs it there, and then obtains the inference results, performance information, etc. from the ROC-RK3566-PC

  • First prepare the ROC-RK3566-PC environment: update and run rknn_server


# Android platform:
adb root && adb remount
adb push rknnrt/Android/rknn_server/arm64-v8a/vendor/bin/rknn_server /vendor/bin
adb push rknnrt/Android/librknn_api/arm64-v8a/ /vendor/lib64
adb push rknnrt/Android/librknn_api/arm64-v8a/ /vendor/lib

# run rknn_server on the serial terminal 
chmod +x /vendor/bin/rknn_server
setenforce 0


# Linux platform:
adb push RKNN_SDK/Linux/rknn_server/aarch64/usr/bin/rknn_server /usr/bin/
adb push RKNN_SDK/Linux/librknn_api/aarch64/ /usr/lib/

# run rknn_server on the serial terminal
chmod +x /usr/bin/rknn_server
  • Then modify the demo file under examples/tflite/mobilenet_v1/ on the PC, adding the target platform to it.

diff --git a/ b/
index 61ad668..51a01e2 100644
--- a/
+++ b/
@@ -62,7 +62,7 @@ if __name__ == '__main__':
     # init runtime environment
     print('--> Init runtime environment')
-    ret = rknn.init_runtime()
+    ret = rknn.init_runtime(target='rk3566')
     if ret != 0:
         print('Init runtime environment failed')
  • Run on the host PC

(venv) firefly@T-chip:~/rknn-toolkit2-1.1.0b0/examples/tflite/mobilenet_v1$ python3 
--> config model
--> Loading model
--> Building model
Analysing : 100%|██████████████████████████████████████████████████| 58/58 [00:00<00:00, 186.60it/s]
Quantizating : 100%|███████████████████████████████████████████████| 58/58 [00:00<00:00, 468.02it/s]
I RKNN: librknnc version: 1.1.0b0 (8d7e25ad@2021-06-30T18:33:39)
I RKNN: set log level to 0
--> Export RKNN model
--> Init runtime environment
I NPUTransfer: Starting NPU Transfer Client, Transfer version 2.1.0 (b5861e7@2020-11-23T11:50:36)
D RKNNAPI: ==============================================
D RKNNAPI:   API: 1.1.0b0 (ccc3bbc build: 2021-06-30 20:30:36)
D RKNNAPI:   DRV: 1.1.0b0 (74e78f5 build: 2021-06-30 20:09:41)
D RKNNAPI: ==============================================
--> Running model
-----TOP 5-----
[156]: 0.84228515625
[155]: 0.08807373046875
[205]: 0.01415252685546875
[284]: 0.0082550048828125
[194 260]: 0.0028209686279296875
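The same connected-device setup also supports the performance and memory evaluation functions described earlier. A hedged sketch follows; eval_perf/eval_memory are the toolkit interfaces named in its user guide, the model path is a placeholder, and the code only does real work with the rknn_toolkit2 wheel installed and a board attached:

```python
# Sketch: evaluate an exported RKNN model's performance and memory use on a
# connected RK3566 board, per the "Performance evaluation" / "Memory
# evaluation" functions described above. Degrades gracefully when the
# toolkit is absent.
try:
    from rknn.api import RKNN
except ImportError:
    RKNN = None

def evaluate_on_board(rknn_path, target='rk3566'):
    """Run per-layer timing and memory evaluation on the attached device."""
    if RKNN is None:
        return 'rknn_toolkit2 not installed'
    rknn = RKNN()
    rknn.load_rknn(rknn_path)        # load an already-exported .rknn model
    rknn.init_runtime(target=target, perf_debug=True, eval_mem=True)
    rknn.eval_perf()                 # prints per-layer timing on the device
    rknn.eval_memory()               # prints runtime memory usage
    rknn.release()
    return 'ok'

print(evaluate_on_board('mobilenet_v1.rknn'))
```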


Other Toolkit Demo

Other toolkit demos, such as the quantization and accuracy analysis demos, can be found under rknn-toolkit2-1.1.0*/examples/functions/. For details, please refer to the demo source code and the detailed development documents.

Detailed Development Documents

Please refer to <<Rockchip_RK356X_User_Guide_RKNN_API_*.pdf>> and <<Rockchip_User_Guide_RKNN_Toolkit2_*.pdf>> in the RKNN SDK for development details.