Where is source code for rknn-toolkit2?

I mean, they have their repo https://github.com/rockchip-linux/rknn-toolkit2
where you can download their compiled wheel packages for Ubuntu (https://github.com/rockchip-linux/rknn-toolkit2/blob/master/packages/rknn_toolkit2-1.4.0_22dcfef4-cp36-cp36m-linux_x86_64.whl) as well as for rknn_toolkit_lite2, but where is the source code to compile it yourself… or is it closed source?

What part are you looking for? I believe they have Python code to convert the various models into their RKNN format. As far as training up your own models, as I understand it you have to do it in one of the standard formats (TensorFlow, ONNX, etc.) depending on which pretrained model you want to base it on, then convert it to their RKNN format.
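For context, that conversion flow with the rknn-toolkit2 Python package looks roughly like this (a sketch based on the published examples; the ONNX model path and the target platform are placeholders, and it only runs where the proprietary wheel is installed):

```python
# Sketch of the model-conversion flow using the closed-source rknn.api package.
# Requires the proprietary rknn_toolkit2 wheel; paths/platform are placeholders.
from rknn.api import RKNN

rknn = RKNN()

# Configure for the target SoC (placeholder: rk3588)
rknn.config(target_platform='rk3588')

# Load a model in one of the standard formats (placeholder path)
rknn.load_onnx(model='model.onnx')

# Compile it to Rockchip's proprietary RKNN format
rknn.build(do_quantization=False)

# Write out the .rknn file for deployment on the board
rknn.export_rknn('model.rknn')

rknn.release()
```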

I am looking exactly for the source code of their rknn-toolkit2, in other words the source code of the rknn_toolkit2.whl package,

as well as their runtime library, rknn.so.

python code is in there:

@JonGroff no, it's not.
Just pre-built .whl packages and example files, without the source code of their rknn libraries!

Uh, yeah, it is. See all the test.py files in the GitHub link I sent you?

OK, so this example:


from rknn.api import RKNN

i.e. importing their RKNN library.

I am asking: where is the source code of that library?

The GitHub repo rknn-toolkit2 contains just prebuilt Python libraries and examples.

You want the firmware to their runtime? lol, doubt it. That's not the ‘source code’ they are handing out.

That's what I am asking for. So why is it closed source? As far as I have seen, everything for the Rockchip RK3588 is released as open source, and this is their only closed-source part. Why?

Now if you want to see the lower-level calls to the NPU (calls to the runtime upon which a lot of that Python library is also built), you can look at the C code in the examples. It has all the calls in the RockChip_RKNPU_User_guide_RKNN_API_V1.4.0_EN.pdf,
but they don't hand out the firmware to librknnrt.so. If you look at the guide and the examples in C, I imagine it has everything that's used to call the NPU in the Python RKNN code. I built a whole set of Java bindings to the NPU using the C examples. In other words, everything you can do with the lower-level NPU calls.
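As a sketch of what those lower-level runtime calls look like (based on the published rknn_api.h header; the model path, input buffer, sizes, and error handling are simplified placeholders, and the closed librknnrt.so is needed to actually run it):

```c
/* Minimal sketch of driving the NPU through librknnrt.so.
 * The rknn_api.h header is published; the .so itself is the closed part.
 * Paths and buffer sizes are placeholders; error checks omitted for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "rknn_api.h"

int main(void) {
    /* Read the compiled .rknn model into memory (placeholder path) */
    FILE *fp = fopen("model.rknn", "rb");
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    void *model = malloc(size);
    fread(model, 1, size, fp);
    fclose(fp);

    rknn_context ctx;
    rknn_init(&ctx, model, size, 0, NULL);       /* init + load model */

    rknn_input in[1];
    memset(in, 0, sizeof(in));
    in[0].index = 0;
    in[0].buf   = NULL;                          /* placeholder: input image buffer */
    in[0].size  = 224 * 224 * 3;                 /* placeholder size */
    in[0].type  = RKNN_TENSOR_UINT8;
    in[0].fmt   = RKNN_TENSOR_NHWC;
    rknn_inputs_set(ctx, 1, in);                 /* set inputs */

    rknn_run(ctx, NULL);                         /* run inference */

    rknn_output out[1];
    memset(out, 0, sizeof(out));
    out[0].want_float = 1;
    rknn_outputs_get(ctx, 1, out, NULL);         /* get outputs */
    /* ... post-process out[0].buf here ... */
    rknn_outputs_release(ctx, 1, out);

    rknn_destroy(ctx);                           /* release */
    free(model);
    return 0;
}
```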

So then they are the same kind of EVIL as NVIDIA with their CUDA.

Yes, I am successfully using their Python binding (rknn-toolkit-lite2) for inference of our models.
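(For anyone following along, the lite2 inference side looks roughly like this; the model path and input array are placeholders, and the proprietary rknn_toolkit_lite2 wheel plus the board's runtime are required:)

```python
# Sketch of on-board inference with the rknn-toolkit-lite2 binding.
# Requires the proprietary rknn_toolkit_lite2 wheel; paths are placeholders.
import numpy as np
from rknnlite.api import RKNNLite

rknn_lite = RKNNLite()
rknn_lite.load_rknn('model.rknn')      # load the converted model
rknn_lite.init_runtime()               # talks to librknnrt.so under the hood

img = np.zeros((1, 224, 224, 3), dtype=np.uint8)  # placeholder input tensor
outputs = rknn_lite.inference(inputs=[img])

rknn_lite.release()
```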

But I am doing a proof of concept for our next generation of product, which should have long-term support (5+ years). And right now we have switched away from the NVIDIA Jetson platform because of their closed-source approach to everything.
So I hit the wall with Rockchip as well, and if they decide to give up on it in a year, oh well, we are doomed :frowning: Source code would definitely help us troubleshoot it on our own if that happens, or to hand it to a community to take care of.

Thank you very much for your response, appreciate it though.

Yeah, I had to piece it together from the examples, which are basically just ImageNet YOLOv5 and InceptionSSDv2 object segmentation and recognition. The calls are for npu_init, load model (their proprietary RKNN format, which the Python converts standard models to), get_inputs, run_inference, get_outputs, release_model, query_model, etc. I would love to have access to the low-level operators they mention in their release notes, like matrix multiply and such, but I don't see that code.


I’ll ask the Radxa guys on the Discord forum if they have anything to say…
