RKNN-Toolkit2: Quantizing a TensorFlow Frozen Graph model

Hello community,

I’m seeking your help in solving the following issue.

I have a float32 TensorFlow frozen graph, and I'm trying to export it to an RKNN model using RKNN-Toolkit2 v1.5.0 so I can deploy it on a Rock 5B board.

I managed to convert it to a float model (without quantization) and checked the accuracy of the resulting RKNN model: there is a small accuracy drop of about 0.8%, even though the toolkit documentation says that converting to a float16 RKNN model should not affect accuracy.
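
For reference, here is a minimal sketch of the float conversion I'm running (the model path, tensor names, and input shape below are placeholders for my actual values):

```python
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# The Rock 5B uses the RK3588 SoC
rknn.config(target_platform='rk3588')

# Placeholder path, tensor names, and input shape
rknn.load_tensorflow(tf_pb='./model.pb',
                     inputs=['input'],
                     outputs=['output'],
                     input_size_list=[[1, 224, 224, 3]])

# do_quantization=False keeps the model in float16 on the NPU
rknn.build(do_quantization=False)
rknn.export_rknn('./model_fp16.rknn')
rknn.release()
```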

The issue, however, is converting the frozen graph to a quantized RKNN model: there, the accuracy drops drastically, by more than 90%!

I'm configuring the conversion mean and std parameters to exactly the same normalization parameters I used during training, and I'm calibrating the quantization with real samples drawn from the validation distribution (sketched below).
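
The quantized conversion looks roughly like this (the mean/std values here are just an example standing in for my actual training normalization, and dataset.txt lists one calibration image path per line, taken from my validation set):

```python
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# mean_values / std_values are placeholders; I set them to match my
# training normalization, since the toolkit applies (x - mean) / std
rknn.config(mean_values=[[127.5, 127.5, 127.5]],
            std_values=[[127.5, 127.5, 127.5]],
            target_platform='rk3588')

rknn.load_tensorflow(tf_pb='./model.pb',
                     inputs=['input'],
                     outputs=['output'],
                     input_size_list=[[1, 224, 224, 3]])

# dataset.txt: one calibration image path per line,
# drawn from the validation set
rknn.build(do_quantization=True, dataset='./dataset.txt')
rknn.export_rknn('./model_int8.rknn')
rknn.release()
```

Is there anything obviously wrong or missing in this flow?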

Thanks in advance,