RKNN-Toolkit2 unable to convert TensorFlow 1 model

I have converted my frozen graph pb to saved_model.pb

When I try to load that model using

# %%
import numpy as np
import cv2
from rknn.api import RKNN

# Create RKNN object
rknn = RKNN(verbose=True)

# %%
# Pre-process config
print("--> Config model")
rknn.config(mean_values=[0, 0, 0], std_values=[255, 255, 255], target_platform="rk3588")
print("done")

# %%
# Load model
print('--> Loading model')
ret = rknn.load_tensorflow(tf_pb='./saved/saved_model.pb',
                           inputs=["input:0"],
                           outputs=["d_predictions:0"],
                           input_size_list=[[1, 3, 24, 94]])
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

I am getting the following error:

W __init__: rknn-toolkit2 version: 1.4.0-22dcfef4
--> Config model
done
--> Loading model
2023-03-19 15:50:35.550797: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/vscode/.local/lib/python3.8/site-packages/cv2/../../lib64:
2023-03-19 15:50:35.550820: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
W load_tensorflow: The inputs name should be a tensor name instead of node name
Traceback (most recent call last):
  File "lprnet-notebook.py", line 20, in <module>
    ret = rknn.load_tensorflow(tf_pb='./saved/saved_model.pb',inputs=["input:0"],outputs=["d_predictions:0"],input_size_list = [[1, 3, 24, 94]])
  File "/home/vscode/.local/lib/python3.8/site-packages/rknn/api/rknn.py", line 120, in load_tensorflow
    return self.rknn_base.load_tensorflow(tf_pb=tf_pb, inputs=inputs,
  File "rknn/api/rknn_base.py", line 899, in rknn.api.rknn_base.RKNNBase.load_tensorflow
  File "rknn/api/rknn_base.py", line 901, in rknn.api.rknn_base.RKNNBase.load_tensorflow
google.protobuf.message.DecodeError: Error parsing message

Could you please help?

The model was trained using TensorFlow 1.15; could that be the issue?
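One likely cause of the `DecodeError`: as far as I can tell, `rknn.load_tensorflow` expects a *frozen GraphDef* `.pb`, while a SavedModel's `saved_model.pb` is a different protobuf message, so the parser chokes on it. A minimal sketch of freezing a TF1-style graph follows; the toy graph and the `scale` variable are stand-ins, not your actual LPRNet, only the `input`/`d_predictions` node names mirror your call:

```python
# Sketch: produce a frozen GraphDef .pb (the format load_tensorflow parses),
# using a toy graph as a stand-in for the real LPRNet.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
tf.compat.v1.reset_default_graph()

x = tf.compat.v1.placeholder(tf.float32, [1, 24, 94, 3], name="input")
# a variable that freezing will bake into a constant
scale = tf.compat.v1.get_variable(
    "scale", shape=[], initializer=tf.compat.v1.ones_initializer(),
    use_resource=False)
y = tf.multiply(tf.reduce_mean(x), scale, name="d_predictions")

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    # replace variables with constants and prune to the output node
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["d_predictions"])

with open("frozen.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```

Pointing `tf_pb=` at a file produced this way (from the real graph, of course) should at least get past the protobuf parsing step.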

Hey, could you please help me with this?

I am getting an "Invalid tensor data type" error when converting my model to RKNN.

Hello, do you have Discord or any other IM where I can chat with you?

I have my model implemented in PyTorch, and I am exporting it to ONNX using:

# convert to ONNX
import torch.onnx

dummy_input = torch.randn(1, 3, 24, 94, device="cpu")
torch.onnx.export(
    lprnet,
    dummy_input,
    "lprnet.onnx",
    verbose=True,
    opset_version=12,
    export_params=True,
)

There is no option to specify int8 there, only the opset, which I set to 12.
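That part is expected: int8 is not chosen at ONNX export time. Quantization happens later, inside `rknn.build(do_quantization=True, dataset=...)`, which reads calibration images from the `dataset.txt` file. As far as I know that file is just one image path per line; a small sketch for generating it (the `calib_images` folder and sample files here are made up for illustration):

```python
# Sketch: dataset.txt for RKNN quantization calibration is plain text,
# one calibration-image path per line. "calib_images" is a hypothetical folder.
from pathlib import Path

calib = Path("calib_images")
calib.mkdir(exist_ok=True)
# stand-ins for real sample images; in practice the folder already holds them
for i in range(3):
    (calib / f"sample_{i}.jpg").touch()

paths = sorted(str(p) for p in calib.glob("*.jpg"))
Path("dataset.txt").write_text("\n".join(paths) + "\n")
```

So a plain float32 ONNX export is fine as input; the toolkit handles the int8 side itself.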

Then I am converting it like this

from rknn.api import RKNN

# Create RKNN object
rknn = RKNN(verbose=True)

# pre-process config
print("--> config model")
rknn.config(target_platform="rk3588", mean_values=[0, 0, 0], std_values=[255, 255, 255])
print("done")

# Load model
print('--> Loading model')
ret = rknn.load_onnx(model="lprnet.onnx")
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')

which is the same as what you posted,

and it yields this error:

INVALID_ARGUMENT : Failed to load model with error: Invalid tensor data type

which could be an issue with float32, or something else not being supported by RKNN.

By the way, I tried direct conversion from PyTorch using rknn-toolkit2, but it yields an error about two layers not being supported by RKNN.

The output is:

type: float32[1,68,18]

I also created an export using "half" (fp16), where the output is:

type: float16[1,68,18]

I am uploading the ONNX model, in case you have a chance to check it yourself:

lprnet.zip (968.0 KB)

Found the issue:

E load_pytorch: Traceback (most recent call last):
E load_pytorch:   File "rknn/api/rknn_base.py", line 1250, in rknn.api.rknn_base.RKNNBase.load_pytorch
E load_pytorch:   File "/workspaces/rocm/venv/rknn/lib/python3.8/site-packages/torch/onnx/__init__.py", line 316, in export
E load_pytorch:     return utils.export(model, args, f, export_params, verbose, training,
E load_pytorch:   File "/workspaces/rocm/venv/rknn/lib/python3.8/site-packages/torch/onnx/utils.py", line 107, in export
E load_pytorch:     _export(model, args, f, export_params, verbose, training, input_names, output_names,
E load_pytorch:   File "/workspaces/rocm/venv/rknn/lib/python3.8/site-packages/torch/onnx/utils.py", line 737, in _export
E load_pytorch:     proto, export_map, val_use_external_data_format = graph._export_onnx(
E load_pytorch: RuntimeError: ONNX export failed: Couldn't export operator aten::log_softmax