RKNN: Failed to submit! op id: 227, op name: OutputOperator:139_convolutional, flags: 0x1

Hello,

I’m trying to convert a YOLOv4 .onnx model to .rknn with rknn-toolkit2. It performs well on a PC using the test.py code from the toolkit examples, but fails on the AIO-3568J using the rknpu .cc code. Inference (rknn.run) returns this error:

E RKNN: [02:49:39.674] failed to submit!, op id: 227, op name: OutputOperator:139_convolutional, flags: 0x1, task start: 0, task number: 2583, run task counter: 2322, int status: 0

I also tried converting the original darknet model to .rknn, but received the same error. Has anyone managed to run inference with a YOLOv4 RKNN model?

@frootonator did you solve this?

If the error occurs in the first convolution layer and the zero-copy interface is used, the likely cause is that the buffer allocated for the input tensor is too small. In that case, allocate the buffer using the size_with_stride field of the tensor attribute rather than size.

If the error occurs in a middle NPU layer, the likely cause is an incorrect model configuration. In that case, you can find the latest SDK link in the error log. It is recommended to upgrade to the latest toolchain, or to assign the failing layer to the CPU when converting the RKNN model.
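For reference, here is a minimal rknn-toolkit2 conversion sketch. The file names, preprocessing values, and target platform are assumptions for illustration; whether (and under what option name) your toolkit version supports falling back a specific op to the CPU varies between releases, so check the documentation that ships with the latest SDK:

```python
# Minimal rknn-toolkit2 conversion sketch; paths and platform are assumptions.
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# Preprocessing must match what the runtime code feeds the model.
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]],
            target_platform='rk3568')

ret = rknn.load_onnx(model='yolov4.onnx')
assert ret == 0, 'load_onnx failed'

# Rebuilding with the newest toolkit often resolves per-layer submit
# failures; some versions also allow assigning an op to the CPU --
# consult your version's documentation for the exact option.
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
assert ret == 0, 'build failed'

rknn.export_rknn('yolov4.rknn')
rknn.release()
```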