GStreamer still experiences significant delays when using mppvideodec for hardware decoding

I want to use RKNN to run inference on the RTSP stream of a wireless camera, which is encoded as H.264. Because the latency was too high, I tried GStreamer's mppvideodec for hardware decoding.
I tried to use the following Python program to receive the stream:

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject, GLib

Gst.init(None)
loop = GLib.MainLoop()

pipeline = Gst.parse_launch("rtspsrc location=rtsp://192.168.0.133:554/test ! rtph264depay ! h264parse ! mppvideodec ! videorate ! video/x-raw,framerate=60/1 ! videoscale ! videoconvert ! video/x-raw,width=1280,height=720 ! autovideosink sync=false")

pipeline.set_state(Gst.State.PLAYING)

try:
    loop.run()
except KeyboardInterrupt:
    pipeline.set_state(Gst.State.NULL)
    loop.quit()
```

With this program the delay is small, around 0.2 seconds. But when I test with the following pipeline and program, the delay is significant and builds up to roughly one minute:
```python
import time

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import cv2
import numpy as np

# Initialize GStreamer
Gst.init(None)

def gst_to_opencv(sample):
    start_time = time.time()
    buf = sample.get_buffer()
    caps = sample.get_caps()
    print("Image data format:", caps.to_string())
    array = np.ndarray(
        (caps.get_structure(0).get_value('height'),
         caps.get_structure(0).get_value('width'),
         3),
        buffer=buf.extract_dup(0, buf.get_size()),
        dtype=np.uint8)
    end_time = time.time()
    print("gst_to_opencv took: {:.5f} s".format(end_time - start_time))  # time spent converting the buffer
    return array

def new_sample(sink, data):
    sample = sink.emit('pull-sample')
    start_time = time.time()
    frame = gst_to_opencv(sample)
    cv2.imshow('Live Video', frame)
    cv2.waitKey(1)
    end_time = time.time()
    print("OpenCV display took: {:.5f} s".format(end_time - start_time))  # time spent on OpenCV processing + display
    return Gst.FlowReturn.OK

pipeline = Gst.parse_launch('rtspsrc location=rtsp://192.168.0.133:554/test ! rtph264depay ! h264parse ! mppvideodec ! videorate ! video/x-raw,framerate=60/1 ! videoscale ! videoconvert ! video/x-raw,format=BGR,width=1280,height=720 ! appsink emit-signals=True name=mysink')

sink = pipeline.get_by_name('mysink')
sink.set_property('sync', False)
sink.connect('new-sample', new_sample, None)
pipeline.set_state(Gst.State.PLAYING)

try:
    while True:
        pass
except KeyboardInterrupt:
    print("Interrupted by user")

cv2.destroyAllWindows()
pipeline.set_state(Gst.State.NULL)
```
The delay in the second program most likely comes from how frames are handled after decoding rather than from mppvideodec itself: every sample is copied with buf.extract_dup, wrapped in a NumPy array, and displayed with cv2.imshow directly inside the appsink callback, while the main thread spins in a busy-wait loop. If the callback cannot keep up with the 60 fps stream, appsink queues the unread samples and the latency keeps growing. To reduce the delay, you can make the callback cheaper, move the display (or RKNN inference) to a separate thread, let appsink drop stale buffers, and use the per-frame timings printed above to profile where the time actually goes.
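If the pipeline has to stay on appsink, a minimal sketch along these lines may help isolate the buildup. It reuses the camera URL and output size from the question; the `latency=0` setting on rtspsrc, the `max-buffers=1 drop=true` settings on appsink, dropping the videorate/60 fps stage, and replacing the busy-wait with a GLib.MainLoop are assumptions to try, not confirmed fixes.

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib
import cv2
import numpy as np

Gst.init(None)

# Assumed low-latency variant: no jitter buffer on rtspsrc, and appsink keeps
# only the newest frame instead of queueing everything the callback misses.
PIPELINE = (
    'rtspsrc location=rtsp://192.168.0.133:554/test latency=0 ! '
    'rtph264depay ! h264parse ! mppvideodec ! '
    'videoscale ! videoconvert ! '
    'video/x-raw,format=BGR,width=1280,height=720 ! '
    'appsink name=mysink emit-signals=true sync=false max-buffers=1 drop=true'
)

def on_new_sample(sink, _data):
    sample = sink.emit('pull-sample')
    caps = sample.get_caps().get_structure(0)
    h, w = caps.get_value('height'), caps.get_value('width')
    buf = sample.get_buffer()
    # Still one copy per frame; keep the rest of the callback light and hand
    # the frame to a worker thread if RKNN inference is added later.
    frame = np.ndarray((h, w, 3),
                       buffer=buf.extract_dup(0, buf.get_size()),
                       dtype=np.uint8)
    cv2.imshow('Live Video', frame)
    cv2.waitKey(1)
    return Gst.FlowReturn.OK

pipeline = Gst.parse_launch(PIPELINE)
sink = pipeline.get_by_name('mysink')
sink.connect('new-sample', on_new_sample, None)
pipeline.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()  # replaces the busy-wait `while True: pass`
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    cv2.destroyAllWindows()
    pipeline.set_state(Gst.State.NULL)
```

With `drop=true`, appsink discards old frames instead of queueing them when the callback falls behind, so any remaining delay should stay bounded and point to the per-frame processing cost rather than to buffer accumulation.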