I am trying to capture an RTSP stream from an IP camera using a GStreamer pipeline from within a C++ OpenCV application on board a Radxa Zero 3W. The main loop logic looks like this:
...
// in_pipeline is a std::string instance loaded from a YAML config
cv::VideoCapture cap(in_pipeline, cv::CAP_GSTREAMER); // <--- Here the failure occurs
if (!cap.isOpened()) {
    std::cerr << "Failed to open stream with pipeline: " << in_pipeline << "\n";
    return 1;
}
cv::Mat frame;
cap.read(frame);
...
cv::VideoWriter writer(
    out_pipeline_template,  // Also loaded from YAML config
    0,                      // fourcc
    OUTPUT_FPS,             // Command-line arg; default manually set to the camera's rate
    cv::Size(640, 480),
    true
);
while (cap.read(frame)) {
    ...
    writer.write(frame);
    ...
    cv::waitKey(1);
}
I'm aiming to get hardware acceleration through mppvideodec. However, although the pipeline runs smoothly with gst-launch-1.0, it fails to capture frames when launched from within OpenCV, producing repeated warnings on the serial console at GST_DEBUG=3:

transform could not transform video/x-raw(memory:DMABuf), format=(string)NV12, width=(int)2560, height=(int)1920, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt709, framerate=(fraction)30/1 in anything we support.
At the same time, the pipeline does run from within OpenCV with avdec_h264 in place of mppvideodec, but with latency of up to 15 s, since, as far as I understand, no hardware acceleration is used in that case.
My OpenCV build includes GStreamer support, as confirmed by opencv_version -v:
Video I/O:
DC1394: YES (2.2.6)
FFMPEG: YES
avcodec: YES (58.91.100)
avformat: YES (58.45.100)
avutil: YES (56.51.100)
swscale: YES (5.7.100)
avresample: NO
GStreamer: YES (1.18.4)
PvAPI: NO
v4l/v4l2: YES (linux/videodev2.h)
gPhoto2: YES
I have tried both decodebin/decodebin3 and manually constructed pipelines in various combinations:
rtspsrc location=rtsp://user:passwd@<webcam_ip>/stream=0 latency=0 ! decodebin ! videoconvert ! appsink sync=false drop=true
rtspsrc location=rtsp://user:passwd@<webcam_ip>/stream=0 latency=0 ! rtph264depay ! h264parse ! mppvideodec ! videoconvert ! video/x-raw,format=BGR ! appsink sync=false drop=true
All of them run successfully with gst-launch-1.0 but fail with OpenCV, unless mppvideodec is replaced with avdec_h264.
All trivial issues, such as URL correctness and matching the camera's stream endpoints, have been double-checked.
I have the necessary gstreamer1.0-rockchip1 package installed, and MPP support is known to my GStreamer build, judging by the output of gst-inspect-1.0 | grep mpp:
rockchipmpp: mpph264enc: Rockchip Mpp H264 Encoder
rockchipmpp: mpph265enc: Rockchip Mpp H265 Encoder
rockchipmpp: mppvp8enc: Rockchip Mpp VP8 Encoder
rockchipmpp: mppjpegenc: Rockchip Mpp JPEG Encoder
rockchipmpp: mppvideodec: Rockchip's MPP video decoder
rockchipmpp: mppjpegdec: Rockchip's MPP JPEG image decoder
The OS I run on the Zero 3W is the OpenIPC groundstation build, which is Debian Bullseye arm64 under the hood. The issue is reproducible on versions v2.0.0 beta 2 and v1.9.9.
I would appreciate any help or hints in resolving this issue. Alternatively, I would also be thankful for hints on how to remove the huge (10-15 s) latency while streaming video through OpenCV (since further frame modification and/or text overlays will be necessary). I have also tried FFMPEG as a backend, including ffmpeg-rockchip, but so far the latency is even larger with it.