Yes. It is used automatically.
[FFmpeg] Introduce FFmpeg-Rockchip for hyper fast video transcoding via CLI
Oh derp, I thought I had already installed the libraries, oops…
What was the way to view the ffmpeg output in real time, instead of encoding it to a file on my desktop, for computer vision?
P.S. You're a lifesaver.
If viewing the video in real time is enough, then use ffplay
(this method is not the most efficient because ffplay does not support zero-copy).
ffplay -vcodec h264_rkmpp -i /path/to/video
For a better playback experience, you can use MPV or Kodi instead.
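For example (assuming an mpv build with hardware decoding enabled; --hwdec=auto simply asks mpv to pick whatever hw decoder it supports, which may or may not be the Rockchip one on your build):
mpv --hwdec=auto /path/to/video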
If you want to see it in real time, see Avafinger's thread and look for the last replies in it:
https://forum.radxa.com/t/object-detection-with-npu-h264-streams-h264-camera-rtsp
I don't know any way of doing it via CLI, but I've found it easier to do with GStreamer:
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! rkximagesink sync=false
I'll try 4K encoding later; really excited to try it out. If I can get at least 15 fps, I'll be very happy.
Got 20 FPS at 3840x2160 on RK3566 with HEVC encoding. I'm very satisfied.
Good result. If you need more, the RK3588 can give you 4K 120FPS.
My custom board has an RK3566 on it, so I am fully committed at this point. Besides, I think 20 fps is good enough.
Hi @nyanmisaka,
I was wondering if it is possible, with your ffmpeg version, to do the following on an RK3588 board.
I want to receive either an SRT or RTMP stream, add an HTML overlay (a scoreboard), and restream back to SRT or RTMP(S).
Of course ffmpeg doesn't support HTML overlays, but I see that through the RGA filter it is possible to blend an image.
Now, the key here would be the possibility to "refresh" the image to be blended in the ffmpeg pipe, say every second, while in parallel a headless browser instance just "captures" the image, also every second.
Not very elegant for sure, but…
The other alternative I'm aware of is RidgeRun's HTML overlay plugin for GStreamer, but it is expensive and doesn't support the RK3588.
So, would this be possible?
Thank you
As far as I know, FFmpeg cannot render HTML/JS text into RGBA images. Both the software filter overlay and the hardware filter overlay_rkrga take two images as input and produce one image as output. The output can be fed to the hardware encoder and streamed through RTMP.
What you should do is render the HTML/JS text to RGBA images using an arbitrary method and pipe it to FFmpeg CLI as an overlay input. In addition, you also need to ensure that the two inputs have matching timestamps or framerates, otherwise they will become out of sync.
ffmpeg -hwaccel rkmpp -hwaccel_output_format drm_prime -afbc rga -i "rtmp://..." \
-f rawvideo -s 1920x1080 -pix_fmt rgba -i - \
-filter_complex "[1:v]hwupload[html];[0:v][html]overlay_rkrga=format=nv12:afbc=1:eof_action=pass:repeatlast=0" \
-c:v h264_rkmpp -b:v 6M -maxrate 6M -g:v 250 \
-c:a copy -sn -dn \
-f flv "rtmp://..."
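To test the pipe wiring before your actual renderer exists, you can generate synthetic RGBA frames with a second ffmpeg process and pipe its stdout into the command above (testsrc2 here is only a stand-in for your HTML capture; match rate= to the framerate of the RTMP input, per the sync note above):
ffmpeg -f lavfi -i testsrc2=size=1920x1080:rate=25 -pix_fmt rgba -f rawvideo - | <the ffmpeg command above>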
If you can use FFmpeg through the libav* libraries, then you only need to wrap each rendered RGBA image into an AVFrame and blend it with the AVFrames decoded from the RTMP/H264 input. Don't forget to use the hwupload filter (sw frame -> hw frame, required by the rkrga hw filters).
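A minimal sketch of that wrapping step (the helper name wrap_rgba is mine, not a libav* API; error paths and the buffersrc/hwupload filtergraph setup are omitted):

#include <stdint.h>
#include <string.h>
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>

/* Copy one rendered RGBA buffer (tightly packed, w*4 bytes per row)
 * into a freshly allocated AVFrame that a filtergraph can consume. */
static AVFrame *wrap_rgba(const uint8_t *pixels, int w, int h, int64_t pts)
{
    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return NULL;
    frame->format = AV_PIX_FMT_RGBA;
    frame->width  = w;
    frame->height = h;
    frame->pts    = pts; /* must be expressed in the filtergraph's timebase */
    if (av_frame_get_buffer(frame, 0) < 0) {
        av_frame_free(&frame);
        return NULL;
    }
    /* copy row by row: frame->linesize[0] may be padded beyond w*4 */
    for (int y = 0; y < h; y++)
        memcpy(frame->data[0] + (size_t)y * frame->linesize[0],
               pixels + (size_t)y * w * 4, (size_t)w * 4);
    return frame;
}

The resulting frame is then pushed into a buffersrc -> hwupload -> overlay_rkrga graph together with the decoded frames.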
Alternatively, if you are familiar with libavfilter, you can integrate any C/C++-friendly HTML renderer into FFmpeg as a filter/vsrc. IMO this is probably the most elegant solution.
Yes, I'm aware that ffmpeg is not able to process HTML content directly. That is why I said a headless browser would run as a separate process, "capturing" an image every so often (say every second, for a very simple case).
But the key is, as you say, that I need to blend two images: one coming from the video source, and the other generated by that capture process.
Can I make this "dynamic", i.e. make ffmpeg "refresh" the image every second?
As for potential out-of-sync situations, that is not very relevant: ffmpeg would use the same "to be blended" image for a whole second, and in that time there should be plenty of time to generate the "next" image.
Perhaps the second way in this answer is feasible: https://stackoverflow.com/questions/64043336/ffmpeg-pipe-input-for-concat
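Something like this is what I have in mind (untested; it relies on the image2 demuxer re-reading the file on every loop iteration, which reportedly works as long as the file is replaced atomically with mv rather than rewritten in place, and render-scoreboard below is just a placeholder for whatever headless-browser screenshot command I end up using):

ffmpeg -hwaccel rkmpp -hwaccel_output_format drm_prime -i "rtmp://..." \
-f image2 -loop 1 -framerate 1 -i /tmp/overlay.png \
-filter_complex "[1:v]hwupload[ovl];[0:v][ovl]overlay_rkrga=format=nv12:eof_action=pass" \
-c:v h264_rkmpp -b:v 6M -c:a copy -f flv "rtmp://..."

while true; do render-scoreboard /tmp/next.png && mv /tmp/next.png /tmp/overlay.png; sleep 1; done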
Hi. I was testing the ffmpeg-rockchip project on my RK3568 board, but I ran into a problem after compiling the files in the ffmpeg/doc/examples/ dir. When I type
sudo ./hw_decode rkmpp /path/to/video.mp4 rawfile
I get the error "Decoder h264 does not support device type rkmpp." I believe something goes wrong in avcodec_get_hw_config(decoder, i); the value of i seems to stay at 0.
And it's not just rkmpp that fails; none of the other hw_type_names[] entries work either.
Meanwhile, running
./ffmpeg -stream_loop -1 -hwaccel rkmpp -hwaccel_output_format drm_prime -afbc rga -i /path/to/any-h264-video.mp4 -an -sn -vframes 5000 -f null -
works fine.
Could you please give me a hint? Is there something I'm not doing right?
Hello, this example program only supports ffmpeg's built-in hw accelerators, not hw decoders. For example, Intel QSV (not VAAPI), Nvidia CUVID (not NVDEC), and RKMPP are all hw decoders rather than hw accelerators. ffmpeg uses a trick to make both accels and decoders share the -hwaccel command.
Thank you for your reply! So is it possible to just use the ffmpeg API for decoding and transcoding (using the MPP and RGA units) in my own project? Or would I have to use the mpp and rga library APIs directly instead?
Of course you can. ffmpeg itself uses the libav* API, and you can too, but you can't reuse the code paths from the hw_decode example.
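In a sketch, the difference looks like this (only the decoder lookup changes; the rest is the usual demux plus send_packet/receive_frame loop, and this is my reading of the code, not a complete example):

#include <libavcodec/avcodec.h>

/* hw_decode.c does roughly this, which only enumerates hwaccels: */
const AVCodec *sw_dec = avcodec_find_decoder(AV_CODEC_ID_H264);
/* for a hw *decoder* like rkmpp, select it directly by name instead: */
const AVCodec *dec = avcodec_find_decoder_by_name("h264_rkmpp");
AVCodecContext *avctx = avcodec_alloc_context3(dec);
/* ...then avcodec_open2(avctx, dec, NULL) and decode as usual */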
Thanks for this work, I will seek to use it along with TVHeadend.
Does this also work for Radxa Zero 3W? (RK3566 with H264/5 decode/encode)
It should. MPP/RGA is platform agnostic.
Hi.
I need to record from a USB video capture device ("/dev/video0") that can only reach 20 fps in MJPEG format, not YUV.
Is it possible to encode from MJPEG -> H264 into an .mp4 file?
Like this:
ffmpeg -f v4l2 -input_format mjpeg -framerate 20 -video_size 1280x720 -i /dev/video0 -c:v h264_rkmpp -b:v 2M -r 20 Record1.mp4
For this task, which Rock series do you recommend?
Is a Rock 3 enough?
Confirmed working on Radxa Zero 3W (RK3566)
Thank you!