ROCK 5B Debug Party Invitation

I grabbed a frame from the camera, asked for 1920x1080 NV12, and received this:
-rw-rw-r-- 1 rock rock 12441600 Aug 7 16:03 frame_1920x1080.nv12

I need to be sure about what I am really getting. I expect 4K@30 with the current driver, but the size is wrong.
v4l2-ctl --device /dev/video0 --stream-mmap=4 --stream-count=1 --stream-skip=150 --set-fmt-video=width=1920,height=1080,pixelformat=NV12
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 30.00 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 30.00 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 30.00 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 30.00 fps

3840 x 2160 x 3/2 = 12441600 bytes, so that file is actually a 4K NV12 frame, captured at 30 fps.
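For reference, NV12 carries 12 bits per pixel (a full-resolution Y plane plus a half-resolution interleaved UV plane), so a quick sanity check of the expected frame size is just width x height x 3/2:

echo $((3840*2160*3/2))   # 12441600 -> matches the file above, i.e. a 4K frame
echo $((1920*1080*3/2))   # 3110400  -> what a real 1920x1080 NV12 frame would be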

You can update your review: the current driver can stream 4K@30fps.

Some additional info so far: I could not make the camera engine work properly, and the current driver delivers only '4K' frames. Passing a wrong frame size freezes the board.
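To see which formats and frame sizes the driver actually advertises before requesting anything (and hopefully avoid the freeze), something like this should do it:

v4l2-ctl --device /dev/video0 --list-formats-ext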

I need to find out how to set up the engine correctly, so I can experiment with a few other things:

video device: /dev/video0
width: 3840 x height: 2160
INFO: libSDL: compiled with=2.0.20 linked against=2.0.20
INFO: Renderer Driver (default): (null)
INFO: Renderer Driver (set): opengles2
SDL information:
    SDL_FRAMEBUFFER_ACCELERATION: (null)
    SDL_RENDER_DRIVER: opengles2
    SDL_RENDER_OPENGL_SHADERS: (null)
    SDL_RENDER_LOGICAL_SIZE_MODE: (null)
    SDL_RENDER_SCALE_QUALITY: (null)
    SDL_RENDER_VSYNC: (null)
    SDL_VIDEO_HIGHDPI_DISABLED: (null)
    SDL_VIDEO_WIN_D3DCOMPILER: (null)
    SDL_VIDEO_WINDOW_SHARE_PIXEL_FORMAT: (null)
    SDL_VIDEO_DOUBLE_BUFFER: (null)
videodevice: /dev/video0 (final)
Device information:
  Device path:  /dev/video0
Stream settings:
  Frame format: NV12
  Frame size:   3840x2160
Unable to set frame rate: Inappropriate ioctl for device
Unable to read out current frame rate: Inappropriate ioctl for device
arm_release_ver of this libmali is 'g6p0-01eac0', rk_so_ver is '5'.
frame rate: 30.0601

Done, beautiful image: IMX415 4K@30fps rendered at 1920x1080 with kmsdrm (LED light bulb in the scene here).

Update:
Currently all frame sizes work at 30 fps in my tests.

Tested so far:

1280x768@30fps
160x120@30fps
1920x1080@30fps
320x240@30fps
3840x2160@30fps
640x480@30fps
800x600@30fps

Apparently, the stream is always 3840x2160, and the engine scales and enhances the image for the other sizes. Getting a 3840x2160 frame takes 9% CPU, while 1280x768 takes 13% CPU. There is also an HDR mode for 1920x1080.


Trying to rmmod bifrost_kbase when it is built as a module causes a kernel panic:

[  104.686695][ T2260] Kernel panic - not syncing: panic_on_set_domain set ...
[  104.687322][ T2260] CPU: 2 PID: 2260 Comm: rmmod Not tainted 5.10.66-ixn-97906-gc78a1e33c719 #6
[  104.688089][ T2260] Hardware name: Radxa ROCK 5B (DT)
[  104.688534][ T2260] Call trace:
[  104.688829][ T2260]  dump_backtrace+0x0/0x1e0
[  104.689221][ T2260]  show_stack+0x1c/0x24
[  104.689581][ T2260]  dump_stack_lvl+0xc4/0xe8
[  104.689971][ T2260]  dump_stack+0x14/0x50
[  104.690328][ T2260]  panic+0x170/0x3a4
[  104.690665][ T2260]  rockchip_pd_power+0x4e4/0x5dc
[  104.691091][ T2260]  rockchip_pd_power_on+0x28/0x30
[  104.691526][ T2260]  _genpd_power_on+0xbc/0x15c
[  104.691928][ T2260]  genpd_power_on+0xac/0x1a0
[  104.692330][ T2260]  genpd_runtime_resume+0x9c/0x240
[  104.692776][ T2260]  __rpm_callback+0x90/0x154
[  104.693177][ T2260]  rpm_callback+0x28/0x8c
[  104.693556][ T2260]  rpm_resume+0x51c/0x780
[  104.693934][ T2260]  __pm_runtime_resume+0x40/0x90
[  104.694360][ T2260]  __device_release_driver+0x3c/0x230
[  104.694828][ T2260]  driver_detach+0xc4/0x150
[  104.695219][ T2260]  bus_remove_driver+0x60/0xe0
[  104.695631][ T2260]  driver_unregister+0x34/0x60
[  104.696045][ T2260]  platform_driver_unregister+0x18/0x20
[  104.696791][ T2260]  kbase_platform_driver_exit+0x18/0x217c [bifrost_kbase]
[  104.697410][ T2260]  __arm64_sys_delete_module+0x198/0x264
[  104.697903][ T2260]  el0_svc_common.constprop.0+0x80/0x230
[  104.698394][ T2260]  do_el0_svc+0x28/0x90
[  104.698751][ T2260]  el0_svc+0xc/0x14
[  104.699086][ T2260]  el0_sync_handler+0xe0/0x110
[  104.699498][ T2260]  el0_sync+0x158/0x180
[  104.699862][ T2260] SMP: stopping secondary CPUs
[  105.866945][ T2260] SMP: failed to stop secondary CPUs 0-7

This makes Panfrost development a bit trickier, though I could probably find workarounds to keep using kbase.

It appears that the GPU frequencies are a bit of a lie. Here are the frequencies I measured, though they might differ between chips:

devfreq (MHz)   Actual (MHz)
  300             330
  400             423
  500             536
  600             637
  700             703
  800             780
  900             889
 1000             990
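If anyone wants to cross-check the requested side of that table, those values are the devfreq OPPs and can be read from sysfs. The fb000000.gpu node name below is an assumption based on the RK3588 BSP device tree, so adjust it to whatever actually shows up under /sys/class/devfreq/:

ls /sys/class/devfreq/                                      # find the GPU node, e.g. fb000000.gpu
cat /sys/class/devfreq/fb000000.gpu/available_frequencies   # the advertised OPPs
cat /sys/class/devfreq/fb000000.gpu/cur_freq                # currently requested frequency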

Update from my side: decoding is working with ffplay. I tested the following files:

bbb_sunflower_1080p_30fps_normal.mp4  
jellyfish-20-mbps-hd-hevc-10bit.mkv
jellyfish-120-mbps-4k-uhd-h264.mkv    
jellyfish-20-mbps-hd-hevc.mkv
jellyfish-20-mbps-hd-h264.mkv

Oops, I realized ffplay is using software decoding:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND  
   5823 rock      20   0 1861860 200304  63700 S 195.0   1.2   0:43.55 ffplay 
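To check whether the rkmpp hardware decoders are present in this build at all (the *_rkmpp decoder names are what the rkmpp patchset usually registers, so treat them as an assumption), something like:

ffmpeg -decoders 2>/dev/null | grep rkmpp
ffplay -vcodec hevc_rkmpp -i jellyfish-20-mbps-hd-hevc.mkv   # force a specific decoder if it shows up in the list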

Encoding with gstreamer also works, but H.264 encoded from the camera stream (3840x2160) comes out at 5 fps when played back on my old Intel box.

H264:
gst-launch-1.0 v4l2src device=/dev/video11 io-mode=dmabuf ! 'video/x-raw,format=NV12,width=3840,height=2160,framerate=30/1' ! mpph264enc ! filesink location=test_3840x2160_30fps_h264.mp4

H265:
gst-launch-1.0 v4l2src device=/dev/video11 io-mode=dmabuf ! 'video/x-raw,format=NV12,width=3840,height=2160,framerate=30/1' ! mpph265enc ! filesink location=test_3840x2160_30fps_h265.mkv
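One note on those pipelines: without a muxer, filesink just writes the raw elementary stream, whatever the file extension says (which is also why ffplay later detects the .mkv as plain 'hevc'). If the files are meant to be played elsewhere, adding a parser and muxer is probably cleaner; a sketch I have not timed myself:

gst-launch-1.0 v4l2src device=/dev/video11 io-mode=dmabuf ! 'video/x-raw,format=NV12,width=3840,height=2160,framerate=30/1' ! mpph265enc ! h265parse ! matroskamux ! filesink location=test_3840x2160_30fps_h265_muxed.mkv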

For some reason, my (ancient) Intel box displays it at 3 fps at 1080p, so maybe it simply cannot handle the 4K H.264/H.265 files properly.

Playing the H.265 file on the Rock 5B at 1080p is just fine:

ffplay -i test_3840x2160_30fps_h265.mkv 
ffplay version git-2022-05-25-73d7bc2 Copyright (c) 2003-2021 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
  configuration: --prefix=/usr --disable-libopenh264 --disable-vaapi --disable-vdpau --disable-decoder=h264_v4l2m2m --disable-decoder=vp8_v4l2m2m --disable-decoder=mpeg2_v4l2m2m --disable-decoder=mpeg4_v4l2m2m --disable-libxvid --disable-libx264 --disable-libx265 --enable-librga --enable-rkmpp --enable-nonfree --enable-gpl --enable-version3 --enable-libmp3lame --enable-libpulse --enable-libv4l2 --enable-libdrm --enable-libxml2 --enable-librtmp --enable-libfreetype --enable-openssl --enable-opengl --enable-libopus --enable-libvorbis --enable-shared --enable-decoder='aac,ac3,flac' --extra-cflags=-I/usr/src/linux-headers-5.10.66-rk3588/include
  libavutil      57.  7.100 / 57.  7.100
  libavcodec     59. 12.100 / 59. 12.100
  libavformat    59.  8.100 / 59.  8.100
  libavdevice    59.  0.101 / 59.  0.101
  libavfilter     8. 16.100 /  8. 16.100
  libswscale      6.  1.100 /  6.  1.100
  libswresample   4.  0.100 /  4.  0.100
  libpostproc    56.  0.100 / 56.  0.100
arm_release_ver of this libmali is 'g6p0-01eac0', rk_so_ver is '5'.
arm_release_ver of this libmali is 'g6p0-01eac0', rk_so_ver is '5'.
[hevc @ 0x7f94000c10] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, hevc, from 'test_3840x2160_30fps_h265.mkv':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: hevc (Main), yuv420p(tv), 3840x2160, 30 fps, 30 tbr, 1200k tbn
    nan M-V:    nan fd=   9 aq=    0KB vq=    0KB sq=    0B f=0/0

I don’t understand why you’re speaking about the Rock 4; I don’t have it and never spoke about it. Confused.

@willy, nvm, I got mixed up with your photo, just forget I said anything, sorry.

That’s interesting. @tkaiser and I noticed the same for the CPU frequencies. Lower ones measure slightly higher than advertised and higher ones measure a bit lower. Typically my CPU claims to run at 2304 MHz but measures 2267. It’s as if, instead of cheating by making all the upper frequencies equal, they now progressively dampen them so that it’s harder to detect.

The Rock 5B IS working with a PCIe switch (in my case an 8-port U.2 NVMe board) on Radxa Ubuntu Focal!


Your M.2 > PCIe x16 > SFF-8643 x8 > NVMe adapter setup is inspiring; each SFF-8643 can carry 4 lanes of SATA3.

I plan to try M.2 > SFF-8643 > PCIe x16 > 4x1 bifurcation > 2x JMB585 > SATA3 SSDs,

or 2x2 bifurcation > 2x JMB585 > SATA3 SSDs.

I’m a little afraid of the SFF-8643 to PCIe x16 part…

Also working with the ANM24PE16 4x NVMe switch board. So here is a quick test with fio (only 30 s runs, 4k with iodepth=32 and 64k with iodepth=32). All NVMe drives are formatted with 4k logical sectors.
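For reference, the 4k logical formatting can be done with nvme-cli; the LBA format index differs per drive (and formatting wipes the namespace), so the --lbaf value below is just an example:

nvme id-ns -H /dev/nvme1n1 | grep 'LBA Format'   # list the supported LBA formats
nvme format /dev/nvme1n1 --lbaf=1                # pick the index of the 4k entry from the list above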

root@rock-5b:/home/rock# lsscsi
[N:0:1:1]    disk    KINGSTON SKC2500M81000G__1                 /dev/nvme0n1
[N:1:1:1]    disk    KINGSTON SKC2500M81000G__1                 /dev/nvme1n1
[N:2:4:1]    disk    Samsung SSD 983 DCT M.2 960GB__1           /dev/nvme2n1
[N:3:4:1]    disk    Samsung SSD 983 DCT M.2 960GB__1           /dev/nvme3n1

Single NVMe (n1 + n2) via switch 4k read

4k io=32 1 thread, random read:

fio -name=rndr4k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=1 -bs=4k -iodepth=32 -rw=randread -filename=/dev/nvme1n1
read: IOPS=205k, BW=800MiB/s (838MB/s)(23.4GiB/30001msec)
nvme1n1: ios=6110887/0, merge=0/0, ticks=100603/0, in_queue=100602, util=99.68%

4k io=32 4 threads, random read:

fio -name=rndr4k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=4 -bs=4k -iodepth=32 -rw=randread -filename=/dev/nvme1n1
Jobs (all jobs): 4 (f=4): [r(4)][100.0%][r=1624MiB/s][r=416k IOPS]
nvme1n1: ios=12407866/0, merge=0/0, ticks=3120570/0, in_queue=3120569, util=99.96%

4k io=32 1 thread, random read:

fio -name=rndr4k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=1 -bs=4k -iodepth=32 -rw=randread -filename=/dev/nvme2n1
read: IOPS=204k, BW=796MiB/s (834MB/s)(23.3GiB/30001msec)
 nvme2n1: ios=6081630/0, merge=0/0, ticks=144942/0, in_queue=144942, util=99.69%

4k io=32 4 threads, random read:

fio -name=rndr4k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=4 -bs=4k -iodepth=32 -rw=randread -filename=/dev/nvme2n1
Jobs (all jobs): 4 (f=4): [r(4)][100.0%][r=2225MiB/s][r=570k IOPS][eta 00m:00s]
nvme2n1: ios=16851241/0, merge=0/0, ticks=3599192/0, in_queue=3599192, util=99.89%

Single NVMe (n1 + n2) via switch 4k write

4k io=32 1 thread, random write:

fio -name=rndr4k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=1 -bs=4k -iodepth=32 -rw=randwrite -filename=/dev/nvme1n1
 write: IOPS=171k, BW=667MiB/s (700MB/s)(19.6GiB/30001msec); 0 zone resets
nvme1n1: ios=17/5100863, merge=0/0, ticks=1/137856, in_queue=137857, util=99.82%

4k io=32 4 threads, random write:

fio -name=rndr4k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=4 -bs=4k -iodepth=32 -rw=randwrite -filename=/dev/nvme1n1
Jobs (All jobs): 4 (f=4): [w(4)][100.0%][w=954MiB/s][w=244k IOPS][eta 00m:00s]
nvme1n1: ios=14/7506401, merge=0/0, ticks=0/3006631, in_queue=3006631, util=99.77%

4k io=32 1 thread, random write:

fio -name=rndr4k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=1 -bs=4k -iodepth=32 -rw=randwrite -filename=/dev/nvme2n1
 write: IOPS=190k, BW=742MiB/s (778MB/s)(21.7GiB/30001msec); 0 zone resets
 nvme2n1: ios=13/5673605, merge=0/0, ticks=1/73403, in_queue=73404, util=99.66%

4k io=32 4 threads, random write:

fio -name=rndr4k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=4 -bs=4k -iodepth=32 -rw=randwrite -filename=/dev/nvme2n1
Jobs (All jobs): 4 (f=4): [w(4)][100.0%][w=1191MiB/s][w=305k IOPS][eta 00m:00s]
nvme2n1: ios=17/8963788, merge=0/0, ticks=1/3680680, in_queue=3680681, util=99.74%

Single NVMe (n1 + n2) via switch 64k read

64k io=32 1 thread, seq read:

fio -name=seqw64k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=1 -bs=64k -iodepth=32 -rw=read -filename=/dev/nvme1n1
  read: IOPS=47.6k, BW=2974MiB/s (3118MB/s)(87.1GiB/30001msec)
nvme1n1: ios=1422056/0, merge=0/0, ticks=941038/0, in_queue=941038, util=99.78%

64k io=32 4 threads, seq read:

fio -name=seqw64k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=4 -bs=64k -iodepth=32 -rw=read -filename=/dev/nvme1n1
 Jobs (all jobs): 4 (f=4): [R(4)][100.0%][r=2972MiB/s][r=47.6k IOPS][eta 00m:00s]
 nvme1n1: ios=1105233/0, merge=0/0, ticks=3568212/0, in_queue=3568212, util=99.93%

64k io=32 1 thread, seq read:

fio -name=seqw64k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=1 -bs=64k -iodepth=32 -rw=read -filename=/dev/nvme2n1
read: IOPS=45.7k, BW=2857MiB/s (2996MB/s)(83.7GiB/30001msec)
nvme2n1: ios=1366948/0, merge=0/0, ticks=933056/0, in_queue=933057, util=99.73%

64k io=32 4 threads, seq read:

fio -name=seqw64k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=4 -bs=64k -iodepth=32 -rw=read -filename=/dev/nvme2n1
Jobs (All jobs): 4 (f=4): [R(4)][100.0%][r=2293MiB/s][r=36.7k IOPS][eta 00m:00s]
nvme2n1: ios=1247837/0, merge=0/0, ticks=3762737/0, in_queue=3762738, util=99.80%

Single NVMe (n1 + n2) via switch 64k write

64k io=32 1 thread, seq write:

fio -name=seqw64k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=1 -bs=64k -iodepth=32 -rw=write -filename=/dev/nvme1n1
write: IOPS=29.7k, BW=1856MiB/s (1946MB/s)(54.4GiB/30001msec); 0 zone resets
nvme1n1: ios=14/886685, merge=0/0, ticks=1/857459, in_queue=857460, util=99.76%

64k io=32 4 threads, seq write:

fio -name=seqw64k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=4 -bs=64k -iodepth=32 -rw=write -filename=/dev/nvme1n1
Jobs (All jobs): 4 (f=4): [W(4)][100.0%][w=1912MiB/s][w=30.6k IOPS][eta 00m:00s]
 nvme1n1: ios=42/912157, merge=0/0, ticks=2/3250556, in_queue=3250559, util=99.88%

64k io=32 1 thread, seq write:

fio -name=seqw64k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=1 -bs=64k -iodepth=32 -rw=write -filename=/dev/nvme2n1
write: IOPS=19.5k, BW=1217MiB/s (1276MB/s)(35.6GiB/30002msec); 0 zone resets
nvme2n1: ios=45/581677, merge=0/0, ticks=4/924309, in_queue=924313, util=99.88%

64k io=32 4 threads, seq write (basically, the Samsung 983 tops out at about 1.2 GiB/s on writes by design):

fio -name=seqw64k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=4 -bs=64k -iodepth=32 -rw=write -filename=/dev/nvme2n1
Jobs (all jobs): 4 (f=4): [W(4)][100.0%][w=1126MiB/s][w=18.0k IOPS][eta 00m:00s]
nvme2n1: ios=13/580539, merge=0/0, ticks=1/3760516, in_queue=3760517, util=99.78%

And a bit of maximum performance from PCIe 3.0 x4 and the CPU:

mdadm raid0 4 NVMe 128k seq read

128k io=32 4 threads, seq read:

mdadm --create /dev/md0 --verbose --level=0 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
fio -name=seq128k32 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 -runtime=30 -numjobs=4 -bs=128k -iodepth=32 -rw=read -filename=/dev/md0
 Jobs: 4 (f=4): [R(4)][100.0%][r=3133MiB/s][r=25.1k IOPS][eta 00m:00s]
md0: ios=749024/0, merge=0/0, ticks=3745024/0, in_queue=3745024, util=99.94%, aggrios=187942/0, aggrmerge=0/0, aggrticks=940406/0, aggrin_queue=940406, aggrutil=99.49%
nvme0n1: ios=187951/0, merge=0/0, ticks=2691964/0, in_queue=2691965, util=99.49%
nvme3n1: ios=187936/0, merge=0/0, ticks=124958/0, in_queue=124958, util=99.46%
nvme2n1: ios=187937/0, merge=0/0, ticks=123346/0, in_queue=123346, util=99.46%
nvme1n1: ios=187947/0, merge=0/0, ticks=821358/0, in_queue=821358, util=99.48%

Did you note the values of the CPU line at the end of the task log?

Not really, those were just quick tests.
I will redo them anyway after I recompile the kernel to add support for my 10G NIC.

It’s great that you did it.
The SAS standard had little use with PCIe 2, but with PCIe 3 it becomes fun and gives a lot of freedom.
Anyway, I’m glad to hear that it works.

Just in case: that wasn’t SAS, that was NVMe.

P.S. I only now noticed that the second PCIe slot is E-key, and now I need to find an adapter to turn it into B-key…

Hi,
I finally received my 2.5/5/10G RJ45 NIC for testing. It’s a TP-Link TX-401 equipped with an Aquantia AQC107 chip:
02:00.0 Ethernet controller [0200]: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] [1d6a:07b1] (rev 02)


Thus I could run some quick tests to validate the 2.5G capability of the Rock 5B with it. Well, to put it shortly, I find that Realtek has made lots of progress: I was expecting to see traffic drops, hangs or the like, but on the contrary it went very well, with 2.35 Gbps of HTTP traffic flowing in each direction. I monitored the CPU as well and it reached 16-17%; knowing that both the client and the server were running on the board, all this is quite respectable.

Thus my conclusion here is that this board, out of the box, is already very capable as an application server, file server or networked equipment. Maybe even using it as a single-port firewall via 802.1q port tagging would do a great job. And in any case this Realtek chip is way ahead of the dwmac controller usually found on such families of SoCs, so it was a good decision to sacrifice one PCIe lane instead of using the internal NICs. The board could possibly even be extended via the second M.2 slot if a 5GbE NIC were available in this format (e.g. the AQC107 chip above, or rather its little brother the AQC108, which is limited to 5GbE and could make optimal use of a single PCIe 3 lane).
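For the 802.1q idea, the usual iproute2 tagging would be enough on the board side; a minimal sketch (interface name and VLAN IDs are just examples):

ip link add link eth0 name eth0.10 type vlan id 10   # tagged "LAN" side
ip link add link eth0 name eth0.20 type vlan id 20   # tagged "WAN" side
ip link set eth0.10 up && ip link set eth0.20 up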


From reading rk_pm_callback_power_on in mali_kbase_config_rk.c, I’ve found a workaround which allows removing kbase without causing a kernel panic:

# echo 1 >/sys/kernel/debug/regulator/vdd_gpu_s0/enable
# rmmod bifrost_kbase

At the moment, it is unlikely to be useful for anyone except for myself, but once Panfrost becomes more usable…


For those interested in inexpensive network extensions, I’ve found an M.2 A+E RTL8125 board for ~20 EUR which should be compatible with the M.2 slot designed for the WiFi board on the top side: https://www.aliexpress.com/item/1005004166784408.html
Since they’re using passive RJ45 jacks and the transformers are on the board, there’s no room for a second chip nor for the PCIe bridge that the PCIe bus could easily support, but it still makes it easy to add a second port without taking much space, if needed.

I’ve also found an M.2 A+E dual-sata3 board for ~14 EUR: https://www.aliexpress.com/item/1005003545144525.html

It’s really great that this SBC doesn’t impose a WiFi chip by default and leaves the slot spare like this; that makes it particularly modular and more extensible than what’s usually found on SBCs.

It’s getting tempting to move my file server to this board, but I’d rather wait for mainline support to appear first, or I know I’ll eventually regret it.


tl;dr: one core of the RK3588 is enough for a 10G channel, and it still leaves plenty of headroom for other tasks.

So after a bit of learning I was able to get my NC552SFP running on the Rock 5 as well. Since I have discovered that the second PCIe slot is E-key and I don’t have an E-key to PCIe adapter, I’m waiting for one from AliExpress.
The E-key slot is only PCIe 2.1 x1, which means 4 Gbit/s at most (5 GT/s with 8b/10b encoding). Not 10G, but still more than 2.5 Gbit/s.

In the meantime, here are the 10G results (connected through a MikroTik CRS305-1G-4S+IN):

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  65.7 GBytes  9.40 Gbits/sec    3             sender
[  5]   0.00-60.00  sec  65.7 GBytes  9.40 Gbits/sec                  receiver


Whole logs:


iperf3:

root@rock-5b:/home/rock# iperf3 -c 192.168.1.38 -t 60
Connecting to host 192.168.1.38, port 5201
[  5] local 192.168.1.50 port 39420 connected to 192.168.1.38 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.09 GBytes  9.38 Gbits/sec    3   1.14 MBytes
[  5]   1.00-2.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.25 MBytes
[  5]   2.00-3.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.25 MBytes
[  5]   3.00-4.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.25 MBytes
[  5]   4.00-5.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.26 MBytes
[  5]   5.00-6.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.27 MBytes
[  5]   6.00-7.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.27 MBytes
[  5]   7.00-8.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.28 MBytes
[  5]   8.00-9.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.28 MBytes
[  5]   9.00-10.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.29 MBytes
[  5]  10.00-11.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.29 MBytes
[  5]  11.00-12.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.29 MBytes
[  5]  12.00-13.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.30 MBytes
[  5]  13.00-14.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.30 MBytes
[  5]  14.00-15.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.30 MBytes
[  5]  15.00-16.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.31 MBytes
[  5]  16.00-17.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.31 MBytes
[  5]  17.00-18.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.40 MBytes
[  5]  18.00-19.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.51 MBytes
[  5]  19.00-20.00  sec  1.09 GBytes  9.41 Gbits/sec    0   2.33 MBytes
[  5]  20.00-21.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  21.00-22.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  22.00-23.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  23.00-24.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  24.00-25.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  25.00-26.00  sec  1.09 GBytes  9.41 Gbits/sec    0   2.33 MBytes
[  5]  26.00-27.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  27.00-28.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  28.00-29.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  29.00-30.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  30.00-31.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  31.00-32.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  32.00-33.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  33.00-34.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  34.00-35.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  35.00-36.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  36.00-37.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  37.00-38.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  38.00-39.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  39.00-40.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  40.00-41.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  41.00-42.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  42.00-43.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  43.00-44.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  44.00-45.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  45.00-46.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  46.00-47.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  47.00-48.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  48.00-49.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  49.00-50.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  50.00-51.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  51.00-52.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  52.00-53.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  53.00-54.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  54.00-55.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  55.00-56.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  56.00-57.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  57.00-58.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  58.00-59.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
[  5]  59.00-60.00  sec  1.09 GBytes  9.40 Gbits/sec    0   2.33 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  65.7 GBytes  9.40 Gbits/sec    3             sender
[  5]   0.00-60.00  sec  65.7 GBytes  9.40 Gbits/sec                  receiver

iperf Done.
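Per-core load during the run can be watched in a second terminal with mpstat from the sysstat package (or plain top), e.g.:

mpstat -P ALL 1   # per-core utilisation, refreshed every second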