Guys, has anyone tested eMMC cards and can share the read/write speeds?
I have info that the read speed can be 100 MB/s or even 300 MB/s. That’s fantastic, but I need a first-hand report.
I tested a 64 GB eMMC module,
from an Ubuntu Server installation running on the eMMC itself. If you run the same command over and over, there is a noticeable drop in performance. If you just work with the board for a bit afterwards (around 60 seconds?) and run the command again, performance goes back to the initial value.
1GiB 512K blocks
write speed
root@localhost:~# sleep 90s
root@localhost:~# dd if=/dev/zero of=~/temp.tmp bs=512K count=2048
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.46047 s, 310 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=512K count=2048
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.71836 s, 228 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=512K count=2048
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.87945 s, 277 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=512K count=2048
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.47951 s, 240 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=512K count=2048
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.91423 s, 218 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=512K count=2048
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 14.1075 s, 76.1 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=512K count=2048
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 21.6402 s, 49.6 MB/s
read speed
root@localhost:~# sync; echo 3 | tee /proc/sys/vm/drop_caches; sync; time dd if=~/temp.tmp of=/dev/null bs=512K count=2048
3
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.72453 s, 188 MB/s
real 0m5.731s
user 0m0.008s
sys 0m1.896s
root@localhost:~# sync; echo 3 | tee /proc/sys/vm/drop_caches; sync; time dd if=~/temp.tmp of=/dev/null bs=512K count=2048
3
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.8293 s, 184 MB/s
real 0m5.834s
user 0m0.020s
sys 0m1.912s
root@localhost:~# sync; echo 3 | tee /proc/sys/vm/drop_caches; sync; time dd if=~/temp.tmp of=/dev/null bs=512K count=2048
3
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.78113 s, 186 MB/s
real 0m5.785s
user 0m0.016s
sys 0m1.952s
root@localhost:~# sync; echo 3 | tee /proc/sys/vm/drop_caches; sync; time dd if=~/temp.tmp of=/dev/null bs=512K count=2048
3
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.74879 s, 187 MB/s
real 0m5.754s
user 0m0.032s
sys 0m2.080s
root@localhost:~# sync; echo 3 | tee /proc/sys/vm/drop_caches; sync; time dd if=~/temp.tmp of=/dev/null bs=512K count=2048
3
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.72402 s, 188 MB/s
real 0m5.729s
user 0m0.016s
sys 0m2.108s
1GiB 4K blocks
write speed
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=4K count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.22876 s, 254 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=4K count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.42605 s, 243 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=4K count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.70888 s, 228 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=4K count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.0084 s, 153 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=4K count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.62131 s, 232 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=4K count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.43805 s, 242 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=4K count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 13.3003 s, 80.7 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=4K count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.9377 s, 89.9 MB/s
root@localhost:~# sync; dd if=/dev/zero of=~/temp.tmp bs=4K count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.057 s, 89.1 MB/s
read speed
root@localhost:~# sync; echo 3 | tee /proc/sys/vm/drop_caches; sync; time dd if=~/temp.tmp of=/dev/null bs=4K count=262144
3
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.84455 s, 184 MB/s
real 0m5.850s
user 0m0.164s
sys 0m2.372s
root@localhost:~# sync; echo 3 | tee /proc/sys/vm/drop_caches; sync; time dd if=~/temp.tmp of=/dev/null bs=4K count=262144
3
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.84832 s, 184 MB/s
real 0m5.853s
user 0m0.172s
sys 0m2.344s
root@localhost:~# sync; echo 3 | tee /proc/sys/vm/drop_caches; sync; time dd if=~/temp.tmp of=/dev/null bs=4K count=262144
3
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.86102 s, 183 MB/s
real 0m5.867s
user 0m0.120s
sys 0m2.316s
root@localhost:~# sync; echo 3 | tee /proc/sys/vm/drop_caches; sync; time dd if=~/temp.tmp of=/dev/null bs=4K count=262144
3
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.84357 s, 184 MB/s
real 0m5.850s
user 0m0.176s
sys 0m2.264s
root@localhost:~# sync; echo 3 | tee /proc/sys/vm/drop_caches; sync; time dd if=~/temp.tmp of=/dev/null bs=4K count=262144
3
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.90845 s, 182 MB/s
real 0m5.914s
user 0m0.172s
sys 0m2.256s
tl;dr
Write speed drops below 100 MB/s after 5-7 runs.
Additional testing with a 10 GiB (~11 GB) file (4K block size, 2621440 blocks; resulting speed: 45.8 MB/s) shows that speed does indeed drop after some amount of GiB has been written (the reason is the same as with SSDs, I guess).
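The repeated runs above can be scripted; here is a minimal sketch (the file path, run count, and 512 KiB × 2048 geometry are assumptions matching the runs above, not the poster's actual script):

```shell
# Minimal sketch of the repeated write test: sync, write count x 512 KiB
# with dd, print dd's summary line, repeat.
run_write_sweep() {
    local file="$1" runs="$2" count="$3"   # count x 512 KiB per run
    local i
    for i in $(seq 1 "$runs"); do
        sync
        dd if=/dev/zero of="$file" bs=512K count="$count" 2>&1 | tail -n 1
    done
    rm -f "$file"
}
```

For example, `run_write_sweep ~/temp.tmp 7 2048` reproduces the seven 1 GiB runs above.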
From a Debian desktop installed on an SD card:
Read speed is higher (up to 200 MB/s; sorry, no 300 MB/s).
Write speed is much lower (120-140 MB/s).
Hope it helps. It took a bit of time to do all these tests, which is why I couldn’t answer earlier.
Debian Stretch image
glmark2-es2 --off-screen
Score 241
glmark2-es2
Score 51
glmark2-es2 --fullscreen used 1920x1080 resolution
Score 33
Logs
root@linaro-alip:/home/linaro# glmark2-es2 --off-screen
=======================================================
glmark2 2017.07
=======================================================
OpenGL Information
GL_VENDOR: ARM
GL_RENDERER: Mali-T860
GL_VERSION: OpenGL ES 3.2 v1.r14p0-01rel0-git(966ed26).fb73b5772aa0adfbd3ad68351d4226c5
=======================================================
[build] use-vbo=false: FPS: 270 FrameTime: 3.704 ms
[build] use-vbo=true: FPS: 350 FrameTime: 2.857 ms
[texture] texture-filter=nearest: FPS: 433 FrameTime: 2.309 ms
[texture] texture-filter=linear: FPS: 416 FrameTime: 2.404 ms
[texture] texture-filter=mipmap: FPS: 405 FrameTime: 2.469 ms
[shading] shading=gouraud: FPS: 274 FrameTime: 3.650 ms
[shading] shading=blinn-phong-inf: FPS: 265 FrameTime: 3.774 ms
[shading] shading=phong: FPS: 239 FrameTime: 4.184 ms
[shading] shading=cel: FPS: 233 FrameTime: 4.292 ms
[bump] bump-render=high-poly: FPS: 149 FrameTime: 6.711 ms
[bump] bump-render=normals: FPS: 390 FrameTime: 2.564 ms
[bump] bump-render=height: FPS: 381 FrameTime: 2.625 ms
libpng warning: iCCP: known incorrect sRGB profile
[effect2d] kernel=0,1,0;1,-4,1;0,1,0;: FPS: 205 FrameTime: 4.878 ms
libpng warning: iCCP: known incorrect sRGB profile
[effect2d] kernel=1,1,1,1,1;1,1,1,1,1;1,1,1,1,1;: FPS: 242 FrameTime: 4.132 ms
[pulsar] light=false:quads=5:texture=false: FPS: 396 FrameTime: 2.525 ms
libpng warning: iCCP: known incorrect sRGB profile
[desktop] blur-radius=5:effect=blur:passes=1:separable=true:windows=4: FPS: 242 FrameTime: 4.132 ms
libpng warning: iCCP: known incorrect sRGB profile
[desktop] effect=shadow:windows=4: FPS: 262 FrameTime: 3.817 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 25 FrameTime: 40.000 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=subdata: FPS: 26 FrameTime: 38.462 ms
[buffer] columns=200:interleave=true:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 27 FrameTime: 37.037 ms
[ideas] speed=duration: FPS: 104 FrameTime: 9.615 ms
[jellyfish] <default>: FPS: 176 FrameTime: 5.682 ms
[terrain] <default>: FPS: 44 FrameTime: 22.727 ms
[shadow] <default>: FPS: 206 FrameTime: 4.854 ms
[refract] <default>: FPS: 39 FrameTime: 25.641 ms
[conditionals] fragment-steps=0:vertex-steps=0: FPS: 375 FrameTime: 2.667 ms
[conditionals] fragment-steps=5:vertex-steps=0: FPS: 212 FrameTime: 4.717 ms
[conditionals] fragment-steps=0:vertex-steps=5: FPS: 362 FrameTime: 2.762 ms
[function] fragment-complexity=low:fragment-steps=5: FPS: 275 FrameTime: 3.636 ms
[function] fragment-complexity=medium:fragment-steps=5: FPS: 188 FrameTime: 5.319 ms
[loop] fragment-loop=false:fragment-steps=5:vertex-steps=5: FPS: 269 FrameTime: 3.717 ms
[loop] fragment-steps=5:fragment-uniform=false:vertex-steps=5: FPS: 271 FrameTime: 3.690 ms
[loop] fragment-steps=5:fragment-uniform=true:vertex-steps=5: FPS: 208 FrameTime: 4.808 ms
=======================================================
glmark2 Score: 241
=======================================================
root@linaro-alip:/home/linaro# glmark2-es2
=======================================================
glmark2 2017.07
=======================================================
OpenGL Information
GL_VENDOR: ARM
GL_RENDERER: Mali-T860
GL_VERSION: OpenGL ES 3.2 v1.r14p0-01rel0-git(966ed26).fb73b5772aa0adfbd3ad68351d4226c5
=======================================================
[build] use-vbo=false: FPS: 58 FrameTime: 17.241 ms
[build] use-vbo=true: FPS: 59 FrameTime: 16.949 ms
[texture] texture-filter=nearest: FPS: 59 FrameTime: 16.949 ms
[texture] texture-filter=linear: FPS: 59 FrameTime: 16.949 ms
[texture] texture-filter=mipmap: FPS: 59 FrameTime: 16.949 ms
[shading] shading=gouraud: FPS: 59 FrameTime: 16.949 ms
[shading] shading=blinn-phong-inf: FPS: 59 FrameTime: 16.949 ms
[shading] shading=phong: FPS: 59 FrameTime: 16.949 ms
[shading] shading=cel: FPS: 59 FrameTime: 16.949 ms
[bump] bump-render=high-poly: FPS: 59 FrameTime: 16.949 ms
[bump] bump-render=normals: FPS: 59 FrameTime: 16.949 ms
[bump] bump-render=height: FPS: 59 FrameTime: 16.949 ms
libpng warning: iCCP: known incorrect sRGB profile
[effect2d] kernel=0,1,0;1,-4,1;0,1,0;: FPS: 59 FrameTime: 16.949 ms
libpng warning: iCCP: known incorrect sRGB profile
[effect2d] kernel=1,1,1,1,1;1,1,1,1,1;1,1,1,1,1;: FPS: 30 FrameTime: 33.333 ms
[pulsar] light=false:quads=5:texture=false: FPS: 59 FrameTime: 16.949 ms
libpng warning: iCCP: known incorrect sRGB profile
[desktop] blur-radius=5:effect=blur:passes=1:separable=true:windows=4: FPS: 30 FrameTime: 33.333 ms
libpng warning: iCCP: known incorrect sRGB profile
[desktop] effect=shadow:windows=4: FPS: 59 FrameTime: 16.949 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 19 FrameTime: 52.632 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=subdata: FPS: 19 FrameTime: 52.632 ms
[buffer] columns=200:interleave=true:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 19 FrameTime: 52.632 ms
[ideas] speed=duration: FPS: 46 FrameTime: 21.739 ms
[jellyfish] <default>: FPS: 59 FrameTime: 16.949 ms
[terrain] <default>: FPS: 24 FrameTime: 41.667 ms
[shadow] <default>: FPS: 59 FrameTime: 16.949 ms
[refract] <default>: FPS: 28 FrameTime: 35.714 ms
[conditionals] fragment-steps=0:vertex-steps=0: FPS: 59 FrameTime: 16.949 ms
[conditionals] fragment-steps=5:vertex-steps=0: FPS: 59 FrameTime: 16.949 ms
[conditionals] fragment-steps=0:vertex-steps=5: FPS: 59 FrameTime: 16.949 ms
[function] fragment-complexity=low:fragment-steps=5: FPS: 59 FrameTime: 16.949 ms
[function] fragment-complexity=medium:fragment-steps=5: FPS: 59 FrameTime: 16.949 ms
[loop] fragment-loop=false:fragment-steps=5:vertex-steps=5: FPS: 59 FrameTime: 16.949 ms
[loop] fragment-steps=5:fragment-uniform=false:vertex-steps=5: FPS: 59 FrameTime: 16.949 ms
[loop] fragment-steps=5:fragment-uniform=true:vertex-steps=5: FPS: 59 FrameTime: 16.949 ms
=======================================================
glmark2 Score: 51
=======================================================
root@linaro-alip:/home/linaro# glmark2-es2 --fullscreen
=======================================================
glmark2 2017.07
=======================================================
OpenGL Information
GL_VENDOR: ARM
GL_RENDERER: Mali-T860
GL_VERSION: OpenGL ES 3.2 v1.r14p0-01rel0-git(966ed26).fb73b5772aa0adfbd3ad68351d4226c5
=======================================================
[build] use-vbo=false: FPS: 43 FrameTime: 23.256 ms
[build] use-vbo=true: FPS: 44 FrameTime: 22.727 ms
[texture] texture-filter=nearest: FPS: 46 FrameTime: 21.739 ms
[texture] texture-filter=linear: FPS: 46 FrameTime: 21.739 ms
[texture] texture-filter=mipmap: FPS: 46 FrameTime: 21.739 ms
[shading] shading=gouraud: FPS: 43 FrameTime: 23.256 ms
[shading] shading=blinn-phong-inf: FPS: 42 FrameTime: 23.810 ms
[shading] shading=phong: FPS: 40 FrameTime: 25.000 ms
[shading] shading=cel: FPS: 39 FrameTime: 25.641 ms
[bump] bump-render=high-poly: FPS: 38 FrameTime: 26.316 ms
[bump] bump-render=normals: FPS: 44 FrameTime: 22.727 ms
[bump] bump-render=height: FPS: 44 FrameTime: 22.727 ms
libpng warning: iCCP: known incorrect sRGB profile
[effect2d] kernel=0,1,0;1,-4,1;0,1,0;: FPS: 28 FrameTime: 35.714 ms
libpng warning: iCCP: known incorrect sRGB profile
[effect2d] kernel=1,1,1,1,1;1,1,1,1,1;1,1,1,1,1;: FPS: 15 FrameTime: 66.667 ms
[pulsar] light=false:quads=5:texture=false: FPS: 43 FrameTime: 23.256 ms
libpng warning: iCCP: known incorrect sRGB profile
[desktop] blur-radius=5:effect=blur:passes=1:separable=true:windows=4: FPS: 16 FrameTime: 62.500 ms
libpng warning: iCCP: known incorrect sRGB profile
[desktop] effect=shadow:windows=4: FPS: 36 FrameTime: 27.778 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 16 FrameTime: 62.500 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=subdata: FPS: 16 FrameTime: 62.500 ms
[buffer] columns=200:interleave=true:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 16 FrameTime: 62.500 ms
[ideas] speed=duration: FPS: 30 FrameTime: 33.333 ms
[jellyfish] <default>: FPS: 30 FrameTime: 33.333 ms
[terrain] <default>: FPS: 10 FrameTime: 100.000 ms
[shadow] <default>: FPS: 27 FrameTime: 37.037 ms
[refract] <default>: FPS: 18 FrameTime: 55.556 ms
[conditionals] fragment-steps=0:vertex-steps=0: FPS: 44 FrameTime: 22.727 ms
[conditionals] fragment-steps=5:vertex-steps=0: FPS: 33 FrameTime: 30.303 ms
[conditionals] fragment-steps=0:vertex-steps=5: FPS: 44 FrameTime: 22.727 ms
[function] fragment-complexity=low:fragment-steps=5: FPS: 39 FrameTime: 25.641 ms
[function] fragment-complexity=medium:fragment-steps=5: FPS: 30 FrameTime: 33.333 ms
[loop] fragment-loop=false:fragment-steps=5:vertex-steps=5: FPS: 39 FrameTime: 25.641 ms
[loop] fragment-steps=5:fragment-uniform=false:vertex-steps=5: FPS: 39 FrameTime: 25.641 ms
[loop] fragment-steps=5:fragment-uniform=true:vertex-steps=5: FPS: 32 FrameTime: 31.250 ms
=======================================================
glmark2 Score: 33
=======================================================
As for real-world experience: I tried vcmi (Heroes 3), and Debian feels smoother than Ubuntu.
I am getting a constant 213 MB/s at most for linear reads from a 64 GB eMMC (Radxa = AllNet). Linear writing of NULs runs at 165 MB/s.
Just tested the system on eMMC. Not much faster in use, but my tests give different numbers:
=== WRITE 1GiB ===
sync...ok
1061158912 bytes (1,1 GB, 1012 MiB) copied, 4 s, 265 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 8,30272 s, 129 MB/s
sync...ok
real 0m8,336s
user 0m0,016s
sys 0m3,600s
sync...ok
Done.
I am looking for a good script for testing and benchmarking,
something that will give me nicely formatted output.
I have a Rock Pi with a 250 GB SSD, a 32 GB eMMC module, and a PoE HAT.
I need the average temperature,
disk speeds, voltage/power consumption, and video performance.
Thanks
pierre
Install Armbian and run armbianmonitor. For the GPU, run glmark2-es2.
You can’t measure voltage/power consumption on any board (or any computer, as far as I know) without an extra device.
Or SBC-Bench from Thomas Kaiser.
It runs many benchmark programs and gives a nice output file.
Has anybody done a comparison between the small heatsink delivered with the Performance Set
and the big heatsink?
How large are the differences in temperature at idle and under full load?
Hi. I can tell you how it is with the small heatsink.
It just isn’t sufficient: it overheats immediately without a fan.
With the big heatsink you can do normal tasks without a fan, but for heavy load I’d still use one.
I don’t have the big heatsink yet; I need to order one. I’ve got a NanoPi M4 with such a big heatsink, and it’s a lot more usable since I don’t need a fan.
Debian armhf
No fan idle : 50°C
No fan max load : 85°C throttle keeps rising to +90°C
With fan idle : 37°C
With fan max load : 77°C
Ubuntu arm64
No fan idle : 56°C
No fan max load : 85°C throttle keeps rising to 95°C
With fan idle : 38°C
With fan max load : 83°C
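For reference, temperatures like those above can be read straight from sysfs; a small sketch (treating thermal_zone0 as the CPU zone is an assumption; check the zone’s "type" file on your board):

```shell
# Read a SoC temperature from sysfs and print it in degrees C.
# thermal_zone0 being the CPU zone is an assumption; verify with
# cat /sys/class/thermal/thermal_zone0/type on your board.
soc_temp() {
    local z="${1:-/sys/class/thermal/thermal_zone0/temp}"
    if [ -r "$z" ]; then
        # sysfs reports millidegrees; convert to degrees
        awk -v t="$(cat "$z")" 'BEGIN { printf "%.1f\n", t / 1000 }'
    else
        echo "n/a"
    fi
}
```

Sampling this in a loop while running a stress test gives the idle/load figures quoted above.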
I have two boards, one with the small and one with the big heatsink.
The big heatsink’s mass acts in two ways: first as a heat capacity, quickly absorbing the generated heat and dissipating it with a time delay;
second, as a larger surface to the surrounding air.
This means that in a purely passive setup it can ride through longer peak phases and also give off more energy to the surrounding air. How long this works without CPU throttling depends on the ambient temperature, humidity, and airflow.
The small cooler should work fine with a solid active fan, or in air-cooled environments with some “natural” airflow and only sporadic short CPU spikes.
Just scroll up a bit (before asking); I posted a full set of tests on this.
Thanks for the detailed tests. They really help in understanding the difference compared with other boards.
Can you do one more test to check WebGL performance in Firefox and Chrome? These are the two most popular pages for this:
It’s very interesting what FPS rates you’ll see there.
If WebGL support is disabled in the browser, you can easily enable it:
And thanks for your work. We appreciate it.
Ubuntu
(ATTENTION: Testing was done on custom build xserver from rockchip with this merge request and with modesetting.conf)
Chrome detected ARM Mali-T860. Started with taskset -c 4-5 (flags in spoiler at the end)
webglsamples Canvas 1024x1024
On 500 Fishes - 20-21 fps
On 1000 - 11-12 fps
On 30000 - 1-2 fps
threejs - 9 fps when I throw a paint ball, 30 fps otherwise
Firefox doesn’t have EGL without rebuilding it from scratch, so it falls back to the VMware software renderer. If I try to use gl4es, it just fails to detect WebGL.
webglsamples
On 500 Fishes - 3 fps
On 1000 - 3 fps
On 30000 - less than 1 fps
threejs - 3 fps when I throw a ball, 10 fps otherwise
--disable-low-res-tiling
--num-raster-threads=6
--profiler-timing=0
--disable-composited-antialiasing
--disk-cache-dir=/tmp/
--no-sandbox
--test-type
--show-component-extension-options
--ignore-gpu-blacklist
--use-gl=egl
Debian
Fresh Debian after update && upgrade from Radxa’s apt repo
Chrome detected ARM Mali-T860. Started with taskset -c 4-5
webglsamples Canvas 1024x1024
On 500 Fishes - 12-28 fps (unstable)
On 1000 - 14-17 fps
On 30000 - 1 fps
threejs - 22 fps when I throw a ball, 30-38 fps otherwise
And Firefox ESR just crashes on me
The Chrome results look good. Many thanks!
I’ve changed the script used to test write speed, adding “status=progress” and “oflag=direct”.
#!/usr/bin/env bash
echo === WRITE 1GiB ===
echo -n "sync..." ; sync ; echo "ok"
echo 3 > /proc/sys/vm/drop_caches
time {
dd if=/dev/zero of=temp conv=fdatasync bs=1024k count=1k status=progress oflag=direct
echo -n "sync..." ; sync ; echo "ok"
}
rm -f temp
echo -n "sync..." ; sync ; echo "ok"
echo Done.
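A matching read test can be sketched the same way: sync, drop the page cache (root only), then time a large sequential read back through dd. This is a sketch, not the poster’s script:

```shell
# Companion read test: flush dirty data, drop the page cache (needs
# root), then read the file back and print dd's summary line.
read_test() {
    local file="$1" bs="${2:-1024k}"
    sync
    # drop_caches needs root; skip the cache flush when we can't write it
    [ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches
    dd if="$file" of=/dev/null bs="$bs" 2>&1 | tail -n 1
}
```

Without the cache drop the numbers measure the page cache, not the device, which is why the earlier read tests all start with `echo 3 > /proc/sys/vm/drop_caches`.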
With 4.4.154-87-rockchip-00029-g8216f17 #2 SMP Sat Jun 22 11:06:39 CST 2019 aarch64 aarch64 aarch64 GNU/Linux I am observing the following write speeds:
This is worse for the eMMC (previously >200 MB/s) and better for the pendrive (previously <26 MB/s).
Internet speedtest-cli (nominal 600/30 Mb/s) at 11pm:
Same kernel; my eMMC speed is ~160 MB/s on a dd backup.
Since I finally got my SATA board, I was able to do some testing. First I want to mention: the heatsink on this board runs ONE HELL OF HOT. Don’t touch it. I mean that.
So here are results for sda (these are 4 identical HDDs, so the result for one should be representative of the other 3) and for RAID5 built from the same HDDs. Tests were done with 128 MB and 1 GB files. Read the tables as:
block size, speed in KB/s (speed in MB/s)
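A note on the tables: the value in parentheses is simply the KB/s figure divided by 1024 (i.e. MiB/s). A one-liner to reproduce the conversion:

```shell
# Convert iozone's KB/s column to the MB/s value shown in parentheses.
kbps_to_mbps() { awk -v v="$1" 'BEGIN { printf "%.2f\n", v / 1024 }'; }
```

For example, `kbps_to_mbps 332383` gives 324.59, matching the first write entry below.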
Tests from device with iozone:
Write:
4 332383 (324.59) 8 348421 (340.25) 16 353218 (344.94) 32 357343 (348.97) 64 356478 (348.12) 128 361514 (353.04) 256 351786 (343.54) 512 342210 (334.19) 1024 329700 (321.97) 2048 338333 (330.40) 4096 339343 (331.39) 8192 348757 (340.58) 16384 359484 (351.06)
Read:
4 809356 (790.39) 8 892562 (871.64) 16 974223 (951.39) 32 1071200 (1046.09) 64 1098703 (1072.95) 128 1122347 (1096.04) 256 1117885 (1091.68) 512 1022427 (998.46) 1024 920636 (899.06) 2048 905542 (884.32) 4096 915364 (893.91) 8192 953070 (930.73) 16384 993776 (970.48)
Random-write:
4 803230 (784.40) 8 894549 (873.58) 16 906960 (885.70) 32 980652 (957.67) 64 1024255 (1000.25) 128 1005987 (982.41) 256 945405 (923.25) 512 947045 (924.85) 1024 876172 (855.64) 2048 884146 (863.42) 4096 945569 (923.41) 8192 713049 (696.34) 16384 925911 (904.21)
Random-read:
4 807487 (788.56) 8 899096 (878.02) 16 1013718 (989.96) 32 1072795 (1047.65) 64 1090203 (1064.65) 128 1132588 (1106.04) 256 1137648 (1110.98) 512 1028621 (1004.51) 1024 919113 (897.57) 2048 910841 (889.49) 4096 1088421 (1062.91) 8192 985978 (962.87) 16384 1000633 (977.18)
Write:
4 343456 (335.41) 8 358572 (350.17) 16 367968 (359.34) 32 369691 (361.03) 64 377710 (368.86) 128 385298 (376.27) 256 368468 (359.83) 512 376797 (367.97) 1024 357785 (349.40) 2048 357411 (349.03) 4096 368411 (359.78) 8192 370446 (361.76) 16384 373314 (364.56)
Read:
4 1481405 (1446.68) 8 1670994 (1631.83) 16 1869676 (1825.86) 32 1866069 (1822.33) 64 1959958 (1914.02) 128 1960395 (1914.45) 256 1927104 (1881.94) 512 1640139 (1601.70) 1024 1455746 (1421.63) 2048 1450599 (1416.60) 4096 1462528 (1428.25) 8192 1495591 (1460.54) 16384 1539422 (1503.34)
Random-write:
4 223774 (218.53) 8 262970 (256.81) 16 299696 (292.67) 32 307355 (300.15) 64 313597 (306.25) 128 286243 (279.53) 256 293720 (286.84) 512 327104 (319.44) 1024 345816 (337.71) 2048 407107 (397.57) 4096 445832 (435.38) 8192 441840 (431.48) 16384 466637 (455.70)
Random-read:
4 1259552 (1230.03) 8 1759485 (1718.25) 16 2176616 (2125.60) 32 2257633 (2204.72) 64 2432958 (2375.94) 128 2636963 (2575.16) 256 2516539 (2457.56) 512 1908698 (1863.96) 1024 1599158 (1561.68) 2048 1585513 (1548.35) 4096 1612268 (1574.48) 8192 1668123 (1629.03) 16384 1665969 (1626.92)
RAID5 was created following this guide (since I’m using Ubuntu Server, OMV is out).
Write:
4 335157 (327.30) 8 344593 (336.52) 16 348991 (340.81) 32 351179 (342.95) 64 363238 (354.72) 128 367085 (358.48) 256 360178 (351.74) 512 343040 (335.00) 1024 342137 (334.12) 2048 339985 (332.02) 4096 344234 (336.17) 8192 340940 (332.95) 16384 364138 (355.6)
Read:
4 787247 (768.80) 8 897679 (876.64) 16 1025740 (1001.70) 32 1091311 (1065.73) 64 1131804 (1105.28) 128 1138588 (1111.90) 256 1163418 (1136.15) 512 1020929 (997.00) 1024 979253 (956.30) 2048 979494 (956.54) 4096 962299 (939.75) 8192 990538 (967.32) 16384 1027596 (1003.51)
Random Read:
4 1068780 (1043.73) 8 1023079 (999.10) 16 960306 (937.80) 32 1274648 (1244.77) 64 1343348 (1311.86) 128 1397714 (1364.96) 256 1397522 (1364.77) 512 1298296 (1267.87) 1024 1250949 (1221.63) 2048 1191790 (1163.86) 4096 1222114 (1193.47) 8192 1485417 (1450.60) 16384 1589308 (1552.06)
Random Write:
4 826222 (806.86) 8 867365 (847.04) 16 833786 (814.24) 32 959355 (936.87) 64 986445 (963.33) 128 769079 (751.05) 256 596690 (582.71) 512 564344 (551.12) 1024 894335 (873.37) 2048 873924 (853.44) 4096 907989 (886.71) 8192 926985 (905.26) 16384 984356 (961.29)
Write:
4kb 265222 (259.01) 8kb 282061 (275.45) 16kb 296587 (289.64) 32kb 311825 (304.52) 64kb 296117 (289.18) 128kb 263135 (256.97) 256kb 282588 (275.96) 512kb 284515 (277.85) 1024kb 296193 (289.25) 2048kb 307933 (300.72) 4096kb 300861 (293.81) 8192kb 332411 (324.62) 16384kb 317546 (310.10)
Read:
4kb 1393603 (1360.94) 8kb 1665445 (1626.41) 16kb 1843673 (1800.46) 32kb 1934453 (1889.11) 64kb 1960551 (1914.60) 128kb 1940865 (1895.38) 256kb 1955017 (1909.20) 512kb 1706125 (1666.14) 1024kb 1469892 (1435.44) 2048kb 1476722 (1442.11) 4096kb 1493222 (1458.22) 8192kb 1561214 (1524.62) 16384kb 1523600 (1487.89)
Random Read:
4kb 1184736 (1156.97) 8kb 1674836 (1635.58) 16kb 2120730 (2071.03) 32kb 2395427 (2339.28) 64kb 2625923 (2564.38) 128kb 2690621 (2627.56) 256kb 2526013 (2466.81) 512kb 1970715 (1924.53) 1024kb 1641461 (1602.99) 2048kb 1645405 (1606.84) 4096kb 1645628 (1607.06) 8192kb 1647894 (1609.27) 16384kb 1661632 (1622.69)
Random Write:
4kb 41963 (40.98) 8kb 60039 (58.63) 16kb 79713 (77.84) 32kb 109152 (106.59) 64kb 136454 (133.26) 128kb 204676 (199.88) 256kb 255391 (249.41) 512kb 307286 (300.08) 1024kb 300449 (293.41) 2048kb 336749 (328.86) 4096kb 345409 (337.31) 8192kb 377150 (368.31) 16384kb 374459 (365.68)
Tests from a PC (using Samba). Iozone is broken on Windows: at some point it reported 1505124 KB/s as the WRITE speed, which is ~1.5x MORE than the bandwidth of the channel between the PC and the Rock Pi, and on a 2nd pass the numbers differed by around 14x. So I did not use iozone for Windows to test this setup; instead I used CrystalDiskMark and transferred files manually.
[ ID] Interval       Transfer     Bandwidth
[  4] 0.00-1.00 sec   112 MBytes   938 Mbits/sec
[  4] 1.00-2.00 sec   113 MBytes   948 Mbits/sec
[  4] 2.00-3.00 sec   113 MBytes   949 Mbits/sec
[  4] 3.00-4.00 sec   113 MBytes   949 Mbits/sec
[  4] 4.00-5.00 sec   113 MBytes   949 Mbits/sec
[  4] 5.00-6.00 sec   113 MBytes   949 Mbits/sec
[  4] 6.00-7.00 sec   113 MBytes   949 Mbits/sec
[  4] 7.00-8.00 sec   113 MBytes   949 Mbits/sec
[  4] 8.00-9.00 sec   113 MBytes   949 Mbits/sec
[  4] 9.00-10.00 sec  113 MBytes   949 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval       Transfer     Bandwidth
[  4] 0.00-10.00 sec  1.10 GBytes  948 Mbits/sec  sender
[  4] 0.00-10.00 sec  1.10 GBytes  948 Mbits/sec  receiver
CrystalDiskMark
Read:
Seq Q32T1 99399.68 (97.07)
Write:
Seq Q32T1 119705.6 (116.9)
Random-read:
4 KiB Q1T1 6703.104 (6.546) 4 KiB Q8T8 79708.16 (77.84) 4 KiB Q32T1 79093.76 (77.24)
Random-write:
4 KiB Q1T1 6984.96 (6.821) 4 KiB Q8T8 86538.24 (84.51) 4 KiB Q32T1 87500.80 (85.45)
As for moving files manually: a 5.53 GB video file was moved at an average speed of 97 MB/s, starting at 120 MB/s, dropping to as low as 94 MB/s in the middle, then rising to 96-99 MB/s towards the end.
Read:
Seq Q32T1 116326.4 (113.6)
Write:
Seq Q32T1 112537.6 (109.9)
Random-read:
4 KiB Q1T1 6703.104 (6.546) 4 KiB Q8T8 81295.36 (79.39) 4 KiB Q32T1 82391.04 (80.46)
Random-write:
4 KiB Q1T1 8243.2 (8.05) 4 KiB Q8T8 96471.04 (94.21) 4 KiB Q32T1 84899.84 (82.91)
tl;dr: basically, the only real difference in performance between RAID5 and the 4 plain disks is random-write performance, and the gigabit network basically annihilates even that difference, because it is the bottleneck.
Testing Penta Sata HAT
For the USB 3.0 PC and SATA PC tests I used CrystalDiskMark 7.0.0h x64; for USB 3.0 on the Rock and the Penta SATA HAT I used FIO 3.19 (cloned and compiled on the Rock from git). In all tests I took the median of 4 results. The following commands were used for FIO.
For seq read and write:
> /usr/local/bin/fio --loops=4 --size={512m;1G;4G} --filename=/mnt/flash//test.tmp --stonewall --ioengine=libaio --direct=1 --name=Seqread --bs=4m --rw=read --name=Seqwrite --bs=4m --rw=write
For 512k read/write:
> /usr/local/bin/fio --loops=4 --size=4000m --filename=/mnt/flash//test.tmp --stonewall --ioengine=libaio --direct=1 --name=512Kread --bs=512k --rw=randread --name=512Kwrite --bs=512k --rw=randwrite
For 4k read/write:
> /usr/local/bin/fio --loops=4 --size={512m;1G;4G} --filename=/mnt/flash//test.tmp --stonewall --ioengine=libaio --direct=1 --name=4kQD32read --bs=4k --iodepth=32 --rw=randread --name=4kQD32write --bs=4k --iodepth=32 --rw=randwrite
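The `{512m;1G;4G}` in the commands above is shorthand for running the same test once per size. Expanded for the sequential case, it would look like this sketch (the mount point is the poster’s; the helper only prints the command lines rather than invoking fio):

```shell
# Expand the {512m;1G;4G} shorthand into one fio command per size.
# Prints the commands; pipe the output to "sh" to actually run them.
fio_seq_cmds() {
    local target="${1:-/mnt/flash/test.tmp}" s
    for s in 512m 1G 4G; do
        printf 'fio --loops=4 --size=%s --filename=%s --stonewall --ioengine=libaio --direct=1 --name=Seqread --bs=4m --rw=read --name=Seqwrite --bs=4m --rw=write\n' \
            "$s" "$target"
    done
}
```

The 512k and 4k variants expand the same way, with the `--bs`, `--rw`, and `--iodepth` options from the commands above.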
For testing I used 1 HDD, 1 SSD, and 1 RAID5 array (which has 4 drives; those results will be added a bit later).