I installed four hard disks on the quad SATA adapter; one of them is an SSD. I did not set up RAID — they run in plain SATA mode. I found that the maximum transfer speed does not exceed 16 Mbit/s for either the HDDs or the SSD, and I would like to know why. I tested using SMB and SFTP. Also, if I bypass the quad SATA adapter and connect the same drive as an external removable disk, the speed can exceed 50 Mbit/s.
Hello, what is the filesystem of the hard disk?
The file system I am using is NTFS.
Try using ext4. NTFS performs very poorly under Linux.
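For reference, creating an ext4 filesystem looks like the following. This is a sketch run against a loop-back image file so nothing real is erased; for the actual drive you would substitute its device node (e.g. /dev/sda1 — check with lsblk first, since mkfs destroys everything on the target):

```shell
# Demo on an image file instead of a real disk -- nothing is erased.
# For the real drive, replace disk.img with its device node (check lsblk!).
truncate -s 64M disk.img        # create a 64 MiB sparse image file
mkfs.ext4 -q -F disk.img        # -F only needed because the target is a plain file
# For a real device you would then mount it, e.g.:
#   sudo mount /dev/sda1 /mnt
```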
I see the same thing with ext4: the transfer stalls after a few seconds and then stays slow.
Please post the output of lsusb -t.
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 5000M
|__ Port 1: Dev 3, If 0, Class=Mass Storage, Driver=uas, 5000M
|__ Port 2: Dev 2, If 0, Class=Mass Storage, Driver=uas, 5000M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/1p, 480M
|__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/4p, 480M
Are you really talking about Mbit/s or MB/s?
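The distinction matters by a factor of eight, since 1 byte = 8 bits. A quick sanity check of both readings of the numbers in this thread:

```shell
# Same transfer, two units: divide Mbit/s by 8 to get MB/s (1 B = 8 bit)
echo "$((16 / 8)) MB/s"     # 16 Mbit/s would be only 2 MB/s
echo "$((50 * 8)) Mbit/s"   # 50 MB/s would be 400 Mbit/s
```

If 16 really means Mbit/s, that is only 2 MB/s — far worse than it sounds; if it means MB/s, it is merely slow.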
And is this 'measurement' always done over the network? If so, you should test locally first, and switch the cpufreq governor to performance before benchmarking:

sudo su -
echo performance >/sys/devices/system/cpu/cpufreq/policy0/scaling_governor
apt-add-repository multiverse
apt update
apt install iozone3
cd $ssd-mountpoint
iozone -e -I -a -s 500M -r 16384k -i 0 -i 1
This prints the real storage performance measured locally. If it is drastically higher than what you get through the network, it's time to diagnose the network side separately. The output of

dmesg | curl -s -F 'f:1=<-' ix.io

would be great too.
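If iozone is not available, a rough local write test with dd gives a first impression (a sketch; MNT is a placeholder for wherever the disk is mounted, and conv=fdatasync forces the data to actually reach the disk before dd reports a speed):

```shell
# Rough local write benchmark -- no network involved.
# MNT is a placeholder; point it at the disk's mount point.
MNT=${MNT:-.}
dd if=/dev/zero of="$MNT/ddtest" bs=1M count=64 conv=fdatasync
rm -f "$MNT/ddtest"   # clean up the test file
```

The speed printed on dd's last line is the sustained local write rate; compare it against what SMB/SFTP delivers.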
Most probably @setq will advise to ‘blacklist UAS’ but I would do the aforementioned performance testing first.
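For completeness: 'blacklisting UAS' is usually done per bridge chip via a usb-storage quirk rather than by blacklisting the uas module outright. A sketch — the 152d:0561 ID is only an example; take the real vendor:product ID of your USB–SATA bridge from lsusb:

```shell
# Find the USB-SATA bridge's vendor:product ID first
lsusb
# Example ID only -- replace 152d:0561 with your bridge's ID.
# The :u flag tells the kernel to ignore UAS for that device.
echo 'options usb-storage quirks=152d:0561:u' | sudo tee /etc/modprobe.d/disable-uas.conf
sudo update-initramfs -u   # rebuild the initramfs (Debian/Ubuntu), then reboot
```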
The transfer unit is Mibps. I transferred a single 10 GB file from the Mac to sda1 (the SSD) and the actual speed was only about 17 Mibps. I checked dmesg and also disabled UAS: without disabling UAS the speed only reaches about 15 Mibps; after disabling it, the speed reaches roughly 17–20 Mibps.