Version 2.6.5 contains an edit by bonienl to allow it to work in version 6.4

Note: Disks under 25 GB will be skipped. Should work in virtualized environments.

This utility will perform read tests at different points on each drive attached to the system (even those not assigned to the UNRAID array) to compute the average speed & generate a graph. Useful if you want to see if you have a drive that's slower than the others and negatively affecting your parity drive speeds. Even drives of the same make & model can perform marginally differently - see the cache drive and disk 9 in the graph below, generated on my backup server using drives retired from my main server.

To execute, ensure no other processes are running on your UNRAID server that are accessing your hard drives (which will skew the results lower) and run the diskspeed.sh script. Execution time will be approx. 90 seconds multiplied by the number of drives attached to the system with the default sample & test iterations.

If you suspect a disk is failing, run the following command, replacing "sdx" with the drive you want to test. It will test just that drive at every 2% of its capacity.
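The exact invocation didn't survive the quote; judging from the options documented below, a per-drive 2% sweep would presumably combine "-n" and "-s", since 51 sample points give 50 equal intervals:

```shell
# Hypothetical reconstruction - not the original command from the post:
#   diskspeed.sh -n sdx -s 51
# 51 sample points -> 50 intervals -> one read test every 2% of capacity.
samples=51
step=$(( 100 / (samples - 1) ))
echo "one test every ${step}% of capacity"
```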

diskspeed.sh -i 3 Test each sample point three times and take the average

diskspeed.sh -s 21 Test the drive every 5%

diskspeed.sh -f Perform a fast test where 200 MB is read at each location vs 1 GB. Not as accurate.

diskspeed.sh -x sda,sdb Exclude drives sda & sdb

diskspeed.sh -n sdc,sdd Only test drives sdc & sdd

Expanding info on sample points: The script will check the start of the hard drive (0%) and the end of the hard drive (100%). The rest of the sample points are divided evenly between the start and end. So a sample request of 3 would test the start, end, and middle of the drive.
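The spacing described above can be sketched as follows; this is an illustration of the arithmetic, not code taken from diskspeed.sh, and the drive size is a made-up value:

```shell
# Illustrative sketch only - not taken from diskspeed.sh itself.
# For S samples, sample i lands at fraction i/(S-1) of the drive,
# so sample 0 is the start (0%) and sample S-1 is the end (100%).
samples=3
drive_bytes=4000000000000   # hypothetical 4 TB drive

for (( i=0; i<samples; i++ )); do
    pct=$(( i * 100 / (samples - 1) ))
    offset=$(( drive_bytes * i / (samples - 1) ))
    echo "sample $i: ${pct}% -> byte offset $offset"
done
```

With 3 samples this prints the start, middle, and end offsets, matching the example in the text.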

If your graph seems "spiky", try running the script with the "-i 3" option to test each location three times and take the average.

To view the graph, navigate to the location you executed the script file via your preferred file explorer on your UNRAID share (ex: \\tower\flash\scripts) and open the diskspeed.html file. You can toggle each drive off & on by clicking on its designation in the legend.

The script utilizes the dd utility to do a direct read at various offsets.
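For illustration, a timed dd read at an offset looks roughly like this. A scratch file stands in for a real device such as /dev/sdb, and on a real disk you would add iflag=direct so the page cache doesn't inflate the numbers - treat this as a sketch, not the script's actual code:

```shell
# Sketch of a dd-based read test (not diskspeed.sh's actual code).
# A scratch file stands in for a real device like /dev/sdb; on a real
# disk you would add iflag=direct so the page cache doesn't skew speeds.
img=./fake_drive.img
dd if=/dev/zero of="$img" bs=1M count=64 2>/dev/null   # 64 MB stand-in "drive"

# Read 16 MB starting 32 MB into the "drive"; dd reports throughput on stderr.
speed_line=$(dd if="$img" of=/dev/null bs=1M count=16 skip=32 2>&1 | tail -1)
echo "$speed_line"
rm -f "$img"
```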

Change Log

Version 2.6.4
Added support for UNRAID 6.3.0-RC9
Version 2.6.3
Changed memory check to ignore cache memory
Version 2.6.2
Added a check to ensure there is enough free memory available to execute the dd command
Added -n | --include option to specify which drives to test, comma delimited
Ignore floppy drives
Added support for nvme drives
Version 2.6.1
Fixed issue identifying drives assigned sdxx (more than 26 drives attached)
Fixed issue with data drives over 9 having the last digit truncated
Version 2.6
Removed checks for invalid drives, redundant
Altered drive inventory to exclude md? drives/identify drive/cache/parity assignments
Modified to support UNRAID 6.2 running under OS 4.4.x and higher
Version 2.5
Fixed computation for percentages less than 10%
Reverted to 1 GB scans for better results but slower
Added -f --fast to scan 200 MB instead of 1 GB, same as version 2.3 & 2.4
Version 2.4
If the drive model cannot be determined via fdisk, extract it from mdcmd
Add -l --log option to create the debug log file diskspeed.log
Modified to not display the MB sec in drive inventory report for excluded drives
Modified to compute the drive capacity from the number of bytes UNRAID reports for external drive cards.
Added -g --graph option to display the drive by percentage comparison graph
Added warning if files on the array are open which could mean drives are active
Added spin up drive support by reading a random spot on the drive
Version 2.3
Changed to use the "dd" command for speed testing, eliminates risk of hitting the end of the drive. The app will read 200 MB of data at each testing location.
Before scanning each spot, uses the "dd" command to place the drive head at the start of the test location.
Added -o --output option for saving the file to a given location/name (credit pkn)
Added report generation date & server name to the end of the report (credit pkn)
Added a Y axis floor of zero to keep the graph from displaying negative ranges
Hid graph that compared each drive by percentage. If you wish to re-enable it, change the line "ShowGraph1=0" to "ShowGraph1=1"
Added average speed to the drive inventory list below the graph
Added -x --exclude option to ignore drives, comma separated. Ex: -x sda,sdb,sdc
Added -o --output option to specify report HTML file name
Version 2.2
Changed method of identifying the UNRAID boot drive and/or USB by looking for the file /bzimage or /config/ident.cfg if the device is mounted
Skip drives < 25 GB
Route fdisk errors to the bit bucket
Removed the max size on the 2nd graph to allow smaller drives to scale if larger drives are hidden
Version 2.1
Fixed GB Size determination to minimize hdparm hitting the end of the drive while performing a read test at the end of the drive (credit doron)
Fixed division error in averaging sample sizes (credit doron)
Updated graphs to size to 1000 px wide
Added 2nd graph which shows drive speeds in relation to the largest drive size; this is a better indication of how your parity speeds may run
Added drive identification details below the graphs
Added support for scanning all hard drives attached to the system
Version 2.0
Added ability to specify the number of tests performed at each sample spot
Added ability to specify the number of samples to take, min of 3 samples; the first sample will be at the start of the drive, the last sample at the end, and the rest spread out evenly on the drive
Added help screen
Formatted the graph tool tip to display the information in an easy-to-read format
Do not run if a parity sync is in progress
Added support for gaps in drive assignments
Added support for arrays with no parity drive
Version 1.1
Fix bug for >= 10 drives in array (credit bonienl)
Fix graph bug so graph displays in MB
Version 1.0
Initial Release

Thanks bonienl, I implemented your drive fix into my script but needed to take a different route to fix the graph and still maintain the console output.

Good point on the out-of-sequence drives, I used to have one of those myself.

Unfortunately, I can't test the script/graph on a physical server as the new motherboard in my UNRAID server let the smoke out last night; fortunately, I have an UNRAID test server running under VirtualPC. Things match up properly now.

Did a lot of work on the script yesterday adding new features. Please see the first post to download.

Version 2.0
Added ability to specify the number of tests performed at each sample spot
Added ability to specify the number of samples to take, min of 3 samples; the first sample will be at the start of the drive, the last sample at the end, and the rest spread out evenly on the drive
Added help screen
Formatted the graph tool tip to display the information in an easy-to-read format
Do not run if a parity sync is in progress
Added support for gaps in drive assignments
Added support for arrays with no parity drive

In v2.0, if I use "-s" with any number other than the default, the average comes out skewed (check e.g. with -s 3; you can check with -s 33 to get a really interesting result). The average calculation seems wrong.

Thanks doron for the help in finding & fixing bugs. I swear that 1000/1024 thing is a PITA because different parts of that app use either 1000 or 1024. It did minimize the issue of hdparm hitting the end of the drive.

I'm waiting for my USB key file for my backup NAS that has a wide mix of drives I retired from my regular NAS, which is still out of commission due to a faulty motherboard. Once I get the key file, I'll test the script out against an actual server and not UNRAID running in an Oracle VM with three VM drives of varying sizes running on a RAM drive. Speeds are very unpredictable with it. Once everything checks out, and barring new features which pop into my mind, I'll upload the script.

# Version 2.1
# Fixed GB Size determination to minimize hdparm hitting the end of the drive while
# performing a read test at the end of the drive (credit doron)
# Fixed division error in averaging sample sizes (credit doron)
# Updated graphs to size to 1000 px wide but shrinkable
# Added 2nd graph which shows drive speeds in relation to the largest drive size; this

Thanks for posting the new version! Clearly lots of hard work goes into this.

Some issues with the new version:

- fdisk (on my system at least, 5.0) doesn't like GPT and spews a warning message. It doesn't interrupt the script, just aesthetics. Suggest adding "2> /dev/null" to the "fdisk -l" invocation.

- On my system, there's one HDD that's smaller than 10 GB. It's not part of the array so it didn't show up until now. However, it now breaks the script completely, as the calculations don't seem to have anticipated such a small drive. The thing is that fdisk reports its size as "8589 MB" (MB, not GB); the script assumes the number is in GB and then hell breaks loose (okay, not THAT bad). Suggest looking at the string output by fdisk and, if the unit is MB, calculating accordingly. For all I care, you can just ignore such drives and move on. (In case you're curious, this is the unRAID boot drive - since my unRAID is virtualized, and hypervisors cannot boot from USB for some god-knows-what reason.)

- During the run of that small disk, for some reason, awk spews out its input line. See below.

- Suggestion: add a new command-line argument to test only the drives in the array. Or reverse the logic - some "-a" to test all drives, while the default is array only.

And here's a quick-n-dirty patch that solved the problems for me. Perhaps the "correct" way to fix the size issue is to look at the 4th positional field of fdisk's output - the number of bytes - and to calculate MB/GB/TB from it.
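A minimal sketch of that suggestion (illustrative only; the sample line and the skip threshold are assumptions, not the actual patch): pull the byte count off the fdisk header line and derive the size from it, rather than trusting the printed "MB"/"GB" string.

```shell
# Illustrative sketch of deriving drive size from the byte count
# instead of trusting fdisk's printed "MB"/"GB" unit. The sample
# line mimics old fdisk output; it is not the actual patch.
line="Disk /dev/sda: 8589 MB, 8589934592 bytes"
bytes=$(echo "$line" | awk '{ for (i = 2; i <= NF; i++) if ($i == "bytes") print $(i-1) }')

if [ "$bytes" -lt 26843545600 ]; then      # under 25 GB: skip, per the script's rule
    verdict="skip ($(( bytes / 1073741824 )) GB)"
else
    verdict="test ($(( bytes / 1073741824 )) GB)"
fi
echo "/dev/sda: $verdict"
```

Working from the raw byte count sidesteps the MB/GB parsing problem entirely, and also feeds naturally into the existing "skip drives under 25 GB" check.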

Thanks for those. I did test with a 25 GB drive on my Virtual PC setup but that's the smallest I went and I didn't test this current version at all on it since I had my backup UNRAID server up & running. I assumed it would always be GB because drives that small haven't been made in over a decade but a virtual drive never occurred to me.

I'll also add an exclusion to omit the drive mounted as /boot. The script uses data from fdisk for displaying the location being scanned, so that's probably why it's barfing out all those awk errors. I'll redirect the fdisk errors out to a file and use that to ignore devices that it can't handle.