Could you try converting the results through bon_csv2html (comes with bonnie++) and post it here? It would be more readable than the output you posted.
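For anyone who hasn't used it, the conversion is a one-liner. A sketch assuming a typical bonnie++ run — the `-d` target directory and `-u` user below are placeholder values you'd adjust:

```shell
# Run bonnie++ and keep only the machine-readable CSV line it
# prints last, then convert that line to an HTML table.
# /mnt/test and "nobody" are example values.
bonnie++ -d /mnt/test -u nobody | tail -n 1 | bon_csv2html > results.html
```

The resulting results.html can then be attached or pasted wherever HTML is allowed.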

__________________
The best way to learn UNIX is to play with it, and the harder you play, the more you learn.
If you play hard enough, you'll break something for sure, and having to fix a badly broken system is arguably the fastest way of all to learn. -Michael Lucas, AbsoluteBSD

Forum HTML code is off, it seems, though I might have missed something. Next week I will install a four-disk RAID5 array on the test machine, in addition to the existing two-disk RAID0 array. I would like to evaluate PostgreSQL performance on both arrays, on encrypted and unencrypted partitions, under both FreeBSD and OpenBSD. I've been rebuilding often while exploring the characteristics of different configurations, so if I don't explicitly save data, it's lost. If anyone has recommendations for specific tests, configurations, and/or data that should be collected, let me know and I'll do what I can to collect it and present it in a reasonable fashion.
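If it helps, pgbench (shipped with PostgreSQL) is a reasonable baseline for comparing the arrays. A sketch — the scale factor, client count, duration, and database name are arbitrary choices, not anything from this thread:

```shell
# Create a benchmark database and initialize it at scale factor 50,
# then run 10 concurrent clients for 60 seconds.
# "bench" is a placeholder database name.
createdb bench
pgbench -i -s 50 bench
pgbench -c 10 -T 60 bench
```

Running the identical invocation on each array/encryption combination keeps the numbers directly comparable.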

Each layer between a block to be read or written and the actual read or write adds both complexity and an opportunity for error.

RAID and encryption/decryption each add to this complexity, regardless of whether you use software or hardware implementations.

The most common source of failure is human error, and the more complexity there is, the greater the risk of it.

Less commonly, an error or failure will occur through software error. The level of risk is about the same whether this is software RAID / software encryption or a "hardware" implementation; the only difference is where the software is executed.

There will always be hardware failures with storage devices, which is why manufacturers publish MTBF and related specifications.

Depending on the types of errors that occur (and errors will occur: human, software, or hardware), the risk of data loss is of critical concern. Every layer of complexity increases the risk of data loss, and the prudent storage infrastructure architect will endeavor to mitigate these risks.

---

I spent several decades in IT infrastructure consulting, sales, marketing, and management, specializing in data storage infrastructures. For whatever that may be worth.

I am quite happy with the RAID1 performance, even though my disks are not the best match for pairing. I wrote a post about it a few weeks ago. I have even encrypted a partition on this RAID1. What I like about softraid is its consistent performance.

A note about encryption on laptops, or on any processor with variable frequency: CPU frequency has a large effect on encryption performance.

softraid crypto does seem to be CPU intensive (unnecessarily?). I've been surprised recently to see (on one particular machine) that geom_eli has a significantly lower CPU load with [overall] significantly better performance.
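One quick way to separate CPU-frequency effects from the crypto layer itself is to measure the cipher in isolation. A sketch using commonly available tools — the sysctl name shown is the OpenBSD one:

```shell
# Current CPU clock in MHz (OpenBSD; on FreeBSD the
# equivalent is dev.cpu.0.freq).
sysctl hw.cpuspeed

# Raw AES-256-CBC throughput via the EVP interface, which picks up
# any hardware acceleration (e.g. AES-NI) the build supports.
openssl speed -evp aes-256-cbc
```

If the openssl numbers track CPU frequency while disk throughput stays flat, the bottleneck is the crypto layer rather than the disks.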

The goals of mission-critical, safety-critical, and security-critical systems are not necessarily achieved through Luddism.

Joking aside, I hear what you're saying. Software is fundamentally fragile; competence is fickle and fleeting. Strategies for probable contingencies can mitigate risk, but there is a point at which we all just have to roll the dice and deal with what comes to us.