Hi out there,
I have a few questions; perhaps you can help me...
At the moment we're planning our new clustered ERP system, which
consists of a Java application server and a PostgreSQL database. The
hardware currently used for that system can't handle the workload
(2 processors at a load of 6-8, plus 12 GB RAM), so it is very, very
slow - and that even though we've already deactivated a lot of features
we'd normally want, such as logging.
We've already chosen the hardware for the new cluster (2x quad-core
Xeon + 64 GB RAM should handle it - even in the failover case, when one
server has to run both the application and the database; the current
system can't do that anymore), but I also have to choose the storage
hardware. And that is a problem: we know the servers will be fast
enough, but we don't know how much I/O performance is needed.
At the moment we're using SCSI-based shared storage (an HP MSA500 G2
with 10 disks for the database: 8x data (RAID 1+0) + 2x logs (RAID 1)),
and we often see a lot of I/O wait when 200 concurrent users are
working. (We expect the I/O wait to increase heavily once all the
features we need are activated.)
So in order to get rid of the I/O wait (as far as possible), we have to
increase the I/O performance. Because there are a lot of storage systems
out there, we need to know how many I/Os per second we actually need
(to decide whether a given storage system can handle our load or
whether a bigger system is required). Do you have any suggestions on
how to measure that?
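To make clear what I mean by "measuring": something like the sketch below is the kind of number I'm after - sampling the read/write operation counters from /proc/diskstats over an interval (this assumes Linux; the device name "sda" is just a placeholder for whichever block device holds the database):

```python
import time

def parse_diskstats(text, device):
    """Return (reads_completed, writes_completed) for a block device
    from the contents of /proc/diskstats (Linux)."""
    for line in text.splitlines():
        fields = line.split()
        if len(fields) > 7 and fields[2] == device:
            # fields[3] = reads completed, fields[7] = writes completed
            return int(fields[3]), int(fields[7])
    raise ValueError(f"device {device!r} not found in diskstats")

def sample_iops(device, interval=5.0):
    """Sample average read/write IOPS on `device` over `interval` seconds."""
    def snapshot():
        with open("/proc/diskstats") as f:
            return parse_diskstats(f.read(), device)
    r0, w0 = snapshot()
    time.sleep(interval)
    r1, w1 = snapshot()
    return (r1 - r0) / interval, (w1 - w0) / interval

if __name__ == "__main__":
    # "sda" is a placeholder - use the device the database files live on
    reads, writes = sample_iops("sda")
    print(f"read IOPS:  {reads:.1f}")
    print(f"write IOPS: {writes:.1f}")
```

Sampling that under peak load (200 concurrent users) would give a baseline of the IOPS we currently consume; I assume the real requirement is somewhat above that, since the disks are saturating.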
Do you have experience with Postgres on something like an HP MSA2000
(10-20 disks) or on RamSan systems?
Best regards,
Andre