After two years of proposing the project, I finally sold the executives on migrating users' e-mail accounts to hosted webmail. This streamlines support and eliminates the need for a locally installed mail client.

11 Replies

I know you stated you are going to use OpenFiler. My experience with OpenFiler has not been good: it was buggy and had issues with consuming too much memory and CPU. I use Overland Storage and Synology for all NAS purchases now. They are both very reliable and relatively cheap, though I don't know if they are available in Russia.

I was using OpenFiler in an iSCSI lab and it never really worked out well for me. Later I tried to use it in production as a CIFS share, running as a virtualized NAS on VMware ESX. It always seemed to want to consume all the memory I allocated to it and would get sluggish, so I replaced it with a Synology NAS. In defense of OpenFiler, I have only my own experience and may very well have been doing something wrong.

Linux caching will do that; it is how a filer should behave. You have to tune the memory manually to have it perform well virtualized, and on non-paravirtualized platforms it requires even more work. Does VMware even provide PV drivers for OpenFiler? It works great under Xen with full PV.
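For anyone wanting to try the manual tuning route before giving up on a virtualized filer: the usual knobs are the kernel VM sysctls that control how hungrily Linux holds on to page cache and dirty pages. The values below are a rough sketch, not vetted numbers for OpenFiler specifically.

```shell
# Illustrative VM tuning for a Linux filer running as a guest.
# Goal: make the kernel reclaim cache sooner so the VM doesn't
# appear to "eat" all the RAM allocated to it. Values are examples.
sysctl -w vm.vfs_cache_pressure=200    # reclaim dentry/inode cache more aggressively
sysctl -w vm.dirty_ratio=10            # force synchronous writeback sooner
sysctl -w vm.dirty_background_ratio=5  # start background flushes earlier
sysctl -w vm.swappiness=10             # prefer dropping cache over swapping
```

Put the same keys in /etc/sysctl.conf to make them persistent across reboots.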

OpenFiler's iSCSI breaks the spec. VMware has an entire KB article dedicated to why you shouldn't put it into production...

DL185 G5 was replaced with the DL180 G6. Same chassis, Intel procs instead of AMD. Same drive configurations.

How did I miss this one...?

It can support up to 25 SFF disks. Interesting.

I could put a Smart Array P410 inside for 24 disks.

Do you have any good references for SFF enterprise SATA disks?

Sorry to make you guys repeat things that have been posted all around, but the hardware recommendations I found were a bit outdated.

Thanks Scott.

Generally you'd look at the LFF drives for capacity and the SFF drives for performance, and hence SAS. You'll get more capacity with the LFF drives at a fraction of the price (14 x 3 TB = 42 TB!), but 24x 15K SAS plus one hot spare is one screaming machine!
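To make the capacity trade-off concrete, here is the back-of-the-envelope raw math. The 600 GB figure for the SFF 15K SAS drives is my assumption (the post above doesn't give a size); everything is raw capacity before RAID overhead.

```shell
# Raw capacity comparison, in GB (integers, no RAID overhead).
lff_raw=$((14 * 3000))   # 14 x 3 TB LFF SATA
sff_raw=$((24 * 600))    # 24 x 600 GB 15K SFF SAS (size assumed)
echo "LFF raw: ${lff_raw} GB"   # 42000 GB = 42 TB
echo "SFF raw: ${sff_raw} GB"   # 14400 GB = 14.4 TB, but far more IOPS
```

About a third of the capacity for the SFF build, which is why the choice really comes down to whether you need space or spindles.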

HDS and DataCore are my production iSCSI.
I've currently got a skunkworks project going on an in-house clustered scale-out NAS as a VMware NFS backend. I'll likely blog about it once I get it vetted.

I'm pretty sure their current version has the SCST iSCSI stack available and lets you choose between it and the IETD iSCSI stack that was problematic with VMware. Scale and Open-E are VMware certified using the SCST stack, so it would seem the better option...
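If you inherit a box and aren't sure which stack it is running, checking the loaded kernel modules is a quick tell. The module names here are assumptions on my part (IET typically loads as iscsi_trgt, SCST as scst plus its handlers), so treat this as a rough sketch.

```shell
# Rough check for which iSCSI target stack a Linux filer has loaded.
# Assumed module names: iscsi_trgt for IET, scst* for SCST.
detect_iscsi_stack() {
    if lsmod 2>/dev/null | grep -q '^iscsi_trgt'; then
        echo "IET target stack loaded"
    elif lsmod 2>/dev/null | grep -q '^scst'; then
        echo "SCST target stack loaded"
    else
        echo "no known iSCSI target module loaded"
    fi
}
detect_iscsi_stack
```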