Re: Netapp FAS vs EMC VNX

You're completely correct about sizing, which is why I cringe when NetApp pundits go around selling the one-size-fits-all cache + SATA architecture.

NetApp came in claiming they would save us money over the type of storage we typically bought (Symmetrix, USP), but in the end we needed so many NetApp controllers to satisfy the requirements that the cost was back in enterprise storage array territory.

Re: Netapp FAS vs EMC VNX

When trying to save money on storage, it's important to see where it's all going in the first place.

For example:

People managing storage and how much time they spend

People managing backups and how much time they spend

Number of storage and backup products (correlating to people/time in addition to CapEx)

How much storage is used to do certain things (backups, clones)

In the right environment, NetApp Unified can save companies a boatload of money.

In some other environments, it might be a wash.

In other environments still, going unified may be more expensive, and you may want to explore other options.

The problem with storage that has as much functionality as NetApp Unified is that a comprehensive ROI analysis requires a LOT of information about how you spend money on storage, often reaching far beyond storage itself.

For example, how much does it cost to do DR? And why?

How much time do developers spend waiting for copies to be generated in order for them to test something?

I've been able to reduce someone's VNX spend from 100TB to about 20TB. Yes - with 100TB the VNX could do everything the customer needed to do (many full clones of a DB environment plus local backups plus replication).

We were able to do it with 20TB and have space to spare.

The end result also depends on how much of the NetApp technology one is willing to use.

If you use our arrays like traditional boxes, you get a storage system that's fast and resilient, but it won't necessarily cost you any less...

D


Re: Netapp FAS vs EMC VNX

It's much more difficult to do routine tasks on an EMC box. I know that is relative, and subject to everyone's opinion. I've worked on both, and NetApp just makes more sense to me, and it is so much easier to do basic stuff.

An example would be shrinking a volume. With NetApp, it is one command, and the FlexVol that contains the share is grown or shrunk to whatever size you want. With Celerra, you can't shrink the container that the share is in. They call it a file system, and it cannot be shrunk. If you want unused space back from a bloated file system, you've got to create a new file system, copy all the data, re-share everything from the new path, and destroy the old. This plays hell with replication. If you want to move your Celerra LUNs around in the storage array with the old CLARiiON LUN Migrator tool, too bad, you can't. Again, it's create new, copy data, and delete the old. Obviously, this would cause a loss of service to your users.
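To make the "one command" claim concrete, here is a minimal sketch of resizing a FlexVol from the Data ONTAP 7-Mode CLI; the volume name and sizes are made-up examples:

```shell
# Resize a FlexVol in place; no data copy or outage required.
vol size vol_cifs1 +500g   # grow the volume by 500 GB
vol size vol_cifs1 -200g   # shrink the volume by 200 GB
vol size vol_cifs1 2t      # or set an absolute size
```

The same `vol size` command handles both directions, which is exactly the capability the Celerra file system lacks.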

If you're running a small dedicated NAS array these may not be a big problem for you. If you're hoping to run a large array with CIFS/NFS/iSCSI/FC with dozens or hundreds of TB behind it, then these are useful features that you'll be missing out on.

I understand that FAST is a big deal for you. On the surface, it does sound pretty cool. There are some drawbacks, though, and of course EMC doesn't talk about them. Once you put disks in a FAST pool, they are there forever. You CANNOT pull drives out of a pool. You've got to create a new pool on new spindles, copy ALL the data, and then destroy the entire pool. Any SAN LUNs could be moved with CLARiiON Migration, but the LUNs you've allocated to the Celerra cannot be moved this way. It's a manual process with robocopy, rsync, or your tool of choice. Obviously, this would cause a loss of service to your users.

Maybe some of these things have changed with VNX, but from what I understand it is still the same in these respects as Celerra. If someone in the community knows more about VNX than I do, please correct me.

Re: Netapp FAS vs EMC VNX

Having both in my environment, it's not quite as horrible as you make it out to be.

In one way I've conceptually thought of a Celerra filesystem == a NetApp aggregate: both were limited to 16TB of space (until recently), and neither could shrink. I have a 600TB NS960 Celerra on the floor, and not having to think about balancing aggregate space for volumes on it is very nice. I've only had to shrink a NetApp volume maybe two or three times (trying to fit a new 2TB-or-so volume into an existing aggregate); generally, in our environment, all that happens is storage consumption that nobody gives back, unless they are completely done, at which point we delete the volume. If you really do want to shrink a filesystem, there are a number of easier ways than host-level migration: either nas_copy (similar to a qtree SnapMirror, copying at the file rather than block level) or CDMS, where you point your clients to the new filesystem and it hot-pulls files over on demand (this still requires an outage to repoint your clients, but one measured in minutes rather than hours).

While not recommended (because you can hurt yourself badly if done wrong), you can move Celerra LUNs around in the storage array using LUN Migrator. The caveat is that the destination needs to be the same drive type, RAID type, and RAID layout, or AVM will be confused about the state of the LUN. For example: AVM thinks it's a mirrored FC disk, you migrate it to a RAID 5 SSD disk, and the next time AVM queries the storage it will find a LUN defined in a mirrored FC pool with different characteristics. If you aren't using AVM, or are using thin devices from the array, this might not be a problem, but 99.9% of people don't run that way.

On shrinking a block-level pool, NetApp has the exact same drawback, and they don't talk about it either. You want to shrink an aggregate... how do you do it? You follow the same process you mention: drain the entire aggregate, which is pretty much just as painful. Additionally, if you aren't shrinking the volume on the Celerra, you would replicate it to another pool on the same or a different NAS head; only if you are shrinking a filesystem would you have to do anything at the per-file level.

That's all on the older Celerra NS, not the VNX (though at this time they have basically the same feature set, just more power, capacity, etc.).

I'd say that EMC pools with FAST tiering are better than NetApp's fixed aggregates, but if you are using both SAN and NAS, it almost becomes a throwaway advantage. It's nice to have one big storage pool for the array that can go to really huge sizes (100TB is certainly bigger than 16TB, even if that isn't really huge anymore): I don't have to worry about balancing space, wide striping just happens, etc. That's all great, but to use the same thin block pool for SAN and NAS, you give up NAS filesystem thin provisioning: you present thin LUNs to the NAS head and create a thick filesystem on them; there is no thin-on-thin. While not the end of the world, since the back end is still thin provisioned, nobody really does it. Say you have 200TB of storage and want to split it evenly: you generally give 100TB of traditional RAID LUNs to the NAS AVM pool and 100TB of thin pool LUNs to the SAN pool. I haven't explored using a thick-provisioned pool LUN... but again, nobody really does it that way, so why bother. On NetApp, with that quantity of storage, you would still create 2x 100TB aggregates, but there is no issue with mixing NAS and SAN in the same aggregate.

I personally have found them both annoying in their own ways to manage. I'm more of a CLI person, but I'd probably rate the Unisphere interface higher than NetApp's if you like GUIs (the previous version of Celerra Manager, not so much).

The NetApp is nice in its similarity to a traditional Unix file structure: mount vol0; edit exports, quota, netgroup, etc.; done. It is a bit annoying in that, depending on the change, you can't do everything from one location: e.g., after changing the exports file, you have to log in to the filer to activate it. I can copy all those config files somewhere else for backup before I make a change (very nice!) or apply them when moving from filer to filer: replacing filer A with B, copy exports from A to B, done. General real-time performance stats are easy to get (log in, run sysstat -x); detailed ones, not so much.
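The edit-then-activate workflow above can be sketched as a short 7-Mode CLI transcript; the filer name is illustrative, and the exports edit itself happens through the NFS-mounted vol0:

```shell
# View the current exports file on the filer.
ssh filerA "rdfile /etc/exports"

# ...edit /etc/exports via the mounted vol0, then activate the change:
ssh filerA "exportfs -a"       # re-export everything defined in /etc/exports

# Quick real-time performance stats, one-second interval.
ssh filerA "sysstat -x 1"
```

The separate `exportfs -a` step is the "have to log in to the filer to activate it" annoyance mentioned above.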

On the Celerra, you change those values via commands that are rather esoteric ("server_export ...", etc.); once figured out it's not a big deal, but it's not as obvious as an exports line. It's nice in that everything is done from the same system: log in to the Control Station, issue whatever command, done; you don't have to SSH into the NAS head for anything. And because almost everything is a command, scripting becomes very simple: no file to edit, nowhere to SSH into; run a script and it's just done, which for our environment, with thousands of clients and petabytes of storage, is very, very nice. Detailed real-time performance stats are easily accessible via server_stats.
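For comparison, here is a minimal sketch of the Control Station commands described above; "server_2" is the Data Mover, and the filesystem path and client name are made up:

```shell
# Export a filesystem over NFS, read-write to one client,
# issued from the Control Station rather than the NAS head.
server_export server_2 -Protocol nfs -option rw=client1 /fs_demo

# Real-time performance stats for the same Data Mover.
server_stats server_2
```

Everything runs from one place, which is what makes the Celerra side easy to script despite the more esoteric syntax.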

They both suck at long-term performance stats without add-on packages, and they both suck at finding oversubscription rates with qtrees, etc.

Re: Netapp FAS vs EMC VNX

Having read all these posts, I feel compelled to respond. I have used all flavors of both NetApp and EMC... The NetApp 2040 does compete with the EMC configuration; however, a 2240-4 or 2240-2 has an ultra-attractive price point these days. One thing I would like to say about day-to-day tasks on the EMC vs. the NetApp is that I do find most things easier and more centralized on the NetApp. EMC's Unisphere has bridged the gap to some extent, but NetApp is still ahead of the curve.

Not too long ago I used a Celerra NS-120 backed by CLARiiON CX4-120s. Most folks in a virtual environment are looking to leverage storage efficiencies and, by and large, the NAS portion of the devices. The Celerra consistently had issues with both replication and NFS. By issues I mean the head on the Celerra would kernel panic because of the NFS mounts and fail over to the other head. Talk about unacceptable. To further add to the pain, EMC admitted the issue and said there was no fix or workaround yet available. They went further to say there was no projected date to alleviate the problem, and their workaround was to present storage via the CLARiiON portion and use Fibre Channel. Really? Why did I WASTE money on a NAS head if all it could do effectively were SAN operations?

To my knowledge EMC has now remediated these issues, but how much confidence does this give me in EMC? Answer: NONE! EMC has a solid SAN that replicates solidly. As far as NAS and deduplication: NEVER AGAIN.


Re: Netapp FAS vs EMC VNX

First of all, to remove any confusion, you should know that I am 100% in favor of EMC, since that's what I sell, but that does not mean I cannot speak positively about other vendors, especially NetApp, whom I always highlight as one of the two best-positioned.

- When asked why tiering without PAM is not possible with NetApp, the answer is: "we don't need automated tiering, as caching is better." Come on; automated tiering adds value in almost every case.

- Then the dialogue starts about routing cables...

Whether EMC's FAST and/or FAST Cache makes sense depends on the customer's requirements, which can be identified through dialogue and cooperation. I urge you all to read the article at the link below on relevant use cases for FAST and/or FAST Cache, and when NOT to use them. I also fully support the author in never going negative on the other guy; I believe strongly that focusing on how your offering can add value for the project or customer is the right thing to do. I hope we all can see more non-FUD, factual discussions.