This comes up all the time, and I can't believe it has taken this long to get around to publishing a guide to understanding the differences. I should probably make a graphic to go with it, but I'm lazy and might just fill that in later.


Good timing and a great article/book, Scott - I am keeping it for future reference as well so I can re-read it - you shared a lot of information. Since I am getting first-hand, first-time experience with a SAN, this is very helpful. One of the first differences I noticed was that the management interface for a SAN has much more functionality than any NAS interface I have ever seen. And the SAN is far more redundant than most NAS boxes I have worked on. I'm sure that is just scratching the surface - thanks for being helpful with my questions!

And the SAN is far more redundant than most NAS boxes I have worked on.

People often buy that way because they make that assumption. But both can go to the same levels of functionality and redundancy; SAN can simply go "lower" than NAS can, and that's all. Since NAS uses the equivalent of SAN internally, anything SAN can do, NAS can do, plus more.

Enjoyed that... thanks... I will admit it still isn't crystal clear... but that's just how I learn... incrementally. My understanding happens in bits and pieces until the clouds have all lifted and I eventually have a clear understanding... This one was a big help, and worth a re-read after I'm done with my coffee...

Oh, and one small edit... just to note "In this case we are hear to talk ........" (here)

Well written. What are your thoughts on devices like NetApp that blur the lines and mix the definitions of both a NAS and a SAN?

That's a unified device; I address that in the article. What makes it a NAS or a SAN is the use of it, not the device itself. Any given portion of its storage still acts discretely as either SAN or NAS - you just get both from one unit. Pretty much every device in the SMB segment is unified storage. The only major exception is Drobo, which makes very discrete DAS, SAN and NAS products rather than unified ones. There are benefits both ways. ReadyNAS, ReadyDATA, QNAP, Synology, Thecus, Buffalo, etc. are all unified makers. OpenFiler, FreeNAS, NAS4Free... all unified as well.

Even just Linux or Windows as an OS is a unified storage platform as they do both. So it is very common.
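To make that concrete, a stock Linux box becomes "unified" simply by speaking both kinds of protocol at once. A rough sketch - the paths, network range and IQN here are made up for illustration:

    # NAS role: share a directory (files) over NFS
    mkdir -p /srv/share
    echo '/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra                      # reload the NFS export table

    # SAN role: export a raw block device (blocks) over iSCSI via targetcli
    targetcli /backstores/block create name=lun0 dev=/dev/vg0/lun0
    targetcli /iscsi create iqn.2013-05.com.example:unified1
    targetcli /iscsi/iqn.2013-05.com.example:unified1/tpg1/luns create /backstores/block/lun0

Same box, same disks; whether a given chunk of that storage is "NAS" or "SAN" depends entirely on which of those two paths a client consumes it through.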

The problem with unified is that you pretty much have to choose one to be the base, and the other gets bolted on top. NetApp, for example, is a NAS device first and a SAN second - the SAN is actually bolted on top of the NAS. Their NAS is one of the best on the market; their SAN functionality is weak. You would choose them when you need a great NAS but just a little SAN functionality, whereas if you needed the opposite you'd look to, say, Hitachi or EMC.


A lot of people do not realize they can just install CentOS or Ubuntu (or whatever flavor, for that matter) and then set up something like Samba for SMB/CIFS to do the file sharing - a bare-bones example is sketched below. The biggest thing I have found is that there is some overlap, as Scott is indicating, underneath all the marketing hype, etc.

I was trying to figure out the best way to do a file server in a virtualized environment, and you read through some of these support threads out there with horror stories of iSCSI issues, etc. Scott should do a follow-up article with maybe some scenarios. For example: I have an application server connecting to a database server, backups going to maybe a third server, a virtualized environment, and our local hard drives are filling up - how should I expand storage?

One other question I had was whether I should virtualize the file server with Xen. It would seem the logical answer would be no - you want a bare metal install. But I am seeing a lot of questions people are posing about a NAS running on Xen. If you are running a Linux distribution, since Xen is now in the core of most distributions, how much of a performance hit would you see? On a type 1 hypervisor I am assuming it would be paravirtualized, Scott - would it have as much of an impact?
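On the Samba point, for anyone who has not done it - the whole thing really is only a few steps. A bare-bones sketch, assuming a CentOS-style box; package and service names vary by distro, and the share path and user are just examples:

    # install Samba and create something to share
    yum install samba                 # Debian/Ubuntu: apt-get install samba
    mkdir -p /srv/share

    # then add a share definition to /etc/samba/smb.conf:
    #   [share]
    #       path = /srv/share
    #       read only = no
    #       browseable = yes

    smbpasswd -a alice                # give an existing user a Samba password
    service smb start                 # Debian/Ubuntu: service smbd start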

I actually have a "When Should I Choose SAN" article written, but unlike this one, which went straight to SMB IT Journal, that one is part of a series over at Datamation and is pending editorial approval.

I am seeing a lot of questions people are posing about a NAS running on Xen. If you are running a Linux distribution, since Xen is now in the core of most distributions, how much of a performance hit would you see? On a type 1 hypervisor I am assuming it would be paravirtualized, Scott - would it have as much of an impact?

Running real Xen as a Type 1 hypervisor on bare metal, with raw access to the underlying block devices - local, DAS or SAN - a NAS server running paravirtualized on Xen would see nominal overhead both at the CPU and at the storage level. It's a great way to go, as NAS functionality tends to use relatively few system resources outside of storage capacity, so it plays very nicely with other virtualized workloads.
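For reference, a PV guest with raw access to the underlying block devices looks something like this in the guest config file - a sketch only; every name and path here is hypothetical:

    # /etc/xen/nas01.cfg - paravirtualized NAS guest (illustrative)
    name       = "nas01"
    memory     = 2048
    vcpus      = 2
    bootloader = "pygrub"             # boots the distro's own PV kernel
    vif        = [ "bridge=br0" ]
    disk       = [ "phy:/dev/vg0/nas-root,xvda,w",    # guest OS volume
                   "phy:/dev/md0,xvdb,w" ]            # raw array passed straight through

The phy: entries hand the guest the block devices directly - no image file and no extra filesystem layer in the storage path, which is where that nominal overhead comes from.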

So I recently built a storage appliance that I am going to use to store VMware VMDKs. It's Linux, uses LVM, and does replication and failover to an identical device in a different building. Right now I am presenting iSCSI targets, but if I were to present NFS shares instead, would it be a NAS instead of a SAN? Any difference in performance?

Right now I am presenting iSCSI targets, but if I were to present NFS shares instead, would it be a NAS instead of a SAN? Any difference in performance?

That's correct: if you present NFS, you have to put the filesystem on the device itself and share files, not blocks; if you use iSCSI, the filesystem is put on by the remote client. There is a difference in performance, but they are different animals, so they are difficult to compare. iSCSI tends to outperform NFS, but only tends - there are use cases where NFS is faster. It depends on which parts of the system need, and can leverage, which logic.
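To make the distinction concrete with an appliance like that: the same logical volume can be presented either way, and the only structural difference is where the filesystem gets made. A sketch, assuming an LV at /dev/vg0/vmstore and the stock Linux NFS and LIO/targetcli tooling - all names are invented:

    # NAS presentation: filesystem lives on the appliance, clients see files
    mkfs.ext4 /dev/vg0/vmstore
    mkdir -p /export/vmstore
    mount /dev/vg0/vmstore /export/vmstore
    echo '/export/vmstore 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra

    # SAN presentation: raw blocks only - the ESXi host formats it (VMFS)
    targetcli /backstores/block create name=vmstore dev=/dev/vg0/vmstore
    targetcli /iscsi create iqn.2013-05.com.example:vmstore
    targetcli /iscsi/iqn.2013-05.com.example:vmstore/tpg1/luns create /backstores/block/vmstore

Note that mkfs only appears on the NFS side - that is the entire NAS versus SAN distinction right there.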

VMware recommends NFS generally, as do I, for nearly all use cases, because it is safer and easier. iSCSI can often be configured to be faster, but in normal usage iSCSI is so widely misunderstood and so easy to get wrong - killing performance or corrupting data - that NFS is just more reliable and easier to use.
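Part of why NFS is "easier" shows on the ESXi side, where mounting an NFS datastore is essentially a one-liner. A sketch, assuming ESXi 5.x esxcli syntax - the host and share names are invented:

    # mount an NFS export as a datastore on the ESXi host
    esxcli storage nfs add --host=nas01.example.com --share=/export/vmstore --volume-name=vmstore

    # confirm it is mounted
    esxcli storage nfs list

There is no mkfs, no LUN masking and no initiator configuration on the client side, which is a big part of why it is so hard to get wrong.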