Different horses for different courses

It's not surprising that there is confusion about Storage Area Networks (SANs) and Network Attached Storage (NAS). Frank Booty explains why it's important to know what SAN and NAS systems are suitable for

Confusion can reign supreme between the two product areas of storage area networks (SANs) and network attached storage (NAS). Nigel Ghent, EMC UK marketing director, says SAN is a connectivity topology - a high-speed network supporting the attachment of storage systems on a shared-access network. SAN technology creates a network infrastructure of shared, multi-host storage, linking all storage devices across local and remote sites. SAN's essence is to separate storage functions from the server. Its vision calls for direct movement of information from one storage system to another storage device, server, or end-user (PC or workstation).

'NAS is a storage device attached to a conventional local area network (LAN) to serve clients,' explains Ghent. NAS devices, then, act as individual nodes on the LAN. The differences between NAS and SAN derive from the protocols they use - SAN uses channel protocols such as SCSI (small computer system interface) and fibre channel, whereas NAS uses LAN protocols, such as NFS (Network File System) and HTTP (HyperText Transfer Protocol).
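The practical consequence of that protocol split can be sketched with local stand-ins (this illustration is not from the article, and a real SAN or NAS would of course sit behind a network transport): a NAS client asks for a *named file* and lets the storage device's filesystem resolve it, while a SAN host addresses *raw blocks* by offset, keeping all filesystem logic on the host side.

```python
import os
import tempfile

# A scratch file stands in for a storage volume.
fd, path = tempfile.mkstemp()
os.write(fd, b"A" * 4096)
os.close(fd)

# File-level access (NAS-style, as over NFS): request a named file;
# the storage system's own filesystem finds the bytes.
with open(path, "rb") as f:
    data = f.read(512)

# Block-level access (SAN-style, as over SCSI/fibre channel): the host
# addresses fixed-size blocks by number; no filesystem is involved on
# the storage side.
BLOCK_SIZE = 512
with open(path, "rb") as dev:
    dev.seek(2 * BLOCK_SIZE)        # seek to block number 2
    block = dev.read(BLOCK_SIZE)

print(len(data), len(block))
os.unlink(path)
```

The point of the sketch is only that the division of labour differs: with file protocols the intelligence lives in the storage device, with block protocols it lives in the host.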

Sun's Chris Atkins says: 'SANs differ from NAS in that they introduce network hardware and software that's specifically designed to meet the needs of storage and data access - needs such as high bandwidth, centralised management, high availability, and non-intrusive expandability. NAS is re-using technology designed to do one thing, for a different and very demanding application. NAS is storage attached to a general purpose network, and accessible across this network.'

Dave Leyland, sales director at IBM/Tivoli business partner Sagitta Performance Systems of Havant, UK, reckons NAS is simple to implement, and offers great interoperability through the use of NFS. The movement from traditional SCSI-attached storage to NAS is a huge step forward in terms of architectural scalability, simplicity of management, and heterogeneous storage consolidation.

'The downside is that NAS is protocol intensive,' says Leyland. 'In addition to the transaction overhead of the SCSI protocol, you now also wrap the packet with UDP/IP (User Datagram Protocol/Internet Protocol, which is not connection-oriented) or TCP/IP (Transmission Control Protocol/Internet Protocol), which contributes significantly to the processing overhead sustained by the server or servers.

'Ethernet has a maximum packet size of approximately 1,500 bytes,' says Leyland. 'Reading a 5MB file from a NAS device would require the segmentation of the file into some 3,500 individual packets, each with UDP/IP or TCP/IP headers and trailers. One can see the protocol overhead is daunting in larger file environments. As we move more and more towards visually based applications, I cannot see file sizes ever getting smaller.'
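Leyland's arithmetic is easy to reproduce. A minimal sketch, assuming a 1,500-byte Ethernet MTU and typical 20-byte IPv4 and 20-byte TCP headers (figures not given in the article), shows that the per-packet headers both push the packet count slightly above his round 3,500 and add a few per cent of pure protocol overhead:

```python
# Back-of-the-envelope version of Leyland's packet-count arithmetic.
# Assumed figures: 1,500-byte Ethernet MTU, 20-byte IPv4 header,
# 20-byte TCP header, leaving 1,460 bytes of payload per frame.

MTU = 1500                               # Ethernet maximum frame payload, bytes
IP_HEADER = 20                           # IPv4 header without options
TCP_HEADER = 20                          # TCP header without options
PAYLOAD = MTU - IP_HEADER - TCP_HEADER   # usable file bytes per packet

file_size = 5 * 1024 * 1024              # a 5MB file

packets = -(-file_size // PAYLOAD)       # ceiling division
overhead_bytes = packets * (IP_HEADER + TCP_HEADER)
overhead_pct = 100 * overhead_bytes / file_size

print(f"payload per packet: {PAYLOAD} bytes")
print(f"packets needed:     {packets}")
print(f"header overhead:    {overhead_bytes} bytes ({overhead_pct:.1f}%)")
```

Every one of those packets must also be assembled and processed by the server's network stack, which is the CPU cost Leyland is pointing at.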

Leyland points out that NAS is fairly efficient in 'small' file environments and is widely used by Internet Service Providers, given the nature of their file-size requirements.

'SANs are much more efficient, and most protocol management is done in the hardware at the interface,' says Leyland. 'Where bandwidth issues have to be resolved, SAN is a better proposal. Also there is merit longer term in splitting the LAN and SAN, as the third party copy element of the fibre protocol will eventually allow "server-less" backups.

'Today, LAN-free backups can be done, and this kind of functionality will allow users to change their working regimes, allowing true concurrency of backup through work hours, with no LAN/WAN impact. SAN offers bandwidth solutions today, and a better architectural future. If you look at the level of consideration and product development being given to SAN by the industry today versus that of NAS, it's becoming clear where most users will move,' says Leyland. 'Heterogeneous support is offered with SAN through the use of packages, such as SANergy - formerly Mercury - from Tivoli.'

Ron Riffe, Tivoli's storage strategy and business development manager, says: 'Our purchase of the US network software tools company Mercury will open up the market. Customers don't have to scrap existing investments in disk. They can install SAN alongside these disks, and still handle NAS protocols using SANergy with backup. There are 100-plus SANergy installations in the UK alone already. Just think of SANergy combining both philosophies.'

One of the criticisms of SAN to date has been that few would be able to afford to dump existing investments in disk. With SANergy this won't be necessary. Tivoli, understandably, is 'very excited' about the product.

Overall, SAN gives the better current solution and offers stronger architectural features for the future. In real terms, server clustering relies on SAN.

'SAN applies to server applications, typically accessing local storage, and NAS applies to client-server applications, typically accessing remote storage,' says Ghent. 'They're not directly comparable, so there are no advantages or disadvantages as such - it all depends on the application. SAN is a network, and provides high-speed connectivity over long distances. NAS will be more cost-effective when just serving files, and not for running other applications as well.'
