What is network-attached storage?

Network-attached storage (NAS) is a file-level storage architecture where 1 or more servers with dedicated disks store data and share it with many clients connected to a network. NAS is 1 of the 3 main storage architectures—along with storage area networks (SAN) and direct-attached storage (DAS)—and is the only 1 that’s both inherently networked and fully responsible for an entire network’s storage.

Compare NAS to more familiar storage volumes, like your PC’s hard drive, external drive, CD, or USB flash drive. A NAS architecture allows you to store and share file-based data, much like any storage volume. But while your hard drive, external drive, CD, or flash drive can only connect to 1 device at a time, NAS is networked to support many devices simultaneously.

NAS units are built to serve data as files. Although they’re technically able to complete general server tasks as well, NAS units run software that protects data and handles permissions—that’s it. This is why NAS units don’t need a full-featured operating system. Most NAS units contain an embedded, lightweight operating system fine-tuned for data storage and presentation.

To present these files, a NAS unit uses standard file-based protocols such as Network File System (NFS), Server Message Block (SMB) and its predecessor the Common Internet File System (CIFS), and Apple Filing Protocol (AFP). NFS is the protocol typically used by Linux® and UNIX systems, SMB/CIFS by Microsoft Windows, and AFP by Apple devices.

The main benefits of NAS include:

Scale-out capacity: Adding more storage capacity to NAS is as easy as adding more hard disks. You don’t have to upgrade or replace existing servers, and new storage can be made available without shutting down the network.

Performance: Because NAS is dedicated to serving files, it removes the responsibility of file serving from other networked devices. And since NAS is tuned to specific use cases (like big data or multimedia storage), clients can expect better performance.

Easy setup: NAS architectures are often delivered with simplified scripts, or even as appliances preinstalled with a streamlined operating system, greatly reducing the time it takes to set up and manage the system.

How does network-attached storage work?

Simply put, NAS is an approach to making stored data more accessible among devices on a network. By installing specialized software on dedicated hardware, enterprises can benefit from shared, single-point access with built-in security, management, and fault-tolerance capabilities. NAS communicates with other devices using file-based protocols, which are among the easiest formats to navigate (compared to block or object storage).

Hardware

NAS hardware may be referred to as a NAS box, NAS unit, NAS server, or NAS head (depending on whom you ask). The server itself is essentially configured with storage disks or drives, processors, and random-access memory (RAM), much like any other server. A NAS unit may be configured with more RAM, and with drive types and capacities chosen to meet the needs of a given use case. But the main differences between NAS and general-purpose server storage lie in the software.

Software

A NAS box includes software that’s deployed on a stripped-down operating system, usually embedded in the hardware. Compare that to a general-purpose server that uses a full-fledged operating system—sending and receiving hundreds or thousands of small, unique requests every second. By contrast, a NAS operating system takes care of just 2 things: data storage and file sharing.
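That narrow division of labor can be sketched with nothing but Python's standard library. The snippet below is a toy stand-in, not a real NAS: it shares files over HTTP rather than NFS or SMB, and the file name and contents are invented for the demonstration. Still, it shows the same two responsibilities and nothing else: store files on disk, and serve them to network clients.

```python
import threading
import urllib.request
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path
from tempfile import TemporaryDirectory

# A toy file server: one process whose only jobs are storing files on
# disk and sharing them with network clients. A real NAS unit does this
# with NFS or SMB instead of HTTP, and adds permissions and data
# protection, but the division of labor is the same.
with TemporaryDirectory() as share:
    (Path(share) / "report.txt").write_text("quarterly numbers")

    handler = partial(SimpleHTTPRequestHandler, directory=share)
    server = HTTPServer(("127.0.0.1", 0), handler)  # port 0: pick any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()

    port = server.server_address[1]
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/report.txt") as resp:
        data = resp.read().decode()  # a client reads the shared file
    server.shutdown()
    server.server_close()

print(data)
```

A real NAS operating system is this idea hardened and specialized: the same single-purpose loop, but speaking file-sharing protocols and enforcing permissions.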

Protocols

A NAS box is formatted with data transfer protocols, which are standard ways of sending data between devices. Clients access these protocols through a network switch, a central device that connects everything on the network and routes requests. Data transfer protocols basically let you access another computer’s files as if they were your own.

Networks can run multiple data transfer protocols, but 2 are fundamental to most networks: the internet protocol (IP) and the transmission control protocol (TCP). TCP combines data into packets before they’re sent across an IP network. Think of TCP packets as compressed zip files and IP addresses as email addresses. If your grandparents aren’t on social media and don’t have access to your personal cloud, you have to send them vacation photos via email. Instead of sending those photos 1-by-1, you can bundle them into zip files and send them over a few at a time. In similar fashion, TCP combines files into packets before they’re sent across a network to their destination IP addresses.
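The analogy can be made concrete with Python's standard socket module. In this hypothetical sketch (loopback only, one connection, an invented payload), a sender hands TCP one large byte string; TCP splits it into packets addressed to the receiver's IP, and the receiver reassembles the stream in order:

```python
import socket
import threading

payload = b"vacation-photo-bytes " * 500  # roughly 10 KB of stand-in file data

# Receiver: accept one TCP connection and reassemble the byte stream.
listener = socket.create_server(("127.0.0.1", 0))  # port 0: any free port
port = listener.getsockname()[1]
received = bytearray()

def receive():
    conn, _ = listener.accept()
    with conn:
        while chunk := conn.recv(1460):  # read roughly one segment at a time
            received.extend(chunk)

t = threading.Thread(target=receive)
t.start()

# Sender: hand TCP one large byte string. TCP packetizes it, each packet
# is routed to the receiver's IP address, and delivery order is guaranteed.
with socket.create_connection(("127.0.0.1", port)) as s:
    s.sendall(payload)

t.join()
listener.close()
```

Neither side has to think about individual packets; TCP and IP handle the bundling and addressing, which is exactly what a NAS client relies on underneath its file protocol.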

The files transferred across the protocols can be formatted as:

Network File System (NFS): This protocol is regularly used on Linux and UNIX systems. As a vendor-agnostic protocol, NFS works on any hardware, operating system, or network architecture.

Server Message Block (SMB): Most systems that use SMB run Microsoft Windows, where it’s known as “Microsoft Windows Network.” SMB developed from the Common Internet File System (CIFS) protocol, which is why you might see it referred to as the CIFS/SMB protocol.
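In practice, "as if they were your own" means that once an administrator mounts an NFS or SMB share, applications read it with ordinary local file APIs. A minimal sketch, using a temporary directory as a stand-in for a mount point such as the hypothetical /mnt/nas (the function and file names here are invented for illustration):

```python
import os
from pathlib import Path
from tempfile import TemporaryDirectory

def newest_file(share_root: str) -> str:
    """Name of the most recently modified file directly under share_root.

    The function can't tell (and doesn't care) whether share_root is a
    local disk or a mounted NFS/SMB share such as /mnt/nas; file-level
    protocols make the remote share look like a local path.
    """
    files = [p for p in Path(share_root).iterdir() if p.is_file()]
    return max(files, key=lambda p: p.stat().st_mtime).name

# Demonstration: a temporary directory stands in for a NAS mount point.
with TemporaryDirectory() as fake_mount:
    Path(fake_mount, "old.txt").write_text("archived")
    Path(fake_mount, "new.txt").write_text("fresh")
    os.utime(Path(fake_mount, "old.txt"), (0, 0))  # force an older timestamp

    result = newest_file(fake_mount)

print(result)  # new.txt
```

The same code would run unchanged against a real mounted share; that transparency is the core benefit of file-level protocols over block or object access.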

A brief history of network-attached storage

In the 1980s, the British computer scientist Brian Randell developed software that connected multiple UNIX systems in such a way that they were functionally indistinguishable from 1 another. Colloquially known as the Newcastle Connection, this software led to the development of data transfer protocols (like NFS), which companies began using to store data in central locations.

As networking evolved, more protocols allowed clients to easily consume and share files. Solutions designed to handle specific storage situations were developed shortly after, furthering the development of NAS. Even today, the underlying technology is still evolving. Once the domain of magnetic rotating disks, NAS now incorporates faster solid-state drives and even non-volatile memory to speed up access to frequently used data. Faster multicore processors and more affordable RAM give NAS greater performance and scale.

NAS software quickly became the enterprise-standard storage solution, and startups began optimizing ways to store, organize, and access networked data. One of those startups was particularly adept at clustering NAS files for high-capacity tasks like backup and archival as well as high-performance tasks like analytics and virtualization, and that startup eventually grew into Red Hat® Gluster Storage.

So, is NAS a cloud?

No. NAS by itself is not a cloud. Clouds are pools of virtual resources (yes, like storage) orchestrated by management and automation software so they can be accessed by users on demand through self-service portals, supported by automatic scaling and dynamic resource allocation. NAS would need to be virtualized into resource pools before it could be called a cloud, and those pools would need to be orchestrated by management and automation software before it could be considered cloud computing.

If you put local storage on 1 side of a spectrum and cloud storage on the other, NAS is somewhere in between. NAS has some local storage features (onsite, hardwired connections) and some cloud storage features (self-service, networked access), but doesn’t include the management and automation software necessary to rapidly scale and provide metered service. NAS isn’t a cloud, but it can serve a fundamental role in cloud computing.

Network-attached storage compared to other storage types

Storage area networks

A storage area network provides what's known as block storage. Block storage splits storage volumes—like hard disks, virtualized storage nodes, or pools of cloud-based storage resources—into smaller volumes known as blocks, each of which can be formatted with different protocols. For example, 1 block can be formatted for NFS, another can be formatted for AFP, and a third can be formatted for SMB. This gives users more flexibility, but also means they have to navigate everything manually since block storage bundles data together using arbitrary classifications.

Direct-attached storage

Direct-attached storage is storage that's directly attached to a single computer. It's not networked and so can't easily be accessed by other devices. DAS was the precursor to NAS, and each DAS device must be managed separately (compared to NAS, which manages everything). The most common example of DAS is a single computer’s hard drive. In order for another computer to access files on that drive, the drive must be physically removed from the original computer and attached to the new one, or a user must set up some sort of connection between the 2 devices.

Software-defined storage

Software-defined storage (SDS) is storage management software that operates independently of the underlying hardware. That means it’s possible to install SDS on a NAS box, which allows the hardware to be tailored to specific workloads. With SDS installed, storage hardware can be clustered so multiple servers can operate as a single system for a specific purpose. For example, 1 server cluster can be configured to hold user directories and NFS/CIFS folders while another is configured for block storage so it can hold photos and multimedia. Some NAS/SDS solutions can even consolidate and deliver more than a petabyte of data in 30 minutes or less.


All the pieces you need to set up a storage network

A software-defined file storage platform to handle high-capacity tasks like backup and archival as well as high-performance tasks like analytics and virtualization. It works particularly well with containers and media streaming.

A software-defined object storage platform that also provides interfaces for block and file storage. It supports cloud infrastructure, media repositories, backup and restore systems, and data lakes. It works particularly well with Red Hat OpenStack® Platform.

The OpenStack word mark and the Square O Design, together or apart, are trademarks or registered trademarks of OpenStack Foundation in the United States and other countries, and are used with the OpenStack Foundation’s permission. Red Hat, Inc. is not affiliated with, endorsed by, or sponsored by the OpenStack Foundation or the OpenStack community.