Three ways to create clustered storage, page 2

Clustered NAS gateways are servers that sit in the data path between client servers and the storage arrays they access; the clustered gateways act as one logical server. A CFS clusters the different NAS gateway servers together so that each gateway can access storage anywhere in the cluster. This configuration lets users leverage already installed storage resources and offers more options for scaling storage capacity independently. How well each product does this largely depends on how the vendor has implemented its CFS to manage cache coherency among the different nodes.

The CFS that runs on Exanet Inc.'s ExaStore NAS Gateway uses a control node to minimize the amount of communication that needs to occur among servers in the cluster. When a file is created, one node in the NAS gateway cluster assumes responsibility for that file and breaks it into 1MB chunks called extents. The owning node stores a small amount of metadata in each extent that identifies it as that file's controlling node. When a request to read the file arrives, the node receiving the request reads the file's metadata to determine which node is that file's control node. The request is then redirected to the control node, which coordinates the processing of that request.
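The extent-ownership scheme described above can be sketched in a few lines of Python. All class and method names here are illustrative assumptions, not Exanet's actual interfaces; the point is how per-extent metadata lets any node redirect a request to the controlling node:

```python
EXTENT_SIZE = 1 << 20  # 1MB extents, as described in the article

class Extent:
    def __init__(self, data, owner):
        self.data = data
        self.owner = owner  # metadata identifying the file's controlling node

class Node:
    def __init__(self, name, cluster):
        self.name = name
        self.cluster = cluster
        self.extents = {}  # filename -> list of Extent

    def create(self, filename, data):
        # The creating node assumes ownership of the file and breaks it
        # into 1MB extents, tagging each extent with the owner's name.
        chunks = [data[i:i + EXTENT_SIZE]
                  for i in range(0, len(data), EXTENT_SIZE)]
        self.extents[filename] = [Extent(c, self.name) for c in chunks]

    def read(self, filename):
        # Any node can receive the request; it consults extent metadata
        # to find the controlling node and redirects the request there.
        owner = self.cluster.find_owner(filename)
        if owner is not self:
            return owner.read(filename)
        return b"".join(e.data for e in self.extents[filename])

class Cluster:
    def __init__(self):
        self.nodes = []

    def add(self, name):
        node = Node(name, self)
        self.nodes.append(node)
        return node

    def find_owner(self, filename):
        for node in self.nodes:
            if filename in node.extents:
                return node  # the extent metadata names this node as owner
        raise FileNotFoundError(filename)
```

In this sketch a file created on node A can be read through node B; B discovers from the metadata that A is the control node and hands the request off to it.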

Exanet's architecture is similar to clustered storage systems that use a parallel file system. Because ExaStore stores files in 1MB extents, the controlling node can engage other nodes to read those extents in parallel, speeding up the read. The other nodes send the extents they read back to the controlling node, which reassembles the 1MB extents into the original file. Once the file is assembled, the controlling node sends it to the node that received the client request, and that node presents the file to the client.
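The parallel read-and-reassemble step can be sketched with Python threads standing in for peer nodes. The function names and the dict-backed storage are assumptions for illustration only:

```python
from concurrent.futures import ThreadPoolExecutor

EXTENT_SIZE = 1 << 20  # 1MB extents

def read_extent(storage, filename, index):
    # Stand-in for a peer node reading one 1MB extent from back-end storage.
    data = storage[filename]
    return index, data[index * EXTENT_SIZE:(index + 1) * EXTENT_SIZE]

def parallel_read(storage, filename):
    # The controlling node issues one read per extent; peers return
    # (index, bytes) pairs, which are reassembled in the original order.
    n_extents = -(-len(storage[filename]) // EXTENT_SIZE)  # ceiling division
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda i: read_extent(storage, filename, i),
                           range(n_extents))
        parts = dict(results)
    return b"".join(parts[i] for i in range(n_extents))
```

The key design point mirrors the article: the reads are issued concurrently, but the controlling node alone is responsible for putting the extents back in order before handing the file on.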

The CFS on ONStor's Bobcat Series NAS Gateway sidesteps the cache-coherency problem entirely by turning off the write cache in its clustered servers. Turning off the write cache forces all writes to go directly to back-end storage. A lock is placed on the file as writes occur, preventing reads or writes on other clustered servers until the write completes. This approach works reasonably well for computing environments where different files are accessed randomly by different clients. And because ONStor supports multiple storage arrays from different vendors, users can match each file's performance and availability characteristics to the appropriate back-end storage.
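The write-through-with-locking behavior described above can be sketched as follows. This is a simplified single-process model under assumed names; ONStor's actual implementation spans clustered servers and real back-end arrays:

```python
import threading

class WriteThroughStore:
    def __init__(self):
        self.backend = {}              # stand-in for back-end storage
        self.locks = {}                # one lock per file
        self.locks_guard = threading.Lock()

    def _lock_for(self, filename):
        # Lazily create a per-file lock under a guard lock.
        with self.locks_guard:
            return self.locks.setdefault(filename, threading.Lock())

    def write(self, filename, data):
        # No write cache: the write goes straight to back-end storage,
        # holding the file lock so no other server reads or writes
        # that file until the write completes.
        with self._lock_for(filename):
            self.backend[filename] = data

    def read(self, filename):
        # Reads take the same per-file lock, so they block while a
        # write to that file is in flight.
        with self._lock_for(filename):
            return self.backend[filename]
```

Because the lock is per file rather than global, clients touching different files never contend, which is why this scheme suits workloads where different clients access different files at random.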

But clustered NAS gateway architectures can only be deployed where clients access files over Ethernet interfaces using the NFS or CIFS protocols. These protocols introduce overhead on both the requesting server and the NAS gateway server processing the request. While additional servers can be added to the cluster to provide the extra cache and CPU needed to handle these requests, that still isn't likely to satisfy the most performance-intensive, random-read apps that need to share files among the same or different operating systems. In those circumstances, users will need to look to a CFS that operates at the host level.