Common Topics

DataDirect Networks (DDN) supplies high-bandwidth, hardware-accelerated block storage arrays to the high-performance computing (HPC) and rich media markets. Think of it as a SAN array on steroids, the block equivalent of a BlueArc, with FPGAs (field-programmable gate arrays) front-ending SATA disk drives in its Silicon Storage Appliance arrays. The company was founded to deliver fast block access to bucketloads of data, and it has now turned its attention to unstructured files.

Instead of taking a clustered-filer-in-the-data-centre approach like Isilon, or using Panasas-like parallel access to files, it has put its head in the clouds and gone for a geo-cluster of quasi-filers that store objects. There is API access to a global namespace for objects that, DDN says, can scale to store more than 200 billion files and deliver in excess of one million file reads per second, via simultaneous access to hundreds of its cloud storage boxes - the WOS 6000 or the smaller WOS 1600.

The 1600 is a 3U rack enclosure storing 16TB of data on SATA drives or 7.2TB on faster-access SAS ones. The 6000 is a 4U enclosure with up to 60TB of SATA capacity, using the same enclosure as other DDN block storage systems. The different appliance configurations can be mixed in the same WOS cloud, and new nodes are automatically discovered and used to load-balance the cluster's work. WOS nodes can be unpacked and set up in minutes.
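The auto-discovery and load-balancing behaviour can be sketched as follows. This is a toy illustration assuming a simple most-free-capacity placement rule; DDN has not published its actual balancing algorithm, and the node names and capacities here are invented.

```python
# Illustrative sketch of balancing writes across mixed WOS 1600 (16TB)
# and WOS 6000 (60TB) nodes. The most-free-space rule is an assumption,
# not DDN's documented algorithm.

def pick_node(nodes, free_tb):
    """Send the next write to the node with the most free capacity."""
    return max(nodes, key=lambda n: free_tb[n])

# Free capacity, in TB, on a small mixed cluster (invented figures).
free_tb = {"wos1600-a": 4.0, "wos1600-b": 9.5, "wos6000-a": 31.0}

# A newly discovered WOS 6000 simply joins the pool; having the most
# free space, it absorbs new writes until the cluster levels out.
free_tb["wos6000-b"] = 60.0
print(pick_node(free_tb, free_tb))   # -> wos6000-b
```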

They store objects, which are files or groups of files, each with a unique, telephone-number-like object number. Think of one part of this number as an area code saying which local data centre, containing the WOS 1600 or 6000 systems, stores the object, with the rest of the string identifying the object within that data centre. The linked data centres - WOS nodes - form the WOS cloud. Accessing servers use the WOS API, via their local copy of the WOS-LIB library, to get (read) or put (write) objects in the WOS cloud, which they reach over gigabit Ethernet.
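The get/put pattern might look something like this in outline. The class and method names (WosClient, put, get) and the OID layout are assumptions for illustration; DDN has not published WOS-LIB's real interface.

```python
# Toy in-memory stand-in for a WOS-LIB-style client. A real client
# would talk to WOS nodes over gigabit Ethernet; the OID format here
# (zone prefix plus local number) mimics the area-code analogy above.

class WosClient:
    def __init__(self):
        self._store = {}     # full OID -> object bytes
        self._next_id = 0

    def put(self, data: bytes, zone: str = "001") -> str:
        """Store an object; the cluster hands back its unique OID."""
        self._next_id += 1
        oid = f"{zone}-{self._next_id:012d}"
        self._store[oid] = data
        return oid

    def get(self, oid: str) -> bytes:
        """Fetch an object by OID. A real WOS-LIB would use the zone
        prefix to route the read to the nearest holding data centre."""
        return self._store[oid]

client = WosClient()
oid = client.put(b"render-frame-0001 contents")
assert client.get(oid) == b"render-frame-0001 contents"
```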

The WOS 1600 and 6000 are self-contained appliances running the WOS-OCS (WOS Object Clustering System) software. This is a fully distributed system, meaning there are no single points of failure or bottlenecks, and each new node adds linearly to the system's performance and storage capacity. When an object is written to a node, administrator-created policies associated with it determine how many copies are stored, via replication, and where. The system can recover from both a drive failure within a node and the failure of an entire node.
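The policy-driven replica placement described above can be sketched like this. The Policy fields and the placement rule are assumptions made for illustration; DDN's actual policy schema is not public.

```python
# Minimal sketch of policy-driven replica placement: an administrator-
# created policy says how many copies to keep and which data centres
# (zones) should hold them. The rule below is an assumption.

from dataclasses import dataclass


@dataclass
class Policy:
    copies: int        # total copies, including the original write
    zones: tuple       # preferred data centres for the replicas


def place_replicas(write_zone: str, policy: Policy) -> list:
    """Return the zones that should hold a copy of a new object,
    starting with the zone the object was written to."""
    placement = [write_zone]
    for zone in policy.zones:
        if len(placement) == policy.copies:
            break
        if zone != write_zone:
            placement.append(zone)
    return placement


dr_policy = Policy(copies=3, zones=("london", "newyork", "tokyo"))
print(place_replicas("london", dr_policy))  # ['london', 'newyork', 'tokyo']
```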

The WOS cloud store serves files from the data centre nearest to the accessing users. When a read request arrives, in-memory metadata about the stored objects is consulted to find the object's storage details, and the object is then delivered with a single disk access. Object copies can be stored for disaster recovery purposes, and these DR copies respond to read requests coming from locations near them - they are active copies.
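The nearest-copy read path can be sketched as below. The latency table and the lowest-latency selection rule are illustrative assumptions rather than DDN's published routing logic.

```python
# Pick which replica serves a read: DR copies are active, so whichever
# holding data centre is "nearest" (lowest latency) to the reader wins.

def nearest_replica(reader_zone, replica_zones, latency_ms):
    """Return the replica zone with the lowest latency to the reader."""
    return min(replica_zones, key=lambda z: latency_ms[(reader_zone, z)])


# Invented round-trip latencies from a reader in Paris.
latency_ms = {
    ("paris", "london"): 8,
    ("paris", "newyork"): 80,
    ("paris", "tokyo"): 250,
}
print(nearest_replica("paris", ["london", "newyork", "tokyo"], latency_ms))
# -> london
```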

Regardless of the number of copies of an object that exist across the cluster, a single, common Object Identifier (OID) is used. WOS automatically tracks where objects are placed and removes this burden from the system administrator. A single WOS cluster acts as a global data repository, eliminating the need for multiple storage and file systems, replication software to tie them together, and custom software development to track file locations across the disparate systems.

The 200 billion file number is only a start, with DDN product management VP Josh Goldstein saying: "We will keep scaling up." He reckons WOS lowers the need for content-delivery networks, which can cost millions of dollars a year in fees. It also removes the need for storage administrators to know about Fibre Channel, LUNs, RAID levels, stripe sets and so forth. They can just stick their files in the virtually unfillable WOS store in the sky and let it look after safeguarding them and shipping them at low latency to users when needed.

We can envisage companies building pilot cloud storage with WOS nodes to see if they really can lessen their dependence on content delivery networks.

Goldstein said: "This is DDN's first file-based product. It's many times faster than enterprise NAS," meaning EMC Celerra or NetApp FAS systems. It will be priced competitively with them, although no actual pricing information is currently available. We do know pricing will vary with topology, so that, for example, 10 nodes in one data centre won't cost the same as two nodes in each of five data centres. Products will ship in the third quarter of this year.

DDN thinks WOS is superior to EMC's Atmos because it can sustain higher transaction rates, is much less complex for customers to implement, has a smaller entry-point capacity and smaller scaling units (nodes) than Atmos, and offers much better space, power and cooling metrics. In comparison with Amazon's S3 (Simple Storage Service), DDN says WOS has no recurring monthly storage fees, provides data-location control, which S3 does not, and has vastly better file retrieval rates.

DDN is less well known generally than EMC or NetApp, and is much, much smaller than either of them. But it has existing HPC customers, such as NASA and Lawrence Livermore National Laboratory, and movie special effects users like Pacific Title & Art Studio. These give it a base from which to build out its cloud storage offerings, and an existing credibility bank that start-ups like Parascale or Zetta have to do without.

For more information go here, where you can apply to join the ongoing WOS beta program and access a white paper (pdf). ®