
After evaluating products including EMC Corp.'s Invista, IBM's SAN Volume Controller (SVC) and Hitachi Data Systems' (HDS) Universal Storage Platform (USP) virtualizing array, the company went with an unorthodox choice -- fabric-attached appliances from YottaYotta Inc. typically used for pooling storage between multiple sites.

AOL is using clusters of YottaYotta's GSX 3000 NetStorage nodes attached to its Fibre Channel directors (the company has a mix of Brocade Communications Systems Inc. and Cisco Systems Inc. switches in 16 different fabrics) in its central data center to pool block-based data storage on 300 terabytes (TB) of its 5.5 petabyte (PB) total storage area network (SAN) capacity. It has a mix of disk arrays that include EMC's DMX and Clariion, HDS' USP and Hewlett-Packard Co.'s (HP) EVA.

YottaYotta's product is typically used to connect clusters of servers and storage devices over long distances. The product also performs some WAN optimization.

"We will also be looking into using YottaYotta's devices for offsite replication," said Dan Pollack, operations architect for AOL. But that's something the ISP considers a bonus; the device is already being put to use in portions of AOL's production environment for block-level storage virtualization in one location.

"We have a highly consolidated data center," Pollack said, which is good when it comes to many areas of administration but can make things a tangled web when it comes to any kind of data migration. In the majority of the SAN environment right now, the back-end storage arrays are divided into two tiers, with the high-end DMX and USP on Tier-1 and midrange Clariion and EVAs on Tier-2. However, Pollack said, data mobility between tiers is practically nonexistent because of the number of different servers and applications attached to each array in the 10,000-port environment.

"Basically, you pick a tier and you stay there," Pollack said. "If you need a performance problem solved or you overestimated your needs, you can move to the other tier, but we don't use it for lifecycle management."

Hardware upgrade migrations can be a logistical nightmare, Pollack said. "When you have a couple hundred hosts attached to one or two arrays, it becomes a significant communications issue where we discuss the array migration three weeks ahead of time and then perform it with between 10 and 12 storage, host and application administrators over a period of a month or more."

YottaYotta versus the competition

Pollack passed on pooling SAN storage behind the company's HDS USP. "The idea of using an 'edge' device -- whether a storage array or host -- to do virtualization to us seems a little out of whack," Pollack said. "You're adding a bunch of workload where a lot of data workload is already going to begin with. You need that array to become the pass-through environment for the storage it's already fronting."

Pollack also pointed out that the HDS arrays will require hardware upgrades like any other array, and that USP heads are far more expensive than the 1U YottaYotta nodes if the company wants to add more boxes later.

Because the YottaYotta nodes allow for N-way clustering, Pollack found the network-based system the most scalable of the products he evaluated, which also included EMC's Invista and IBM's SVC. Invista and SVC can be run in high-availability pairs so that code upgrades don't involve an outage, but Pollack said another appeal of YottaYotta is the ability to do rolling upgrades while load balancing across the GSX node clusters; different nodes within the same cluster can run different levels of code. "It's not all-or-nothing, we can do rolling upgrades across our entire environment and take things a little more slowly."

Pollack admitted AOL still has its reservations about buying from a less established vendor. "If we felt we had more time to wait [for products to develop], we would be more likely to choose a large incumbent supplier," he said. "But we would rather go with a generic caching platform that's a little more flexible than a storage virtualization appliance that doesn't meet our needs."

The company installed the YottaYotta devices behind a mix of production and test applications earlier this year. The majority of the 300 or so terabytes of data currently being run through the GSX clusters in three of the 16 fabrics belongs to production customer relationship management (CRM), asset management, logistics and billing applications.

AOL had to work with YottaYotta in the beginning to match up its data migration scheme with YottaYotta's management interface and monitoring tools. The devices are working with storage spread out over all of the different arrays in the environment, since each of them is connected to multiple fabrics for consolidation purposes.

Pollack is gung-ho about the storage virtualization plans, but said he will be painstaking in the rollout across all of the company's storage capacity. Extensive testing in each fabric will take place this year. AOL doesn't like to "mix and match" fabric vendors, keeping each fabric all Cisco or all Brocade switches, and it intends to keep the fabrics logically separated as well.

"It's possible, if it fits our application, that we could see an Invista in one of the fabrics at a later date," Pollack said. The company has purchased the YottaYotta devices it's currently using and considers itself in an advanced stage of evaluation, but it won't see full rollout until at least next year.

YottaYotta officials declined to comment for this story, saying the company does not, as a general policy, comment on individual customer deployments.