3PAR

August 03, 2010

3PAR designs its systems to provide huge time savings for storage administrators. Below is a video of our new InForm Management Console (IMC) 4.1, announced today, showing how incredibly easy it is to configure and operate 3PAR's Remote Copy application.

The demo didn't show some additional advantages of 3PAR's single software architecture:

A single console can manage both Mid-Range and Enterprise arrays

A single console can manage both local and remote systems

A single console can manage all array software elements

Customers can mix and match systems for replication. For instance, they can use enterprise T-Class storage at their primary site and mid-range F-Class arrays at the secondary site. This means that arrays used primarily to comply with regulations can be much less expensive.

Replicated volumes have to be the same size (capacity) as the primary volume they are protecting, but they can be any class of service (disk type combined with RAID type). This means replicated capacity can be optimized for capacity efficiency while the primary storage can be optimized for performance. Again, this results in operating savings compared to competitive arrays that require replicated volumes to be the same configuration.
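The economics described above can be sketched with some back-of-envelope arithmetic. The prices below are purely hypothetical figures for illustration, not actual 3PAR or competitor pricing:

```python
# Back-of-envelope illustration (hypothetical $/TB figures) of why letting the
# replica use a cheaper class of service saves money. The replicated volume
# must match the primary's capacity, but not its disk type or RAID level.
volume_tb = 10

primary_cost_per_tb = 5000   # e.g. FC drives, RAID 10 (assumed figure)
replica_cost_per_tb = 1500   # e.g. SATA drives, RAID 5 (assumed figure)

# Competitive arrays that require the replica to match the primary's
# configuration pay the primary rate at both sites.
same_config_replica = volume_tb * primary_cost_per_tb   # 50000
optimized_replica = volume_tb * replica_cost_per_tb     # 15000

print(same_config_replica - optimized_replica)          # 35000 saved at the DR site
```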

Here is a brief description of all the software functions available through IMC 4.1. As you can see, it's a pretty comprehensive list of features:

System Manager: used for viewing performance and utilization stats and configuring hardware elements.

Host Manager: used for configuring hosts that access 3PAR arrays - including Autonomic Groups, which are used to map groups of servers to storage simultaneously, and Virtual Domains, which restrict access between servers and storage resources according to group membership.

Provisioning Manager: used for provisioning both thin and thick volumes as well as setting up Virtual Copy (snapshot) volumes for them. Changing the class of service for volumes by re-striping them over additional drives or different types of drives and/or RAID levels through Dynamic Optimization is also done through the Provisioning Manager.

Event Manager: used to display logs, events and alerts.

Hardware Inventory Manager: does what it sounds like - reports on what is in your array.

July 29, 2010

It's getting very hard to keep up with all the crazy social media stunts coming out of Hopkinton, but they seem to have done it to themselves again. First was the questionable spamming for viewers so they could claim they had a viral video; then today they just "leaked" a 3PAR sales "kill sheet" - and also apparently established a "secret" site with the URL Notapp.com, where they compared their own guarantee program to Netapp's. According to Simon Sharwood at Search Storage Australia, the site was removed and accessing the URL directed browsers to EMC's site.

Perhaps it is all part of a new marketing strategy by newcomer Jeremy Burton, who joined EMC as Chief Marketing Officer back in March. As best I can tell, Burton's new marketing strategy for the company is that people will believe anything. Maybe he doesn't think there are enough new products coming out of EMC - or that the delays in getting their ballyhooed FAST out the door are too embarrassing - but instead of trying to promote EMC on its own merits, it looks like he is doing his utmost to mud wrestle. Is that what EMC is paying him the big bucks for?

EMC suddenly is taking a bigger interest in 3PAR. That's good. Search Storage Australia just published parts of a competitive document that EMC was circulating to its partners about 3PAR. It certainly wasn't a surprise because we'd seen it previously, but I was sorry to see it published because it made EMC look ridiculous, which was working pretty well for us. But now that it's been outed, here is what we have to say about it (in the guise of Ineption's lead character, the CRO).

The messaging is not built in, but our zero detection technology for optimizing capacity is. The host SW commands to do this are short and do not require "careful coordination". Veritas, Oracle, Windows Server and Linux software all work with minimal operator effort. For instance, this document from Oracle describes the whole process, with the sole operator command being this: #bash ASRU LDATA.

Can EMC provide online reclamation of zeroed space without risking capacity overruns and with tolerable performance? 3PAR can. Does EMC have these capabilities in both mid-range and enterprise storage arrays? 3PAR does.

3PAR has both Flash and 1 TB SATA drives. We also have Adaptive Optimization software that uses Flash SSDs for storage tiering. EMC still doesn't have it after they made such a big deal about it last year. They like to tell customers that their size gives them development advantages, but their track record doesn't support their claim.

3PAR arrays allow users to create many tiers, but without the need for disk pools. Tiers are constructed from the combination of drive type plus RAID level. For instance, you can have separate tiers for SATA, FC and Flash SSD drives with the RAID level you select. Our Dynamic Optimization software allows admins to move data from one tier to another. You can "dial in" the performance and protection you want.
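The tiering model described above is simple enough to write down: a tier is just a (drive type, RAID level) pair, with no disk pools to pre-plan. A minimal illustration (the type and level names are examples, not an exhaustive 3PAR feature list):

```python
# Minimal illustration of the tiering model: a class of service is the
# combination of a drive type and a RAID level - no pools required.
from itertools import product

drive_types = ["SSD", "FC", "SATA"]
raid_levels = ["RAID 1", "RAID 5", "RAID 6"]

# Every combination is a potential tier an admin can "dial in".
tiers = list(product(drive_types, raid_levels))
print(len(tiers))   # 9 classes of service from just 3 x 3 choices
```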

All systems have a peak output; ours just happens to have a lot more throughput than theirs - and at higher disk utilizations. We have published benchmarks that show how our systems perform. They don't. Adding disk drives to a system and utilizing those drives is far easier with a 3PAR system than with either VMAX or Clariion, where you have to wrestle with putting drives in the pools you want to use them for.

There are no disk pools in 3PAR storage. Pools trap resources so you can't use them. Work isolation in pools leads to hot spots and storage admin nightmares. Wide striping does not mean you can't have tiers. That is an idiotic statement.

VMAX can configure large pools - and all the drives in them have to be at the same RAID level, meaning you can't create multiple tiers within those pools. If you want multiple tiers, you need multiple pools and all the headaches that involves. Change management in an environment with multiple pools is complicated. You also need to consider the pools needed for snapshots and remote replication. Are those easy to provision and change on EMC storage? Most would say "no".

3PAR uses all disk spindles all the time for delivering IOPS and pro-active sparing is done using reserved space on those drives. Rebuilds do process quickly. Would EMC have you believe they never have to perform drive rebuilds? Really?

The RAID6 thing really makes me sad. They look so stupid when they say it. We're all sorry to say goodbye to that piece of FUD.

Our front end architecture was designed for large-scale parallel connectivity to match the massive bandwidth capabilities of our wide striped back end. Our benchmarks and the cost per IOPS in those benchmarks speak for themselves. Our customers also tend to run 3PAR systems at much higher disk utilizations than they run other vendors' arrays.

We support a huge number of ports on our systems with full active/active data access across all controllers. All controller nodes can be used to access all data volumes. We have a number of customers that run fairly sizable SANs without switches because they have enough ports on their arrays that they don't need to consolidate access through switches.

Five 9s? We're there. Our systems get pounded on every day in some of the largest private and public data centers in the world. They are designed with complete redundancy in all components and have advanced capabilities such as Persistent Cache to maintain high levels of performance even after the loss of a controller.

The delays in bringing their FAST tiering software - a product they were hyping in April of 2009 - to market have shown that size doesn't matter much when it comes to delivering technology on time. I'm not saying 3PAR always delivers on time, but EMC is far from immune to these problems. In fact, the need for them to coordinate across multiple product lines creates certain disadvantages for them.

As to their comments on our support: they are pure FUD and grasping at straws. We would not be able to keep the customers we have if it were not for our efforts at supporting them.

* * * * * *

The following content was added on July 30th by Rusty Walther, 3PAR's Vice President of Customer Services & Support.

Stating that 3PAR “outsources support” is just plain silly, especially coming from a company that keeps most of the world's largest offshore outsourcing companies in business. Like EMC, 3PAR uses Third Party Maintenance suppliers (TPMs) for break-fix field activities. In some geographies, EMC and 3PAR even use the “same” TPM. But EMC also outsources most of their volume call center and Level-1 Technical Support to offshore suppliers. Not so at 3PAR. Everyone that touches a 3PAR support case is a 3PAR-badged employee. I challenge EMC to identify a single outsourcing company that handles 3PAR technical support. EMC's outsourced technical support sub-contractors could be listed alphabetically, by geography, or by technology category … but you'd need a couple of sheets of paper to do it.

July 26, 2010

The twitterverse is busy again today with discussions surrounding EMC's use of spambots to generate views of videos they are trying to make viral. If you are interested in seeing what is being said, check out these people's tweets and you'll be off on a trip down a dark hole.

Here are a couple cartoons I made about it last week from my new cartoon, Ineption:

Netapp's Val Bercovici suggests this viral spamming marks the end of innocence in social media, but innocence exited the social media stage long ago.

I'm much more concerned about how large companies like EMC can use social media to suggest product and customer relationships that stretch the truth - impressions that readers take away from suggestive blog posts written by respected corporate voices. As "unofficial company statements" that are often more influential than press releases, social media pieces can distort things in ways that more-accountable corporate marketing is not allowed to.

Last week, Chad Sakac and Chuck Hollis published blog posts that pointed to an EMC white paper about details of a VMAX implementation at Terremark, an excellent 3PAR customer. Readers of these posts would probably think that VMAX was being used as the storage behind Terremark's multi-tenant, Enterprise Cloud service offering. That would be stretching things more than just a little bit. I commented on both blogs and the responses to my comments were interesting. I guess I feel a little kinder towards Chad as a result.

It is possible that somewhere in the world, a VMAX is being used by Terremark. One would expect Terremark to be looking at various storage platforms as a matter of course; it only makes sense for them. After all, VMware made a significant investment in Terremark last year and we all know who owns VMware. There are certain favors that EMC can ask that vendors such as 3PAR can't. But Terremark also has to operate Enterprise Cloud in their major US data centers every day, and the storage they use for that is not in a test lab - it's production - and it is 3PAR storage.

And it's not for lack of trying on EMC's part. Last November when VCE was announced, Terremark was discussed as a featured customer in both Chad's and Chuck's blogs. That was OK; I understand the excitement that surrounds a big announcement. But nine months later, suggesting that this announcement had given birth to a major production environment for a service it is not supporting sort of stuck in my craw.

It's unusual for a company to be invited as a centerpiece of high-visibility festivities and then mysteriously decide not to follow through. It would be like getting complimentary tickets and backstage passes from Lady Gaga herself, telling all your friends about it and then not going. It does make one wonder. Why wouldn't you do whatever it takes to be included in VMware's big summer announcement party? Well, if you're Netapp, the answer appears to be - "Being there is over-rated. Just make sure everyone thinks you were." Call it Photoshop for PR or call it keeping your poker face; it's a mash-up of a blown opportunity and opportunistic courage.

The excitement for VMware's storage partners was concentrated in two areas: VAAI (vStorage API for Array Integration) and SIOC (Storage I/O Control). The initial release of VAAI includes new SCSI block storage commands that allow arrays to offload host systems from redundant, resource-consuming tasks. SIOC is a method for managing I/O queues to create more fairness in accessing storage resources. Netapp issued a press release last week in conjunction with the vSphere 4.1 release, but it was for their Virtual Storage Console, not for support of the storage enhancements in vSphere 4.1. There was a flag-waving mention of VAAI:

"Additionally, NetApp is supporting the new VMware vStorage APIs for
Array Integration (VAAI) capabilities that offload data management tasks
from the host server to the storage system. This can free up host CPU
cycles for better performance and increased virtual machine density."

That's not exactly saying anything, but it's more than they had to say about SIOC, which was zilch.

The bottom of the release directs readers to Vaughn Stewart's blog for more info. Apparently, Netapp's PR department left the rest of the innuendo up to Vaughn - a diligent and loyal Netapp employee who understands that sometimes a vendor blogger doubles as a PR bagman. It looks like I need to add a new chapter to Vendor Blogging with Dummies.

You have to dig into the comments to get some of the details, but Vaughn's blog does a decent job explaining that Netapp is working on delivering VAAI functionality in Q4 2010. Now, that's not all that late considering it's only 6 months or so away, but for a privileged insider to VAAI development, it's not a great showing either. In fact, it wouldn't surprise me if some of the companies who were not in the program, such as Compellent, HP, IBM and Xiotech, come out with VAAI plug-ins before Netapp does. As for 3PAR, we will have our VAAI plug-in available in September as part of a maintenance release. We didn't have a lot of time to develop VAAI functionality after gaining access to the APIs in early 2010, but we fast-tracked the development of it in order to make the announcement.

As much as I admire Vaughn's chutzpah for stepping in to carry the load that others at Netapp should have, there were a few problems with what he said. First was the absurd statement that "SAN is attempting to be more NAS-like". There is so much wrong with that statement that it's difficult to find a place to start. Who or what is SAN? Is VMware SAN? Is the T10 SCSI standards committee SAN? Is SAN-the-being an embodiment of SAN-the-block-protocol? Is there a virtual reality thing going on here? And what is NAS-like anyway? Does it have anything to do with the size of one's beak or the way particular vowels resonate in the sinus cavities? Or is it like racing the back roads in a used Chevy? Whatever Vaughn meant, I tend to dislike the imprecision of technology anthropomorphism.

The second thing Vaughn said was "As for the first release of VAAI... These features ALREADY EXIST in NFS." Really, block zeroing? That is a function developed for EagerZeroThick volumes, which are only supported on VMFS datastores, not NFS datastores. Perhaps we will see that change in the future, but for now it's SAN-only.

Hardware assisted locking is a way to allow smaller granular locking for VMFS and addresses an issue with VMDK-level operations in a shared datastore. Because NFS puts VMDKs in separate datastores, which are locked independently, hardware assisted locking is unnecessary for NFS. In other words, it's a SAN-only function because the current NFS datastore architecture doesn't need it.

The other API in VAAI is Full Copy. This VAAI API appears to be functionally equivalent to a Netapp utility called RCU (Rapid Cloning Utility) that was included as a function in their Virtual Storage Console. It is not, however, something that exists in NFS, unless Netapp wants to give that feature to all its NAS competitors. As a vSphere function, Full Copy will be available to all vendors that implement the VAAI APIs. It will be interesting to see what differences there are as far as programmatic control using the VAAI plug-ins, vendor-specific consoles and PowerShell.

July 20, 2010

3PAR customers like the fact that 3PAR arrays are so easy and fast to manage. In this video, Robert Cockerill from Thames River Capital in London talks about all the various things he does, his Windows-based infrastructure, how 3PAR's thin provisioning helps him manage it all and how simple it was to protect it with 3PAR Remote Copy.

July 14, 2010

I wrote about the fact that we already had zero detect technology in our product, which is useful for the new Full Copy command because it allows customers to remove zeroed data from clones when they are created and return them to array free space.

The discussion became a bit confused when Chad interpreted what I was saying as pertaining to Block Zeroing.

Block Zeroing and Full Copy are different aspects of the VAAI API. The intent of block zeroing is to reduce the amount of CPU effort and storage traffic required to write zeroes across an entire EagerZeroThick (EZT) VMDK when it is created. The intent of Full Copy is to make clones of VMs quickly without consuming I/O bandwidth. Things get interesting when you start thinking about making a full copy of an EZT VMDK that was created using VAAI with block zeroing - but I'll discuss that later.

I also want to clarify what zero detection technology is. 3PAR T and F class arrays have zero detection technology, enabled by Thin Persistence software, that recognizes zeroed blocks as they are received by the array and returns them to the array's free pool. Any read requests made to these block addresses will return a zero value. In essence it is dedupe for zeroes.
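The "dedupe for zeroes" behavior can be modeled in a few lines. This is an illustrative sketch, not 3PAR's implementation; the block size and class names are made up for the example:

```python
# Illustrative sketch (not 3PAR code) of zero detection: all-zero blocks are
# never allocated, and reads of unallocated addresses return zeroes.
BLOCK_SIZE = 16 * 1024  # assume 16 KB pages purely for illustration

class ThinVolume:
    def __init__(self):
        self.allocated = {}  # block address -> data actually stored

    def write(self, addr, data):
        if data.count(0) == len(data):       # all-zero block detected on ingest
            self.allocated.pop(addr, None)   # return the page to the free pool
        else:
            self.allocated[addr] = data      # only non-zero data consumes space

    def read(self, addr):
        # unallocated addresses read back as zeroes, but not from disk
        return self.allocated.get(addr, bytes(BLOCK_SIZE))

    def physical_blocks(self):
        return len(self.allocated)

vol = ThinVolume()
vol.write(0, b"\x01" * BLOCK_SIZE)   # real data: one page allocated
vol.write(1, bytes(BLOCK_SIZE))      # zeroes: nothing allocated
print(vol.physical_blocks())         # 1
```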

However, Zero detection is not needed when an EZT VMDK is created using the VAAI plug-in because the array will recognize the intent of the command and not write the zeroes. In other words, the VMDK will only contain a very small amount of reserved space when it is created. Again, any attempts to read blocks in those ranges will return zero values. Zero detection is effectively bypassed during the creation of the EZT VMDK.

The exception to this behavior is when the EZT VMDK being created is written to a thick volume - in that case the array will write zeroes across the entire VMDK.

The remaining cases for the creation of EZT VMDKs on 3PAR arrays occur when VAAI is not used. For a thick volume, the entire VMDK has zeroes written to it. Thin volumes not using zero detect also have zeroes written over the entire VMDK. Thin volumes with zero detect will not have zeroes written to them and will contain only a small amount of reserved space.

FWIW, the reserved space is used as instantly-available capacity that can be allocated on-demand when writes start coming into the volume. 3PAR arrays always "read ahead" free space to improve the performance of thin provisioning.

The next part could be a bit thorny, so clear your head. Making a Full Copy of an EZT VMDK to a thinly provisioned volume was something Chad said was not allowed. My assumption here is that the type of thin provisioning used makes a big difference.

For instance, if you are using TP from VMware, I could see where they would not allow a full copy to be made. The problem is that the full copy will return all the zero values for the source VMDK, whether or not those zeroes were ever actually written - and write them to the target TP volume. In other words, the target could be much larger than the source. In the VMware TP scheme, this could make for problems in a hurry if you were making a bunch of clones this way.

In contrast, if you were using a 3PAR array with zero detection, the Full Copy of the source VMDK would return zeroes for the entire VMDK, but the zero detection would strip them out again as the target was being written. You could make as many clones as you wanted this way, knowing that the physical capacity they consume would be a multiple of the physical capacity consumed by the source VMDK, not of its full virtual size. In other words, you wouldn't have to worry about virtual zero bloat making a mess of your VMFS volume.
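The clone-capacity math above is easy to work through with some hypothetical numbers (the sizes below are invented for illustration):

```python
# Hypothetical numbers illustrating clone capacity with and without zero
# detection when Full Copy writes out the source VMDK's zeroes.
virtual_size_gb = 100      # EZT VMDK virtual size
nonzero_data_gb = 10       # data the guest actually wrote
clones = 8

# Full Copy to plain thin provisioning: the copied zeroes land on disk,
# so each clone can balloon toward the full virtual size.
worst_case_plain_tp = clones * virtual_size_gb    # 800 GB

# Full Copy to a zero-detecting array: zeroes are stripped on ingest, so
# physical use is a multiple of the source's *physical* footprint only.
with_zero_detect = clones * nonzero_data_gb       # 80 GB

print(worst_case_plain_tp, with_zero_detect)
```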

One of the big differences between 3PAR's zero detection technology and other vendors' zero-reclaim technology is that 3PAR's process runs in real time as data comes into the array, whereas zero-reclaim works in a post-processing fashion after the zeroes have already consumed disk space. This could be a significant difference in many cases because the post-processing method has the potential to create unexpected capacity-full conditions before the zero-reclamation process even has a chance to start.
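A toy comparison makes the capacity-full risk concrete. Both approaches end at the same steady-state usage; the difference is the peak consumed along the way. The numbers are invented for illustration:

```python
# Toy comparison (illustrative numbers) of inline zero detection vs.
# post-process zero reclaim: same end state, very different peak usage.
capacity_gb = 100
writes = [("data", 30), ("zeroes", 60), ("data", 20)]

def peak_usage(inline):
    used = peak = 0
    for kind, size in writes:
        if kind == "zeroes" and inline:
            continue            # stripped on ingest, never lands on disk
        used += size
        peak = max(peak, used)
    if not inline:
        # post-process reclaim eventually frees the zeroes - but only later
        used -= sum(s for k, s in writes if k == "zeroes")
    return peak, used

print(peak_usage(inline=True))    # (50, 50)  - never anywhere near full
print(peak_usage(inline=False))   # (110, 50) - transiently over capacity
```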

July 13, 2010

We've been anxiously waiting for VMware's announcement of vSphere 4.1 for weeks. There are many big things in this release, including significantly scaling the management capabilities of vCenter and increasing the number of simultaneous vMotions that are supported. The door is open for ESX deployments to achieve much greater densities than they could previously and that's a big deal to large enterprises who want to get more resources under the control of fewer points of management. There are still great gains to be made in consolidation - more on that later.

In the storage world, there are a couple of big things: SIOC and array integration through the VAAI API. Technodrone has put together an excellent post on SIOC and I highly recommend that anyone wondering how this functionality works go to this post and read it. Array integration has been advanced in three ways:

Hardware assisted locking

Full copy

Block zeroing

Array integration through the VAAI API is already at a very advanced stage at 3PAR, with some of the most important functions implemented through our I/O co-processor ASIC. While some companies want to write off the importance of hardware, 3PAR believes there are many things that need to be done in hardware to get the performance needed to truly scale storage for virtual environments. Our co-processors are key to getting much greater storage utilization and higher VM ratios and are one of the 3PAR innovations that separate our best-of-breed products from everybody else. The capabilities discussed below are available in the hardware today, and will be enabled with a software upgrade in September.

OK, let's talk about hardware assisted locking first. For customers that have experienced locking problems, this is a big deal. The problem has been well-documented online - but in a nutshell, customers have run into problems where an operation that locked the LUN for a VMFS did not complete, thereby freezing all I/Os for all systems using that LUN. That was certainly a nasty problem - not a bug necessarily, but certainly an incredible pain in the rear to all involved.

VMware's response in vSphere 4.1 was to include a command in the VAAI API that uses an atomic test-and-set instruction to implement fine-grained locks at small block sizes. There will still be locking in VMware, but on a much smaller scale.

Unique to 3PAR is the fact that this new locking mechanism is implemented in our I/O co-processors, where it completes very quickly, as opposed to being implemented in code in the controller. If you consider an environment with high VM ratios and multiple vMotions going on, you want this granular locking mechanism to complete as quickly as possible. Nobody else comes close to the speed at which 3PAR processes them.
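A toy model helps show why atomic test-and-set beats whole-LUN reservation. This is an illustrative sketch, not VMware's or 3PAR's code; the `Lun` class and its method names are invented, and a Python lock stands in for the array's hardware atomicity:

```python
# Illustrative model of atomic test-and-set (ATS) locking: a host atomically
# claims one small on-disk lock record instead of reserving the whole LUN,
# so other hosts keep working on the rest of the datastore.
import threading

class Lun:
    def __init__(self, num_lock_records):
        self._records = [None] * num_lock_records  # None = free
        self._mutex = threading.Lock()  # stands in for the array's atomicity

    def compare_and_write(self, index, expected, new):
        """Atomically set record `index` to `new` iff it equals `expected`."""
        with self._mutex:
            if self._records[index] == expected:
                self._records[index] = new
                return True
            return False

lun = Lun(num_lock_records=1024)
# Host A claims the lock covering one VMDK's metadata region...
print(lun.compare_and_write(7, expected=None, new="host-a"))       # True
# ...Host B's claim on the same record fails, but record 8 is still free.
print(lun.compare_and_write(7, expected=None, new="host-b"))       # False
print(lun.compare_and_write(8, expected=None, new="host-b"))       # True
# Host A releases its lock when the metadata update completes.
print(lun.compare_and_write(7, expected="host-a", new=None))       # True
```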

Next is the new Full Copy capability - also with co-processor assistance to reduce the capacity of the copy that is made. 3PAR has zero detect and reclaim technology integrated into the co-processor. With zero detection running in an array, as new writes are made, strings of zeros are detected by the co-processor and those blocks are returned to free space inside the array. If future reads are made to those blocks, zero values are returned, but not from disk. The result is that copies of VMDKs with lots of zeros in them will be much smaller after the copy is made - and the copy will proceed much faster.

This sort of functionality works amazingly well with EagerZeroThick (EZT) volumes in vSphere. VMware requires EZT for Fault Tolerance (FT) and MSCS clusters and also recommends EZT for high performance. The main complaints about EZT are that it takes extra time to write all those zeroes when the VMDK is created and that it doesn't work well with thin provisioning. With 3PAR's zero detection, the time it takes to write all those zeros and the space they consume is a non-issue, but more on that later. Virtual Geek at EMC wrote about VAAI today, and in his discussion of what does not work for full copy he mentioned copying from an EZT volume to a thin provisioned one. Actually, he's wrong about that where 3PAR is concerned, because EZT to Thin works very well on a 3PAR array with zero detect.

The image below illustrates the advantages of using EZT on a 3PAR array with zero detection:

The last API element to discuss is Block Zeroing. The idea is that the host tells the array to write a string of zeros when it is provisioning storage or overwriting blocks in a non-EZT VMDK. vSphere writes a lot of zeroes in order to maintain data integrity with multi-tenancy. The hypervisor zeroes out blocks prior to writing them in order to ensure that a virtual data imprint from an old VM does not carry over to a new VM.

But writing all those zeroes consumes CPU and I/O bandwidth that could actually be used productively, so VMware included a new command to offload the host from writing zeroes, effectively shunting that workload to the array. Voila - problem solved with 3PAR!! The zero detection and reclamation technology in a 3PAR array not only offloads the host from writing zeroes, but it also gives customers instantaneous reclamation of capacity with a smaller digital footprint (less capacity consumed) and faster performance. That's pretty cool and it's a trifecta that only 3PAR has.
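The traffic savings from the offload are easy to picture with a sketch. This is an illustrative model, not a real SCSI implementation; the class, method names and the 16-byte descriptor size are all assumptions made for the example:

```python
# Sketch of the block-zeroing offload: rather than shipping gigabytes of
# literal zeroes over the SAN, the host sends one small "zero this range"
# command and a zero-detecting array handles it internally.
BLOCK = 16 * 1024  # assumed block size for illustration

class ZeroDetectArray:
    def __init__(self):
        self.allocated = set()
        self.bytes_received = 0   # traffic actually sent host -> array

    def write(self, addr, data):
        self.bytes_received += len(data)
        if any(data):
            self.allocated.add(addr)
        else:
            self.allocated.discard(addr)   # zero detect: nothing allocated

    def write_same_zero(self, start, count):
        # One tiny command; the array simply drops the allocations.
        self.bytes_received += 16   # roughly one command descriptor (assumed)
        for addr in range(start, start + count):
            self.allocated.discard(addr)

host_side = ZeroDetectArray()
for addr in range(4096):                 # host streams 4096 blocks of zeroes
    host_side.write(addr, bytes(BLOCK))
naive_traffic = host_side.bytes_received

offloaded = ZeroDetectArray()
offloaded.write_same_zero(0, 4096)       # offloaded: a single descriptor
print(naive_traffic, offloaded.bytes_received)   # 67108864 vs 16
```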

What is amazingly cool about today's vSphere announcement for 3PAR customers is that all three API elements, hardware assisted locking, full copy and block zeroing are already implemented in 3PAR's T and F series hardware platforms, and will be usable by the end of September with a firmware upgrade.

Our co-processor architecture really delivered for us this time. But it's been delivering the goods for our customers for a long time already. In virtualized environments our customers tell us they double their VM density, while cutting their storage capacity in half - all while reducing the amount of storage administration necessary by 90%. Those stats can be hard to believe, but when you look at what we delivered on the first day vSphere 4.1 was announced - when most people didn't even know we were working on it - it might make it easier for people to understand why.

We take virtualized environments very seriously. People that don't know about 3PAR don't consider us to be a leader in virtualization, but when they find out the depth of technology we have and how well it works across our entire product line they understand we are leading in ways that really pay off for them. And the bigger they are, the bigger the rewards can be - especially after today.

June 30, 2010

I've been going slightly nuts since yesterday after Cisco announced the CIUS. It looks like the perfect tablet for the sorts of things I really want a personal screen device for - communicating with other people. This review by Erik Parker of InfoWorld is a pretty good read and it summarizes key advantages and disadvantages of CIUS. If it can make the technology of video conferencing transparent to end users, it will be a big deal.

But the hidden story to this is that Cisco is also making a play to get into the corporate desktop/laptop business with the CIUS. The idea that companies could deploy these with VDI is definitely part of Cisco's grand plan for world domination. Whether or not the CIUS could replace laptop or desktop computers remains to be seen, but there are reasons to think they could eventually if the stars align.

The arguments for VDI are strong, but there are still a lot of hurdles to overcome, such as back end storage performance to support boot storms. By the way, people looking at large VDI implementations might want to look at 3PAR's wide striping storage systems to get the sort of affordable IOPS needed to support large VDI environments. My previous post illustrates our design for massive throughput, which supports a huge number of IOPS without needing SSDs or requiring storage administrators to create special disk pools to isolate the VDI workload from other applications running in the same storage array.

Steve Taylor, one of our SEs, created an animation that shows the multiple layers of virtualization that create the natively wide-striped data layout on a 3PAR storage server. I think it's the coolest thing I've seen since joining the company.

All the functions shown are automatically done for the customer with minimal administrative effort. 3PAR customers do not spend time planning the layout of special disk pools or preparing their disk drive configurations for certain functions. All they do is select the drive class and the RAID level for the volume they are creating and the rest of the data layout work is done for them.

The demo shows how a RAID 5 3+1 virtual volume is created; what it does not show is the way other volumes would be created using different RAID levels over the same set of resources. It would be a replay of this, but with a different RAID level applied - everything else would be the same.

Not only does this design provide massive throughput, it also responds very quickly when customers need to add volumes. It's like driving a freight train that can corner. Try doing that with your VMAX on anything but a test track.

June 23, 2010

How is it that some people possess the gift of foresight and the ability to predict the future? Some say they have dreams or visions, some extrapolate from experience and logic, while others make predictions hoping to fulfill an agenda. Then there is the element of public exposure. Is the prediction public, and do they use their real name or hide behind an alias?

Nicholas Carr was very public and very open when he wrote his breakthrough book "Does IT Matter?". In it, he stated that there are no sustainable advantages to be gained by a company through the implementation of information technology. He argued that any short-term gain can be matched by competitors in a relatively short period of time with lower capital investments - effectively punishing companies for innovating. He recognized the necessity of having IT in order to stay competitive, but found it difficult to justify being an early adopter of technology.

Since Carr published his book, we've seen a lot of change in IT markets, including the rapid deployments of virtual systems technology and the expansion of hosted, utility computing and all things "cloud." But the biggest changes have resulted from the global financial crisis, forcing companies to reduce non-essential costs significantly - especially IT costs.

Unfortunately, not every technology implementation intended to reduce costs has been successful. And that's one of the things that makes the information technology business so fascinating and perplexing - intelligent people with deep expertise in technology fail to predict the ways that things can go awry and what the cost of their shortsightedness will be.

The rich history of failed IT projects is exactly why there is so much FUD spread by the competitors in our industry - FUD gets customers thinking about the consequences of their purchase decisions and all the possible problems that can result from an error in judgment. It also contributes to the interest in the machinations of our industry and the "war games" that are played out in traditional and social media. Whether we are predicting changes to the industry through mergers and acquisitions or the development of new business models, it all flows into the river of FUD at purchase time.

With the abundance of FUD, one naturally develops an aesthetic for the stuff to cull the weak from the strong. For example, a piece of weak FUD recently appeared online on Silicon Angle titled "Why Netapp Must Seek Acquisition", written by the poser "secretcto". The author starts with the suggestion "let's take a look at the market cap of each of these players" and then neglects to make any comparisons. It goes downhill from there, reaching its lowest point when the article refers to Nicholas Carr as Daniel Carr and then fails to negotiate the transition from whether IT matters in general to whether it matters to cloud service providers. The tipping point for Carr's logic is that to service providers IT absolutely does matter, because operating data centers is their core business.

By contrast, you barely notice good FUD: it has a smooth logical flow and subtly builds to a persuasive conclusion based on a key point that usually has its origins in a subjective opinion or bias. Chris Mellor's recent piece about the Storage Array Killing Fields qualifies as good FUD. Chris doesn't have an axe to grind, but he is a journalist and therefore has the responsibility of stirring the pot. It's a well-written piece built on an analogy that compares the selection of equipment for data centers with the selection of components used in an automobile.

The problem is that automobile manufacturing is a poor analogy for running a data center. When a car rolls off the manufacturing line it is shipped to a dealer and sold to a customer who drives it away. Nothing about the experience of making, selling, or buying a car is even remotely related to the constant, ongoing data processing services provided by a utility or cloud service provider.

Should we expect the recipe for success in hosting and cloud services to be any different? This recent article in Information Age states that 71% of the 450 CIOs in a KPMG survey want to improve the price-to-quality ratio of their outsourcing contracts. The dynamics of the business relationship between CIOs and their utility/cloud service providers are going to be the same: service providers with the best reputations for customer service are going to thrive, and those that don't measure up will fail.

Vendors of consolidated stacks of servers, storage, and software are trying to convince customers that the "All-in-one" stack is the safest way to proceed during the transition period while cloud computing is emerging. They would have you believe that the biggest risk in operating a data center lies in ordering the products and getting everything installed initially. But since utility/cloud service providers will be measured on how quickly and accurately they respond to their customers' needs, the lion's share of the risk comes well after the initial installation, during the life of the service engagement.

The weakness of the All-in-one approach is that it does nothing to address the dicier aspects of owning, operating and changing an IT infrastructure after it is up and running. In many cases the stack vendor's answer to change management will be the same as it is today: time-consuming and expensive professional services. There are certainly utility/cloud service providers that will want this sort of service, but many would prefer to do it themselves at much less cost. That's what you do when your primary business is running a data center.

A talented chef can find a way to prepare a gourmet meal on an Electrochef All-In-One Kitchen, but would never run a restaurant on one. Chefs select the best-of-breed appliances and equipment that fit their needs and enable them to prepare quality dishes in a quality fashion.

So the question for the utility/cloud data center operator then is - "what is best of breed equipment for my business?"

The classic clash between Best-of-breed and All-in-one solutions pits functionality against cost and complexity. Best-of-breed technology has traditionally been more customizable to fit a wider range of requirements, and therefore more complicated and expensive to operate. In contrast, All-in-one technology has traditionally been cheaper, limited to a smaller set of functions, and easier to operate.

Unfortunately, neither stereotype works very well for the utility/cloud service provider. They need fully functional products that are also easier and quicker to operate. Fast, accurate change management and operator efficiency are the key elements for utility/cloud infrastructure products. 3PAR's Best-of-breed storage products have these characteristics as well as being extremely space-efficient and high-performing. Customers appreciate the amount of time they do not spend managing their 3PAR storage while they are getting the job done. When a new order comes into a 3PAR kitchen, the system is ready to go right away - including tasks that take a long time to set up on other storage, such as Remote Copy.

And what about the All-in-one stacks in the market? Surprisingly, unlike traditional All-in-one solutions, they are more expensive to install and operate. Change management is complex, leading to relatively poor operator efficiency and the engagement of professional services - which does not necessarily speed up the process. The traditional benefits that All-in-one solutions typically provide are not part of these stack solutions.

The predictions for stacks taking over the market are all wrong. Sure, there will be stack solutions sold and it will take time for all of this to sort itself out as it always does when an industry is going through major, fundamental changes. The most important changes that will occur in the years to come will be driven by the service demands placed on utility/cloud service providers. Customers of utility/cloud services want their money's worth and the best service providers will do what it takes to give it to them. Stacks add no value in that equation.

June 10, 2010

Nigel Poulton tweeted today: "What are peoples thoughts on best practices for multiple pools on the likes of USP V and VMAX. Trade-off between perf vs resiliency etc..."

Good question, Nigel. One of the biggest problems customers have is being able to fully utilize all their resources. It's not just that the ROI for storage tends to be underwhelming; more frustrating is the fact that their storage was provisioned in a way that makes resources inappropriate or unavailable for the pressing needs at hand.

Pools are used in two ways: to reserve storage capacity for certain functions, such as snapshots, or to create QoS levels for storage. The difficulty with creating pools for QoS is that resources committed to a pool are practically locked into it and cannot be easily redistributed to other pools to meet changing demands. As storage systems age and fill with data, the various pools are consumed unevenly. For example, consider an array with six pools of differing QoS levels, some nearly full and some barely used.

The type of problem storage administrators constantly deal with occurs when one pool maxes out, making its associated QoS unavailable. The admin then has three choices: 1) use a higher QoS, 2) use a lower QoS, or 3) add new resources to the pool, if possible. Using a higher QoS may create performance problems for higher-priority applications. Using a lower QoS creates performance problems for the application itself. Adding resources may require interrupting many other applications and taking them offline while workloads are shifted - and then you get the ripple effect of "remodeling the kitchen".
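The dead-end the admin hits can be sketched in a few lines of Python. All pool names, sizes, and usage figures here are invented for illustration; the point is only that a fixed pool can refuse a request even when the array as a whole has plenty of free capacity:

```python
# Hypothetical illustration of fixed QoS pools fragmenting free capacity.
# Pool names and numbers are invented, not from any real array.
pools = {
    "gold-RAID1-FC":     {"capacity_gb": 4000,  "used_gb": 3950},
    "silver-RAID5-FC":   {"capacity_gb": 8000,  "used_gb": 5200},
    "bronze-RAID5-SATA": {"capacity_gb": 12000, "used_gb": 6100},
}

def provision(pool_name, size_gb):
    """Carve a volume from one fixed pool; fail if that pool is full,
    even though other pools still have free capacity."""
    pool = pools[pool_name]
    free = pool["capacity_gb"] - pool["used_gb"]
    if size_gb > free:
        total_free = sum(p["capacity_gb"] - p["used_gb"] for p in pools.values())
        raise RuntimeError(
            f"{pool_name} has only {free} GB free "
            f"(array-wide free: {total_free} GB)")
    pool["used_gb"] += size_gb

try:
    provision("gold-RAID1-FC", 100)
except RuntimeError as err:
    # gold pool is maxed out, yet thousands of GB sit idle in other pools
    print(err)
```

The stranded capacity is visible in the error message itself: the resources exist, but the pool boundaries make them unusable for the request at hand.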

When you consider the fact that some storage systems force users to establish separate pools for thin provisioned volumes and thick volumes, the number of pools in the system increases and the fragmentation of resources becomes a much bigger problem.

The best practice for managing storage pools is to do away with them entirely so they don't inhibit access to expensive resources and, more importantly, so they don't soak up so much administrative time and create increased risk of downtime and data loss.

Pools of disk drives are just a thin layer above bare disk drives where virtualization is concerned. Considering the transparent nature of system virtualization technology, it is almost incomprehensible that storage systems force customers to create these artificial constructs that force hard choices about something as basic as the layout of data on disks. Vendors with pool-based volume management like to distract customers by talking about whiz-bang functionality that doesn't address the core storage problem: their customers are still doing much of the work the system ought to do for them.

The best practice then is to replace outdated storage designs with new designs that do not reserve storage resources in pools and do not use pools to create QoS levels. 3PAR InServ storage systems do not use pools and do not reserve capacity for different QoS levels.

3PAR InServ storage systems are used by many of the largest companies in the world, saving them an enormous amount of money by lowering capacity requirements and administrator overhead. For example, Priceline.com has been a 3PAR customer for many years, and they talk about how it has worked for them in this video on YouTube.

The InServ's data layout starts with the subdivision of all disk resources into 256MB "mini-disks" we call chunklets. All the higher-level RAID functions in a 3PAR system are applied at the chunklet level, not at the disk level. RAID in an InServ system is implemented as "micro-RAID" sets, which are then concatenated and formed into virtual volumes that are exported as LUNs.
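A rough sketch of the idea - a simplification with invented names and structure, not 3PAR's actual implementation - looks like this: carve every disk into 256MB chunklets, build RAID sets from chunklets on distinct disks, then concatenate sets into a volume:

```python
# Illustrative model of chunklet-based micro-RAID layout (hypothetical
# structure, not 3PAR internals). Disks are carved into 256 MB chunklets;
# RAID sets are formed from chunklets on different disks, then
# concatenated into a virtual volume.
CHUNKLET_MB = 256

def carve_chunklets(disks_mb):
    """Return a free list of (disk_id, chunklet_index) for each disk."""
    free = []
    for disk_id, size_mb in enumerate(disks_mb):
        for idx in range(size_mb // CHUNKLET_MB):
            free.append((disk_id, idx))
    return free

def build_micro_raid_sets(free, set_width):
    """Group free chunklets into RAID sets, each drawn from distinct disks."""
    by_disk = {}
    for disk_id, idx in free:
        by_disk.setdefault(disk_id, []).append(idx)
    sets = []
    while len([d for d in by_disk if by_disk[d]]) >= set_width:
        members = []
        # take one chunklet from each of the set_width disks with most free
        for disk_id in sorted(by_disk, key=lambda d: -len(by_disk[d]))[:set_width]:
            members.append((disk_id, by_disk[disk_id].pop()))
        sets.append(members)
    return sets

# Four 1 GB disks, RAID 5 3+1 style sets (4 chunklets wide)
free = carve_chunklets([1024, 1024, 1024, 1024])
raid_sets = build_micro_raid_sets(free, set_width=4)
volume = raid_sets[:2]  # a volume is a concatenation of micro-RAID sets
```

Because RAID is applied per chunklet rather than per disk, the same physical drives can simultaneously back RAID 1 and RAID 5 volumes - which is also why the demo's RAID 5 3+1 example would replay identically for any other RAID level.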

FWIW, the term virtual volume was used by 3PAR years before the system virtualization phenomenon became the market force it is today. I only mention this to reinforce the fact that from its inception, the InServ internal storage architecture was designed to virtualize storage. It makes storage administration transparent by doing the low-level provisioning work on behalf of the storage administrator.

As new storage is provisioned in a 3PAR system, the data is spread across chunklets in small 16KB increments. All the disk drives of the same class (e.g., SATA vs. high-performance) in the system are used by default, so data is widely striped for optimal throughput and to avoid hotspots. While small amounts of capacity are pre-allocated for use before storage is provisioned, this is done automatically by the system, in thin slices across all drives.
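The wide-striping idea can be sketched as a simple round-robin address map. This is a hypothetical model (the function and naming are invented for illustration), but it shows how successive 16KB steps of a volume land on chunklets spread across different drives:

```python
# Hypothetical sketch of wide striping: each 16 KB logical step of a
# volume maps round-robin across chunklets on different drives.
STEP_KB = 16

def locate(logical_kb, chunklets):
    """Map a logical offset (KB) to (chunklet, offset_kb_within_chunklet)
    using round-robin 16 KB striping across all backing chunklets."""
    step = logical_kb // STEP_KB             # which 16 KB step
    stripe, column = divmod(step, len(chunklets))
    return chunklets[column], stripe * STEP_KB + logical_kb % STEP_KB

# a volume striped over chunklets on eight different drives
chunklets = [f"disk{d}:chunklet0" for d in range(8)]
print(locate(0, chunklets))    # -> ('disk0:chunklet0', 0)
print(locate(16, chunklets))   # next 16 KB step lands on the next drive
```

Every drive in the class participates in every volume, so no single spindle becomes a hotspot and throughput scales with the total drive count rather than with a pool's size.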

There are no pools, no constraints, no weeks-long planning efforts needed for storage installations and change management.

If you are looking to dump your nagging storage administration problems, why would you ever go back to pool-based storage when that is the root cause of your problems?

10. You have to buy it before you can determine if it will work for you.

3PAR's sub-volume tiering, Adaptive Optimization (AO), comes with features that make it much better than competitors claim. It doesn't require SSDs, and it's not for everybody, but if you need it, it will do the job.

June 07, 2010

Here's a video that TechTarget produced for us with one of our customers, Priceline.com.

Here are a few highlights from the video:

Priceline.com was one of the first e-commerce players to adopt virtualization. That may account for why the company's IT organization is known for its high availability and its ability to adapt quickly to changes in the market. Given that their business has a broad value-based appeal, their IT organization works very hard to get the best rate of return on their capital expenditures.

3PAR storage allowed them to increase their storage capacity over 400% over the last four years while reducing the administrative load required to manage it all. Ron Rose, ex-CIO at Priceline (now on the senior management team at Dell), said that they were able to decrease their data center footprint 50% during that time. Mr. Rose estimated that they avoided deploying approximately 100 physical servers and their associated footprint costs, equivalent to 106 acres of trees and 310 tons of hydrocarbons per year.

May 28, 2010

Chuck Hollis wrote a blog post earlier this week, titled "Once Upon a Time". I thought it was an excellent post, telling the story of the transition EMC made a decade ago, starting when Joe Tucci replaced Mike Ruettgers. FWIW, I think the diversification Tucci accomplished at EMC has made all the difference there - especially the acquisition of VMware. You might call it lucky (as I tend to do), but the fact is that their drive to diversify their business took them on a journey that has buoyed the company far beyond what their storage products by themselves would have supported.

At the end, he asks whether history is bound to repeat itself - which appeared to be a nudge toward some of the other companies in the industry. I didn't think this was such an affront - Chuck has been known to tweak competitors from time to time, but for the last six months or so he has restrained himself from doing so.

So I was surprised this morning when I saw some tweets that had me look at the post again. And sure enough, there was a blow-up there involving a cadre of Netapp people who over-reacted to Chuck's post.

One of the consequences of this overreaction was that a benign blog post about EMC history became a referendum on Netapp's Secure Multi-Tenancy (SMT). It wasn't what Chuck was driving at in his original post, but the comments from Netapp folks steered the discussion in that direction.

Chuck's main argument is that SMT isn't very secure if your service provider can gain access to a tenant's data. I'd add that it's not very secure if your service provider can delete volumes and destroy data, either. Inadvertent destruction of data by administrators is a larger threat than somebody pulling "an inside job".

But it doesn't just affect service provider scenarios. The issue of multi-tenancy also applies to private data center operations. There have been suggestions that the word "tenant" refer to the legal owner of the data, but the word "legal" is unnecessary and obscures the common understanding that a tenant is the application owner that uses a shared resource, whether it is a physical server or a storage array.

A good example of multi-tenancy within the confines of a private data center is a corporate database managed by a DBA who doesn't want anything else to impact its performance and stability. When that database is moved to a virtual environment, the DBA expects multi-tenant protection that ensures nothing changes except a decrease in operating costs. The same applies to any application owner who would like, but can't afford, the luxury of dedicated resources.

Role-based administration combined with resource virtualization makes multi-tenant environments safe from administrator errors. Limiting the scope of what an admin can see, as well as the actions they can take, eliminates the possibility of a simple mistake having major consequences. Using the DBA example, if the DBA alone controls their own storage resources, there is no opportunity for a co-worker to screw things up for them.
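The scoping idea can be sketched in a few lines. This is a hypothetical model with invented names, not the Virtual Domains API: an admin can only see and act on volumes inside their own domain, so even a fat-fingered delete cannot touch another tenant:

```python
# Hypothetical sketch of domain-scoped administration (invented names,
# not the Virtual Domains API). Out-of-domain volumes are invisible to
# an admin, not merely protected.
class DomainError(Exception):
    pass

volumes = {
    "oradata01": {"domain": "dba-team", "size_gb": 500},
    "weblogs01": {"domain": "web-team", "size_gb": 200},
}

def delete_volume(admin_domain, name):
    """Delete a volume, but only if it belongs to the admin's own domain."""
    vol = volumes.get(name)
    if vol is None or vol["domain"] != admin_domain:
        # the volume either doesn't exist or lives in another domain;
        # either way the admin is told the same thing
        raise DomainError(f"no such volume in domain {admin_domain!r}")
    del volumes[name]

delete_volume("dba-team", "oradata01")    # allowed: volume is in own domain
# delete_volume("dba-team", "weblogs01")  # would raise DomainError
```

Reporting "no such volume" for out-of-domain names, rather than "permission denied", keeps one tenant from even confirming another tenant's resources exist.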

3PAR's Virtual Domain software (available since 2008) provides a role-based, restricted access system for managing storage resources. This certainly doesn't solve all the security problems for multi-tenant environments, but it's an excellent way to eliminate the most common concerns of application owners.

The technology can be extended to public cloud infrastructures as well if a service provider chooses to make it available. A customer can be given Virtual Domain private control of their storage resources - without the ability to see any other customers' resources - to manage and provision as they see fit. In the service provider model, 3PAR provides the technology to its service provider partners who provide Virtual Domain-based services to their customers. 3PAR Cloud Agile partners who offer these services today are: