Tag Archives: StorageCloud

Scale-out architecture and why it is important when architecting a storage solution.

I had an interesting discussion with an architectural firm the other day. Most of it was around scaling for the future. We talked about the linear scalability of the ISE technology, and he pointed out that while that made a ton of sense for his block-access requirements, he was a little concerned about the unstructured data, as well as some plans to utilize NFS for some of his server and desktop virtualization needs. The last thing he wanted to worry about was changing his architecture in 12 to 24 months due to growth or technology changes. So we started architecting a solution utilizing our new “scale-out” ISE-NAS solution.

You’ve probably heard a lot about scale-out architectures. 3PAR sort of led the way with the ability to scale their storage controllers (at least to eight) against a fixed, backplane-attached backend of disk drives, and it offers up a pretty unique solution (at least in a block storage architecture). 3PAR’s problem is they don’t really have an answer for the same scalability around unstructured data (NAS). Don’t get me wrong, they list 5 NAS companies on their website, but 1 is out of business and the other 4 have either been acquired by their competitors or are straight-up competitors. Scale-out seems to have caught on in the emerging NAS gateway devices like Symantec FileStore and Isilon, though FileStore and Isilon take very different approaches to it. More below.

So first things first, let’s describe what a “scale-out” architecture means, at least to me. When architecting solutions, it’s always important to put together a solution that can grow with the business. In other words, customers know what they need today, and they have an idea what they might need in 12 months, but 24 to 48 months is a complete crapshoot. They could be 5X the size, or just 2X the size, but the architecture needs to be in place to support either direction. What is sometimes not discussed is what happens when you run out of front-side processing power, backend IOPS or usable capacity. Most storage solutions give you one or two clustered controllers and a fixed number of disk drives they can scale to, dependent on the specific controller you purchase. Most front-end NAS solutions only scale to 2 nodes as well. If you need more processing power, more backend IOPS or more capacity, you buy a second storage solution, or you spend money to upgrade storage controllers that are not even remotely close to being amortized off the CFO’s books.

If you look at the drawing above, you can clearly see what a scale-out architecture should look like. You need more front-side processing? No problem. You need more backend IOPS or capacity? No problem. They scale independently of each other. No longer is it the case that “you love your first <insert storage/NAS solution of choice> and you hate your third, fourth, etc.” Isilon is probably a great example of that. They tout their “scale-out” architecture, but it clearly has some caveats. For example, if you need more processing power, buy another Isilon; you need more capacity, buy another Isilon; you need more backend IOPS… well, you get the idea 🙂 It’s not a very efficient “scale-out” architecture. It’s closer to scale-up!!

Let’s also not lose sight of the fact that this is a solution that will need to be in place for about 4 to 5 years, or the amount of time over which your company will amortize it. The last thing you want to worry about is a controller upgrade, or a net-new purchase because you didn’t size correctly or you under/over-guessed your growth, or even worse, years 4 and 5 hardware maintenance. This is especially true if the vendor “end-of-life’d” their product before it was written off the books!!! Cha-CHING.

So this company I was working with fluctuates in headcount depending on what jobs they are working on. It could go from 50 people to 500 at a moment’s notice, and while they would LOVE to size for 500, most of the time they were around 50 to 100. So as I mentioned above, we started architecting a solution that incorporated our ISE-NAS solution based on Symantec’s FileStore product. Coupled with our Emprise 5000 (ISE), it gives them the perfect scale-out solution. They can start with 2 nodes and grow to 16 by simply adding NAS engines (x86) to the front end. If they need more capacity or backend IOPS, we can scale in either direction independent of the rest of the solution. Coupled with our predictable performance, we gave them the ultimate ability to size for today and know exactly what they can scale to in the future.

In the world of “Unified Storage”, cloud computing and 3- to 5-year project plans, it’s important to consider architecture when designing a solution for the future. Scale-out architecture just makes a lot of sense. BUT – do your homework. Just because a vendor says “scale-out” doesn’t mean every solution is the same. Dual-clustered controllers – or even eight-way – will eventually become the bottleneck, and the last thing you want to worry about is having to do a wholesale swap-out/upgrade of your controller nodes to remove the bottleneck or, worse, having to buy a second (or third) storage solution to manage!!

If you are a VMware admin, or a hypervisor admin more generally, Xiotech’s “Virtual View” is the final piece of the very large server virtualization puzzle you’ve been working on. In my role, I talk to a lot of server virtualization admins, and their biggest heartburn is adding capacity, or a LUN, into an existing server cluster. With Xiotech’s Virtual View it’s as easy as 1, 2, 3. Virtual View utilizes CorteX (our RESTful API) to communicate – in the case of VMware, with the Virtual Center appliance – to provision the storage to the various servers in the cluster. From a high level, here is how you would do it today.

I like to refer to the picture below as the “Rinse and Repeat” part of the process – particularly the part in the middle, which describes going to each node of the server cluster to perform various admin tasks.

VMware Rinse and Repeat process

With Virtual View the steps would look more like the following. Notice it’s “wizard”-driven, with a lot of the steps processed for you. But it also gives you an incredible amount of “knob turning” if you want.

Virtual View Wizard Steps
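Under the hood, wizard steps like these boil down to REST calls against the array. As a rough sketch only – the endpoint URL, resource path and JSON fields below are hypothetical stand-ins, not the actual CorteX API – a provisioning request might be built like this:

```python
import json
import urllib.request

# Hypothetical CorteX-style endpoint; the real resource paths and fields
# live in the actual SDK documentation, not here.
CORTEX_BASE = "https://cortex.example.local/api/v1"

def build_provision_request(cluster, datastore, size_gb):
    """Construct (but don't send) a REST request asking the array to carve
    out a volume and present it to every node in the vCenter cluster."""
    body = json.dumps({
        "operation": "provision",
        "cluster": cluster,        # vCenter cluster name
        "datastore": datastore,    # datastore to create on each host
        "capacityGB": size_gb,
    }).encode()
    req = urllib.request.Request(
        f"{CORTEX_BASE}/volumes", data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    return req

req = build_provision_request("esx-cluster-01", "ds_projects", 500)
print(req.get_method(), req.full_url)
```

The point isn’t the exact payload – it’s that one call replaces the per-node “rinse and repeat” loop in the picture above.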

And for those that need to see it to believe it, below is a quick YouTube video Demonstration.

If you run a VMware-specific cluster (for H.A. purposes, maybe) of 3 servers or more, then you should be most interested in Virtual View!!!

I’ll be adding some Virtual View-specific blog posts over the next few weeks, so make sure you subscribe to my blog on the right-hand side of this window!!

If you have any questions, feel free to leave them in the comments section below.

By the way, if by chance 10,000 is just not enough users for you, don’t worry – add a second ISE and DOUBLE IT TO 20,000. Need 30,000? Then add a THIRD ISE. 100,000 users in 10 ISE, or 30U of rack space. Sniff sniff… I love it!!!!!!!!!!!

By the way – Check out what others are doing:

Pillar Data = 8,500 Exchange users with 24GB of cache!!! I should point out that our ISE comes with 1GB. It’s not the size that counts, it’s HOW YOU USE IT!! 🙂

In today’s datacenter, no matter how much deduplication, storage tiering and archiving companies attempt to throw at the issue, there still seems to be an explosion of information that has to be backed up, protected, restored and archived. I’ve stopped being surprised each time I ask a customer how their backup window is doing. It’s always horrible and out of control. Even with advanced data deduplication, the responses still surprise me. Not to mention, most of the time customers are running out of power, cooling and rack space in their datacenters. All of this becomes a sort of “perfect storm” that has the potential to sink the datacenter into a mess of inefficiencies.

So as I’ve discussed in the past, I get to spend a lot of time architecting solutions, and one of the things I spend a lot of that time on is designing out performance bottlenecks. The great news is I feel pretty strongly that we have the best solution in the industry. Imagine if you could eliminate storage as a potential bottleneck and reduce your power, cooling and overall carbon footprint with one storage solution. Awesome, right!!! What if I told you that Xiotech has the fastest, best-throughput RAID-protected spinning-media solution on the market today (per the Storage Performance Council’s SPC-2 benchmark)? What if I also told you that not only is it the fastest, but it’s also the greenest (is that a word?) as well? You would probably tell me I was full of… well, you know. This 3U storage element packs a wallop of performance. If you haven’t had a chance, you should check out this recent press announcement. In it we talk about a single Emprise 5000 having the ability to simultaneously power 750 DVD-quality video streams, 25,000 MP3s or 4 studio-class movie editing projects. For those of you familiar with these types of performance-hogging applications, you know that in 3U of space, that’s pretty cool!! We even mention having the equivalent of operating every movie theater screen in the state of Colorado at the same time from one system. On a side note, after 10 years here at Xiotech – how come I don’t have one in my entertainment center yet!!! Brian Reagan – maybe you can make this happen for me 🙂

You probably noticed that I haven’t even touched on how we can reduce the carbon footprint!! We have a cool little feature native to our Emprise 5000 product called “PowerNap”. Not only is it native, but it’s also FREE!! PowerNap utilizes industry-standard Wake-on-LAN (WOL) technology, which gives the end user an incredible ability to power the Emprise 5000 up and down via scripts or cron jobs.

Here is something I like to take prospects through. Let’s say you run a VTL (or backup-to-disk) type solution for backup or archiving. With PowerNap, you can run a simple Perl or PowerShell script, as part of a backup process, to spin up the Emprise 5000. So you kick off your backups at 6pm; it takes just 60 seconds to bring the Emprise 5000 from 24 watts of power up to its full operating draw of around 500 watts. Impressive, right!! During the day the unit stays in a low-power state, drawing only 24 watts. NOW THAT’S GREEN!!!! Let’s say that during the day you need to do a quick restore. You run your restore process within your backup software, and part of that process is running the script to spin up the unit. Once the file has been restored, the backup application can issue another script to “PowerNap” the unit. By the way, we have a great “Best Practice Guide”. If you are interested in it, follow me on Twitter and send me a message. I’ll send it over to you.
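To illustrate what such a pre-backup script boils down to, here is a minimal Wake-on-LAN sketch in Python. The MAC address is made up, and this is a generic WOL example rather than our published scripts – WOL itself is the industry-standard mechanism PowerNap relies on:

```python
import socket

def build_magic_packet(mac):
    """A standard WOL magic packet: 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def powernap_wake(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the packet on the LAN; call this from the backup
    job's pre-script, then sleep ~60s while the unit spins up."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))

# e.g. at 6pm, before the backup kicks off (MAC is hypothetical):
# powernap_wake("00:1a:2b:3c:4d:5e")
```

The restore case is the same call in reverse order: wake, restore, then issue the nap script when the job completes.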

Did I mention the Emprise 5000 comes with FREE 5-year hardware maintenance, and the PowerNap feature is FREE TOO!!!! In the “me-too” world of storage array features and functions, it’s things like PowerNap and blazingly fast performance that make me happy to go to work each and every day!!

What do the Pacer, the Yugo and Arbitrated Loop have in common? You are probably running one of them in your datacenter.

George Crump recently blogged over at InfoWorld and asked, “Do we really need Tier 1 storage?” It struck me as an interesting topic, and while I disagreed with his reasons for where he put our solution, I tend to agree that the others mentioned are right where they should be. In his article he specifically mentions some of the reasons both the monolithic array manufacturers and the “modular guys” have “issues”, and he zeroed in on performance and scalability. Now, his article was speaking about the front-end controllers, but I think he missed out on pointing to the backend architectures as well. I thought this would make a great blog post 🙂 As you may recall, in my “Performance Starved Applications” blog and my “Why running your hotel like you run your storage array can put you out of business” blog, I said that if you lined up the various storage vendors next to each other, about the only difference is the logo and the software loaded on the controllers.

Did you also know that if you looked behind those solutions you would see a large hub architecture – also known as our dear old friend “Mr. Arbitrated Loop”? This is like running your enterprise-wide Ethernet infrastructure on Ethernet hubs. Can you imagine having to deal with those today? For all the same reasons we dropped Ethernet hubs like a bad habit, you should be doing the same thing with your storage array manufacturer if they are using arbitrated loops in their backend. Talk about a huge bottleneck to both capacity and performance at scale!! So what’s wrong with Fibre Channel Arbitrated Loop (FC-AL) on the backend? Well, for starters, it doesn’t scale well at all. Essentially you can only address 126 devices (disk drives, for example) per loop. Most storage arrays support dual loops, which is why you typically see a lot of 224-drive solutions on the market today – 112 drives per loop, approaching the limit and creating a very long arbitration time. For those that offer more, it’s usually because they are running more loops on the backend (typically by putting more HBAs in their controller heads). The more loops on the backend, the more you have to rely on your controllers to manage the added complexity. When you are done reading my blog post, go check out Rob Peglar’s post on storage controller “feature creep”, called Jack of All Trades, Master of NONE!. At the end of the day, the limitations of FC-AL on the backend are nothing new.
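The arithmetic behind that limit is worth spelling out. FC-AL’s addressing scheme allows 127 valid loop addresses (AL_PAs), one of which is normally taken by the controller’s own loop port – hence the 126-device ceiling, and hence why the common 224-drive, dual-loop array is already running each loop near full:

```python
# FC-AL defines 127 valid arbitrated-loop physical addresses (AL_PAs);
# one is normally reserved for the controller's loop port.
AL_PAS = 127
DEVICES_PER_LOOP = AL_PAS - 1   # 126 addressable devices per loop

loops = 2                 # typical dual-loop backend
drives_per_loop = 112     # the common 224-drive configuration
total_drives = loops * drives_per_loop

print(f"{total_drives} drives total")
print(f"each loop at {drives_per_loop / DEVICES_PER_LOOP:.0%} "
      f"of its address space")
```

At roughly 89% of the address space per loop, there is nowhere left to grow except more loops – which means more HBAs and more controller overhead, exactly the problem described above.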

About 4 years ago, we at Xiotech became tired of dealing with all of these issues. We rolled out a full fabric backend on our Magnitude 3D 3000 (and 4000) solution and deployed it in a number of accounts. Mostly it was used for our GeoRAID/DCI configuration, where we split our controllers and bays between physical sites up to 10Km apart. Essentially, each bay was a loop unto itself, directly plugged into a fabric switch. Fast forward to our Emprise product family, and we’ve completely moved away from FC-AL on the backend. We are 100% full, non-blocking, sweet and as pure as your mama’s homemade apple pie fabric, with all of the benefits that it offers!!

My opinion (are you scooting toward the front of your chair in anticipation?) is that unless you just enjoy running things on hubs, I would STRONGLY advise that if you are looking at purchasing a new storage array, you make sure they are not using 15-year-old architecture on the backend!! If you are contemplating architecting a private cloud, you should first go read my blog post on “Building resilient, scalable storage clouds” and apply the points I’ve made to that endeavor. Also, if you really are trying to decide which solution to pick, I would suggest you check out Roger Kelley (@storage_wonk) over at http://www.storagewonk.com/. He talked about comparing storage arrays “apples to apples” and brought up other great differences. Not to mention, Pete Selin (@pjselin) over at his blog talked about “honesty in the storage biz”, which was an interesting take on “apples vs. apples” relative to configurations and pricing. Each of these blog posts will give you a better understanding of how we differentiate ourselves in the market.

I’ve always been a HUGE fan of Commvault. They just rock. When I was a Systems Engineer back in Austin in the early 2000s, I don’t think there was an account I didn’t take Commvault into to try and solve a customer’s backup issues. AND WE DIDN’T EVEN SELL COMMVAULT!!! They had cool technology that was clearly leaps and bounds above everyone else’s. Not to mention, they had some really cool people working for them as well (shout out to Jeanna, Joelle, RobK and of course Mr. Cowgil).

Fast forward a few years, and the release of Simpana, along with the addition of native deduplication, clearly gave Data Domain and various other deduplication solutions a run for their money. You would think that would be enough for one company!! I was pretty excited about their recent press release around adding cloud data storage as a tier option in Simpana. Dave Raffo over at SearchDataBackup.com did a really nice job of summarizing the announcement. It’s a clear sign that Commvault is still very much an engineering-driven organization. Which is just AWESOME!!

I think the biggest nugget I pulled out of the press release is Commvault’s native REST support. The more I hear about REST’s potential, the more I get excited about some of the endless possibilities it can offer. In this case, it allowed Commvault to easily extend their backup architecture to include third-party cloud solutions like Amazon S3, EMC Atmos and a slew of others. They didn’t need to build a one-off integration for each vendor; they just relied on the open REST interfaces those providers already expose.

If you haven’t had a chance, you should check out Brian Reagan’s blog post that mentions something we are calling CorteX. Essentially, CorteX is our REST-based ecosystem through which developers can gain access to our Emprise solutions. This is the next evolutionary step in our ongoing open-architecture capabilities. As some of you are aware, we’ve been touting our Web Services Software Development Kit for some time. It has allowed us to do things like VMware Virtual View, which ties directly into Virtual Center to give VMware admins unprecedented abilities, and it let Microsoft developers create a provisioning application called SANMAN that integrates some of their processes directly with our storage. A RESTful API will take this to a greater level. Just as Commvault was able to tie directly into public cloud storage providers, CorteX will give developers unprecedented abilities to do really cool things.

I’ve probably said more than I should 🙂 So I’ll leave it with “more to come on CorteX as we get ready to release”. I’ve probably stolen enough of Brian’s thunder to get an e-mail from him!! It’s always good to hear from a Sr. VP, right!!

So keep an eye on Xiotech over the next couple of months, and start paying attention to vendors that support RESTful APIs!!!