Tag Archives: CX4

If you look up Emprise in the Merriam-Webster dictionary you will see that it means “an adventurous or daring enterprise.” That pretty much describes the Emprise product family’s launch two years ago. We did something that no one else was doing then, or is doing today. Imagine being able to start from scratch on a storage solution, and I’m not talking about controller software. I’m talking about a complete re-engineering and re-architecting of a solution, one built with enough resiliency to offer the only zero-cost 5-year hardware warranty in the storage industry. Not only is it super reliable, but it’s ridiculously fast and predictable. When you can support 600+ Virtual Desktop (Performance VDI) “bootstorm” instances at a whopping 20 IOPS per bootup in 3U of space, I would classify that as wicked fast!!!

In those two years we have not rested on our laurels. Steve Sicola’s team, headed up by our VP of Technology David “Gus” Gustavsson, has really outdone itself with our latest Emprise product launch. Not only did we move our entire user interface from “Web Services” to a RESTful API (ISE Manager, which I’ll blog about later, and our iPhone/iPad app), we also released our 20-drive 2.5” DataPac, which puts 40 2.5” drives in 3U of space for about 19.2TB of capacity and a TON of performance. His team also released our ISE Analyzer (an advanced reporting solution built on our CorteX RESTful API, www.CorteXdeveloper.com, which I’ll blog about soon) and the next release of our Emprise product family, the Emprise 9000. I swear his team doesn’t sleep!!!

So, the Emprise 9000 is a pretty unique solution in the market. Today, when you think “scale out” architecture the first thing you might think about is NAS. Hopefully our ISE NAS!! We hope that moving forward you will also think of our Emprise 9000. The Emprise 9000’s ability to scale to 12 controllers puts it way above the 8 controllers the 3PAR solution scales to, and above the two controllers the rest of the storage world produces (EMC Clariion, Compellent, HP EVA, IBM XIV, etc.). When married with our Intelligent Storage Element (ISE), it truly gives our customers the most robust, scalable solution in the storage market today.

Let’s be clear, the Emprise 9000 is not just a controller update. It’s a combination of better, faster controllers, a RESTful API and our ISE technology, combined to solve performance-starved application issues in Virtual Desktops, Exchange, OLTP, data warehouses, virtual servers and various other applications found in datacenters today. The ability to give predictable performance whether the solution is 10% utilized or 97% utilized is a very unique feature. Did I mention it comes with our zero-cost hardware maintenance? 24x7x365!!!

So for those keeping a tally at home, and for those competitors that want a little more information on what the Emprise 9000 can do, here is a quick list (this is not all the features):

Each controller has dual quad-core Nehalem CPUs!!

Scale-out to 12 Controller pairs

8Gb Fibre Channel ports

N-Port Virtualization (NPIV)

1Gb or 10Gb iSCSI ports (10Gb later this quarter)

You can run both FC and iSCSI in the same solution.

Scalable from 1 to 96 ISEs of any size

Max capacity of 1.8PB with 96 of the 19.2TB ISEs

Support for greater-than-2TB LUNs

LUNs up to 256TB in size

Thin Provisioned Volumes

Snapshots

Read-only snapshots

Writeable snapshots as well (think “smart-clone” technology for VDI)

Heterogeneous Migration

Want to migrate off that EMC, HP, 3PAR or HDS array? We can do it natively in our storage controllers.

Sync/Async native IP or FC replication
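A quick back-of-the-envelope check on the capacity claim in that list. This is just a sketch; it assumes decimal units (1 PB = 1000 TB):

```python
# Sanity check of the "96 ISEs of 19.2TB = 1.8PB" capacity figure above.
ISE_CAPACITY_TB = 19.2   # the 40-drive 2.5" DataPac configuration
MAX_ISE_COUNT = 96       # maximum ISEs behind one Emprise 9000

max_tb = ISE_CAPACITY_TB * MAX_ISE_COUNT
print(f"{max_tb:.1f} TB = {max_tb / 1000:.2f} PB")  # 1843.2 TB = 1.84 PB
```

So the quoted 1.8PB is the 1843.2TB product, rounded down.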

So, as you can see it’s a pretty impressive list!! And as with all new products, we will be adding new features pretty quickly, so stay tuned to announcements from us around the 9000. BUT there’s more!! I just can’t really go into it today 🙂 Stick around a couple of months for some even cooler stuff Gus’s team will be rolling out. I just got back from a week in Vegas getting fed by a firehose about all the stuff we will be rolling out by the end of the year. WOW!! Impressive to say the least!!! 🙂

Posted on May 10, 2010

Stalled Virtualization Projects? – How Xiotech can help you UN-STICK these deployments.

Xiotech is in a huge partner recruitment phase this year. Things have been going just fantastic! However, one problem we are running into is trying to get some of these larger partners to give us the time of day. Shocking, I know. Who wouldn’t want to carve out 60 minutes of their time to talk to “Ziotech”? (I didn’t misspell that; it happens to us ALL THE TIME.) Once we get our foot in the door, it’s usually 20 minutes of them explaining that they carry EMC, NetApp, Pillar, HDS, Compellent, etc. They always explain that they just can’t take on yet another storage vendor. What’s really interesting is that we typically tell them we don’t want to replace any of their current storage offerings. This usually leads to a skeptical look 🙂 I usually tell them that we have a very unique ability to “un-stick” Virtual Desktop opportunities. Let me explain a little further.

It never fails: VDI projects seem to get stalled, or simply get stuck in some rut that the prospect or partner can’t get out of. Now, a stuck project can come in many shapes and sizes. Sometimes it’s time and effort, sometimes it’s other projects in the way. But the vast majority of the time it’s cost/ROI/TCO concerns. Not just from a justification point of view, but most of the time from the upfront CAPEX view. This has been especially true with 1000+ seat solutions. Like I said, I just keep hearing more and more about these situations from our partners. What normally follows is, “well, the project is on hold due to funding issues.” So how can we differentiate ourselves in this kind of opportunity? Funny you should ask!!

I typically like to describe our ISE solution as a solution that has a VERY unique ability to do 3 things better than the rest.

#1 – We give you true cost predictability over the life of the project.

Let’s be honest, if you are about to deploy a 5000+ desktop VDI solution you are probably going to do this project over a 3-year time frame, right? Even if it’s only 500 desktops, why is it that when we look into these solutions we only see 3 years of maintenance on the most expensive CAPEX item, which is storage? By the time you get all 5000+ systems up and running, it’ll be time for year 4 and 5 maintenance on your storage foundation. If this isn’t your first storage rodeo, then you know that years 4 and 5 can be the most painful in terms of cost. Not to mention, what’s really cool about our solutions is the “Lego” style approach to design. We can tell you what the cost is for each ISE you want to buy; since they are 3U “blades,” you can simply add the number you need to meet whatever metric you have in place and know the cost of each one. As you can see, we do “cost predictability” pretty well.

#2 – We give you performance predictability.

With our 3U Emprise 5000 we have the very unique ability to predict performance characteristics. This is very difficult for legacy storage vendors: their array’s IOPS pool can swing plus or minus 80% depending on how full the solution is from a capacity point of view. We deliver 348 SPC-1 IOPS per SPINDLE at 97% capacity utilization. Keep in mind that most engineers sizing legacy storage arrays plan on 120 to 150 IOPS per spindle. So based on that alone, we can deliver the same performance with half the spindles!!
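To make the “half the spindles” claim concrete, here is a rough sizing sketch using the figures above. The 20,000 IOPS workload is a made-up example for illustration, not a benchmark result:

```python
import math

def spindles_needed(target_iops: int, iops_per_spindle: float) -> int:
    """Minimum whole spindles required to hit a target IOPS figure."""
    return math.ceil(target_iops / iops_per_spindle)

TARGET_IOPS = 20_000  # hypothetical workload
legacy = spindles_needed(TARGET_IOPS, 150)  # common planning figure for legacy arrays
ise = spindles_needed(TARGET_IOPS, 348)     # the SPC-1 per-spindle result quoted above
print(legacy, ise)  # 134 vs 58 spindles
```

At 348 versus 150 IOPS per spindle, the same workload needs well under half the drives.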

#3 – We can give you capacity predictability.

Because of the linearity of our solution, when you purchase the Emprise 5000 we can tell you exactly how much usable, after-RAID capacity you will have available. Best-practice usable capacity for our solution is 96% full; that’s where we do all of our performance testing. Compared with the industry average of anywhere from 60% to 80%, your capacity “mileage” will vary!!

So why should this be important to solution providers and customers? Back to my VDI comments. If you are in the process of evaluating, or even moving down the path to deploy, VDI, how important is it for you to fully understand your storage costs when trying to design this out? If I could tell you that you can support 2000 VDI instances in 3U of space, that this 3U of space can hold 19TB of capacity, and that the solution costs $50,000 (I’m pulling this number out of the…..well, air), that could really be a pivotal point in getting your project off the ground, don’t you think? Like I said, no one deploys a 5000-seat solution all at once. You do this over a number of years. With our Storage Blades, you can do just that: simply purchase one ISE at a time. With its predictable costs, predictable capacity and, most importantly, its predictable performance, you have the luxury of growing your deployment over time without having to worry about a huge upfront CAPEX hit. Not to mention that a 5-year hardware warranty better aligns with the finance side of the house and their typical 5-year amortization process. No hidden year 4 and 5 maintenance costs!!
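The “one ISE at a time” buildout above can be sketched in a few lines. The $50,000 price and 2,000 seats per ISE are this post’s illustrative numbers, not a quote:

```python
import math

SEATS_PER_ISE = 2_000   # illustrative VDI density from the post
COST_PER_ISE = 50_000   # the "pulled out of the air" price above

def buildout(total_seats: int):
    """ISE count, total cost, and cost per seat for a phased deployment."""
    ise_count = math.ceil(total_seats / SEATS_PER_ISE)
    cost = ise_count * COST_PER_ISE
    return ise_count, cost, cost / total_seats

print(buildout(5_000))  # (3, 150000, 30.0) for a 5,000-seat project
```

Because each ISE is an identical 3U building block, the cost curve is linear and known up front, which is the whole point of the “Lego” approach.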

So, if you are looking at a VDI project or you’ve looked at it in the past and just couldn’t justify it, give us a call. Maybe we can help lower your entry costs and get this project unstuck !!

If you are a VMware admin, or a hypervisor admin of any stripe, Xiotech’s “Virtual View” is the final piece of the very large server virtualization puzzle you’ve been working on. In my role, I talk to a lot of server virtualization admins, and their biggest heartburn is adding capacity, or a LUN, to an existing server cluster. With Xiotech’s Virtual View it’s as easy as 1, 2, 3. Virtual View uses CorteX (our RESTful API) to communicate, in the case of VMware, with the Virtual Center appliance to provision the storage to the various servers in the cluster. From a high level, here is how you would do it today.

I like to refer to the picture below as the “Rinse and Repeat” part of the process. Particularly the part in the middle that describes the process of going to each node of the server cluster to do various admin tasks.

VMware Rinse and Repeat process

With Virtual View the steps would look more like the following. Notice it’s “wizard” driven, with a lot of the steps processed for you. But it also gives you an incredible amount of “knob turning” if you want it as well.

Virtual View Wizard Steps
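To give a feel for what driving provisioning through a RESTful API looks like, here is a minimal sketch. The host name, endpoint path and payload fields are hypothetical illustrations invented for this example, not the documented CorteX interface:

```python
import json
from urllib import request

def build_provision_request(host: str, cluster: str, volume: str, size_gb: int):
    """Build a (hypothetical) REST call that carves a volume and presents it
    to every node in a vCenter cluster in one step, instead of the manual
    per-node "rinse and repeat" loop."""
    body = json.dumps({
        "cluster": cluster,
        "volume": {"name": volume, "size_gb": size_gb},
        "present_to_all_nodes": True,
    }).encode("utf-8")
    return request.Request(
        f"https://{host}/api/v1/volumes",   # illustrative path only
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_provision_request("ise.example.com", "ProdCluster", "vdi-ds-01", 500)
print(req.get_method(), req.full_url)
```

The point is that one API call expresses the whole intent, so the per-host admin steps in the “rinse and repeat” picture collapse into a single request the wizard fires for you.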

And for those that need to see it to believe it, below is a quick YouTube video Demonstration.

If you run a VMware-specific cluster of 3 servers or more (for H.A. purposes, maybe), then you should be most interested in Virtual View!!!

I’ll be adding some Virtual View specific blog posts over the next few weeks, so make sure you subscribe to my blog on the right-hand side of this window!!

If you have any questions, feel free to leave them in the comments section below.

By the way, if by chance 10,000 is just not enough users for you, don’t worry: add a second ISE and DOUBLE IT TO 20,000. Need 30,000? Then add a THIRD ISE. 100,000 users in 10 ISEs, or 30U of rack space. Sniff sniff….I love it!!!!!!!!!!!!

By the way – Check out what others are doing:

Pillar Data = 8,500 Exchange Users with 24GB of Cache !!! I should say, our ISE comes with 1GB. It’s not the size that counts, it’s HOW YOU USE IT !! 🙂

What do the Pacer, the Yugo and Arbitrated Loop have in common? You are probably running one of them in your datacenter.

George Crump recently blogged over at InfoWorld and asked, “do we really need Tier 1 storage?” It struck me as an interesting topic, and while I disagreed with his reasons on where he put our solution, I tend to agree that the others mentioned are right where they should be. In his article he specifically mentions some of the reasons both the monolithic array manufacturers and the “modular guys” have “issues,” and he zeroed in on performance and scalability. Now, his article was speaking about the front-end controllers, but I think he missed out on pointing to the backend architectures as well. I thought this would make a great blog post 🙂 As you may recall, in my “Performance Starved Applications” blog and my “Why running your hotel, like you run your Storage array can put you out of business” blog, I said that if you lined up the various storage vendors next to each other, about the only difference is the logo and the software loaded on the controllers.

Did you also know that if you looked behind those solutions you would see a large hub architecture, also known as our dear old friend “Mr. Arbitrated Loop”? This is like running your enterprise-wide Ethernet infrastructure on Ethernet hubs. Can you imagine having to deal with them today? For all the same reasons we dropped Ethernet hubs like a bad habit, you should be doing the same thing with your storage array manufacturer if they are using arbitrated loops in their backend storage. Talk about a huge bottleneck to both capacity and performance at scale!! So what’s wrong with Fibre Channel Arbitrated Loop (FCAL) on the backend? Well, for starters, it doesn’t scale well at all. Essentially you can only reference 126 components (a disk drive, for example) per loop. Most storage arrays support dual loops, which is why you typically see a lot of 224-drive solutions on the market today, with 112 drives per loop, approaching the limit and creating a very long arbitration time. Now, for those that offer more, it’s usually because they are running more loops on the backend (typically by putting more HBAs in their controller heads). The more loops on the backend, the more you have to rely on your controllers to manage the added complexity. When you are done reading my blog post, go check out Rob Peglar’s blog post about storage controller “feature creep,” called Jack of All Trades, Master of NONE! At the end of the day, the limitations of FCAL on the backend are nothing new.
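The loop arithmetic behind those numbers, as a quick sketch:

```python
# Why dual-loop FC-AL backends top out around 224 drives.
DEVICES_PER_LOOP = 126   # addressable devices on one arbitrated loop
LOOPS = 2                # typical dual-loop backend

typical_drives = 112 * LOOPS            # the common shipping configuration
hard_ceiling = DEVICES_PER_LOOP * LOOPS # absolute address limit (in practice
                                        # lower, since controller ports also
                                        # consume loop addresses)
print(typical_drives, hard_ceiling)  # 224 drives shipping, 252 addresses max
```

And since every device on a loop shares one arbitration domain, getting close to that ceiling is exactly where arbitration time hurts the most.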

About 4 years ago we at Xiotech became tired of dealing with all of these issues. We rolled out a full fabric backend on our Magnitude 3D 3000 (and 4000) solution and deployed it in a number of accounts. Mostly it was used for our GeoRAID/DCI configuration, where we split our controllers and bays between physical sites up to 10km apart. Essentially each bay was a loop all to itself, directly plugged into a fabric switch. Fast forward to our Emprise product family, and we’ve completely moved away from FCAL on our backend. We are 100% FULL, non-blocking, sweet and as pure as your mama’s homemade apple pie Fabric, with all of the benefits that it offers!!

My opinion (are you scooting toward the front of your chair in anticipation?) is that unless you just enjoy running things on hubs, I would STRONGLY advise that if you are looking at purchasing a new storage array, you make sure it is not using 15-year-old architecture on its backend!! If you are contemplating architecting a private cloud, you should first go read my blog post on “Building resilient, scalable storage clouds” and apply the points I’ve made to that endeavor. Also, if you really are trying to decide which solution to pick, I would suggest you check out Roger Kelley (@storage_wonk) over at http://www.storagewonk.com/. He talked about comparing storage arrays “apples to apples” and brought up other great differences. Not to mention, Pete Selin (@pjselin) over at his blog talked about “honesty in the storage biz,” which was an interesting take on “apples vs. apples” relative to configurations and pricing. Each of these blog posts will give you a better understanding of how we differentiate ourselves in the market.

Recently I wrote about why “cost per raw TB” isn’t a very good metric for comparing storage arrays. In fact, my good friend Roger Kelley over at StorageWonk.com wrote a nice blog post specifically on comparing storage arrays “apples to apples.” We don’t say this as a means to simply ignore some of the features and functions that other vendors offer. It’s just our helpful reminder that there is no “free storage lunch.”

So let me take you on a different type of journey around “cost per raw TB” and “cost per useable TB” and apply it to something outside of technology. Hopefully this will make sense!!

Let’s assume you are in the market for a 100-room hotel. You entertain all sorts of realtors who tell you why their hotel is better than the others. You’ve decided that you want to spend about $100,000 for a 100-room hotel, which averages out to $1,000 per room. So, at a high level, all the hotels offer the same cost per room. Let’s call this “cost per raw occupancy.” It’s the easy way to figure out costs, and it looks fair.

You narrow down your list of hotels to three choices. We’ll call them hotel C, hotel N and hotel X. Hotel C and N have the same architecture, same basic building design, essentially they look the same other than names and colors of the buildings. Hotel X is unique in the fact that it’s brand new and created by a group that has been building hotel rooms for 30+ years with each hotel getting better and better. They are so confident in their building that it comes with 5 years of free building maintenance.

So, you ask the vendors to give you their “best practice, not to exceed” hotel occupancy rate. Hotel C tells you they have some overhead associated with some of their special features, so their number is about 60 rooms that can be rented out at any given time. The reservation system will let you book an unlimited number of rooms, but once you get over 60, things just stop working well and guests complain. Hotel N says they can do about 70 rooms before they have issues. Hotel X says they have tested at 96-room occupancy without any issues at all.

So, while at a high level hotels C, N and X were all $1,000 a room, after further review hotel C is really about $1,667 a room, hotel N about $1,429 a room and hotel X about $1,042 a room. Big difference!! Let’s assume each of these vendors could “right size” their hotel to meet your 100-room request, with the per-room cost staying the same. Hotel C would now cost you about $167,000, hotel N about $143,000 and hotel X about $104,000. That, my friend, is what I like to call “cost per usable occupancy”!!

Another way to do this is to have hotels C and N right-size down to your budget number based on “cost per usable occupancy.” If the $100,000 budget is what matters most, and you understand that you will only get to rent out 60 or 70 rooms from the other hotels, then you could save money by purchasing only what you need from hotel X: roughly 63 rooms gets you the same 60 usable rooms, bringing hotel X’s cost down to about $62,500, a savings of nearly $40,000!! The net-net is you get 60 usable rooms from all 3 hotels, but one offers you a HUGE savings.
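The hotel math above, worked out as a sketch (the prices are the analogy’s made-up numbers):

```python
def cost_per_usable_room(sticker_price: float, usable_rooms: int) -> float:
    """What each *rentable* room really costs you."""
    return sticker_price / usable_rooms

PRICE = 100_000  # sticker price for a nominal 100-room hotel
hotels = {"C": 60, "N": 70, "X": 96}  # best-practice usable rooms

for name, usable in hotels.items():
    per_room = cost_per_usable_room(PRICE, usable)
    print(f"Hotel {name}: ${per_room:,.0f} per usable room")
# Hotel C: $1,667 / Hotel N: $1,429 / Hotel X: $1,042
```

Swap “rooms” for “TB” and “occupancy rate” for “best-practice utilization” and the same three lines compare storage arrays.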

At the end of the day, as the owner of that hotel you want as many rooms rented out as possible. The last thing you want is your 100-room hotel capable of only 60% or 70% occupancy.

So, if you are in the market for a 100-room hotel, or a storage array, you might want to spend a little more time figuring out what the best-practice occupancy rate is!! It’ll save you money and heartburn in the end.

I’ll leave you with this – based on the array you have today, what do you think your occupancy rating would be for your 100 room hotel? Feel free to leave the vendor name out (or not) 🙂

How to build resilient, scalable storage clouds and turn your IT department into a profit center!!

If you’ve been living under a rock for the last year, the topic of cloud-based computing might be new to you. Don’t worry about it at this point; there are CLEARLY more questions than answers on the subject. I get asked at just about every meeting what my interpretation of “cloud” is. I normally describe it as an elastic, utility-based environment that, when properly architected, can grow and shrink as resources are provisioned and de-provisioned. It’s a move away from “silo based” infrastructure and into a more flexible, scalable, utility-based solution. From a 30,000-foot view, I think that’s probably the best way to describe it. Then the conversation usually rolls to “so, how do you compare your solution to others” relative to cloud. Here is what I normally talk about.

First and foremost, we have sold solutions that are constructed just like everyone else’s. Our Magnitude 3D 4000 product line is built with pretty much the exact same pieces and parts as Compellent, NetApp FAS, EMC Clariion, HP EVA, etc.: Intel-based controller motherboards, QLogic HBAs, Xyratex or other SBOD drive bays connected via arbitrated loops. Like I’ve said in prior posts, line each of these up, remove the “branding,” and you wouldn’t be able to tell the difference. They all use the same commodity parts. Why is this important? Because none of those solutions would work well in a “cloud” based architecture. Why? Because of all the reasons I pointed out in my “Performance Starved Application” post, as well as my “Cost per TB” post: THEY DON’T SCALE WELL and they have horrible utilization rates. If you really want to build a storage cloud, you have to zero in on its most important aspects, or what I like to refer to as “The Fundamentals.”

First, you MUST start with a SOLID foundation. That foundation must not require a lot of “care and feeding,” and it must be self-healing. With traditional storage arrays, you could end up with 100, 200 or even 1,000 spinning disks. Do you really want to spend the time (or the HUGE maintenance dollars) swapping out and dealing with bad disks? Look, don’t get me wrong, I get more than a few eye rolls when I bring this up. At the end of the day, if you’ve never had to restore data because of a failed drive, or any other issue related to failed disks, then this is probably not high on your list of worries. For that reason, I’ll simply say: why not go with a solution that guarantees you won’t have to touch the disks for 5 years and backs it up with FREE HARDWARE MAINTENANCE (24/7/365, 4-hour response)!! Talk about putting your money where your mouth is. From a financial point of view, who cares if you’ve never had to mess with a failed drive; it’s freaking FREE HARDWARE MAINTENANCE for 5 years!!

Second, it MUST have industry-leading performance. Not just “bench-marketing” type performance; I mean real audited, independent, third-party, validated performance numbers. The benchmarks from the Storage Performance Council are a great example of third-party validation. You can’t just slap SSD into an array and say “I have the fastest thing in the world.” Here is a great example: if you are looking at designing a Virtual Desktop Infrastructure, then performance should be at the top of your design criteria (boot storms). Go check out my blog post on the subject, called “VDI and why performance matters.”

Finally, you need the glue that holds all of this together from a management and reporting point of view. Web Services is that glue. It’s the ubiquitous “open standard” tooling on which many, many application solutions have been built. We are the only company that builds its storage management and reporting on Web Services, and we have a complete WSDL to prove it. No company epitomizes the value of Web Services better than Microsoft. Just Google “SANMAN XIOTECH” and you’ll see that the folks out in Redmond have developed their own user interface to our solution (via our WSDL) to enable automated storage provisioning. HOW AWESOME IS THAT!! Not to mention, Web Services also gives you the ability to do things like develop “chargeback” options, which turn the information technology department into a profit center. We have a GREAT customer reference in Florida that has done this very thing. They’ve turned their IT department into a profit center and used those funds to refresh just about everything in their datacenter.
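As an illustration of the chargeback idea, here is a trivial sketch of metering provisioned capacity per department. The billing rate and department figures are invented for the example; a real implementation would pull the provisioned numbers from the management API:

```python
RATE_PER_GB_MONTH = 0.25  # hypothetical internal billing rate, $/GB/month

def monthly_bill(provisioned_gb: int) -> float:
    """Charge a department for the capacity it has provisioned."""
    return provisioned_gb * RATE_PER_GB_MONTH

departments = {"Finance": 2_048, "Engineering": 8_192}  # GB provisioned (made up)
for dept, gb in departments.items():
    print(f"{dept}: ${monthly_bill(gb):,.2f}/month")
# Finance: $512.00/month, Engineering: $2,048.00/month
```

Once the reporting interface exposes per-volume ownership and capacity, this kind of billing loop is a few lines of glue, which is exactly how IT becomes a profit center instead of a cost center.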

So those are the fundamentals. In my opinion, those are the top 3 things you need to address before you move any further into the design phase. Once your foundation is set, you can zero in on some of the value-added capabilities you would like to offer as a service in the cloud: things like CDP, CAS, de-duplication, replication, NAS, etc.