*Ed note* I wrote this over 3 months ago but didn’t get around to posting. Still relevant though.

Everyone knew SSDs would change the storage landscape dramatically, but the speed of development and the rate at which capacities are growing are still impressive. Dell has taken the next step in SSD evolution by announcing support for TLC SSDs in our SC series storage arrays. We were first to market with high-capacity TLC drives in an enterprise storage array, and as far as I know we are still the only vendor that can mix multiple SSD types in the same pool.

Why do you care? It all comes down to the differences in SSD cost, quality, resiliency and, perhaps most importantly, *capacity*.

There is a lot of information out there about the various SSD types and their use cases, so I won't go into much detail here (see Pure's SSD breakdown article for reference). There are three types of SSD supported in Dell SC series (Compellent) arrays:

| Type | Class | Writes | Reads | $/TB |
| --- | --- | --- | --- | --- |
| SLC | WI (Write Intensive) | Great | Great | High |
| TLC | MRI (Mainstream Read Intensive) | Average | Still great | Excellent, and the highest capacity (in a 2.5 inch form factor to boot) |

Either way, a TLC drive massively outperforms a 15K spinning drive.

*Ed Note* The 1.6TB WI drives are Mixed Use WI drives.

Where the SC series has its *magic sauce* is in using tiers of different disk types/speeds to move data within the array: hot data on Tier 1, warm or write-heavy data on Tier 2, and older, colder data on Tier 3. Typically Tier 3 would be 7.2K NLSAS spinning drives, as they offer the best cost/TB. The SC series can mix and match drive types in the same pool because of *Data Progression*. New writes and the heavy lifting are handled by the top tiers, while the bottom tiers are only used periodically, and only for reads.

The largest TLC drive at the time of writing (Sept 2015) is 3.8TB. That's 3.8TB in a 2.5 inch caddy, with low power consumption and no moving parts. I don't have exact performance details, but for read workloads the MRI drives perform about the same as the PRI drives, while for random write workloads they deliver about half the performance of a PRI SSD. (*Rule of thumb: every workload is different. Speak to your friendly local storage specialist to get the right solution for your workloads.*) Compare that with a 15K spindle and you get better rack density, power savings and a huge performance boost per drive. Then consider a 4TB NLSAS drive: 3.5 inch, 80-100 IOPS with a random workload, spinning constantly, so higher power consumption and moving parts. Sure, there are situations where an NLSAS drive can spin down when not in use, but that's not the norm. The TLC drive will be more expensive than the NLSAS drive, but once you take into account power, footprint and the added performance over the life of the array, it becomes a different calculation.
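To show why it becomes a different calculation, here is a toy lifetime-cost sketch. Every number in it is a hypothetical placeholder I made up for illustration (drive prices, wattage, electricity cost and per-drive IOPS), not Dell figures:

```python
def lifetime_cost(drive_price, watts, years=5, kwh_price=0.25):
    # Purchase price plus electricity over the life of the array.
    # All inputs are hypothetical placeholders, not real pricing.
    return drive_price + watts / 1000 * 24 * 365 * years * kwh_price

# Made-up drives: a 3.8TB TLC SSD (6W, ~5000 read IOPS) vs a 4TB NLSAS (11W, ~80 IOPS)
ssd = lifetime_cost(4000, 6)
nlsas = lifetime_cost(800, 11)
print(round(ssd / 3.8), round(nlsas / 4.0))        # $/TB: NLSAS still wins on raw capacity
print(round(ssd / 5000, 2), round(nlsas / 80, 2))  # $/IOPS: the SSD wins by an order of magnitude
```

Per raw TB the NLSAS drive still looks cheaper even after power, but per IOPS delivered over the array's life the TLC drive is more than an order of magnitude better. That is the shift in the calculation.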

Yup, a nearly 4TB drive in a 2.5 inch form factor that is low power and 1000s of times faster than a 15K spinning drive, at about the same cost per GB as a 15K drive. This is just the beginning; there are larger capacities on the roadmap. I wouldn't be surprised to see the end of 15K and 10K drives in any of our storage arrays by the end of next year.

While we are on the topic, this is an excellent blog post on the newer types of flash storage being tested and developed to take enterprise storage into the future, whatever that looks like.

What are the gotchas? It can't all be peaches and cream. As you can see in the table above, there are different SSD types for different workloads. If you have a write-heavy environment then the RI drives may not be a good fit because of the high erase cost and NAND cell endurance. For that workload you would be better off with the WI SSDs.

However, most of the workloads I see, and the stats that come from our awesome free DPACK tool, show that most environments run at about a 70/30 R/W% split with an average IO size of 32K (a typical VM environment). These are great candidates for the RI drives.

Here is the great part for Compellent SC: if you want the best of both worlds, we can do that using tiering and Data Progression, leveraging a small group of WI drives to handle the write workload and a larger group of RI drives to handle all the read traffic, even though to the application it's just one bucket of flash. That means we can provide an all-flash, or hybrid, array with loads of flash at a much, much lower $/GB, which is essential with current data growth rates.

Data Progression in SC series

Here is an example. You have a VMware workload that you would like to turbocharge. You want to support more IOPS, but you also want those IOPS to be sub-millisecond. You reach out to me, I talk about myself for the first 15 mins, and then we run the free DPACK tool to analyse your workload.

DPACK reports:

- 70/30 R/W% with an average IO size of 32K, sitting at 5,000 IOPS 95% of the time and peaking to 12,000 during backups.
- Latency spikes throughout the day when the SQL devs run large queries at 10am and 2pm, but it usually sits at about 3ms-10ms. Not too bad, although read latency sometimes jumps to 30ms during backups.
- Queue depth is pretty good and CPU/MEM usage is fine. Capacity is 60TB used, but a lot of that is probably cold data.
- Looking at the backups, about 2TB of data changes per day.
- The SQL devs want to lock the SQL volumes into flash because they write shitty queries and can't be assed optimising them. (I used to be an Oracle DBA; devs are lazy.)
- Growth is no more than 30% a year, but a lot of that will be data growth, not workload growth.

This is a very common workload. It helps that Australia and New Zealand are highly virtualised, so a lot of the workloads we see are ESX, with Hyper-V becoming more common. With this much information it's reasonably simple to design an SC array that I would be 100% confident would nail that workload.

It's not a massive system and growth will mainly be in Tier 3, but there are a few writes from the SQL databases, so an SC4020 array with WI SSD, RI SSD, and NLSAS for the cold tier should do the trick.

The SC array handles tiering and incoming writes very differently to a lot of arrays on the market. All new writes come into the array at Tier 1 (the fastest tier) as RAID 10 (the fastest write path). This is done deliberately to get the write committed and the ack back to the application as fast as possible. The challenge is that R10 has a 50% capacity overhead, and with flash that can mean $$$, which is where the two tiers of SSD come into their own. Every couple of hours (2 hours by default), a replay (snapshot) is taken by the SC array, marking the volume's blocks as read-only. That data is then migrated to the second, RI flash tier as R5 to maximise usable capacity; because the data isn't read/write anymore, there is no need for it to be R10. SC uses redirect-on-write, so new writes land in Tier 1 as R10 and the volume pointers are simply updated.

That's a lot of info in a small paragraph, but you can see what is happening: the WI tier does all the heavy lifting in the array, and older data is moved to the RI tier to be read from. Then, as the data gets cold, it is typically moved to Tier 3 (NLSAS in my example) as R6. Same data, moved to the right tier at the right time to maximise performance and $/GB.
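As a rough mental model, the write-then-demote cycle can be sketched like this (my own toy code, not the actual Storage Center implementation; the tier names and structure are illustrative only):

```python
TIERS = {1: "WI SSD / R10", 2: "RI SSD / R5", 3: "NLSAS / R6"}

class Volume:
    def __init__(self):
        self.blocks = []  # each block tracks its tier and whether it is frozen

    def write(self, n):
        # All new writes land in Tier 1 as R10 to get the ack back fastest
        self.blocks += [{"tier": 1, "read_only": False} for _ in range(n)]

    def replay(self):
        # A replay (snapshot) freezes the blocks; frozen data no longer
        # needs R10, so it is restriped down to the RI flash tier as R5
        for b in self.blocks:
            if b["tier"] == 1:
                b["read_only"] = True
                b["tier"] = 2

    def age_out(self):
        # Later, Data Progression moves cold read-only data to NLSAS as R6
        for b in self.blocks:
            if b["tier"] == 2 and b["read_only"]:
                b["tier"] = 3

v = Volume()
v.write(4)    # active data sits in WI flash
v.replay()    # the 2-hourly replay demotes it to RI flash
v.age_out()   # eventually it lands on spinning disk
print({TIERS[b["tier"]] for b in v.blocks})  # {'NLSAS / R6'}
```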

The replay is taken every 2 hours and the data is then moved down to Tier 2. This means we only need to size Tier 1 for the required IOPS plus enough capacity to hold 2 hours' worth of writes x 2 (the R10 overhead). In my example above there is about 2TB of data written every day (assuming the worst case, where every write is a new write). Break that into 2-hour chunks and it's less than 200GB per replay; double it for R10 and I would only need 400GB of WI SSD to service that 60TB workload. The reality is that there are spikes during the day, and the DPACK tool identifies those, but you get my drift.
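That back-of-the-envelope sizing looks like this (my own helper function, using the worst-case assumptions above):

```python
def tier1_gb_needed(writes_per_day_tb, replay_hours=2, raid10_factor=2):
    # Tier 1 only has to hold one replay window of new writes, doubled for
    # R10, assuming the worst case where every write is a brand new write
    windows_per_day = 24 / replay_hours
    per_window_gb = writes_per_day_tb * 1000 / windows_per_day
    return per_window_gb * raid10_factor

print(round(tier1_gb_needed(2)))  # 333 GB, comfortably inside 400GB of WI SSD
```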

So for Tier 1, let's go with 6 x 400GB WI drives (1 is a hot spare). I won't put exact figures here, but those drives with that workload would smash it out of the park at 0.2ms latency.

Now I can focus on Tier 2 almost purely from a capacity standpoint. Remember, this tier holds the data being moved down from the WI tier, but it also holds data classified as hot that gets read a lot. Everything in this tier will be R5 to get the best usable capacity number. They have 60TB, change 2TB a day, and the SQL DB they want to pin is 10TB, so I want to aim for about 18TB usable in this tier just to be safe. I don't have to worry about SSD write performance on this tier because it will be nearly 100% read, except when data is moved down every couple of hours.

So for Tier 2 I'll use 12 x 1.9TB MRI drives (1 hot spare). This gives me 18TB usable (not raw; you'll find Dell guys always talk usable), with plenty of room for hot data and to lock the entire SQL workload into this tier. You would need shelves of 15K drives to get the same performance.
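A rough check on that usable number (my own sketch; the 9-wide stripe and single hot spare are assumptions, and the real array stripes R5 across the pool and reserves extra space for sparing and metadata):

```python
def raid5_usable_tb(drives, drive_tb, hot_spares=1, stripe_width=9):
    # (stripe_width - 1) data drives per parity drive; stripe width and the
    # single hot spare are my assumptions, so treat this as a ballpark only
    raw = (drives - hot_spares) * drive_tb
    return raw * (stripe_width - 1) / stripe_width

print(round(raid5_usable_tb(12, 1.9), 1))  # ~18.6 TB, in line with the 18TB target
```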

By splitting the WI and RI tiers we get a level of flexibility that is difficult without tiering. If the write workload stays static, in other words around the same IOPS number and TB/day, there is no need to grow it. But say some other business units see the benefits the SQL guys are getting and want in on that action: we can grow the WI and RI tiers separately. Simply add a couple more 1.9TB RI drives and that tier gets bigger. We then change the Storage Profile on that volume (and with VVols we'll change it on the VM) and voila, that volume is now pinned to flash.

Finally, we need another 40TB for the rest of the workload, growing at 30% a year over three years, which comes to approximately 90TB.
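Compounding the growth gives the same ballpark:

```python
def projected_tb(current_tb, growth=0.30, years=3):
    # Compound the yearly data growth on the cold tier
    return current_tb * (1 + growth) ** years

print(round(projected_tb(40)))  # ~88 TB, so ~90TB of Tier 3 is a safe target
```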

Note: you can add drives into an SC array at any time and the pool will expand and rebalance, so you don't have to purchase everything upfront. Also, with thin provisioning, thin writes, compression, RAID optimisation etc. there are extra savings, but I'll leave those out for now.

Like the RI tier, all the data in here will be read-only, and for the larger drives it will be R6. Because we don't write to this tier apart from internal data movement, we squeeze as much performance out of the spinning rust as possible. The key is not to have too big a divide between the SSD tiers and the NLSAS tier. Again, DPACK allows us to size for the workload instead of guessing. We know the workload is 5,000 IOPS, so I want this tier to handle about 15-20% of the total, i.e. 1,000 IOPS (that's convenient). The NLSAS drives aren't being written to, so there is no RAID write penalty and I can assume 80 IOPS per drive. 12 drives gets me very close to my IOPS number with a hot spare, and magically it's also the number of 3.5 inch drives we can fit in a 2U enclosure. It's almost like I'm making this up. So I have the drive count, but I also want to get to 90TB usable at R6, and with 24 x 6TB drives we get about 100TB usable. The good thing is I know I have met the performance brief.
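Those two checks, IOPS first and then capacity, can be sketched as follows (the per-drive IOPS figure and the R6 stripe width are my own assumptions):

```python
def tier3_iops(drives, iops_per_drive=80):
    # Read-mostly tier: no RAID write penalty, so raw drive IOPS apply
    return drives * iops_per_drive

def raid6_usable_tb(drives, drive_tb, stripe_width=10):
    # (stripe_width - 2) data drives per 2 parity drives; stripe width is an
    # assumption, and spares plus system reserve eat into this further
    return drives * drive_tb * (stripe_width - 2) / stripe_width

print(tier3_iops(12))                 # 960 IOPS from a full 2U shelf, near the 1000 target
print(round(raid6_usable_tb(24, 6)))  # ~115 TB before reserves, hence the quoted ~100TB usable
```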

Still with me? This has been a longer explanation than I intended. Speaking of puns, I hoped some of the 10 puns in this post would make you laugh, but sadly no pun in ten did.

The end result: an SC4020 with 18 SSDs (6 spare slots for expansion) and 2 extra SC200 enclosures with 24 x 6TB NLSAS drives. 6RU in total, and it nails the performance and growth rates needed.

As you can see, having the option of multiple flash types makes for very flexible and cost-effective solutions.

Where to from here? I'm sure drive capacities will continue to grow and grow, with the newer types becoming more mainstream. Samsung released a 16TB SSD recently, and without doubt we'll see that sort of capacity in our arrays over time. Imagine having a 16TB SSD in 2.5 inch, or even 32TB? Or a 1RU XC630 Nutanix node with 24 x 4TB 1.8 inch SSDs. The only issue is we still have to back it all up!

*Final Ed note* Since I wrote this post Dell has released the SC9000 platform. When it is paired up with all flash it is a monster.

The latest CITV v3 pack has been released. The CITV appliance is free to use for customers with an existing maintenance contract and enables advanced functionality in VMware environments. It is highly recommended to use the CITV tools if you have VMware on Compellent.

Replay Manager for VMware: updated to support SCv2000 (FC and iSCSI only) and vSphere 6.0

If you have an SCv2000 you can get access to it immediately, as it comes with the latest 6.6.20 code. If you are running another Compellent model (SC8000, SC4020, etc.), just talk to Copilot and get on the key release list.

Just a quick one: my boss Jamie Humphrey will be hosting a webinar on flash at the price of disk, and how you can potentially keep all your workloads on flash without having to spend too many dollars.

If you are looking at flash solutions Dell has a pretty good story at the moment. Details are below and register here.

Webinar Details

Date – Wednesday 9 April

Time

11:00 – 11:45am (Sydney)

09:00 – 09:45am (Perth)

10:30 – 11:15am (Adelaide)

01:00 – 01:45pm (Auckland)

There has been an enormous shift in the economics of flash storage. If you’ve ever considered SSD as a way to increase performance for SAP, Oracle, SQL or big data analytics but found the cost prohibitive – the time to look again is now! Dell has taken an innovative approach to flash, one that allows us to deliver flash at a cost/GB that is comparable to spinning disk.

Dell Flash-Optimised Storage Solutions are first in class at tiering data across traditional rotating and flash optimised drives, easing the maintenance and management of your environment by redefining the economics of historically ‘expensive’ storage.

All the goodness of Compellent's enterprise features in a mid-market form factor and price.

It's here, it's here! Finally we are allowed to talk about it. The brand new all-in-one Compellent launched yesterday here in Sydney. Internally we have known about it for months (and so have a few customers, I bet), so it's great to be able to talk about it in the open.

DISCLAIMER: I work for Dell. Enterprise storage is my primary employment and it's my job to educate customers about Dell storage products. That said, the 10 people who read my posts all the time know I try to be upfront and honest, and that's what I'm aiming for in this post.

The Facts

The Dell Storage SC4020 is Dell's first full-featured, Fibre Channel, mid-tier array. The SC4020 is based on the current SC8000 platform but in a 2U form factor, compared to the minimum 6U starting point for the SC8000. It's a 2U shelf with 24 x 2.5 inch drives and two controllers at the back, kinda like an EQL but a different sort of shelf. Initially the launch is in APJ with FC support only; the worldwide launch will come later in the year and will support both FC and 10G iSCSI.

The Dell Storage SC4020 will support up to 120 drives and will scale to 408TB of raw capacity, nothing to sniff at. It runs the same software (Storage Center 6.5) and has most of the same features as its big brother, the SC8000. The SC4020 has a single quad-core processor per controller, and for that reason it won't support compression. Initially Live Volume won't be available, but that functionality is coming soon.

Connectivity will be 8 x 8G FC ports (4 per controller) and 4 x 10G iSCSI replication ports (2 per controller). For this APJ release only FC host connections are supported; as mentioned above, iSCSI host connectivity arrives with the global launch. For now, the iSCSI ports are purely for replication.

Back-end connectivity is 2 x 6G SAS ports per controller, which supports redundant loops for the 120 drives. I know a single SAS loop can support up to 160 drives, but 120 is the limit engineering has placed on the SC4020. Compare this to the SC8000, which can support up to 20 x 6G SAS ports.

Storage Center 6.5 (SC6.5) is also being released in conjunction with the SC4020. SC6.5 will be available for all new SC8000 and SC4020 installs and generally available to existing customers later in the year. If you're an existing customer and want earlier access to the SC6.5 code, reach out to your local Dell storage guy and we'll get you hooked up.

Like any IT project, it had to get a catchy nickname during development. Because of the 2U size of the box it was dubbed the "Baby Compellent", which is catchy but makes you think it's a younger, less capable, immobile eating-and-crapping machine. My name for it was the "Dwarf Compellent": full-featured, strong, robust, just in a smaller package, and with a rockin' beard. It never caught on.

Pricing will be announced by Dell over the coming days; however, it's always best to talk to your Dell sales rep. Expect this thing to be extremely affordable for the feature set, especially in all-flash configs (think 70%+ less than competitors).

Now, imagine you are on a desert island, sharing an apple crumble dessert with a guy called Des Ert. Take time to reflect!

Mid Market FC – Who buys FC these days?

Why did we do a smaller form-factor Compellent? Our customers told us to, that's why. To be more specific, the APJ market told us to. The midrange FC market in APJ is huge, especially in fine China. They love the stuff. The marketeers estimate APJ has 43% of the worldwide FC mid-tier market. That's a whole lotta small FC shops.

"But but but … you have EqualLogic. Can't you use it instead?". Look, I'm so glad you asked :). EQL is actually a great fit here, but there's one snag: it's iSCSI only (which works exceptionally well, but some of you just have to have fibre in your diet).

Right Sized Solutions for the Enterprise

Another sweet spot for the SC4020 is enterprises that want the Compellent goodness out at branch or remote offices, where the full-sized SC8000 might seem like overkill and cost is important. The SC4020 can replicate to an SC8000 or vice versa, and the disk configuration does not need to match at each site; just make sure you have enough space. Compellent replication is very flexible: sync, async, near-sync, kitchen sink, cascade, the whole lot. Think of it this way: we could easily have a solution with 8 SC4020s spread over various sites and distances, replicating back to a larger SC8000 in the main office or a hosted/managed DC run by the customer or Dell.

FATPOD – My new favourite IT acronym

FATPOD – Flash At The Price Of Disk. Awesome. I can't believe I hadn't noticed that acronym myself. It's great. The other one is AFA, All Flash Array, but FATPOD wins fat hands down. [Dell Hat On] Right now, what sets Dell Storage apart in the market is the way we can mix multiple SSD types and spinning disk to make it go like the clappers while still having lots of capacity to store all your downloads. [Dell Hat Off]

One of the key advantages to the Compellent platform is the way it can optimise and combine write optimised and read optimised flash (SSD) so it appears as one tier to your applications. This gives you the blinding performance SSD can give with the price/capacity benefits usually only available with slower spinning drives.

The SSD tier uses a mix of write optimised (WI) SLC drives and read optimised (RO) eMLC drives. Currently the WI drives are 400GB and the RO drives are 1.6TB. Both are 2.5 inch drives, so we can fit 24 in one shelf.

For example, a full shelf of flash with 6 x SLC and 18 x MLC will give you approx 24TB of usable space and about 80,000 to 100,000 IOPS at 80/20 R/W. Not too shabby at all.

The WI/RO (SLC/MLC) mix is what allows us to hit such a good price point per GB. Imagine if your branch office had more IOPS than your prod array.

Initial tests in the lab show the box has legs: using all flash, the engineers were able to max the array out at around 300,000 IOPS for a 100% read workload and about 200,000 IOPS at a 70/30 read/write ratio. Expect more official numbers to be released over the next couple of days.

The beauty of Data Progression in the SC4020 is that you can be flexible about how much flash and spinning disk you put into the array; it just comes down to what you need the array to do, and if you need to grow, just add more disk. The pool structure of Compellent allows you to dynamically add new drives on the fly and they get assimilated into the existing pool. Restriping, baby.

Now, imagine you are at home, in a bath, naked except for your socks, holding some wet pizza. Not pleasant, is it? Stop.

Perpetual Licensing

The licensing is the same as it has always been with Compellent: you buy the licenses and you get to keep them. Simple, right? Well, it turns out a lot of folks don't find it simple, mainly, I think, because everyone is programmed to expect to buy licenses every time they buy a new array. Dell's position on this is very different. You buy your Compellent and license the features you want; when it comes time to replace that array, you reuse your existing licenses. In our eyes it's still the same array, it's just been refreshed. On top of that, if you reach the max limit of an SC8000 and need to scale out, all good: you can leverage the licensing in the first array so the price is much lower. There is a small expansion cost, but it's nothing like the cost of licensing a whole new array. Do a Google search on Compellent perpetual licensing and no forklift upgrades and you'll find heaps of info, as it has been around for years.

In terms of licensing the SC4020, the base license covers 48 drives, and beyond that there are two additional license packs.

The SC4020 array will be supported in exactly the same way the SC8000 is, by Dell's award-winning Copilot support org, which is great news for customers … and also for the guys selling it.

The launch at Kingsleys Steakhouse on Monday with some customers, press and analysts.

I was lucky enough to attend the launch in Sydney on Monday (yesterday). It was a great lunch with some excellent conversation. I recognised a bunch of the journalists but not many of the analysts. Man, those journos are loud buggers.

The lunch was kicked off by Joe Kremer, VP and President South Asia, Australia and New Zealand. He explained (somewhat gleefully) that we no longer have to report back to Wall Street now that Dell is private, but things are going very, very well. He then handed over to Alan Atkinson, VP of Storage for Dell, to announce the SC4020. On a side note, he's a top bloke and very easy to talk to. I've found that a lot at Dell: the execs are very personable and will often stop for a chat, very unlike where I used to work. Alan went through all the stuff I have mentioned in the post above, although perhaps a little more formally. There were a few questions from various journos about the perpetual license model and how much flash we can stick in the thing, and on the whole I think it was received very well (although I'm biased). It was a short presentation, which I think the crowd appreciated, and then we got into lunch and a beer or two. From there it was more casual one-on-one discussions and a bit of a petting zoo with the demo SC4020 we had in the room.

I had a great day and I want to do a special shout out to David (John) Holmes for getting me involved and trusting me with this. It’s my first time with a product launch so hopefully it’s been informative and useful.

Some photos I took from the event are below. I think I’m getting a bit rusty with the camera.

DISCLAIMER #2: If you can’t tell I’m a dinkum Aussie so I spell like one. Optimized is Optimised, favorite is favourite, cheese is not orange etc.

Sooooo, after ages of asking and asking, we now have some official performance stats from the FS8600 running FluidFS, and it looks impressive, especially from a bang-for-buck approach. The results are great timing, with Dell Enterprise Forum happening in the US at the moment and the release of the new FluidFS v3.

The FS8600 is the FluidFS NAS frontend for Compellent. It's a scale-out NAS, so it can use multiple heads and multiple arrays but still present a single namespace, which is awesome if you need massive file systems, high performance or not. For example, at the moment the largest file system can be 1PB, but with the upcoming release that will double to 2PB.

All the drives used were SSD, actually a mix of SLC and higher-capacity eMLC, and with the 4-appliance config they hit 494,224 IOPS, a bloody lot. It shows what FluidFS is capable of if you can give it enough backend horsepower. To be honest, I don't really see any customers that need that kind of speed, but it's always good to know it can do it.

Those following me on twitter would have been sick to death of hearing me talk about it but the Aussie Dell Storage Forum (DSF) has come and gone and it was a bloody good day .. very long, but good. It was the best opportunity we have had to showcase Dell’s growing storage solutions and portfolio in Australia and the feedback so far has been very positive :).

Imma keep this post short, but @PenguinPunk has one of the best event overviews of all time (which saves me lots of time).

The keynote was the same style as the Boston DSF, with the whiteboard and talk about where Fluid Data is heading, especially around Fluid Cache and AppAssure. It was a bit different to Carter George's slow-and-steady-wins-the-race style; Brett Roscoe led it well and was really funny. The room was so packed that I heard there were 50-80 people outside the room watching on the big screen.

Then there were four main session tracks: Compellent, EqualLogic, PowerVault and "everything else" :). I was in the everything-else stream presenting the first two AppAssure sessions with Andrew Diamond. I was handling the live demo .. yes .. a live demo, danger, and it was running on the world's largest laptop, a Precision M6700. It had more guts than my R710 PowerEdge server, but after lugging it around for two days I am now an inch and a half shorter (in height, minds out of the gutter please). It all went well and no one asked me to repeat myself because of mumbling, so I consider that a win!

For those who don't know Andrew, I am like the Robin to his Batman for Dell Storage in Queensland and NT … because being from QLD, we love hanging out in our undies (thanks Dave). There's not a whole lot that boy doesn't know. I help him out with things like cranking up the twitter machine and knowing which meme is appropriate where.

On the floor was where most of the action was with two racks full of kit and a bunch of the storage specialists answering questions and doing live demos. It had a bunch of new kit including:

Lunch for the masses was apparently good; I got to have the fancy lunch with all the media dudes because I am a "blogger". I think I'm going to rock up to other tech or fashion shows and say "I'm a blogger, food please". One piece of feedback I had about the public lunch was that the salmon was the driest they had ever eaten. I just assumed there was a salmon walking around cracking jokes like:

What are the two sexiest animals in the barn yard? “Brown chicken brown cow”.

We topped it off with a few tasty beverages at the end of the day including these blue Fluid Cocktails – they had some kick – and Prinnie from the Voice singing some tunes, the girl can sing.

It doesn't just stop there; here is a list of all the sessions that were run during the day. If something interests you and you want to find out more, contact your Dell sales team and we'll help you out. If you're not currently a customer, all good: contact me through the blog or on twitter and we can go from there. Don't forget to ask about a DPACK assessment as well.

One of the highlights of the day is that this tweet made it on the big board

The good thing about trade shows is new stuff, and DSF Sydney is no exception. Actually there are a few so I will keep this to the point.

Hybrid EqualLogic storage array – SSD and NLSAS in the same shelf.

Compellent FS8600 – Unified NAS for Compellent

AppAssure 5.3 with Linux support

FluidFS support for Quest SharePoint Maximizer

Dell Active Infrastructure

Hybrid EqualLogic storage array

The PS65x0ES is a pretty nifty box. It is based on the 48-drive sumo form factor, but with 7 SSDs and 41 NLSAS drives. The array acts as one pool of storage and tiers data automatically, giving you the SSD punch with the NLSAS fat capacity. It would suit apps like VDI, some databases and high-bandwidth media.

PS6500ES is the 1Gbit version

PS6510ES is the 10Gbit version

Another bonus is that the new VMware HIT kit will be released. The HIT kit version is 3.5 and it now supports ESX 5.1 and the new EqualLogic v6 firmware. I wrote it like that because a lot of the marketing material says HIT/VMware 3.5, which confuses everyone because they think it's for VMware version 3.5.

Compellent FS8600

Being a NAS guy at heart, I was personally very happy to see the FS8600 get released. It's high-performance, scale-out NAS for Compellent. The FS8600 runs the same FluidFS system that the NX3600 and FS7600 use, but it is fully integrated into the Compellent management systems.

It's tweaked to maximise the thin-all-day-every-day-ness of Compellent, as well as taking full advantage of the automatic tiering that made Compellent famous in the first place.

It's a 2U box that contains two clustered controllers, so they are active-active with mirrored write cache. We can have up to 4 x FS8600s acting as a single namespace, serving out a single share up to 1PB in size. Those are the figures, but no one I know needs a file system that big at the moment, at least not in Australia. On the sister FS7500/FS7600 systems I more commonly see shares around the 50-100TB mark.

It supports replication, AV, quotas etc., but I think its main advantage is that because it front-ends Compellent, Compellent will auto-tier your NAS data: the new and hot data sits on Tier 1, and the old stuff no one is using gets moved to cheap Tier 3 disk.

I intend on doing a more in-depth techo post in the next couple of weeks.

AppAssure 5.3 with Linux support

AppAssure is another product I have been meaning to write about, but time and sleep have gotten in the way. It's a next-gen backup system that Dell bought a few months ago. It uses snapshots and deltas to trickle-feed backups over the course of the day, forever-incremental style. Instead of the nightly big-bang backups that smash the system, the load is spread over the course of the day and sent to a centralised Core system to minimise the impact on prod systems. It's a great way of looking at backups and I have been treating it like my little baby since it came out. It doesn't fit everywhere, but for a lot of customers I meet it's a great fit, and quite affordable too.

Again, I intend on adding much more content about AppAssure .. promise. In the meantime, it's free for a 14-day trial and takes about 30 mins to set up … pieceofpissmate

The big addition is Linux support, but the bulk-deploy features will save a lot of admins a fair chunk of time. Here are the main updates in AppAssure software (v5.3.1):

FluidFS support for Quest SharePoint Maximizer

If you didn't know, Dell acquired Quest Software last month, which added about 42,000* new products to the Dell Software offerings (*slight exaggeration).

One of those products is the Quest Storage Maximizer for SharePoint which now supports Dell’s FluidFS platform.

“Quest Storage Maximizer (SMAX) is the most efficient and lightest weight external storage solution for SharePoint on the market. Files that are typically housed within SQL can be externalized and stored on a Fluid File System (Fluid FS) thus reducing the burden on SQL and increasing SharePoint performance.”

What that is saying is that using the Quest tool, we can use FluidFS as the storage for all your SharePoint data instead of messy BLOBs inside a SQL database. Not only does SharePoint go quicker, it can also scale a hell of a lot easier.

Dell Active Infrastructure

This one I’m still catching up with. Dell vStart all-in-one solutions have been around for a while; this now adds converged infrastructure directly in the blade chassis with a new IO Aggregator and unified management across the system. Dell is also offering these systems with pre-integrated, optimised software and solutions like SharePoint out of the box, which will be fantastic. I am picturing a situation where a customer tells me they are looking at implementing SharePoint 2013 and will need to buy some more storage and compute, as well as some services to set it all up. Instead, I give them one product number and the new system arrives in a few weeks, already racked, already cabled, with SharePoint installed, running and ready for production. Très sexy.

It’s only just been announced so there’s not a lot of in-depth info yet; let’s hope what I am thinking is right! And I also hope you can customise those stickers on the rack, like this.

Right now I’m writing this in the Qantas Club at Dallas airport, waiting for my 16-hour non-stop flight back to Brisbane. I used up my last complimentary voucher, but this really is a good way to fly. I’m supposed to be shopping for my wife, but I’ll do it going through customs on the way home .. hopefully.

So with the sun setting and a free beer in my hand, I thought I would jot down a couple of things and notes from my week at the Compellent offices in Minneapolis, Minnesota. For those who don’t know, Minnesota is above and to the left of Chicago, just below Canada. Apparently it’s pretty bloody cold there for most of the year, but it was glorious weather all week, hot but not humid. The Compellent offices are in a suburb called Eden Prairie, south-west of the city, and we stayed right near it.

I was lucky enough to get the call-up to go to a three-day IDM training session with a motley crew of Dell storage engineers from around Asia and Europe. The first part was core Compellent, delivered by one of the Compellent Principal Architects. The second part was presentations from product management covering VMware and the vSphere plugin, Hyper-V, Exchange, NAS, SQL, Oracle and PowerShell. The whole thing was great and very educational; it filled a few holes I had about Compellent, and I got to meet a couple of the Compellent ‘tweople’ – Jason Boche @jasonboche, Tony Holland @TonyHolland00 and Justin Braun @JustinBraun. There were other great guys I could mention, but they’re not on twitter so they aren’t the “look at me” types the rest of us are, and may not appreciate it.

The offices are very nice and spacious, and they have that high-roof thing that Americans seem to like (and I must admit so do I). Those who follow me on twitter may have seen my review of the bathroom facilities as well. It’s all spread out over a couple of buildings with a few data centres chockers full of disks. It must be a nice power bill, no wonder they went SAS! There is a lake just out the windows; apparently it freezes over and they ice fish out of it. That is such a weird concept for me.

I guess I haven’t really written anything about Compellent before, but I do like the technology; it was a small reason I made the jump from EMC to Dell. It’s a much different look at storage and I like it, especially some little things that EMC could do well to copy, like replay profiles and LUN creation defaults. Just little things, but you can tell the Compellent team were really focused on minimising the amount of time a storage admin spends managing an array. If enough people are keen I can do a writeup about it and how it works, or if you are in Brisbane I can come show you. We were messing around in the labs and created 600 LUNs by clicking Create LUN 600 times. Not real life, but still silly fun.

We spoke about a lot of things, most of which I wouldn’t be able to write about. I might do a short post later about some of the tidbits I discovered. I even squeezed a few futures out of them, and the future looks real good … but I would say that, right? Two great things to highlight were seeing the vSphere plugin in action and learning about PowerShell and how to integrate it with SQL Server.

The Dell guys on the course were very friendly, about half from Asia and half from Europe. It was good to hear the stories from the other countries and how similar things can be, even if they are on the other side of the world.

One of the best parts of the trip was that I didn’t meet any dickheads. None. Everyone was super nice and up for a chat, from the airports to the pubs and of course at work, not that anyone could understand anything I was saying. The way I say the word “training” must have a certain tone about it, because I would get some funky looks whenever I said it. I even tried to put on an American accent, no luck. Even the TSA were nice to me. I was nervous thinking about all the touch-up stories I had seen on twitter, and forgot to take my laptop out of my bag at the scanner. They were really laid back about it, rescanned it and off I went. Phew, no cavity search for me. We spent a bit of time at a bar called Champps, which has over 60 beers on tap (awesome), and it was one of the first times I have felt like someone earned their tips. Service was great, and good unusual beers too. Another place to point out was Redstone Grill, where I had perhaps the best bit of fish in my life, Sea Bass something something, and boy it sure was something.

I went to the Mall of America. It was big and it had a theme park in it, but besides that it’s like every other shopping centre in the world. I tried to find a good quality basketball, no go. I tried to find a certain makeup for my wife, no go. I tried to have a shower in the kids’ pool, no go. It’s ridiculous. However, I did get to see ‘The Change-Up’ at the movies. If you are a man and have kids, please go see it.

I did get to go to a sports lovers’ paradise called Dicks Sports, a good name but not as good as The Dick Liquor. I didn’t get to fulfil my dream of owning an NBA basketball as they didn’t have any in stock, so I got a good one for a bargain instead. Side note: after shopping here, it cements that Australia is a rip-off. I bought some Aussie thongs (flip flops) for half the price I can get them at home.

Going to the States I was trying to find a cheap 3G prepaid SIM card so I could have data on my phone, and got told that nothing exists because the US phone system is all over the place. On the Qantas flight over they were selling a prepaid SIM from a provider called truphone.com. For $20 I got $15 credit, cheap calls and 15c/MB, bargain. The only bugger is that you have to activate it online, so you still have to get near the web somewhere.