Tag Archives: XtremIO

Disclaimer: I recently attended Dell Technologies World 2018. My flights, accommodation and conference pass were paid for by Dell Technologies via the Press, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes from the storage.12 session. This was presented by Rami Katz and Zvi Schneider, and covered XtremIO X2: An Architectural Deep Dive.

XtremIO X2

Efficiency

4x better rack density

1/3 effective price $/GB

Multi-dimensional scaling

25% DRR enhancements

Protection

NVRAM

Expanded iCDM capabilities

QoS

Metadata-aware replication

Performance

80% lower application latency

2x better copy operations

Hardware

Simplicity

Simple HTML5 GUI

Intelligent reporting and troubleshooting

Guided workflows

Software-driven architecture driving efficiency and performance

A brute force approach (e.g. faster chips, more cores) limits enhancements. With that approach you can average a 20 to 30% performance improvement every 12 to 18 months. You need to have software innovation as well.

XtremIO Content Addressable Storage (CAS) Architecture

The CAS architecture provides the ability to move data quickly and efficiently, using metadata indexing to reduce physical data movement within the array, between XtremIO arrays, or between XtremIO and other arrays / the cloud (not in this release).

Is synchronous replication on the roadmap? Give us a little time. It’s not coming this year. You could use VPLEX in the interim.

How about CloudIQ? CloudIQ support is coming in September.

What about X1? It’s going end of sale for new systems. You can still expand clusters. Not sure about any buyback programs. You can keep X1 for years though. We give a 7 year flash guarantee.

X2 is sticking with InfiniBand and SAS, why not NVMe? Right now it’s expensive. We have it running in the labs. We’re getting improvements in software. Remember X2 came out 6 months ago. Can’t really talk too much more.


Dell EMC today made some announcements around the XtremIO X2 platform and their PowerEdge server line. I thought it would be worthwhile covering the highlights here.

What’s New with XtremIO X2?

The XIOS 6.1 operating system delivers one very important enhancement: native replication. (It does a lot of other stuff, but this is the big one really)

Dell EMC also announced the availability of the new PowerEdge R840 and R940xa, both available from Q2 2018. I feel bad posting server news without some kind of box shot. Hopefully I can find one and update this post in the future.

It also offers “Integrated Security”, which Dell EMC tell me is based on a “[c]yber resilient architecture, [where] security is integrated into full server lifecycle – from design to retirement”.

You can also scale performance and capacity, with

Up to 2 GPUs or up to 2 FPGAs; and

Up to 26 SSDs/HDDs.

There’s also “Intelligent Automation” with

OpenManage RESTful API & iDRAC9 for DevOps integration
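The DevOps integration mentioned above is exposed via the Redfish-style REST API on iDRAC9. As a rough sketch (the hostname is a placeholder, and in practice you'd add authentication and TLS verification), polling a server's rolled-up health status might look something like this:

```python
import json
import urllib.request

def redfish_system_url(idrac_host: str) -> str:
    """Build the Redfish URL for the embedded system resource on an iDRAC."""
    return "https://{}/redfish/v1/Systems/System.Embedded.1".format(idrac_host)

def fetch_health(idrac_host: str, opener=urllib.request.urlopen) -> str:
    """Return the rolled-up health ('OK', 'Warning', ...) reported by the iDRAC."""
    with opener(redfish_system_url(idrac_host)) as resp:
        return json.load(resp).get("Status", {}).get("Health", "Unknown")

# No iDRAC handy here, so just show the URL we'd hit:
print(redfish_system_url("idrac.example.com"))
```

The `opener` parameter is just there so the fetch logic can be exercised without a real iDRAC on hand; treat the whole thing as illustrative rather than production tooling.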

PowerEdge R940xa

Dell EMC are positioning the R940xa for use with "[e]xtreme GPU Database Acceleration".

There’s a 1:1 CPU to GPU ratio, so you can:

Deliver faster response times with 4-socket performance; and

Drive insights with up to 4 GPUs or up to 8 FPGAs.

Integrated Security is present in this appliance as well (see above).

Scale on-premises capacity by mixing and matching capacity and performance options with up to 32 drives.

Intelligent Automation is present in this appliance as well (see above).

Thoughts

People have been looking for native replication in the XtremIO product since it started shipping. It was hoped that the X2 would deliver on that, but instead RecoverPoint seemed to be a capable, if not sometimes disappointing, solution. “Native” replication is what people really want to be able to leverage though, as these kind of protection activities can get overly complicated when multiple solutions are bolted together. I had the great displeasure of deploying an XtremIO backed by VPLEX once. I’m not saying it didn’t work, indeed it worked rather well. But the additional configuration and operating overhead seemed excessive. To be fair, they also wanted the VPLEX so they could tier data to their VNX if required, but I always felt that was just a table stakes exercise. In any case, in my opinion the best option for data replication resides with application. But sometimes you’re just not in a position to use that. In that instance, infrastructure (or storage)-level replication is the next best thing. It needs to be simple though, so it’s nice to see Dell EMC delivering on that.

I don’t cover servers as much as I probably should. These two new models from Dell EMC are certainly pitched at particular workloads. There was obviously a lot more announced last year in terms of new compute, but that was generational. A lot of people are doing some pretty cool stuff with GPUs, and they’ve frequently had to come up with their own solution to get the job done, so it’s nice to see some focus from Dell EMC on that.

You can read a blog post on XtremIO here, and the PowerEdge press release is here. There’s also a white paper on XtremIO replication you can read here.

Disclaimer: I recently attended Dell EMC World 2017. My flights, accommodation and conference pass were paid for by Dell EMC via the Dell EMC Elect program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

XtremIO X2

[image via Dell EMC]

In a nutshell, Dell EMC describe the new XtremIO X2 as “[f]lash optimised AFA with multi-dimensional scalability”. Features include:

New multi-dimensional scalable hardware

Software-driven performance / efficiency improvements

iCDM use case enhancements

New, simple HTML5 UI

New metadata-aware native replication (not at GA, later this year)

Your Feedback Is Important To Us

X1 Challenges Addressed?

Firstly though, Dell EMC have been listening to customers, and have been working on some improvements with the X2. To wit:

Cabling – there’s a new cable harness

BBU – BBUs replaced with NVRAM

Price – as low as a third of the price of the X1 in terms of effective $/GB

Density – up to 100TB/RU

Scaling – scale up and scale out

16Gb FC – Natively supported on X2

Cabling

A picture is indeed worth a thousand words. And the original X-Brick had some fairly ordinary cable management. The X2 is better.

With this you can scale to over 1.1PB Raw (and over 5PB assuming 4:1 dedupe rates). In terms of drive configurations, the starting point is 18 drives, and you can scale up in increments of 6 drives. When you get past 36 drives, a second XDP group is added (whereby you deploy another 18 + 6 + 6 + 6 disks). The SSDs are a hot swappable field replaceable unit (FRU) as well.
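The drive-count rules above are easy to get wrong when sizing an expansion, so here's a small sketch that enumerates the valid configurations: start at 18 SSDs, grow in increments of 6, and once you pass 36 drives a second XDP group is added that itself starts at 18 and grows by 6. The 72-drive ceiling here is my assumption (two full 36-drive groups), not a stated limit.

```python
def valid_drive_counts(max_per_group: int = 36) -> list:
    """Enumerate valid SSD counts: 18 + increments of 6, then a second XDP group."""
    first_group = list(range(18, max_per_group + 1, 6))          # 18, 24, 30, 36
    second_group = [max_per_group + n for n in first_group]      # 54, 60, 66, 72
    return first_group + second_group

print(valid_drive_counts())
# [18, 24, 30, 36, 54, 60, 66, 72]
```

Note the jump from 36 to 54: per the session notes, the second XDP group deploys as another 18 drives before you can resume 6-drive increments.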

Expanded iCDM Capabilities

This is all pretty exciting, but what about integrated copy data management (iCDM)? Dell EMC say they’ve been seeing a 25% better data reduction on average when comparing X1 to X2 (on a 100GB working set). There have also been compression improvements made (via an intelligent packing algorithm) yielding 16:1 ratios. As well as this, they’re providing:

Open APIs

App integration and orchestration

Virtual copies (XVC)

Consistent, multi-dimensional performance with inline data services

You can now also do 2x the number of XVC copies. There are 16384 volumes supported, with 1024 snapshots per volume also supported.

Management

New HTML5 UI

No more Java binaries – aw yisss!

Faster and better user experience

Simple and Intuitive UI

Easy drill-down & navigation

Intelligent reports

1-2-3 provisioning

You can manage X2 clusters (obviously), and will have the ability to manage X1 clusters post GA.

Provisioning

Provisioning has been improved, with “Next step suggestions” in the form of:

Flexible and guided provisioning flows

“Popular” next step suggestions

Multi-step workflows

Metadata-aware Native Replication

This is coming in the future. Dell EMC tell me it’s going to be great. And I really hope it will be, because I’ve been underwhelmed in the field to date.

Easy operation

Uses XtremIO in-memory snapshot

Wizard-based

Full operational DR

Best Protection

RPO as low as 30 seconds

Immediate RTO

Up to 1000 recovery points

“Fan-in” configurations

Superior Performance

Supports XtremIO high performance

Efficient metadata-aware replication

Efficient replication – compression aware

How Will It Work?

Only deduplicated changes are replicated – data is deduplicated at source, destination, and WAN

Arrays at both ends must transmit and receive only deduplicated data

WAN bandwidth must be sized to account for only deduplicated data

No need for WAN accelerators

Native replication is async only
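To make the "only deduplicated changes are replicated" idea above concrete, here's an illustrative sketch (my own simplification, not XtremIO's actual protocol): the source fingerprints each changed block, and block data only crosses the WAN when the destination has never seen that fingerprint.

```python
import hashlib

def fingerprint(block: bytes) -> str:
    """Content fingerprint standing in for the array's block metadata."""
    return hashlib.sha256(block).hexdigest()

def replicate(changed_blocks: list, dest_store: dict) -> int:
    """Ship changed blocks to the destination; return how many crossed the 'WAN'."""
    shipped = 0
    for block in changed_blocks:
        fp = fingerprint(block)
        if fp not in dest_store:      # destination dedupe miss: send the data
            dest_store[fp] = block
            shipped += 1
        # dedupe hit: only the fingerprint (metadata) needed to travel
    return shipped

dest = {}
blocks = [b"A" * 8192, b"B" * 8192, b"A" * 8192]   # third block is a duplicate
print(replicate(blocks, dest))  # 2
```

This is why WAN bandwidth only needs to be sized for the deduplicated change rate, and why dedicated WAN accelerators become unnecessary.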

When?

The X2 will be available to order from May 31st, 2017 and shipping from August 30th, 2017. XIOS 6.0 will be made GA on August 30th, 2017.

Conclusion

I’ve been a fan of the XtremIO for a while now. It goes really fast and does some really cool stuff in terms of density, deduplication and performance. It has been a little underwhelming in terms of data services support (although we’ve seen X1 go through some significant changes in that respect) and hasn’t always been price competitive. But if you’ve been a VMAX customer pining for rack space or an enterprise running some RDBMS that needed some great performance from your block storage, then XtremIO has been for you.

This iteration of the XtremIO platform sounds (on paper at least) to be a lot better than its predecessor, and demonstrates that Dell EMC have been listening to their customers. In much the same way as X-Men 2 was better than the first one, so too does the X2 have the edge over the X1. I look forward to seeing these things in the field. And I look forward very much to seeing the end of Java-based storage management UIs. If you’d like to read Dell EMC’s take on the announcement, check out this blog post.

Virtualization Field Day 6 just wrapped up. If you missed any of the sessions, head over to the landing page to get links to the streams and associated blog posts.

Dave Henry did a somewhat entertaining post on EMC’s recent Isilon announcements. It’s now been updated with a few answers to some of his very reasonable questions. I have a few customers who are very interested in CloudPools, and I’m interested in finding out what the reality of the product is as opposed to the slideware.

I’ve been doing some design work for a few customers and thought I’d put together a brief post on some considerations when deploying XtremIO. I don’t want to go into the pros and cons of the product, nor am I really interested in discussing better / worse alternatives. Let’s just assume you’ve made the decision to go down that track. So what do you need to know before it lobs up in your data centre? As always, I recommend checking EMC’s support site as they have some excellent site planning and installation documentation. There’s also a pretty good introductory whitepaper here.

Hardware Overview

X-Brick

The core hardware in the XtremIO solution is the X-Brick. I’ve included a glamour shot below from EMC’s website for reference. Each X-Brick comprises:

You can optionally deploy the XMS (XtremIO Management Server) on a VM rather than physically. There are a few things you need to be mindful of if you go down this route.

The virtual XMS VM should have the following configuration:

8GB vRAM;

2 vCPUs; and

1 vNIC.

The virtual XMS VM should have a single 900GB disk (thin provisioned). Note that 200GB of disk capacity is pre-allocated following cluster initialization. This should be provisioned on RAID-protected storage. Note also that the shared storage used should not originate from the XtremIO cluster itself.

The virtual XMS should be located in the same LAN as the XtremIO cluster.

The deployed virtual XMS has its memory Shares resource allocation set to High, so the virtual XMS is given high priority on memory allocation when required. If you’re using a non-standard memory shares allocation, this should be adjusted post-deployment.
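If you want to sanity-check a virtual XMS build against the sizing above (8GB vRAM, 2 vCPUs, 1 vNIC, a single 900GB thin-provisioned disk), a trivial checker might look like the following. The field names are my own shorthand, not a vSphere API.

```python
# Documented virtual XMS sizing from the planning guidance above.
XMS_REQUIREMENTS = {"vram_gb": 8, "vcpus": 2, "vnics": 1, "disk_gb": 900, "thin": True}

def xms_spec_issues(vm: dict) -> list:
    """Return a list of mismatches between a VM definition and the XMS sizing."""
    return ["{}: expected {}, got {}".format(k, v, vm.get(k))
            for k, v in XMS_REQUIREMENTS.items() if vm.get(k) != v]

vm = {"vram_gb": 8, "vcpus": 2, "vnics": 1, "disk_gb": 900, "thin": False}
print(xms_spec_issues(vm))
# ['thin: expected True, got False']
```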

In The Data Centre

Rack Requirements

The following table shows the required rack space depending on the number of X-Bricks in the cluster.

Power and Cabling

From a cabling perspective, your friendly EMC installation person will take care of that. There’s very good guidance on the EMC support site, depending on your access level. Keep in mind that you’ll want your PDUs in the rack to come via diverse circuits to ensure a level of resiliency.

In terms of power consumption, the table below provides guidance on maximum power usage depending on the number of X-Bricks you deploy.

Connectivity

From a connectivity perspective, you’ll need to account for both FC and IP resources. Each controller has two FC front-end ports and two iSCSI ports that you can present for block storage access. You’ll also need an IP address for each controller (so two per X-Brick), along with at least one for the XMS. For monitoring, the latest version of the platform supports EMC’s Secure Remote Services (ESRS), so you can incorporate it into your existing solution if required.
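Tallying the numbers above ahead of time saves an awkward conversation with the network team on install day: two controllers per X-Brick, each with two FC front-end ports and two iSCSI ports, plus one management IP per controller and at least one for the XMS. A quick back-of-the-envelope calculator:

```python
def connectivity(x_bricks: int) -> dict:
    """Estimate ports and IPs needed for a cluster of the given X-Brick count."""
    controllers = 2 * x_bricks
    return {
        "fc_ports": 2 * controllers,    # FC front-end ports to zone
        "iscsi_ips": 2 * controllers,   # only if you're presenting iSCSI
        "mgmt_ips": controllers + 1,    # one per controller, plus the XMS
    }

print(connectivity(2))
# {'fc_ports': 8, 'iscsi_ips': 8, 'mgmt_ips': 5}
```

If you deploy the XMS as a VM or want extra addresses for ESRS, adjust accordingly; this only counts the minimums quoted above.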

Conclusion

Should you decide to go down the XtremIO track, there are a few things to look out for, primarily around planning your data centre space. It’s a nice change that you don’t have to get too bogged down in details about the actual configuration of the storage itself. But ensuring that you’ve planned for suitable space, power and management will make things even easier.

I received my XtremIO Upgrade Survival Kit from Pure Storage last week and wanted to provide a little commentary on it. I know it’s “old news” now, but it’s been on my mind for a while and the gift pack prompted me to burst into print.

But the post that I think put everything in perspective was Stephen’s. Yes, it’s all technically a bit of a mess. But we’ve been conditioned for so long to read between the lines of vendor glossies and not believe that anything is ever really non-disruptive. Every NDU carries a risk that something will go pear-shaped, and we prepare for it. Most people have had an upgrade go wrong before, particularly if your job has been enterprise storage field upgrades for the last 5 – 10 years. It’s never pretty, it’s never fun, but nowadays we’re generally prepared for it.

While I enjoy the generally ballsy marketing from Pure Storage for calling out EMC on this problem, I think that ultimately we (partners, customers) are probably all not that fussed about it really. Not that I think it’s good that we’re still having these problems. Architecture does matter. But sometimes things get stuffed up.

As an aside though, how good would it be if you worked in an environment where all you needed to do was fill out a paper slip to do a change?
