A big part of the Axellio story here revolves around density. You get 4 nodes in 4RU, and up to 36 NVMe drives per server. Axellio tell me you can pack up to 920TB of raw NVMe-based storage in these things (assuming you’re deploying 6.4TB NVMe drives). You can also run as few as 4 drives per server if your requirement leans more towards processing than capacity. There’s a full range of iWARP adapters from Chelsio Communications available, with support for 4x 10GbE, 40GbE, or 100GbE connections.
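For what it’s worth, the quoted capacity lines up with simple arithmetic. Here’s my own back-of-envelope sketch using the drive count and drive size mentioned above (the assumption being that the 920TB figure is the 4-node total, rounded down):

```python
# Back-of-envelope check of Axellio's quoted raw capacity.
# Figures are from the post; the rounding interpretation is mine.
DRIVES_PER_SERVER = 36
DRIVE_TB = 6.4
NODES = 4

per_server_tb = DRIVES_PER_SERVER * DRIVE_TB   # ~230.4 TB per server
total_tb = per_server_tb * NODES               # ~921.6 TB across 4RU
print(f"~{per_server_tb:.1f} TB/server, ~{total_tb:.1f} TB total")
```

Which is where the “up to 920TB” marketing number appears to come from.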

[image courtesy of Axellio]

You can start small and scale up (or out) if required. There’s support for up to 16 nodes in a cluster, and you can manage multiple clusters together if need be.

Not That Edge

When I think of edge computing I think of scientific folks doing funky things with big data and generally running Linux-type workloads. While that type of edge computing is still common (and well catered for by Axellio’s solutions), Axellio are going after what they refer to as the “enterprise edge” market, rather than those non-Windows workloads. The Windows DC Edition licensing makes sense if you want to run Hyper-V and a number of Windows-based workloads, such as Active Directory domain controllers, file and print services, and small databases (basically the type of enterprise workloads traditionally found in remote offices).

Thoughts and Further Reading

I’m the first to admit that my working knowledge of current Windows technologies is nowhere near what it was 15 years ago. But I understand why choosing Windows as the foundation platform for the edge HCI appliance makes sense for Axellio. There’s a lot less investment they need to make in terms of raw product development, the Windows virtualisation platform continues to mature, there’s already a big install base of Windows in the enterprise, and operations folks will be fairly comfortable with the management interface.

I’ve written about Axellio’s Edge solution previously, and this new offering is a nice extension of that with some Windows chops and “HCI” sensibilities. I’m not interested in getting into a debate about whether this is really a hyper-converged offering or not, but there’s a bunch of compute, storage and networking stuck together with a hypervisor and management tier to help keep it running. Whatever you want to call it, I can see this being a useful (and flexible) solution for those shops who need to have certain workloads close to the edge, and are already leveraging the Windows operating platform to do it.

You can grab the Axellio Data Sheet from here, and a copy of the press release can be found here.

I said last year that I don’t do future prediction type posts, and then I did one anyway. This year I said the same thing and then I did one around some Primary Data commentary. Clearly I don’t know what I’m doing, so here we are again. This time around, my good buddy Jason Collier (Founder at Scale Computing) had some stuff to say about hybrid cloud, and I thought I’d wade in and, ostensibly, nod my head in vigorous agreement for the most part. Firstly, though, here’s Jason’s quote:

“Throughout 2017 we have seen many organizations focus on implementing a 100% cloud focused model and there has been a push for complete adoption of the cloud. There has been a debate around on-premises and cloud, especially when it comes to security, performance and availability, with arguments both for and against. But the reality is that the pendulum stops somewhere in the middle. In 2018 and beyond, the future is all about simplifying hybrid IT. The reality is it’s not on-premises versus the cloud. It’s on-premises and the cloud. Using hyperconverged solutions to support remote and branch locations and making the edge more intelligent, in conjunction with a hybrid cloud model, organizations will be able to support highly changing application environments”.

The Cloud

I talk to people every day in my day job about what their cloud strategy is, and most people in enterprise environments are telling me that there are plans afoot to go all in on public cloud. No one wants to run their own data centres anymore. No one wants to own and operate their own infrastructure. I’ve been hearing this for the last five years too, and have possibly penned a few strategy documents in my time that said something similar. Whether it’s with AWS, Azure, Google or one of the smaller players, public cloud as a consumption model has a lot going for it.

Unfortunately, it can be hard to get stuff working up there reliably. Why? Because no one wants to spend time “re-factoring” their applications. As a result, a lot of people want to lift and shift their workloads to public cloud. This is fine in theory, but a lot of those applications are running crusty versions of Microsoft’s flagship RDBMS, or they’re using applications that are designed for low-latency, on-premises data centres, rather than being addressable over the Internet.

And why is this? Because we all spent a lot of the business’s money in the late nineties and early noughties building these systems to a level of performance and resilience that we thought people wanted. Except we didn’t explain ourselves terribly well, and now the business is tired of spending all of this money on IT. And they’re tired of having to go through extensive testing cycles every time they need to do a minor upgrade. So they stop doing those upgrades, and after some time passes, you find that a bunch of key business applications are suddenly approaching end of life and in need of some serious TLC. As a result, those same enterprises looking to go cloud first find themselves struggling mightily to get there. This doesn’t necessarily mean public cloud isn’t the answer, it just means that people need to think things through a bit.

The Edge

Another reason enterprises aren’t necessarily lifting and shifting every single workload to the cloud is the concept of data gravity. Sometimes, your applications and your data need to be close to each other. And sometimes that closeness needs to occur closest to the place you generate the data (or run the applications). Whilst I think we’re seeing a shift in the deployment of corporate workloads to off-premises data centres, there are still some applications that need everything close by. I generally see this with enterprises working with extremely large datasets (think geo-spatial stuff or perhaps media and entertainment companies) that struggle to move large amounts of the data around in a fashion that is cost effective and efficient from a time and resource perspective. There are some neat solutions to some of these requirements, such as Scale Computing’s single node deployment option for edge workloads, and X-IO Technologies’ neat approach to moving data from the edge to the core. But physics is still physics.
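To put some (entirely illustrative) numbers on the physics problem: moving a large dataset over even a dedicated WAN link takes a surprisingly long time. A rough sketch, with figures I’ve made up for the sake of the example rather than anything from Scale or X-IO:

```python
# Illustrative WAN transfer time for a large dataset.
# Dataset size and link speed are my own assumptions.
dataset_tb = 500                       # e.g. a geo-spatial survey
link_gbps = 10                         # dedicated 10GbE WAN link

dataset_bits = dataset_tb * 1e12 * 8   # decimal TB -> bits
seconds = dataset_bits / (link_gbps * 1e9)
days = seconds / 86400
print(f"~{days:.1f} days at line rate")  # and that ignores protocol overhead
```

Over 4 days at full line rate (with no overhead) is the sort of arithmetic that makes flying the data, or moving the compute to it, look attractive.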

The Bit In Between

So back to Jason’s comment on hybrid cloud being the way it’s really all going. I agree that it’s very much a question of public cloud and on-premises, rather than one or the other. I think the missing piece for a lot of organisations, however, doesn’t necessarily lie in any one technology or application architecture. Rather, I think the key to a successful hybrid strategy sits squarely with the capability of the organization to provide consistent governance throughout the stack. In my opinion, it’s more about people understanding the value of what their company does, and the best way to help it achieve that value, than it is about whether HCI is a better fit than traditional rackmount servers connected to fibre channel fabrics. Those considerations are important, of course, but I don’t think they have the same impact on a company’s potential success as the people and politics does. You can have some super awesome bits of technology powering your company, but if you don’t understand how you’re helping the company do business, you’ll find the technology is not as useful as you hoped it would be. You can talk all you want about hybrid (and you should, it’s a solid strategy) but if you don’t understand why you’re doing what you do, it’s not going to be as effective.

X-IO Technologies recently announced the ISE 900 Series G4. I had the chance to speak to Bill Miller about it and thought I’d provide some coverage of the announcement here. If you’re unfamiliar with X-IO, ISE stands for Intelligent Storage Elements. This is X-IO Technologies’ “next-generation ISE”, and X-IO will also be continuing to support their disk-based and hybrid arrays. They will, however, be discontinuing the 800 series AFAs.

What’s In The Box?

There are two boxes – the ISE 920 and the ISE 960. You get all of the features of ISE hardware and software, such as:

- High Availability
- QoS
- Encryption (at rest)
- Management REST API
- Simple Web-based Management
- Monitored Telemetry
- Predictive Analytics

X-IO used sealed “DataPacs” in the disk drive days, but these aren’t needed in the all-flash world. ISE still manages SSDs in groups of 10 and still overprovisions capacity up to a point, but the individual drives are now hot-swappable.

You also get features such as “Performance-Optimized Deduplication”, and deduplication can be disabled by volume.

Failed drives also don’t have the same urgency for replacement as they do on traditional arrays.

Web-based Management Interface

- Simplified management with X-IO’s OptimISE
- Support for multi-system management through a single session
- At-a-glance and in-depth performance metrics
- Customizable, widget-based layout

As with most modern storage arrays, the user interface is clean and simple to navigate. OptimISE replaces ISE Manager, although you’ll still need it to manage your Gen1 – Gen3 arrays. X-IO are considering adding support for Gen3 arrays to OptimISE, but they’re waiting to see whether there’s customer demand.

[image courtesy of X-IO Technologies]

X-IO tell me that snapshots and replication are on the roadmap, with X-IO aiming to have these features available in H1 next year (but don’t hold them to that). They’ll also be aiming to add support for iglu systems.

Show Me Your Specs

It wouldn’t be a product announcement without a box shot.

[image courtesy of X-IO Technologies]

- 2U, Dual-Controller, Active/Active
- 8Gbps FC (16Gbps field upgradeable in the future)
  - 4 ports per controller (8 ports will be field upgradeable in the future)
- Hot-Swappable FRUs
  - Controllers
  - Power Supplies
  - Fans
  - Regulators
- SSDs (min – max)
  - ISE 920: 10 – 20
  - ISE 960: 10 – 60
- Two hot-swappable 1600 Watt PSUs
- Capacity (effective capacity assumes a 5:1 deduplication ratio)
  - ISE 920: 9.6TB – 242TB
  - ISE 960: 9.6TB – 725TB

Capacity expansion (up to 60 drives) is done in 10 drive increments.
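If you back the 5:1 deduplication ratio out of the quoted maximums, both models work out to roughly the same usable capacity per drive. This is my own interpretation of the numbers, not X-IO’s published math:

```python
# Backing per-drive usable capacity out of the quoted maximum
# effective capacities, assuming the stated 5:1 dedup ratio.
# My interpretation of the spec sheet numbers, not X-IO's math.
DEDUP_RATIO = 5
max_effective = {"ISE 920": (242, 20), "ISE 960": (725, 60)}  # (TB effective, drives)

usable_per_drive = {
    model: round(eff_tb / DEDUP_RATIO / drives, 2)
    for model, (eff_tb, drives) in max_effective.items()
}
print(usable_per_drive)  # both work out to ~2.42 TB usable per drive
```

The fact that both boxes come out at the same per-drive figure suggests they share the same drive SKU and just scale the count.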

Performance

X-IO tell me they can get performance along the lines of:

- Up to 400,000 IOPS; and
- Access time <1ms.
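Those two figures together imply a decent amount of outstanding I/O. By Little’s Law (concurrency = throughput × latency), you’d need around 400 I/Os in flight to hit that peak. This is my arithmetic, not an X-IO figure:

```python
# Little's Law applied to the quoted figures: the concurrency
# needed to sustain 400,000 IOPS at 1ms per operation.
iops = 400_000          # quoted peak operations per second
latency_ms = 1          # quoted access time
outstanding = iops * latency_ms / 1000
print(f"~{outstanding:.0f} concurrent I/Os needed to sustain peak throughput")
```

In other words, you’ll need a reasonable number of hosts (or deep queue depths) to actually drive the box that hard.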

Conclusion and Further Reading

X-IO released a really good overview of the Intelligent Storage Element (ISE) platform a while ago that I think is worth checking out. X-IO’s deduplication solution promises to deliver some pretty decent results at a highly efficient clip. If you want some insight into how they go about doing it, check out Richard Lary’s presentation from Storage Field Day 13. This is their first array with deduplication built in, and I’m interested to see how it performs in the field. The goal is to deliver the same results as their competitors, but with improved efficiency. This seems to be the goal behind much of the hardware design, with X-IO telling me that they come in around 60 cents (US) per effective GB of capacity. That seems mighty efficient.

X-IO have been around for a while, and I’ve found their Axellio Edge product to be fascinating. The AFA market is crowded with vendors saying that they do all things for all people. It’s nice to see that X-IO aren’t promising the world to customers, but they are offering some decent features at a compelling price.

I had the opportunity to talk to X-IO Technologies about their Axellio Edge product at Storage Field Day 13 (you can read about that here). They recently announced a “Portable Axellio Edge Computing System” that “can be quickly disassembled for travel and reassembled onsite” and fits in equipment cases suitable for commercial air travel. Here’s what it looks like.

[image courtesy of X-IO Technologies]

The main chassis is emptied and stored in your checked baggage, with the data packed in a carry-on case that fits within the size limits for US air travel (although I’m not convinced Air France would put up with it based on previous experience). The idea is that the important stuff (or potentially classified data) is within your sight / on your person at all times and there’s less scope for shenanigans.

Pigeon Powered

There are a bunch of scenarios where having a lot of processing and capacity at the edge makes a tonne of sense. But what do you do when you need to get it back to the core in a timely fashion for further investigation or analysis? X-IO aren’t the first to come up with portable (and ruggedised) solutions optimised for moving a lot of data by air rather than over the wire, but sometimes the only answer to physics is to fly the stuff where you need it to be.

My thoughts are with the fellow passengers who have to put up with the big case of NVMe that will occupy a bit of space in the overhead bins, but I’ve travelled enough in the US to know that it’s probably not the biggest thing people have tried to fit into those lockers. Heck, I had a manager once who took a 1RU server as carry-on luggage. Sure, he wasn’t popular, but he somehow convinced them it was in spec.

I like the idea behind this product, in much the same way I appreciate that the Edge product has a very specific use case and isn’t suitable for everyone. You can read more about the Axellio Edge here, read the press release here, and grab a copy of the data sheet from here. Justin also provided some typically insightful coverage over at Forbes.

Disclaimer: I recently attended Storage Field Day 13. My flights, accommodation and other expenses were paid for by Tech Field Day and Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

X-IO Technologies presented on their Axellio Edge product, amongst other things, at Storage Field Day 13 recently. You can see video of the presentation here, and download my rough notes from here.

What Edge?

So what is the “edge”? Well, a lot of data has mass. And I’m not talking about those big old 1.8″ SCSI drives I used to pull from servers when I was a young man. Some applications (think geosciences, for example) generate a bunch of data very close to their source. This data invariably needs to be analysed to realise its value. Which is all well and good, but if you’re sitting on a boat somewhere you might have more data than you can easily transport to your public cloud provider in a timely fashion. Once the dataset becomes big or fast enough, it’s easier to move the application to the data than vice versa. X-IO say Axellio focuses on the situation where “moving the data processing power closer to where the data is being generated – closer to the source” makes sense. This also means you need the appropriate CPU/RAM combination to run the application attached to the large dataset. And that’s what X-IO means by edge computing.
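The “more data than you can easily transport” problem is easy to put numbers on: if the generation rate exceeds the uplink, the backlog only grows, and moving the application to the data becomes the sensible option. A rough sketch with illustrative figures of my own:

```python
# Edge backlog growth when data generation outpaces the uplink.
# Both rates are illustrative assumptions, not X-IO figures.
gen_tb_per_day = 20        # e.g. a sensor array on a survey vessel
uplink_mbps = 100          # satellite/cellular uplink

uplink_tb_per_day = uplink_mbps * 1e6 * 86400 / 8 / 1e12   # ~1.08 TB/day
backlog_tb_per_day = gen_tb_per_day - uplink_tb_per_day
print(f"uplink moves ~{uplink_tb_per_day:.2f} TB/day; "
      f"backlog grows ~{backlog_tb_per_day:.2f} TB/day")
```

Once you’re in that regime, no amount of patience gets the data to the cloud; the processing has to come to the boat.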

X-IO’s FabricXpress is the magic that makes the product work as well as it does. X-IO says it extends the native PCIe bus significantly.

- PCIe-based interconnect
  - Up to 72 NVMe SSDs – significantly more SSDs than a typical server
  - Between server modules
  - Offload modules
- Dual-ported NVMe architecture
  - Allows access to the same data on the same SSD from both servers
  - Shared access for HA solutions
  - Enables independent server behaviour on shared data

[image courtesy of X-IO Technologies]

Networking and Offloading Module

Networking

- 1x16 PCIe per server module for networking
- Supports standard off-the-shelf NICs/HCAs/HBAs
- Supports HHHL or FHHL cards
- Ethernet, InfiniBand, FC
- Up to 2x 100GbE per module

Offloading Module

- The two centre modules are replaced with a single carrier
- Holds two FHFL, DW, x16 PCIe cards
  - Nvidia P100: +18.6 Teraflops (SP)
  - Nvidia V100: +30 Teraflops (SP)

Doing What at The Edge?

Edge Data Analytics Platform

The point of Axellio Edge is to ingest and analyse data at very high speeds. The neat thing about this is that a 2RU chassis replaces a rack of scale-out gear. X-IO claim that it’s “uniquely qualified for real-time big data analytics”.

[image courtesy of X-IO Technologies]

Conclusion and Further Reading

I hadn’t previously given a lot of thought to the particular use cases X-IO presented as being ideally suited to the Axellio Edge offering. My day job revolves primarily around large enterprises running ridiculously critical and crusty SQL-based applications (eww, legacy). Whilst I’ve had some experience with scientific types doing interesting things with data out in the middle of nowhere, it’s not been at the scale or speed that X-IO talked about. Aside from the fact that there’s a whole lot to like about Axellio in terms of speed and capability in this 2RU box, I also like the range of scenarios this thing can address.

We’re working with bigger and bigger data sets, and it’s getting harder and harder to move them close to our compute platform in a timely fashion. Particularly if that compute platform is sitting in public cloud. And even more so if we have to respect the laws of physics (stupid physics!). Instead of trying to push a whole tonne of data from the source to the application, X-IO have taken a different approach and are bringing the processing to the data at the source.

The Axellio Edge isn’t going to be the right platform for everyone, but if the use case lines up, it’s a pretty compelling offering. It also helps that the X-IO customers I’ve spoken to have been very staunch advocates for the company. The people I had the pleasure of speaking with at X-IO are all very switched on and have put a lot of thought into what they’re doing.

For more information on PCIe, have a look here. You can also find more info on NVM Express here. You can grab a copy of the Axellio data sheet from here, and there’s a good whitepaper on edge computing and IoT that you can find here (registration required).

