It got me thinking about how I’d define “cloud” and why others define it differently. So here’s a bit of a soft-topic consideration for you along the way.

I was first exposed to the buzzword around 2009, when a major PC and IT gear reseller from the Midwest was trying to convince me on every call and email thread that I should buy The Cloud(tm). My rep never could tell me why, or what problem it would solve, a common shortcoming of quota-bound sales reps. I think the closest to a justification I ever got was “Just give it a try, you’ll be able to tell where you can use it.” And I didn’t.

As the current decade rolled along, anyone running the server side of a client/server model called themselves The Cloud(tm). And of course, Amazon Web Services and other players came along to stake out their own definitions of, and market shares in, the matter.

Today, at its most extreme interpretation, anything not running in focus on your current personal device is probably considered “cloud” by someone. And to be fair to antsle, that’s where they fit, in a way.

I’ve attended a couple of Tech Field Day events, and watched/participated remotely (in both senses of the word) in a few more, and each event seems to embody themes and trends in the field covered. Storage Field Day 5 was no exception.

I found a couple of undercurrents in this event’s presentations, and three of these are worth calling out, both to thank those who are following them, and give a hint to the next generation of new product startups to keep them in mind.

3. The Progressive Effect: Naming Names Is Great, Calling Names Not So Much

Back at the turn of the century, it was common for vendors to focus on their competition in an unhealthy way. As an example, Auspex (remember them?) told me that their competitor’s offering of Gigabit Ethernet was superfluous, and that the competitor would be out of business within months. I’ll go out on a limb and say this was a stupid thing to say to a company whose product was a wire-speed Gigabit Ethernet routing switch, and, well, you saw how quickly NetApp went out of business, right?

At Storage Field Day 5, a couple of vendors presented competitive/comparative analysis of their market segment. This showed a strong awareness of the technology they were touting, understanding of what choices and tradeoffs have to be made, and why each vendor may have made the choices they did.

Beyond that, it can acknowledge the best use for each product, even if it’s a competitor’s product. I’ll call this the Progressive Effect, after the insurance company that shows you competitors’ pricing even when it’s a better deal. If you think your product is perfect for every customer use case, you don’t know your product or the customer very well.

Diablo Technologies, once again, did a comparison specifically naming the obvious competitor (Fusion-io), and it was clearly a forward-looking comparison: you can order a hundred Fusion-io cards today and put them into current industry standard servers, but that won’t work with the ULLtraDIMMs in most of the servers in your datacenter just yet. Still, these are products that are likely to be compared in the foreseeable future, so the comparison provided useful context, and use cases for both platforms were called out.

SolidFire’s CEO Dave Wright really rocked this topic, though, tearing apart (in more of an iFixit manner than an Auspex manner) three hyperconverged solutions, including his own, showing the details, the decisions, and where each one makes sense. I suspect most storage company CEOs wouldn’t get into that deep a dive on their own product, much less the competition’s, so it was an impressive experience worth checking out if you haven’t already.

MENTION OTHER COMPANY IN PRESENTATION, FUD HULK SMASH YOUR COMPANY. OTHER COMPANY NOT EXIST IN VACUUM. YOUR COMPANY IN VACUUM. SUCKS. #SFD5

There were some rumblings in the Twittersphere about how knowing your competitor and not hiding them behind “Competitor A” or the like was invoking fear, uncertainty, and doubt (FUD). And while it is a conservative, and acceptable, option not to name a competitor if you have a lot of them–Veeam chose this path in their comparisons, for example–that doesn’t mean that it’s automatically deceptive to give a fair and informed comparison within your competitive market.

If Dave Wright had gone in front of the delegates and told us how bad all the competitors were and why they couldn’t do anything right, we probably would’ve caught up on our email backlogs faster, or asked him to change horses even in mid-stream. If he had dodged or danced around questions about his own company’s platform, some (most?) of us would have been disappointed. Luckily, neither of those happened.

But as it stands, he dug into the tech in an even-handed way, definitely adding value to the presentation and giving some insights that not all of us would have had beforehand. In fact, more than one delegate felt that Solidfire’s comparison gave us the best available information on one particular competitor’s product in that space.

This is a post related to Storage Field Day 5, the independent influencer event held in Silicon Valley April 23-25, 2014. As a delegate to SFD5, I was chosen by the Tech Field Day community, and my travel and expenses are covered by Gestalt IT. I am not required to write about any sponsoring vendor, nor is my content reviewed. No compensation has been or will be received for this or any other Tech Field Day post.

I was surprised last week at Interop to hear people still talking about both FCoEgate and HP FirmwareGate. It seems that in the absence of any clarity or resolution, both still bother many in the industry.

FCoEgate

FCoEgate: An analyst group called The Evaluator Group released a “seriously flawed” competitive comparison between an HP/Brocade/FC environment and a Cisco/FCoE environment. Several technical inquiries were answered with confusing evidence suggesting that the testers didn’t really know what they were doing.

Several people I talked to at Interop mentioned that this was a perfectly understandable mistake for a newbie analyst, but experienced analysts should have known better. Brocade should have known better as well, but I believe they still stand by the story.

The take-home from this effort is that if you don’t know how to configure a product or technology, and you don’t know how it works, it may not perform optimally in comparison to the one you’re being paid to show off.

This one doesn’t affect me as much personally, but I’ll note that there doesn’t seem to have been a clear resolution of the flaws in this report. Brocade has no reason to pay Evaluator Group to redo a valid comparison, and technologists worth their salt would see through it anyway (as many have). So we have to count on that latter part.

FirmwareGate

FirmwareGate: HP’s server division announced that, for the good of their “Customers For Life,” they would stop making server firmware available unless it was a “safety and security” update. How can you tell if it’s “safety and security”? Try to download it.

HP claimed repeatedly that this brings them in line with “industry best practices,” thus defining their “industry” as consisting exclusively of HP and Oracle. I don’t know any working technologists who would go along with that definition.

@DeepStorageNet not true, go get firmware for your Cisco mds9509, or Redhat er ver. – yes some fall outside, but not the majority

But I had fleeting faith that maybe they’d fixed the problem. So I went to get the firmware update for a nearly two-year-old MicroServer N40L, which had a critical firmware bug keeping it from installing a couple of current OSes. Turns out it’s not a “safety and security” fix, and my system apparently came with a one-year warranty.

So if I want to run a current Windows OS, I either have to spend more on the support contract than I did on the server (if I can even find the support contract anymore), or go with an aftermarket third-party reverse-engineered firmware (which, unlike HP’s offerings, actually enhances functionality and adds value).

Or I can go with the option that I suspect I and many other hobbyists, home lab users, influencers, and recommenders will: simply purchase servers from companies that respect their customers.

What should HP be doing instead?

The “industry best practices” HP should be subscribing to include open access to industry standard server firmware that fixes bugs they delivered, not just vaguely declared “safety and security” upgrades, much as every other industry standard server vendor except Oracle does. That includes Dell, Cisco, Supermicro, Fujitsu, NEC, Lenovo/IBM, and probably a number of other smaller players.

@johnobeto @tinkertwinsathp I don't see how a guy selling a used server with current firmware dilutes the brand more than w/old firmware

As my friend Howard Marks noted, some of us would be satisfied with a software-only or firmware-only support contract. On-site hardware maintenance isn’t necessary or even affordable for many of us. Many of us who buy used servers would be better off buying an extra server for parts, and most of us buying used servers know how to replace a part or swap out a server. Some of us even better than the vendor’s field engineers.

HP has been silent on this matter for over a month now, as far as I can tell. The “Master Technologists” from HP who won’t distinguish an MDS router from an x86 server have gone silent. And I’m sure many of the “customers for life” whom the 30-year HP veteran graciously invites to keep buying support contracts will start looking around, unless there’s a critical feature in HP servers that they need.

So where do we go from here?

I can no longer advocate HP servers for people with budgets containing fewer than 2 commas, and even for those I’d suggest thinking about what’s next. There are analogous or better options out there from Dell, Cisco, Supermicro, Fujitsu, NEC, Lenovo, and, for the smaller lab form factors, Intel, Gigabyte, Shuttle, and others. (It’s also worth noting that most of those provide fully functional remote management without an extra license cost.)

If you do want to go with HP, or if you can’t replace your current homelab investment, there are ways to find firmware out there (as there have been in the past for Sun^wOracle Solaris). It took me about 15 minutes to find the newly-locked-down MicroServer firmware, for example. It didn’t even require a torrent. I can’t advocate that path, as there may be legal, ethical, and safety concerns, but it might be better than going without, at least until you can replace your servers.

And I’ve replaced most of my HP servers in the lab with Dell servers. One more to go. If anyone wants to buy a couple of orphaned DL servers in Silicon Valley (maybe for parts), contact me.

If anyone else has seen any clarity or correction in the state of FCoEgate or FirmwareGate in the last month or so, let me know in the comments. I’d love to be wrong.


Cloud Connect Summit is co-located with Interop this week in Las Vegas, Nevada. This is part of a series of highlights from my experience here. Disclaimers where applicable will follow the commentary. Check interop.com for presentation materials if available.

I usually don’t give a lot of focus to keynotes, because I have conference-strophobia or something like that. A room with thousands of people in it is rather uncomfortable for me. And so are buzzwords.

However, Cloud Connect opened with one speaker I know and have spoken with before, another whose business I am familiar with, and a third guy I didn’t know, who I had to assume either did something wrong at a past conference or is on par with the first two speakers.

Adrian Cockcroft probably needs no introduction.

Mark Thiele is a well-known figure in the datacenter and colocation world. He is currently executive VP and evangelist of datacenter technology for Switch, known for their SUPERNAP datacenters here in Las Vegas and elsewhere.

And the poor guy who got stuck between them… Chris Wolf is CTO Americas of VMware.

Okay, maybe Adrian deserves an introduction

It’s no surprise that Adrian Cockcroft focused on implementing and migrating to cloud. If you’ve seen him speak in the past 4 years it’s probably been about what Netflix was going to do, was doing, or has already done in their migration to an entirely-off-premises cloud-based solution (AWS). He’s now in the venture capital world with Battery Ventures, guiding other companies to do things similar to what Netflix did.

I first met Adrian at a BayLISA meeting in 2009. I’d been a fan since his Sun days; he wrote *the* Sun Performance and Tuning book in the 90s, and you would be hard-pressed to find a Solaris admin who hadn’t read it, along with Brian Wong’s Configuration and Capacity Planning book. In 2009, he talked about dynamically spinning up and down AWS instances for testing and scaling. It was an uncommon idea at the time, but nowadays few would imagine an environment that didn’t work that way (other than storage-heavy/archival environments). I had a long ad-hoc chat with him at the last free DevOps Days event in Sunnyvale, where he predicted the SSD offerings for AWS a couple of months before they happened.

As most of my readers already know, Netflix has had to build their own tools to handle, manage, and test their cloud infrastructure. With a goal of having no dependencies on any given host, service, availability zone, or (someday) provider, you have to think about things differently, and vendor-specific tools and generic open source products don’t always fit. The result is generally known as NetflixOSS, and is available on GitHub and the usual places.

When Adrian asked who in the room was using Netflix’s OSS offerings, somewhere between a third and half of the attendees raised their hands. Fairly impressive for a movement that just four years ago brought responses of “there’s no way that could work, you’ll be back in datacenters in months.”

One key point he made was that if you’re deploying into a cloud environment, you want to be a small fish in a big pond, not a shark in a small pond. Netflix had to cope with the issues of being that shark for some time; if you are the largest user of a product you will likely have a higher number of issues that aren’t “oh we fixed that last year” but more “oh, that shouldn’t have happened.” Smaller fish tend to get the benefits of collective experience without having to be guinea pigs as much.

I’ve felt the pain of this in a couple of environments in the past, and I’m not even all that much of a bleeding edge implementer. It’s just that when you do something bigger than most people, the odds of adventure are in your favor.

The Good, The Bad, and The Ugly

The talk was called “The Good, The Bad, and The Ugly,” taking into consideration the big cloud announcements from Amazon’s AWS and Google Cloud Platform. There is plenty of coverage of these announcements elsewhere (I’ll link as I find other coverage of Monday’s comparison), but in short, there are improvements, glaring omissions, and a substantial lack of interoperability/exchange standards.

Azure’s greatest strength and greatest weakness is that it focuses almost entirely on the Windows platforms. Most companies, however, are apparently not moving *to* Windows, but away from it, if they are making a substantial migration at all. Linux dominates large-scale virtual hosting, and to be a universal provider, an IaaS/PaaS platform has to handle the majority platform as well as the #2 platform.

The unicorn in the cloud room is likely to be interchangeability between cloud providers. There are solutions for resilience within Amazon or within Google platforms, but it’s not so easy to run workloads across providers without some major bandaids and crutches. So far.

Time for Q&A: SLAs and where Cloud still doesn’t fit

Two questions were presented in this section of the opening keynote.

The first question was around service level agreements (SLAs). SLAs are a tradition in hosted services, server platforms, network providers, and the like, but you don’t see them offered on cloud platforms very often. You might think there were guarantees, based on the ruckus raised by single-availability-zone site owners during AWS outages over the past 2-3 years, but the key to making AWS (or other platforms) work is pretty much what Netflix has spent the last few years doing: making the service work around any outage it can.

This isn’t easy, or it would’ve been done years ago and we wouldn’t be talking about it. And my interpretation of Adrian’s response is that we shouldn’t expect to see them anytime soon. He noted that the underlying hardware is no less reliable than the servers you buy for your physical datacenter. And if you’re doing it right, you can lose servers, networks, entire time zones… and other than some degradation and loss of redundancy, your customers won’t notice.
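The “work around any outage” idea boils down to never depending on a single endpoint. Here’s a minimal, purely illustrative Python sketch of that pattern; the zone names and fetch function are made up for the example, and this is not Netflix’s actual tooling.

```python
# Illustrative failover sketch: try a list of service endpoints in order,
# falling back to the next one when a call fails. Endpoint names and the
# fetch function are hypothetical, not any real provider's API.

def fetch_with_failover(endpoints, fetch):
    """Return the first successful result, trying each endpoint in turn."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch(endpoint)
        except Exception as e:  # a real client would catch narrower errors
            last_error = e
    raise RuntimeError("all endpoints failed") from last_error

# Example: the second "zone" serves the request even though the first is down.
def fake_fetch(endpoint):
    if endpoint == "us-east-1a":
        raise ConnectionError("zone outage")
    return f"response from {endpoint}"

print(fetch_with_failover(["us-east-1a", "us-east-1b"], fake_fetch))
# prints: response from us-east-1b
```

The real work, of course, is in making every service tolerate this kind of rerouting, which is what took Netflix years.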

The second question was heralded by Bernard Golden of enStratius Networks thusly, I believe:

I’ve taken to asking companies and tech advocates where their solutions don’t fit… because there is no universal business adapter (virtual or otherwise), and it’s important to have a sense of context and proportion when considering anything technological. If someone says their product fits everywhere, they don’t know their product or their environment (or either).

Adrian called out two cases where you may not be able to move to a public cloud: Capacity/scale, and compliance-sensitive environments.

Capacity and scale go back to the shark-in-a-small-pond conundrum. Companies on the scale of Google and Facebook don’t have the option to outsource a lot of their services, as there aren’t any providers able to handle that volume. But even a smaller company might find it impractical to move their data and processing environment outside their datacenter, depending on the amount and persistence of storage, along with other factors. If you’ve ever tried to move several petabytes even between datacenters, you’ll know the pain that arises, in time, technological complexity, cost, or all three.
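A quick back-of-the-envelope calculation shows where the pain comes from. Assuming a dedicated 10 Gbps link running at full utilization the whole time (a generous assumption):

```python
# Back-of-the-envelope: time to move 1 PB over a saturated 10 Gbps link.
petabytes = 1
bits_to_move = petabytes * 8 * 10**15  # 1 PB = 8e15 bits (decimal units)
link_bps = 10 * 10**9                  # 10 Gbps, assumed 100% utilized

seconds = bits_to_move / link_bps
days = seconds / 86400
print(f"{days:.1f} days")  # about 9.3 days per petabyte
```

And that's before protocol overhead, retries, verification, or the fact that the source data keeps changing while you copy it.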

Compliance issues are a bit easier to deal with–only slightly, mind you. As Adrian mentioned, they’re having to train auditors and regulators to understand cloud contexts, and as that process continues, people will find it easier to meet regulatory requirements (whether PCI, HIPAA, 404, or others) using current-decade technological constructs.

So where do we go from here?

My take: Cloud may be ubiquitous, but it’s not perfect (anyone who tells you otherwise is trying to sell you something you don’t need). As regulatory settings catch up to technology, and as cloud service providers realize there’s room for more than one in the market, we’ll hopefully see more interoperability, consistent features across providers, and a world where performance and service are the differentiating factors.

Also, there is still technological life outside the cloud. And once again, anyone who tells you otherwise is trying to sell you a left-handed laser spanner. For the foreseeable future, even the cloud runs on hardware, and some workloads and data pipelines still warrant an on-premises solution. You can (and should) still apply the magic wands of automation and instrumentation to physical environments.

Disclaimers:

I am attending Interop on a media/blogger pass, thanks to the support of UBM and Tech Field Day. Other than the complimentary media pass, I am attending at my own expense and under my own auspices. No consideration has been provided by any speakers, sponsors, or vendors in return for coverage.

ABOUT INTEROP®

Interop® is the leading independent technology conference and expo series designed to inform and inspire the world’s IT community. Part of UBM Tech’s family of global brands, Interop® drives the adoption of technology, providing knowledge and insight to help IT and corporate decision-makers achieve business success. Through in-depth educational programs, workshops, real-world demonstrations and live technology implementations in its unique InteropNet program, Interop® provides the forum for the most powerful innovations and solutions the industry has to offer. Interop Las Vegas is the flagship event held each spring, with Interop New York held each fall, and annual international events in India, London and Tokyo, all produced by UBM Tech and partners. For more information about these events visit www.interop.com.

tl;dr: Gallifrey One 2015 tickets are all sold out. Ticket transfers open in October, at face value. There is no shortage of hotel rooms. Gallifrey One will not be expanding. And what’s with the kidneys?

Today at 10am Pacific time, Gallifrey One sold 3200 tickets to the 2015 convention in seventy-five minutes.

The 2015 Gallifrey One convention next February is sold out (in just 75 minutes). See you next year! #gally1

DISCLAIMER: While you’re pondering, I’ll remind you that this is an unofficial site not affiliated with Gallifrey One, and I’m just a guy who’s been to six Gallifrey One conventions and likes to try to help folks who are attending or want to attend.

That’s 7/10 of a ticket every second on average. Let’s round up. One ticket a second, for a show…
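For the curious, here’s the quick arithmetic behind that rate:

```python
# 3200 tickets sold in 75 minutes, expressed as tickets per second.
tickets = 3200
seconds = 75 * 60
rate = tickets / seconds
print(round(rate, 2))  # 0.71, about seven-tenths of a ticket per second
```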


Welcome to RSTS11…

I’m a 17-year veteran of Silicon Valley/Bay Area system administration, now retired from doing Real Work(tm). I’ve done networks, storage, IT, operations, caffeine procurement, and just about anything else that plugs in or acts like it. I’ve worked in 149-person and 149,000-person companies.

Today I work for Cisco designing solutions and telling stories around big data and analytics. See the links above for disclosures and caveats to my coverage here.

My thoughts here are my own, and should not be taken to represent any company or entity other than me.