
I was on a call earlier, and a thought struck me again. I’ve been seeing people all over the globe use precious metals to describe their service levels, resources, and properties. And you know what? It doesn’t work!

I see examples every day. Say I’ve created a service offering, and it goes by the name “Platinum”: you get 4 of the fastest servers out there, 512GB of RAM per server, and we’ll throw in some SSDs.

So, what do I do next year?

Since platinum is still platinum, what happens when the servers I order next don’t have the same CPU frequency? Or when people expect double the amount of RAM per server? Maybe the price of solid state disks goes down, and I can suddenly get double or triple the capacity for the same amount of money (well, maybe not next year, but what about the year after)?

When you offer an internal service, it’s key to think about what you are actually offering. Are you describing your service? If so, a generic name might not be bad. Car manufacturers have been doing this for ages – “Get the new XYZ executive edition!” – and while the model name rarely changes with a revision, they add a year or an internal version number to distinguish between revisions. And if you ordered your car just prior to the new launch? Well, you’re out of luck, but they’ll gladly sell you the newer version.

Now change places, and take on the role of the car manufacturer. Would you still call your currently fastest model “Platinum”, knowing that in two weeks’ time you’ll be working on an even faster engine?

No you wouldn’t!

You would pick something that describes the product (or service) you are going to offer. If you want to offer that server class I mentioned before, pick something sensible. Describe what the service does, perhaps add a revision number or a time stamp. Instead of calling it “Jumbo-servers Platinum”, call it “Jumbo-servers Q1 2012, 4x XYZ virtualization server, quad core, 512GB RAM, 1 SSD 120GB”.
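To make the idea concrete, here is a minimal sketch of such a naming convention in Python. The `service_name` helper and its parameters are my own hypothetical illustration, not anything from an actual catalog tool: it stamps a base name with the quarter and year of the revision, then appends the spec list, so next year’s offering automatically gets a distinguishable name.

```python
from datetime import date

def service_name(base, specs, revision_date=None):
    """Build a descriptive, versioned service name instead of a vague metal tier."""
    revision_date = revision_date or date.today()
    # Derive the quarter (1-4) from the month of the revision date.
    quarter = (revision_date.month - 1) // 3 + 1
    stamp = f"Q{quarter} {revision_date.year}"
    return f"{base} {stamp}, {', '.join(specs)}"

# The server class from the text, revised in early 2012:
name = service_name(
    "Jumbo-servers",
    ["4x XYZ virtualization server", "quad core", "512GB RAM", "1 SSD 120GB"],
    date(2012, 1, 15),
)
print(name)
# Jumbo-servers Q1 2012, 4x XYZ virtualization server, quad core, 512GB RAM, 1 SSD 120GB
```

When the hardware changes, only the spec list and the date change; the name stays honest on its own.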

And if you can’t make the name that long, think of useful shorter codes. Spread the word: show people that using gold, silver and bronze as service levels isn’t a good way to start a new project, and tell them that gold isn’t going to be gold in a year.

Oh, and before someone on Twitter says something. Unobtanium is cool, but I wouldn’t want it as a name. Nor would I prefer Yuan Renminbi, atomic weights or Apple specs. Although those gave me a good chuckle!


I was at an EMC training last week. To be more specific, I attended the “Virtualized Data Center and Cloud Infrastructure” training, and took the “E20-018 – Virtualized Infrastructure Specialist for Cloud Architects” exam afterwards.

First off, I passed the exam, which I’m happy about. Second, I’m still not sure how happy I am about the training and the exam itself.

The training

The training itself is a week-long, instructor-led course. It covers several aspects of what would or could be considered elements of a virtualized data center, as well as cloud technologies.

You start off with an overview of some of the definitions that make up cloud computing. A key reference here is the definition of cloud as defined by NIST. There are references to the three service models of cloud computing, to the phases you usually see when building a cloud infrastructure, and to the five key characteristics of a cloud delivery model.

You also get an overview of the technologies that can be used to deliver a cloud-like infrastructure. That includes things like different synchronization models, hypervisor types, link speeds (think of dark fibre as an example), and technologies used within a virtualized environment, ranging from live migration to the ability to offer services and service catalogs in a self-service environment.

You also take a stab at topics like governance, risk and compliance, and get an idea of the things you can run into when you create or even work with such an environment. You will see references to things like the Sarbanes–Oxley Act, and to business-driven frameworks and best practices like ITIL.

Mix that up with some labs that focus on giving you food for thought when designing a cloud-like infrastructure (there is no hands-on work in the labs, just paper and teamwork), and it sounds like you have a pretty decent training.

Or does it?

I have two main problems with the course itself. For one, I think it’s based too much on standards and concerns out of the US. People absolutely need to be aware of things like the Sarbanes–Oxley Act, but models and concerns should also be highlighted for companies that are not bound by this act. Or at least explain to those people why the rationale behind these acts might be worth implementing anyway, beyond the “you legally have to” argument.

Don’t get me wrong, you need to consider these kinds of things, especially in a cloud-like environment where you can cross geographic and legal boundaries much more quickly, but then also focus on things that you might encounter outside of the United States.

The second concern is much harder to address, and I think it’s the bigger issue: you might get the feeling that you are not being taught anything new. In a sense, anyone who has been working on “* as a Service” environments will not really encounter anything new, and might feel that EMC is just stating the obvious in this course. Techies who attend the training will probably leave with a feeling of “not enough examples and hands-on, too much fluffy stuff”.

I think they are partially right. For me, a large part of it was material I had already encountered more than once. On the other hand, it’s good to see that people are working on standards, and the course brings together a lot of the things that you encounter but might not knowingly consider. It’s “food for thought”, if you will. And as with all such classes, you get a chance to exchange ideas and perspectives with your classmates, which might be the most valuable thing about this class, especially with a topic that is hard to grasp (from an “actual technology used/where do I start” standpoint).

The exam

Read carefully! That’s the biggest piece of advice I can give you, since the wording of the questions is sometimes just plain odd. Also, make sure you have an idea of the technology used in data centers, and of virtualization across different tiers, such as storage and network virtualization.

Spend some time getting to know what governance, risk and compliance is all about, and gain some insight into the driving factors behind GRC.

The certification can be achieved even without the training, but if you already have some experience designing or working in cloud-like environments, I feel the way some questions are worded is perhaps the biggest obstacle you will face in passing the exam.

And let me wish you good luck if you attempt the certification! One important note: the EMC E20-001 is a mandatory prerequisite, so make sure you have that one if you want to be an EMC certified cloud architect.

When you come to think of it, people who work in the IT sector are all slightly nuts. We all work in an area that is notorious for trying to make itself unnecessary. When we find repetitive tasks, we try to automate them. When we have a feeling that we can improve something, we do just that. And by doing so, we try to remove ourselves from the equation wherever we can. In a sense, we try to make ourselves invisible to the people working with our infrastructure, because a happy customer is one that doesn’t even notice we are there, or that we did something to allow him to work.

Traditional IT shops were loaded with departments responsible for storage, for networking, for operating systems and plenty more. The one thing each department had in common? They tried to make everything as easy and smooth as possible. Usually you would find loads of scripts that performed common tasks, automated installations, and processes intended to take the effort off the admins.
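As a hypothetical illustration of the kind of small script those admins write, here is a Python sketch that takes one repetitive chore off someone’s plate: scanning free disk space across a fleet and flagging the hosts that need attention. The host names, threshold, and inventory are made up for the example; in practice the numbers would come from an agent, SSH, or a monitoring API.

```python
# Hosts with less free space than this (in GB) get flagged.
WARN_THRESHOLD_GB = 20

def hosts_needing_attention(free_space_gb):
    """Return hosts whose free space is below the threshold, worst first."""
    low = {host: gb for host, gb in free_space_gb.items() if gb < WARN_THRESHOLD_GB}
    return sorted(low, key=low.get)

# A made-up inventory mapping host name -> free space in GB.
inventory = {"db01": 12.5, "web01": 84.0, "web02": 7.2, "app01": 33.1}

for host in hosts_needing_attention(inventory):
    print(f"{host}: {inventory[host]:.1f} GB free")
# web02: 7.2 GB free
# db01: 12.5 GB free
```

Nothing fancy, and that is the point: ten lines like these, run on a schedule, quietly replace a manual check nobody misses doing.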

Then in comes a new technology that allows me to automate even more, that removes the hassle of choosing the right hardware, that helps me reduce downtime due to (un)planned maintenance, and that saves me from worrying about operating system drivers and the like. It’s a technology people refer to as server virtualization. It’s wonderful, and it helps me automate yet another layer.

All of the people who are into tech will now say “Cool! This can help me make my life easier”, and your customer will thank you, because it’s an additional service you can offer and it helps your customer work. But the next question your customer is going to ask is probably something along the lines of “Why can’t I virtualize the rest?”, or perhaps even “Why can’t I virtualize my application?”. And you know what? Your customer is absolutely right. Companies like VMware are already sensing this, as can be read in an interview at GigaOM.

The real question your customer is asking is more along the lines of “Who cares about your hardware or operating system?!”. And as much as it pains me to say it (being a person who loves technology), it’s a valid question. When it comes to true virtualization, why should it bother me whether I am running on Windows, Unix, Mac or Linux? Who cares if there is an array in the background that uses “one point twenty-one jiggawatts” to transport my synchronously mirrored historic data back to the future?

In the long run, I as a customer don’t really care about either software or hardware. As a customer I only care about getting the job done, in the way I expect, preferably as cheaply as possible and with the availability I need. In an ideal world, the people and the infrastructure in the back are invisible, because that means they did a good job, and I’m not stuck wondering what my application runs on.

This is the direction we are working towards in IT. It’s nothing new; the pendulum between centralized and decentralized approaches seems to swing from decade to decade, but the one constant is that your customer only cares about getting the job done. So, it’s up to us. Let’s get the job done and try to automate the heck out of it. Let’s remove ourselves from the equation, because time your customer spends talking to you is time spent not using his application.

Stevie Chambers wrote something in a tweet last night. He stated the following:

The problem with an IT specialist is that he only gets to do the things he’s already good at, like building a coffin from the inside.

And my first thought was that he’s absolutely right. A lot of the people I know are absolute specialists in their own area. I’ll talk to the colleagues over in the Windows team, and they can tell you everything about the latest version of Windows and know every nook and cranny of their systems. I’ll talk to the developers, and they can write seemingly impossible stuff for their ABAP Web Dynpro installations.

But then I ask my developers what effect a certain OS parameter will have on their installation. Or how the read and write response times of the storage array in the back end might influence the overall time an end user spends waiting for his batch job to complete. And what answer do you get, a lot of the time? Just a blank stare, or, if you are lucky, a shrug of the shoulders. They’ll tell you that you need to talk to the experts in that area. It’s not their thing, and they don’t have the time, knowledge or interest, or simply aren’t allowed to help you in other areas.

So what about our changing environment? In environments where multiple tenants are common? Where we virtualize, thin provision and dedupe our installations and create pointer-based copies of our systems? Where oversubscription might affect performance levels? The fact is that we are moving away from isolated solutions and toward a solution stack. We no longer care about a single installation of Linux on a piece of hardware; we need to troubleshoot how the database in our Linux VM interacts with our ESX installation and the connected thin-provisioned disks.

To be an effective administrator, I will need to change. I can’t be the absolute expert in all areas; the amount of information would be overwhelming, and I wouldn’t have the time to master all of it. But being an expert in only one area will definitely not make my job easier in the future. We will see great value in generalists who can comprehend the interactions of the various components that make up a stack, and who can do a deep dive when needed, or gather expertise for specific problems or scenarios when they need to.

Virtualization and the whole “* as a Service” model aren’t changing the way any of the individual components work, but they do change how the components interconnect. Since we are delivering new solutions as a stack, we also need to troubleshoot the stack, and that can’t always be done with the classical approach. In a way this is a bigger change for the people supporting the systems than it is for the people actually using them.