We'll see how SAI Global has brought advanced backup and disaster recovery (DR) best practices into play for its users and customers. We'll further learn how this has not only provided business-continuity assurance, but also improved data lifecycle management and virtualization efficiency.

Here to share more detail on how standardizing DR has helped improve many aspects of SAI Global’s business reliability, please join me now in welcoming Mark Iveli, IT System Engineer at SAI Global, based in Sydney, Australia. Welcome to BriefingsDirect, Mark. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Mark Iveli: Hi, Dana. Thanks for having me.

Gardner: My pleasure. Let’s start from a high level. What do you think is different about DR, the requirements for doing good DR now versus five years ago?

Iveli: At SAI Global we had a number of business units that all had different strategies for their DR and different timings and mechanisms to report on it.

Through the use of VMware Site Recovery Manager (SRM) in the DR project, we've been able to centralize all of the DR processes, provide consistent reporting, and be able to schedule these business units to do all of their testing in parallel with each other.

So we can run a DR session, so to speak, within the business, just walk through the process for them, and give them their reports at the end of it.

Gardner: It sounds like a lot of other aspects of IT. Things had been done differently within silos, and at some point, it became much more efficient, in a managed capacity, to do this with a strategic perspective, a systems-of-record perspective. Does that make sense?

Complete review

Iveli: Absolutely. The initiative for DR started about 18 months ago with our board, and it was a directive to improve the way we had been doing things. That meant a complete review of our processes and documentation.

When we started to get into DR, we handled it from an IT point of view and it was very much like an iceberg. We looked at the technology and said, "This is what we need from a technology point of view." As we started to get further into the journey, we realized that there was so much more that we were overlooking.

We were working with the businesses to go through what they had, what they didn’t have, what we needed from them to make sure that we could deliver what they needed. Then we started to realize it was a bigger project.

The first 12 months of this journey have been all about cleaning up: getting our documentation up to spec and making sure that every business unit understood and was able to articulate its environment well. Then we brought all of that together so that we could ask what technology was going to encapsulate all of these processes and documentation to deliver what the business needs, which is our recovery point objective (RPO) and recovery time objective (RTO).

Gardner: All right. Before we delve a bit deeper into what DR is doing for you and maybe tease out a bit more about this whole greater than the sum of the parts, tell us about SAI Global and your responsibilities and specifically how you got involved with this particular project.


Iveli: I'm a systems engineer with SAI Global, and I've been with the company for three years. When the DR project started to gather some momentum, I asked to be a significant part of the project. I got the nod and was seconded to the DR project team because of my knowledge of VMware.

That’s how I got into the DR project. I've spent a lot of time now working with SRM and I've become a lot less operational. I've had a chance to be in front of the business and do a little bit of the business-analyst work of IT: working with these business units and saying, "This is what your application is doing, and this is what we can see it doing through the use of Application Discovery Manager. Is this what you guys know your applications to do?"

We've worked through those rough edges to bring together their documentation. They would put it together, we would review it, we would all then sit around and agree on it, and put the information into the DR plans.

From the documentation side of things, I've worked with the project manager and our DR manager to say, "This is how we need to line up our script. This is how we need to create our protection grid. And this is how the inventory mappings are all going to work from a technical point in SRM."

Gardner: Just briefly, what is SAI Global about? Are you in the business of helping people manage their standards and provide compliance services?

Umbrella company

Iveli: SAI Global is an umbrella company. We have three to four main areas of interest. The first one, which we're probably best known for, is our Five Ticks brand, and that’s the Australian standards. The publication, the collection, and the customization to your business are all done through the publishing section of the business.

That then flows into an assurance side of the business, which goes out and does auditing, training, and certification against the standards that we sell.

We continue to buy new companies, and part of the acquisition trail that we've been on has been to buy some compliance businesses. That’s where we provide governance, risk, and compliance services through the use of Board Manager, GRC Manager, Cintellate, and, in the U.S., Integrity 360.

Finally, last year, we acquired a company that deals solely in property settlement. It's quite a significant section of the business that works a lot with banks and conveyancing firms in handling property settlements.

So we're a little bit diverse. All three of those business sections have their own IT requirements.

Gardner: I suppose, like many businesses, your brand is super important. The trust associated with your performance is something you will take seriously. So DR, backup and recovery, business continuity, are top-line issues for you.


Is there anything about what you've been doing as a company that you think makes DR specifically important for you, or is this just generally something you think all businesses really need to master?

Iveli: From SAI Global’s point of view, because of what we do, especially around the property settlement and interactions with the banks, DR is critical for us.

Our publishing business feels that its website needs to be available at five nines. When we showed them what DR is capable of doing, they really jumped on board and supported it. They made DR a high priority.

As far as businesses go, everyone needs to be planning for this. I read an article recently where something like 85 percent of businesses in the Asia-Pacific region don’t have a proper DR strategy in place. With the events that have happened here in Australia recently with the floods, and when you look at the New Zealand earthquakes and that sort of stuff, you wonder where the businesses are putting DR and how much importance they've got on it. It’s probably only going to take a significant event before they change their minds.

Gardner: I was really intrigued, Mark, when you said what DR is capable of doing. Do you feel that there is a misperception, perhaps an under-appreciation of what DR is? What is this larger whole that you're alluding to that you had to inform others in your organization about?

Process in place

Iveli: The larger whole was just that these business units had a process in place, but it was an older process and a lot of the process was designed around a physical environment.

With SAI Global being almost 100 percent virtual, moving them into a virtual space opened their minds up to what was possible. So when we can sit down with the business units and say, "We're going to do this DR test," they ask if it will impact production. No, it won’t. How is it happening? "Well, we are going to do this, this, and this in the background. And you will actually have access to your application the way it is today, it’s just going to be isolated and fenced off."

They say, "This is what we've been waiting for." We can actually do this sort of stuff. They're starting to see and ask, "Can we use this to test the next version of the applications and can we test this to kind of map out our upgrade path?"

We're starting to move now into a slightly different world, but it has been the catalyst of DR that’s enabled them to start thinking in these new ways, which they weren’t able to do before.

Gardner: So being able to completely switch over and recover with very little interruption in terms of the testing, with very little downtime or loss, the opportunity then is to say, "What else can we do with this capability?"


I have heard about people using it for migrations and for other opportunities to literally move their entire infrastructure, their virtual assets. Is that the sort of thing you're getting at -- that this is larger than DR? It’s really about being able to control, manage, and move your assets?

Iveli: Absolutely. With this new process, we've taken the approach of baby steps, and we're just looking to get some operational maturity into the environment first, before we start to push the boundaries and do things like disaster avoidance.

Having the ability to just bring these environments across in a state that’s identical to production is eye-opening for them. Where the business wants to take it is the next challenge, and that’s probably how do we take our DR plan to version 2.0.

We need to start to work with the likes of VMware and ask what our options are now. We have this in place, people are liking it, but they want to take it into a more highly available solution. What do we do next? Use vCloud Director? Do we need to get our sites in an active/active pairing?

However, whatever the next technology step is for us, that’s where the business is now starting to think ahead. That’s nice from an alignment point of view.

Gardner: Now, you mentioned that your organization is almost 100 percent virtualized. It’s my understanding from a lot of users as well that being highly virtualized provides an advantage and benefit when heading to DR activities. Those DR maturation approaches put you in a position to further leverage virtualization. Is there sort of a virtuous adoption pattern, when you combine modern DR with widespread virtualization?

Outside the box

Iveli: Because all of a sudden, your machines are just a file on a data store somewhere, and you can move these things around. As the physical technologies continue to advance -- the speed of our networks, the speed of the storage environments, metro clustering, long haul replication -- these technologies are allowing businesses to think outside of the box and look at ways in which they can provide faster recovery, higher availability, more elastic environments.

You're not pinned down to just one data center in Sydney. You could have a data center in Sydney and a data center in New Zealand, for instance, and we can keep both of those sites online and in sync. That’s a couple of years down the track for our business, but that’s a possibility through the use of more virtualization technology.

Gardner: Perhaps another way to look at it is that your investment in getting to a high level of server virtualization pays dividends when you move to advanced DR. Is that fair?

Iveli: Yes, that’s a fair comment, a fair way to sum it up.

Gardner: Tell us a little bit about your use of VMware vCenter SRM. What version are you using now and have you been progressing along rapidly with that?

Iveli: We've installed SRM 4.1, and our installation was handled by an outsourcing company, VCPro. They were engaged to do the installation and help us get the design right from a technical point of view.


Trying to make it a daily operational activity is where the biggest challenge is, because the implementation was done with a project methodology. Handing it across to the operational teams to make it a daily operation, or a daily task, is where we're seeing some challenges. A new contract admin comes on board, and they don’t quite understand the environment. So they put a machine in the wrong spot, or storage gets provisioned that isn't being replicated even though it's meant for a P1 recovery ranking.

That’s what my role is now -- keeping the SRM environment tuned and in line with what the business needs. That’s where we're at with SRM.

Gardner: Certainly, the constant reliability and availability of all your assets, regardless of external circumstances, is the number one metric, but are there any other metrics during your journey, as you called it, that you can point to that indicate whether you have done this right, or what it pays back -- reliability certainly, but what else is there in terms of a measurement of success?

Iveli: That's an interesting question. When I put this to the DR team yesterday, the only real measurements we've had have been the RPO and the RTO. As long as all the data that we needed was being replicated inside the 15-minute timeframe, that was one of our measurements.

Timely manner

Through the use of the HP Enterprise Virtual Array (EVA) monitoring, we've been able to see and ensure that our DR groups are being replicated correctly and in a timely manner.

The other one was the RTO, which we've been able to measure from the SRM report showing us the time it has taken to fail over these machines. So we're very confident that we can meet both our RPO and RTO through the use of these metrics.
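The RPO check Iveli describes comes down to a simple comparison: the replication lag, measured from the last replicated write, has to stay inside the 15-minute window. As a minimal illustrative sketch (the function name and timestamps here are hypothetical, not SAI Global's actual EVA monitoring tooling):

```python
from datetime import datetime, timedelta

# Maximum tolerable data-loss window (the 15-minute RPO mentioned above)
RPO = timedelta(minutes=15)

def rpo_met(last_replicated: datetime, now: datetime,
            rpo: timedelta = RPO) -> bool:
    """Return True if replication lag is within the recovery point objective."""
    return (now - last_replicated) <= rpo

now = datetime(2012, 3, 1, 12, 0, 0)
print(rpo_met(datetime(2012, 3, 1, 11, 50), now))  # 10-minute lag: within RPO
print(rpo_met(datetime(2012, 3, 1, 11, 40), now))  # 20-minute lag: RPO breached
```

The RTO check is analogous, comparing the failover duration reported by SRM against the agreed recovery-time target.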

Gardner: Any advice for those listening in who are beginning their journey? For those folks that are recognizing the risks and seeing these larger benefits, these more strategic benefits, how would you encourage them to begin their journey, what advice might you offer?

Iveli: The advice would be to get hired guns in. With DR, you're not going to be able to do everything yourself. So spend a little bit more money and make sure that you get some consultants in like VCPro. Without these guys, we probably would have struggled a little bit just making sure that our design was right. These guys ensured that we had best practice in our designs.

Before you get into DR, do your homework. Make sure that your production environment is pristine. Clean it up. Make sure that you don’t have anything in there that’s wasting your resources.

Come around with a strong business case for DR. Make sure that you've got everybody on board and you have the support of the business.


When you get into DR, make sure that you secure dedicated resources for it. Don't just rely on people coming in and out of the project. Commit people to it, and make sure that they are fully engaged in the design aspects and the implementation aspects.

And as you progress with DR, incorporate it as early as you can into your everyday IT operation. We're seeing that because we held it back from our operations team. When we handed it over and had them manage the hardware, the ESX layer, and the logical layers of the environment, they were struggling just to get their heads around it: what was what, where should this go, where should that go.

And once it’s in place, celebrate. It can be a long haul. It can be quite a trying time. So when you finally get it done, make sure that you celebrate it.

Gardner: And perhaps a higher degree of peace of mind that goes with that.

Iveli: Well, you'll find out when you get through it, how much easier this is making your life, how much better you can sleep at night.

Gardner: Well, great. We've been talking about business standards and compliance provider, SAI Global, and how they have benefited from a strategic view of IT-enabled DR processes and methods.

I'd like to thank our guest, Mark Iveli. He is IT System Engineer at SAI Global. I appreciate your time, and it was very interesting. Thank you, Mark.

Iveli: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks also to our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored podcast on how compliance services provider SAI Global successfully implemented a disaster recovery project with tools from VMware. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

We'll see how small business Myron Steves made a bold choice to go essentially 100 percent server virtualized in 90 days. That then set the stage for a faster, cheaper, and more robust DR capability. It also helped them improve their desktop-virtualization delivery, another important aspect of maintaining constant business continuity.

Based in Houston, Texas, and supporting some 3,000 independent insurance agencies in that region, with many of the properties it protects in the active hurricane zone on the Gulf of Mexico, Myron Steves needs to have all systems up and available if and when severe storms strike. To help those affected, employees need to be operational from home, if necessary, when a natural disaster occurs. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

We'll learn how the IT executives at Myron Steves adopted an advanced DR and virtualization approach to ensure that it can help its customers -- regardless of the circumstances. At the same time, they also set themselves up for improved IT efficiency and agility for years to come.

Gardner: I am doing great. Thanks for being with us. We're also here with William Chambers, IT Operations Manager at Myron Steves. And welcome to you also, William.

William Chambers: Thanks. Hello. How are you?

Gardner: We're doing well. Tim, let me throw a first question out at you. Hurricane Ike, back in 2008, was the second costliest hurricane ever to make landfall in the U.S. and, fortunately, it was a near miss for you and your data center, but as I understand, this was a wake-up call for you on what your DR approach lacked.

What was the biggest lesson you learned from that particular incident, and what spurred you on then to make some changes?

Moudry: Before Hurricane Ike hit, William and I saw an issue and developed a project that we presented to our executive committee. Then, when Hurricane Ike came about, which was during this time that we were presenting this, it was an easy sell.

When Hurricane Ike came, we were on another DR system. We were testing it, and it was really cumbersome. We tried to get servers up and running. We spent a whole day there trying to recover and never got even one data center recovered.

Easy sell

When we came to VMware, we made a proposal to our executive committee, and it was an easy sell. We did the whole project for the price of one year of our old DR system.

Gardner: What was your older system? Were you doing it on an outsourced basis? How did you do it?

Moudry: We were with another company, and they gave us facilities to recover our data. They were also doing our backups.

We went to that site to recover systems and we had a hard time recovering anything. So William and I were chatting and thinking that there's got to be a better way. That’s when we started testing a lot of the other virtualization software. We came to VMware, and it was just so easy to deploy.

William was the one that did all that, and he can go on with that more later, but we just came to VMware and it became a little bit easier.

Gardner: Tell me about the requirements. What was it that you wanted to do differently or better, after recognizing that you got away with Ike, but things may not go so well the next time? William, what were your top concerns about change?

Chambers: Our top concerns were just avoiding what happened during Ike. In our building in Houston, we were without power for about a week. So that was the number one driver for virtualization.

Number two was just the amount of hardware. Somebody actually called us and said, "Can you take these servers somewhere else and plug them in and make them run?" Our response was no.

Chambers: That was the lead-in to virtualization. If we wanted everything to be mobile like that, we had to go a different route.

Gardner: So you had sort of a two-pronged strategy. One was to improve your DR capabilities, but embracing virtualization as a means to do that also set you up for some other benefits. How did that work? Was there a nice synergy between these that played off one another?

Chambers: Once you get into it, you think, "Well, okay, this is going to make us mobile, and we'll be able to recover somewhere else quicker." But then you start seeing other features you can use that benefit what you're doing, at a smaller physical size. It's just the mobility of the data itself, if you’ve got storage in place that will do it for you. Recovery times were cut down to nothing.

Simpler to manage

There was ease of backups, everything that you have to do on a daily maintenance schedule. It just made everything simpler to manage, faster to manage, and so on.

Gardner: I talk to large enterprises a lot and I hear about issues when they are dealing with 10,000 seats, but you are a smaller enterprise, about 200 employees, is that right?

Moudry: Yeah, about 200.

Gardner: And so for you as an SMB, what requirements were involved? You obviously don't have unlimited resources and you don't have a huge IT staff. What was an important aspect from that vantage point?

Chambers: It’s probably what any other IT shop wants. They want stability, up-time, manageability, and flexibility. That’s what any IT shop would want, but we're a small shop. So we had to do that with fewer resources than some of the bigger Exxons and stuff like that.

Moudry: And they don’t want it to cost an arm and a leg either.

Gardner: For the benefit of our listeners, let’s talk a little bit about Myron Steves. Tell us about the company, what you do, and why having availability of your phones, your email, and all of your systems is so important to what you do for your customers.

Moudry: We're an insurance broker. We're not a carrier. We are between carriers and agents. With our people being on the phone, up-time is essential, because they're on the phone quoting all the time. That means if we can’t answer our phones, the insurance agent down the street is going to go pick up the phone, and they're going to get the business somewhere else.


Also, we do have claims. We don't process all claims, but we do some claims, mainly for our stuff that's on the coast. After a hurricane, that’s when people are going to want that.

Now, we're trying to get greener in the industry, and we're trying to print less paper. That means we're trying to put the policies up on the website, as a PDF or something like that. Most likely, when they write the policy, they're not going to download that policy and keep it. It’s just human nature. They're going to say, "They’ve got it up there on the Web."

We have to be up all the time. When a disaster strikes, they're going to say, "I need to get my policy," and then they're going to want to go to our website to download that policy, and we have to be up. It’s the worst possible time, I guess.

Chambers: And not many people are going to pack their paper policy when they evacuate or something like that.

Gardner: So the phones are essential. I also talk with a lot of companies and I ask them, which applications they choose to virtualize first. They have lots of different rationales for that, but you guys just went kit and caboodle. Tell me about the apps that are important to you and why you went 100 percent virtualized in such a short time?

SAN storage

Chambers: We did that because we’ve got applications running on our servers -- things like rating applications, email, our core applications. A while back, we separated the data volumes from the physical servers themselves. So the data volumes are stored on a storage area network (SAN) that we access through iSCSI.

That made it so easy for us to do a physical-to-virtual (P2V) conversion on a physical server. Then in the evenings, during our maintenance period, we shut that physical server down and brought up the virtual one connected to the SAN, and we were good. That’s how we got through it so quickly.

Gardner: So having taken that step of managing your data first, I also understand you had some virtual desktop activity go on there earlier. That must have given you some experience and insights into virtualization as well.

Chambers: Yeah, it did.

Moudry: William moved us to VMware first and then after we saw how VMware worked so well, we tried out VMware View and it was just a no-brainer, because of the issues that we had before with Citrix and because of the way Citrix works. One session affects all the others. That’s where VMware shines, because everybody is on their independent session.

Gardner: I notice that you're also a Microsoft shop. Did you look at their virtualization or DR? You mentioned that Citrix didn’t work out for you. How come you didn’t go with Microsoft?


Chambers: We looked at one of their products first. We've used the Virtual PC and Virtual Server products. Once you start looking at and evaluating theirs, it’s a little more difficult to set up. It runs well, but at that time, I believe it was 2008, they didn’t have anything like vCenter Site Recovery Manager (SRM) that I could find. It was a bit slower. All around, the product just wasn’t as good as the VMware product.

Moudry: I remember when William was loading it. I think he spent probably about 30 days loading Microsoft, and he got a couple of machines running on it. It was probably about two or three machines on each host. I thought, "Man, this is pretty cool." But then he downloaded the free version of VMware and tried the same thing on that. We got it up in two or three days.

Chambers: I think it was three days to get the hosts loaded, and then vCenter and all the products, and then it was great.

Moudry: Then he said that it was a little bit more expensive, but we weighed that against the cost of all the hardware that we were going to have to buy with Microsoft. He loaded the VMware and put about 10 VMs on one host.

Chambers: At that time, yeah.

Increased performance

Moudry: Yeah, it was running great. It was awesome. I couldn’t believe that we could get that much performance from one machine. You'd think that running 10 servers on their own hardware, you would get the most performance. I couldn’t believe that those 10 servers ran just as fast on one server as they did on 10.

Chambers: That was another key benefit. The footprint of ESXi was somewhat smaller than Microsoft's.

Moudry: It used the memory so much more efficiently.

Gardner: So these are the things that are super-important to SMBs: a free version to try, ease of installation, a higher degree of automation, particularly when it comes to multiple products, and then that all-important footprint -- the cost of hardware and the maintenance and skills that go along with it. That sounds like a pretty compelling case for SMB choice.

Before we move on, you mentioned vSphere, vCenter Site Recovery Manager, and View. Is that it? Are you up to the latest versions of those? What do you actually have in place and running?

Chambers: We’ve got both in production right now, vCenter 4.1, and vCenter 5.0. We’re migrating from 4.1 to 5.0. Instead of doing the traditional in-place upgrade, we’ve got it set up to take a couple of hosts out of the production environment, build them new from scratch, and then just migrate VMs to it in the server environment.


It's the same thing with the View environment. We’ve got enough hosts so we can take a couple out, build the new environment, and then just start migrating users to it.

Gardner: As I understand, you went to 99.999 percent virtualization in three months, is that correct?

Chambers: Yes.

Gardner: Was that your time-table, or did that happen faster than you expected?

Chambers: It happened much quicker than we thought. Once we did a few of the conversions of the physical servers that we had, it went by so fast that it just happened that way. We were ahead of schedule on our timeframes and ahead on all of our budget numbers. Once we got everything in our physical production environment virtualized, we could start building new virtual servers to replace the ones that we had converted, just for better performance.

Gardner: So that's where you can bring in more of those green elements -- blades and so forth -- which you mentioned is an important angle here. Of course, you’re doing this for DR, but the process of moving from physical to virtual can be challenging for some folks. There are disruptions along the way. Did any of your workers seem put out, or were you able to do this without too much disruption in the migration process?

Without disruption

Chambers: We were able to do it without disruption, and that was one of the better things that happened. We could convert a physical server during the day, while people were still using it, or create that VM for it. Then, at night, we took the physical down and brought the virtual up, and they never knew it.

Gardner: So this is an instance where being an SMB works in your favor, because a large organization has to flip the switch on massive data centers. It's a little bit more involved. Sometimes weekends or even weeks are involved. So that’s good.

How about some help? Did you have any assistance in terms of a systems integrator, professional services, or anything along those lines?

Chambers: On the things that we’ve built here, we like to have other people come in and look at it and make sure we did it properly. So we’ll have an evaluation of it, after we build it and get everything in place.

Gardner: It sounds like you’re pretty complete though. That’s impressive. Another thing that I hear in the market is that when people make this move to virtualization and then they bring in the full DR capabilities, they see sort of a light bulb go on. "Wow. I can move my organization around, not just physically but I have more choices."


Some people are calling this cloud, as they’re able to move things around and think about a hybrid model, where they have some on their premises or in their own control, and then they outsource in some fashion to others. Now that you've done this, has this opened your eyes to some other possibilities, and what does that mean for you as an IT organization?

Chambers: It did exactly that. We’re going from a DR model to a high-availability business-continuity model, just to make sure everything is up all the time.

Moudry: That’s our next project. We’re taking what we did in the past and going to the next level, because right now we have to fail over. We’re doing SAN replication, and we have to do a failover to another site.

William is trying to get that to more of a high-availability model, where we just bring it down here and bring it up there, with a lot less downtime. So we're working on phase two of the process now.

Gardner: All right. When you say here and there, I think you're talking about Houston and then Austin. Are those your two sites?

Moving to colos

Moudry: Right now it's Houston and San Antonio, but we're moving all of our equipment to colos, and we're going to be in Phoenix and Houston. So all of the infrastructure will be in colos in Houston and Phoenix.

Gardner: So that’s even another layer of protection, wider geographic spread, and just reducing your risk in general. Let’s take a moment and look at what you’ve done and see in a bit more detail what it’s gotten for you. Return on investment (ROI), do you have any sense, having gone through this, what you are doing now that perhaps covered the cost of doing it in the first place?

Moudry: We spent about $350,000 a year on our past DR solution. We didn't renew that, and the VMware DR paid for itself within the year.

Gardner: So you were able to recover your cost pretty quickly, and then you’ve got ongoing lower costs?

Moudry: Well, we are not buying equipment like we used to. We had 70 servers and four racks. It compressed down to one rack. How many blades are we running, William?

Chambers: We're running 12 blades, and the per-year maintenance cost now is 10 percent of what it was across all the servers we had.
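
As a rough sketch of the arithmetic behind those figures (the $350,000 contract and the 10 percent maintenance ratio come from the conversation above; the per-server maintenance figure is an assumed placeholder):

```python
# Back-of-the-envelope ROI arithmetic for the consolidation described above.
# The old DR contract cost and the 10 percent maintenance ratio come from the
# transcript; PER_SERVER_MAINTENANCE is an assumed illustrative figure.

OLD_DR_CONTRACT = 350_000        # annual cost of the previous DR solution
OLD_SERVERS = 70                 # physical servers before consolidation
PER_SERVER_MAINTENANCE = 1_200   # assumed annual maintenance per old server

old_maintenance = OLD_SERVERS * PER_SERVER_MAINTENANCE
new_maintenance = old_maintenance * 0.10  # "10 percent of what it was"

annual_savings = OLD_DR_CONTRACT + (old_maintenance - new_maintenance)
print(f"Annual savings: ${annual_savings:,.0f}")
```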

Gardner: I suppose this all opens up more capacity, so that you can add on more data and more employees. You can grow, but without necessarily running out of capacity. So that's another benefit.

Moudry: We could probably do that if we needed more employees, but we're working with automation. We're getting by with a smaller employee footprint. You just don't hire as many.

Gardner: As you pursue colos, then you’ve got somebody else. They can worry about the air-conditioning, protection, security, and so forth. So that’s a little less burden for you.

Moudry: That’s the whole idea, for sure.

Gardner: How about some other metrics of success? Has this given you some agility now? Maybe your business folks come down and say, "We'd like you to run a different application," or "We're looking to do something additional to what we have in the past." You can probably adapt to that pretty quickly.

Copying the template

Moudry: Making new servers is nothing. William has a template. He just copies it and renames it.

Chambers: The deployment of new ones is 20 minutes. Then, we’ve got our development people who come down and say, "I need a server just like the production server to do some testing on before we move that into production." That takes 10 minutes. All I have to do is clone that production server and set it up for them to use for development. It’s so fast and easy that they can get their work done much quicker.
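
The clone workflow Chambers describes can be sketched conceptually like this (a toy model of the idea only; in vSphere the real operation is a template or VM clone, done through the vSphere client or PowerCLI):

```python
import copy

# Toy model of template-based provisioning: a clone starts from the exact
# production configuration, which is why developers get a "like environment."

production = {
    "name": "prod-app-01",          # hypothetical server name
    "cpus": 4,
    "memory_gb": 16,
    "patches": ["KB001", "KB002", "KB003"],  # hypothetical patch list
}

def clone_vm(source, new_name):
    """Return a new VM definition identical to source except for its name."""
    vm = copy.deepcopy(source)
    vm["name"] = new_name
    return vm

dev = clone_vm(production, "dev-app-01")

# Everything except the name matches production, so bugs found in dev
# faithfully reflect the production environment.
print(dev["name"], dev["patches"] == production["patches"])
```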

Moudry: Rather than loading the Windows disk and having to load a server and get it all patched up.

Chambers: It gives you a like environment. In the past, when they tested on a test server you built, that wasn't exactly the same as the production server. They could have bugs that they didn't even know about yet. This cuts down on the development time a lot.

Gardner: And so you're able to say yes, instead of, "Get in line behind everybody else." That’s a nice thing to do.

Chambers: Yes.

Gardner: Any advice for folks who are looking in the same direction: higher virtualization, gaining the benefits of DR, and then perhaps having more of that agility and flexibility? What might you have learned in hindsight that you could share with some other folks?

Chambers: We've attended several conferences and forums. I think people are approaching it with more caution. They want to get into virtualization, but they're just not sure how it runs.

If you're going to use it, get in and start using it on a small basis. Do a proof of concept, check performance, do all the due diligence you need, and get into it. It will really pay off in the end.

Moudry: Have a change control system that monitors what you change. When we first went over there, William was testing out the VMs, and I couldn't believe, as I was saying earlier, how fast it is. We have people who are on the phones quoting insurance. They have to have the speed. If it hesitates, that customer on the phone takes longer to give our people the information, our people have a hard time quoting it, and we're going to lose the business.

When William moved some of these packages over to the VM software, it was not only running as fast, it was running faster on the VM than it was on a physical box. I couldn't believe how fast it was.

Chambers: And there was another thing that we saw. We've got a lot of people working at home now, just because of the View environment and things like that. I think we've kind of neglected our inside people, who would rather work in a View environment because it's so much faster than sitting on a local desktop.

Backbone speed

Moudry: Well, with View, everything being on the chassis itself runs at backbone speed. When a person is working in View, he's working right next to the servers, rather than going through Cat 5 cable and switches. He's on the backbone.

When somebody works at home, they're at lightning speed. Upstairs is a ghost town now, because everybody wants to work from home. That's part of our DR also. The model is, "We have a disaster here. You go work from home." That means we don't have to put people into offices anywhere, and with Voice over IP, it's like their own call center. They just call from home.

Gardner: I hope it never comes to this, but if there is a natural disaster type of issue, they could just pick up and drive 100 miles to where it's good. They’re up and running and they’ve got a mobile office.

Moudry: The way we did it, if they want to go 100 miles and check into a hotel, they can work from the hotel. That's no problem.

Gardner: Let's look to the future and the unintended consequences that sometimes kick in. I've heard from other folks, and it sounds like with these View desktops you're going to have some compliance and security benefits, and better control over data. Any metrics or payback along those lines?

Moudry: We were just going over some insurance policies and things like that for digital data protection. One of the biggest problems they mentioned is employees putting data on laptops, and then the laptop goes away, gets stolen or whatever. There's no need for anybody to take our data out of this data center, because they can work from View anywhere they want to. Anywhere in the world, they can work from View. There's no reason to take the data anywhere. So that's a security benefit.

Chambers: They can work from different devices now, too. We've got laptops out there, iPads, different types of mobile devices, and it's all secure.

Gardner: Any other future directions that you could share with us? You've told us quite a bit about what your plans are: colos and further data center locations, perhaps moving more toward mobile device support. Did we miss anything? What's the next step?

vMotion between sites

Moudry: As we said, with our colos we're not able to vMotion between sites yet, but we're kind of waiting for VMware to improve that a little bit. That will probably come down the road. vMotion between sites would probably be the next thing that I'd want.

Gardner: And why is that important to you?

Moudry: Well, because it's true high availability. You just vMotion all your stuff to the other side, and nobody even knows.

We’ve vMotioned servers between the hosts, and nobody even knows they moved. It's up all the time. Nobody even knows that we changed hardware on them. So that’s a great thing.

Gardner: It's just coming out of the cloud.

Moudry: Yeah.

Chambers: Sometimes, there may be a need to shut down an entire rack of equipment in one of our colos. Then we’d have to migrate everything.

Gardner: So an insurance policy for an insurance provider?

Chambers: Yes.

Moudry: Yeah.

Gardner: I'm afraid we'll have to leave it there, gentlemen. We've been talking about how insurance wholesaler Myron Steves & Co. has developed and implemented an impressive IT DR strategy. We've seen how even a small-to-medium-sized business can create business continuity for its operations and make IT more efficient and agile for its business users. I'd like to thank our guests, Tim Moudry, Associate Director of IT at Myron Steves & Co. Thanks so much, Tim.

Moudry: Thank you.

Gardner: And also, William Chambers, IT Operations Manager there at Myron Steves. Thank you, William.

Chambers: You're very welcome, thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to our audience for listening, and come back next time.

Transcript of a sponsored BriefingsDirect podcast on how small-and-medium businesses can improve disaster recovery through virtualization, while reaping additional benefits. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

Today, we present a sponsored podcast discussion on how high-performance motorcycle designer and manufacturer Ducati Motor Holding has greatly expanded its use of virtualization and is speeding toward increased private cloud architectures.

With a server virtualization rate approaching 100 percent, Ducati has embraced virtualization rapidly in just the past few years, with resulting benefits of application flexibility and reduced capital costs. Ducati has embraced private cloud models now across both its racing and street bike businesses. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here to tell us about the technical and productivity benefits of virtualization and private clouds is Daniel Bellini, the CIO at Ducati Motor Holding in Bologna, Italy. Welcome to the show, Daniel.

Daniel Bellini: Good morning. Thank you.

Gardner: Tell me why virtualization has made sense for Ducati specifically, and why now you're moving more toward a private cloud?

Bellini: Probably most people know about Ducati and the fact that Ducati is a global player in sports motorcycles. What some people may not know is that Ducati is not a very big company. It's a relatively small company, selling a little more than 40,000 units a year, with around 1,000 employees.

At the same time, we have all the complexities of a multinational manufacturing company in terms of product configuration, supply chain, and the structure of our distribution network. Virtualization makes it possible to match all these business requirements with the human and economic resources available.

Gardner: Tell me why you had to do this quickly. Some people like to gradually move into virtualization, but you've moved in very rapidly and are at almost 98 percent. Why so fast?

Bellini: Because of the company's structure. Ducati is a privately owned company. When I joined the company in 2007, we had a very aggressive strategic plan that covered business, process, and technology. Given the targets we had to meet in just three to four years, it was absolutely a necessity to move quickly into virtualization to enable all the other projects.

Gardner: Of course, you have many internal systems. You have design, development, manufacturing, and supply chain, as you mentioned. So, there's great complexity, if not very large scale. What sort of applications didn’t make sense for virtualization? Are there some things that you haven’t moved there, and do you plan to go to virtualization for them at some point?

Legacy applications

Bellini: The only applications that don't make sense for virtualization are legacy applications, applications that I'm going to retire. Looking at the application footprint, I don't think there is any application that is not going into virtualization.

Gardner: So eventually 100 percent?

Bellini: Yes.

Gardner: And now to this notion of public cloud versus private cloud. Are you doing both or one versus the other, and why the mix that you’ve chosen?

Bellini: Private cloud is already a reality in Ducati. Over our private cloud, we supply services to all our commercial subsidiaries. We supply services to our assembly plant in Thailand or to our racing team at racing venues. So private cloud is already a reality.

In terms of public cloud, honestly, I haven't seen any real benefit in the public cloud yet for Ducati. My expectation of the public cloud would be something that has virtually unlimited scalability, both upward and downward.

My idea is something that can provide virtually unlimited power when required and can go down to zero immediately, when not required. This is something that hasn't happened yet. At least it’s not something that I've received as a proposal from a partner yet.

Gardner: How about security? Are there benefits for the security and control of your intellectual property in the private cloud that are attractive for you?

Bellini: Security is something that is common to all applications. I wouldn't say that there's a specific link between the private cloud and security, but we always take charge of security as part of any design we bring to production, be it in the private cloud or just for internal use.

Gardner: And because Ducati is often on the cutting edge of design and technology when it comes to your high-performance motorcycles, specifically in the racing domain, you need to be innovative. So with new applications and new technologies, has virtualization in a private cloud allowed you to move more rapidly to be more agile as a business in the total sense?

Bellini: This was benefit number one. Flexibility and agility were benefit number one. What we've done in the past years is absolutely incredible compared to what the technology allowed before. We've been able to deploy applications, solutions, services, and new architectures in an incredibly short time. The only requirement was careful ordering and infrastructure planning up front, but having done that, all the rest has been incredibly quick compared to that previous period.

Gardner: It's also my understanding that you're producing more than 40,000 motorcycles per year, and that being efficient is important for you. For a small company, the need for precision in logistics and the supply chain is very high. How has virtualization helped you be conservative when it comes to managing costs?

Limited investment

Bellini: Virtualization has enabled us to support the business in very complex projects and rollouts, delivering solution infrastructures in a very short time with very limited initial investment, which is always something we have to consider when we do something new. In a company like Ducati, being efficient, and being very careful and sensitive about cash flow, is a very important priority.

The private cloud and virtualization especially has enabled us to support the business and to support the growth of the company.

Gardner: Let’s look a little bit to the future, Daniel. How about applying some of these same values and benefits to how you deliver applications to the client itself, perhaps desktop virtualization, perhaps mobile clients in place of PCs or full fat clients. Any thoughts about where the cloud enables you to be innovative in how you can produce better client environments for your users?

Bellini: Client desktop virtualization and the new mobile devices are a few things that are on our agenda. Actually, we have already been using desktop virtualization for a few years, but now we're looking into providing services to users who are away from the office and in high demand.

The second thing is mobile devices. We're seeing a lot of development and new ideas there. It's something that we're following carefully and closely, and something that I expect will turn into something real in Ducati, probably in the next 12-18 months.

Gardner: Any thoughts or words of wisdom for those who are undertaking virtualization now? If you could do this over again, is there anything that you might do differently and could share with others as they approach this?

Bellini: My suggestion would be just embrace it, test it, design it wisely, and believe in virtualization. Looking back, there is nothing that I would change with respect to what we've done in the last few years. My last advice would be to not be scared by the initial investment, which is something that is going to be repaid in an incredibly short time.

Gardner: One last issue: the management. Are you using vCloud Director or other tools to manage these environments? One of the things that happens when there's a lot of virtualization is that it can get complex when you're dealing with heterogeneity. Is there anything you've done on the management front that you would share with others?

Bellini: vCloud Director is probably one of the most exciting things I've seen in the last few years. I can't disclose what I'm planning to do with it, but it's something that is opening very interesting new scenarios for IT and for a multinational company like Ducati.

Gardner: Well, very good. We've been talking about how high-performance motorcycle designer and manufacturer Ducati Motor Holding has greatly expanded its use of virtualization and is speeding toward increased use of private cloud models.

I’d like to thank our guest. We've been here with Daniel Bellini, the CIO at Ducati. Thank you so much, Daniel.

Bellini: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening and come back next time.

Dana Gardner: Welcome to a special BriefingsDirect presentation, a sponsored podcast created from a recent HP expert chat discussion on best practices for implementing cloud computing models.

You know, the speed of business has never been faster, and it's getting even faster. We’re seeing whole companies and sectors threatened by going obsolete due to the fast pace of change and new kinds of competition.

Because of this accelerating speed in business, managing change has become a top priority for many corporations.

The modern data center, it turns out, has to serve many masters. It has to be flexible, it's really a primary tool for business, and it needs to be built to last and to serve over a long period of time with ongoing agility, dependability, and manageability.

Difficult Trick

As you begin to cloud-enable your data centers, you'll recognize that you need to pull off a very difficult trick, especially nowadays: increasing your organization's speed and agility while at the same time reducing cost. This is a very difficult combination, but it's absolutely essential.

The cloud-enabled data center, therefore, is going to require a long journey. In addition to forming these strategies, putting them in place, and executing on them, you need to demonstrate the economic advantages along the way. That's important to earn the trust and allegiance of the business and the end users who depend on these services. So proper planning and project management are essential, and managing the expectations of users and business leaders is critical.

Speed is a given these days in business, because companies are under so much pressure to adapt and to seize advantages in their respective verticals. In their regions, they're under a lot of competitive pressure. They're also under pressure to show better economic performance themselves.

Further driving this need for change and adaptability is a big push to mobile computing, and even the increased social interactions that we're seeing in the marketplace and that are having a profound effect, things like social networks and sharing and learning more about business through these interactions.

So the increased speed of business is building a sense of urgency and risk, and the risk is that you don't perform well in the market. But there's a secondary risk: if you move to cloud too quickly and don't do it properly, it could end up being a problem that erodes whatever benefits cloud computing brings.

We're seeing whole companies and sectors grow obsolete these days, if they don't keep up with the new trends. We've seen many companies rise and fall rapidly, and it's important to build on the speed, but not so fast that you break the mechanisms and have an insufficient platform support capability in the process.

What's prompting this is businesses looking for innovation, and sometimes, in many cases, going around IT, adopting cloud and software-as-a-service (SaaS) applications and services outside of IT's knowledge. They think the move to cloud gets them better results, but it can actually spawn complexity and sprawl, both unintended consequences.

This can create quite a mess. We've seen instances of this in the past when technologies are adopted without IT's consent, and it leaves CIOs with a lot to untangle. They need to assess and integrate how these services can be brought in, both to work with existing and legacy applications, data, and platforms, and to then enable hybrid services.

Low-risk fashion

So the onus is really on the IT people to enable cloud adoption, but to try to do it in a managed, low-risk fashion. This requires discipline, and it also requires flexibility and adaptation to the cultural norms of the day.

Cloud enablement needs to be built in, not just at the technological level, but in the ways that business and technology processes are developed. This means IT thinking anew about being a service enabler, a service broker, and a traffic cop, if you will, determining what can and can't be used; not just saying no, but learning to say yes while providing it in a safe fashion. This is what we now refer to as a hybrid services broker function.

It's by no means too late to master services and cloud management, and it's not too early to get really strategic about shaping how the organization reacts: thinking about data centers strategically, planning for a hybrid services-delivery capability, and recognizing that the way IT is funded is going to change.

People are going to pay as they use, and they're going to look for really good efficiency, automation, and management, a fit-for-purpose approach to IT. That means high efficiency and high productivity. It's what businesses and consumers are demanding, and it's what IT must deliver or run the risk of becoming obsolete itself.

Cloud, in effect, is forcing a focus on, and hastening, what has really been under way for some time: services orientation, which includes a focus on service-oriented architecture (SOA), business services management, and an increased emphasis on process efficiency. The clear goals are gaining agility and speed, lowering total cost, and rethinking IT as a services-delivery function.

We're going to see here today how gaining a detailed sense of where you are across your IT activities is crucial to being able to navigate a services-consumption model, which includes private cloud, hybrid cloud, and ultimately a mixture of public clouds. With existing data centers, IT organizations need to know exactly what they have, the assets they're going to need to support, and how those will interact, interoperate, and essentially integrate with outside services as well.

The key is to gain and build IT's trust, to keep costs coming down, and to show innovation and build success along the way, providing businesses with the agility and speed they're really looking for. It's a very difficult feat, but we're already seeing success in the field from early adopters. They're learning to support automation, elasticity, and that fit-for-purpose capability across more aspects of IT.

It makes services orientation a mantra that can pay off in terms of efficiency and management, and it also helps reduce risk by allowing IT to remain in control through risk and governance management.

We're now going to hear from an HP expert about meeting these challenges and obtaining the payoffs, while making sure that the transition to cloud and data center transformation is done in a safe and managed way. Now is the time to begin making preparations for such successful cloud enablement of your data center.

With that, I would like to now introduce our speaker, Glenn West, the Data Center and Converged Infrastructure Lead for Asia-Pacific and Japan in HP’s Technology Services Organization, based in Singapore. Welcome, Glenn.

Exciting environment

Glenn West: Hi. The Cloud is an incredibly exciting environment and it's changing things in quite incredible ways. We're going to be focused today on how cloud is enabling the data center.

In the data center today, there are quite a few challenges, both from the external world, as well as from internal changes. In the external space, there are regulatory risks, natural disasters, legal challenges, and obviously technologies are changing.

As Dana mentioned, whether the IT department chooses to change or not, businesses are changing anyway. This is putting pressure on the data center. They must adapt and transform. Internally, greater agility and consolidation are needed. Green initiatives to save money and cost are putting great pressure on change.

So all of these things are causing the data center to converge, and this convergence is pushing the cloud.

What is a data center? At HP, we have a very holistic approach. We'll start at the bottom, from the facility point of view -- the location, the building, the mechanical, and the electrical. Data center densities are growing quite rapidly, and electrical costs are rising incredibly fast. So the facility is very important in the operational cost of the data center.

Next, we go to the more traditional component, the actual infrastructure -- the server, the storage, the networking, both the physical and the virtual component of this. Then, on top of that, the part that drives the business, the applications and the information, and this is incredibly mixed. You have legacy applications, you have internally developed custom applications, as well as the more common ones.

On top of these, you have other facilities, such as critical systems, middleware, data warehouses and big data that are forcing changes. Data is growing very, very rapidly, and the ability to analyze this data is growing rapidly as well.

We next look at management and operations. As data centers change, management and efficient operations become even more important. Then, control, governance, and the organization play key parts. Without the right organizational structure, it's very difficult to manage your clouds.

Some people view cloud computing as a fantastic miracle, and some people view it as a fad. But actually, cloud momentum has been building quite rapidly, to the point that the whole population is using cloud on a routine basis. Most people are exposed to cloud as users via Amazon or Facebook. Obviously, there are different types of clouds, and the ones I just mentioned are public clouds.

The next type is the private cloud, within an organization that often also has traditional IT. So some people ask if cloud computing is the next dot-com. In reality, cloud computing is an irresistible force. It's moving forward, and things are changing.

Scalable and elastic

So what does cloud mean? Cloud means going to a more service-driven model. It's scalable and it's elastic. Think about the public cloud space. How do you handle it when something is very, very popular? One day, it may have a hundred users and the next day it becomes the next hot thing for that instant in time. Then, the demand goes away.

If we use a traditional model, we can’t afford to have the infrastructure, but this pay-per-use is the foundation of cloud. We start looking at a service concept delivered and consumed over the Internet as needed.

The key word that keeps coming up is service, service orientation, the elasticity and the pay-per-use. Clouds ideally are multi-tenant. That can be within a company or outside a company.
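
The economics behind the pay-per-use and elasticity point can be sketched with a toy calculation (all numbers here are assumed for illustration):

```python
# Toy comparison of provisioning for peak vs. paying per use for a spiky
# workload. All numbers are assumed for illustration.

hourly_demand = [100] * 20 + [50_000] * 4  # users/hour: quiet day, 4-hour spike

CAPACITY_UNIT = 1_000   # users served per provisioned capacity unit
FIXED_UNIT_COST = 2.0   # cost per unit-hour when you own peak capacity
USAGE_UNIT_COST = 3.0   # higher unit price, but paid only when used

# Traditional model: buy enough units to cover the peak, pay for them all day.
peak_units = max(hourly_demand) // CAPACITY_UNIT
fixed_cost = peak_units * FIXED_UNIT_COST * len(hourly_demand)

# Pay-per-use model: each hour, pay only for the units actually needed.
pay_per_use_cost = sum(
    -(-demand // CAPACITY_UNIT) * USAGE_UNIT_COST  # ceiling division into units
    for demand in hourly_demand
)

print(f"provision for peak: {fixed_cost}, pay per use: {pay_per_use_cost}")
```

Despite the higher unit price, the elastic model wins here because capacity scales back toward zero once the spike passes.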

Let's zoom to the next level and start with the private cloud. This serves an internal client base. Think of it, for example, as a large company that has a hundred business units. Each business unit is a consumer of services. It's value-based and customized, and this is different from a public cloud.

A public cloud has a huge client base. You're talking about tens of millions or hundreds of millions of potential subscribers in public clouds. It's very efficient, very data-driven, and based on large volumes.

Now the part in the middle, the hybrid, is a unique mix. Say I have a process that happens once in a blue moon, so I don't really want dedicated IT facilities for it. The hybrid is a mix of public and private clouds to get even greater elasticity.

In the private cloud, all of that is inside the company, inside the firewall. As a cloud provider to your internal business units, you start building infrastructure pools, and you start seeing standardization.

Cloud brings automation, orchestration, automated control, and a service catalog. All of a sudden, instead of calling somebody and saying, "I need this done," you have a portal. You say, "I want a SharePoint site," and boom. It's created.
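
The service-catalog portal described above can be sketched as a simple dispatcher (the item names and handlers are illustrative, not any specific product's API):

```python
# Toy self-service catalog: a user picks an item and an automated handler
# provisions it, instead of filing a request with IT. Names are illustrative.

provisioned = []

def create_sharepoint_site(requester):
    site = f"https://intranet.example.com/sites/{requester}"  # hypothetical URL
    provisioned.append(site)
    return site

def create_vm(requester):
    vm = f"vm-{requester}-01"  # hypothetical naming convention
    provisioned.append(vm)
    return vm

CATALOG = {
    "sharepoint_site": create_sharepoint_site,
    "virtual_machine": create_vm,
}

def request_service(item, requester):
    """Look up the catalog item and run its automated provisioning handler."""
    if item not in CATALOG:
        raise ValueError(f"{item!r} is not in the service catalog")
    return CATALOG[item](requester)

print(request_service("sharepoint_site", "alice"))
```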

It's radically different than traditional IT. You move away from managing servers, and you manage services. In a data center, over the next couple of years, focus is going to be on private clouds. There will be public cloud providers for certain things, but the focus is going to be on the private side.

Private cloud will slowly push into hybrid, and then slowly add additional public cloud services. Initially, the majority of private cloud will be infrastructure as a service.

The key drivers of this are agility and speed. When a business unit says they need it tomorrow, they're not joking. The agility that a private cloud provides opens up a lot of opportunity in the business. It also relieves the pressure of business units going to a public cloud supplier and getting it outside the IT framework.

Management and processing

The challenges over the next few years are management and the processes: how do we fund and charge back the whole business-model concept? Then comes building the cloud service interface and the service descriptions. All of this comes before the technology. Cloud is more than just the technology. It's also about people and process.

Only a small portion will fit in the cloud today, but things are moving rapidly. We were talking about the future. Look at the current sprawl that's occurring. If IT doesn’t get in front of this, it probably will get worse. But if the cloud is managed properly, then IT sprawl can be reduced, controlled, and slowly moved into a more standardized structure.

This is a journey. It won't happen overnight. With IT sprawl, 70 percent of spending goes to operations and maintenance versus only 30 percent to innovation. Something is wrong. This should be the other way around, and cloud provides a solution to start reversing it. It's best when you have 70 percent in innovation and 30 percent in operations.

As we move into the cloud and talk about private cloud, the service function of IT starts becoming a reality, and this is referred to as hybrid delivery. Hybrid delivery is when you start looking at the different ways of providing services, whether they are outsourced, private-cloud based, or public-cloud based.

You start looking at becoming a service broker, which is the point at which you say that for this particular service, it makes the best sense to place it here. Then you start managing it and are able to fully optimize your services.

Going further out into 2015, 18 percent of all IT delivery will be public cloud, 28 percent will remain as private cloud, and the rest will be in-house or outsourced. You can see the rapid change going forward.

Gardner: What kind of applications do you think we are going to see? When you mention the service enablement, these different cloud models, I think people want to know what sorts of applications will be coming first in terms of applicability to these models?

West: If you're referring to public cloud, the first ones a lot of times are collaboration applications. Those were the first ones that moved into the public cloud space. Things like SharePoint, email, calendaring applications were the early adopter models.

Later we have seen CRM applications move. Slowly but surely, you're seeing more and more application types, especially when you start looking at infrastructure as a service (IaaS). It’s not so much the type of application, but the type of application load.

As you see, the traditional model is all about selling products: fixed costs, fixed assets. Everything is fixed. But when you start looking at a service model, it’s more pay-per-use. It’s flexibility, it’s choice, but also a bit of uncertainty. In the traditional model you have controls, but when you start looking at the service model, it’s all about adaptability and change.

Big gap

So there's a big gap here. On one side, we're all about things being fixed, and on the other side, we're moving to being cloud-ready, to hybrid services and hybrid service delivery. So how do we get across this great divide? We really need a bridge, a way to move across this great divide and this big change.

The way we change this is through transformation. It's a journey. Cloud is not something that you can wake up one day and say, "We're going to have it executed instantly." You have to look at it as going through levels of maturity.

This maturity model starts at the bottom. Some organizations are already at the beginning of this journey. They've already started standardizing, or they may have started virtualizing, but it’s a process. You have to get to the point where you're looking at moving up. It’s not just about technology.

Obviously, you have to get to the point where you're consuming cloud services. If you look at the movement to cloud, you can look at it as pulling organizations into it. This is driven by the rapid mass adoption of cloud. There’s a great push from the business side. Businesses are hearing from their customers about cloud and cloud-based applications. So there is a pull there.

Also, from the data center itself, there is a push. The IT sprawl and the difficulty of management are pushing towards cloud quite rapidly.

The question is, where are we now? Right now, a lot of companies are in this environment where they have started virtualizing. They've moved up a bit and they've started doing some optimization. So they're right at the edge of this.

But to move forward you need to look at changing more than just the technology. You also need to look at the people and the process in order to bring organizational maturity to the point where it’s starting to be service-enabled. Then you're starting to leverage the agility of cloud.

If you are just simply virtualized, then guess what, you're not going to see the benefit that cloud offers. You need to increase in all of these areas.

Gardner: As we look at the continuum, how do organizations continue to cut costs while they're going about this transformation? As I pointed out, that's an essential ingredient to keeping the allegiance, trust, and support of IT going.

West: This journey is quite interesting. To a large degree, the cost optimization is built in. When you start the journey with the standardization process, you start reducing cost there. As you virtualize, you get another level of cost reduction. At each step, as you move to a shared-service model and a service orientation, you start connecting things to the business. You start aligning IT costs with the business.

Further optimized

Moving up to the point of elasticity, things are further optimized. This whole process is about optimization, and when you start talking about optimization, you're talking about driving down costs.

Between the beginning of this journey and the end, we're reducing cost as we go. Each stage is another level of cost reduction.

We mentioned that the cloud isn't just about technology. Obviously, technology is part of it, but it's also about automation and self-service portals. The cloud is about speed. Imagine the old traditional process: you say, "Let me work out the capital equipment required. Let me get that approved. Let me write the PO."

To get a server under the traditional system, I've seen organizations that take nine months. That's not agility. Agility is getting it in 90 seconds. You log into the portal and say, "I need a SharePoint Server," you're done.

As part of the process, you also have to get into standardization. You have to get into service lifecycle. A cloud that never throws anything away is not an optimized cloud. Having a complete service lifecycle, from beginning to end, is important.

In IT, a cloud without a chargeback model will be a cloud that is over-utilized and running out of control.

Usage and chargeback are key elements as well. Anything that's free always has a long queue. Having a way of allocating and charging back to the consuming parties, be it an internal or an outside customer, is very important.
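The chargeback idea comes down to a small metering calculation: measured usage times unit rates, rolled up per consuming business unit. The metrics and rates below are invented purely for illustration:

```python
# Hypothetical chargeback sketch: metered usage multiplied by unit rates,
# rolled up per consuming business unit. All rates are made up.

RATES = {"vcpu_hours": 0.05, "gb_storage_month": 0.10, "gb_egress": 0.02}

def chargeback(usage_by_unit: dict) -> dict:
    """Compute each business unit's monthly charge from metered usage."""
    return {
        unit: round(sum(RATES[metric] * qty for metric, qty in usage.items()), 2)
        for unit, usage in usage_by_unit.items()
    }

usage = {
    "finance":   {"vcpu_hours": 2000, "gb_storage_month": 500},
    "marketing": {"vcpu_hours": 300,  "gb_egress": 1000},
}
print(chargeback(usage))  # finance: 2000*0.05 + 500*0.10 = 150.0
```

Even a showback report built this way (charges reported but not billed) is usually enough to curb the over-consumption West describes.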

Elements often forgotten in cloud are people and having a service orientation. If you look at a traditional IT organization, you have a storage manager and a network manager. If you look at cloud, you have service managers. The whole structure changes. It doesn't necessarily reduce or increase roles, but the roles are different. It's about relationship management, capacity management, and vendor management. These are different terms from traditional IT.

If you look at moving to private cloud, what are the big changes versus the lower levels of maturity? Obviously, getting into resource management, standardizing process, getting some automation done, aligning with the business, service catalog, self-service, and chargeback. These are the foundations of moving from level 2, where you have done some virtualization, into the beginning of implementing a private cloud.

So what can we do in private cloud? Obviously, test and development is the perfect first move into private cloud. New services? Cloud is here. If you're implementing something new, it should be cloud-focused.

When you start looking at large batch-processing needs, these are things that come and go. If I need some processing power now and I don't need it tomorrow, that plays to the key strengths of cloud.

Opportunities for cloud

High-performance computing, web services, database services, collaboration: high-volume, frequently requested, standardized, and repeatable. That pretty well identifies the great opportunities for private cloud.

Now that we've talked about private cloud, how do we slowly move to more of a hybrid model? For the hybrid model, right off the bat, we need to start looking at adding public cloud services.

Once you start moving into public cloud, you need to understand that things will scale with the business, meaning that you need to look at the variability of costs. They need to be tied to the level of business.

Things like backup ability, interoperability and standards, and security are additional things that we need to look at as we move into public cloud services and the hybrid model.

Let's talk about the types of workloads. We need cloud for things that are dynamic, that go on and off at times: every Monday I need to run this application; it's going to consume significant resources once a month or once a quarter; or this project is going to run for a moderate amount of time, with demand coming and going.

The next area that works really well is anything that is growing very, very rapidly. Because of the elasticity of cloud, rapid growth is a fundamental ability of cloud. Application workloads that need to be able to grow very rapidly are ideal.

Unpredictability is another thing. If you have applications with an unpredictable load, that works really well. Then there are things that are periodic; your fixed cost stays low.

Imagine you have a workload that is running 99 percent of the time. There are very few things like that in most organizations, but there are applications like this, and they're not fantastic for cloud.
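The steady-versus-bursty reasoning is really a breakeven calculation between a flat fixed cost and pay-per-use pricing. A minimal sketch, with purely hypothetical prices:

```python
# Illustrative cost comparison (all prices hypothetical): a fixed server
# with a flat monthly cost versus pay-per-use cloud capacity. Steady
# workloads favor the fixed cost; bursty ones favor pay-per-use.

FIXED_MONTHLY = 300.0      # owned/leased server: costs the same regardless of use
CLOUD_PER_HOUR = 0.60      # equivalent on-demand capacity
HOURS_PER_MONTH = 730

def monthly_cloud_cost(duty_cycle: float) -> float:
    """Cost of renting capacity only for the fraction of time the workload runs."""
    return CLOUD_PER_HOUR * HOURS_PER_MONTH * duty_cycle

for duty in (0.02, 0.25, 0.99):   # once in a while, quarter-time, near-constant
    cloud = monthly_cloud_cost(duty)
    cheaper = "cloud" if cloud < FIXED_MONTHLY else "fixed"
    print(f"duty {duty:4.0%}: cloud ${cloud:7.2f} vs fixed ${FIXED_MONTHLY:.2f} -> {cheaper}")
```

With these sample numbers, the near-constant workload is the only one where the fixed server wins, which is the 99-percent case above.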

Let's talk about the things that are pushable to cloud. First, core activities that are essential to the business are not suitable for public cloud. Those are best in a private cloud. But if you start looking at things that are not unique, not a differentiator, or are cost-driven, then those are ideal for public cloud.

Basically, core activities are very, very good for private cloud, and less-core or cost-driven activities are more ideal for a public cloud offering.

Lock-in and neutrality?

Gardner: Glenn, looking at this notion of moving things around in and out of private and public clouds, perhaps moving from a core-and-context decision process into actual implementation, what about standards, lock-in, and neutrality?

Where are we now in thinking about being able to move applications and services among and between clouds? What prevents us from getting locked into one cloud and not being able to move out?

West: Gartner actually did a study and found that HP is one of the most open players in the industry when it comes to cloud. A significant number of the public cloud suppliers actually use our equipment. We make a point of being totally open.

There are a significant number of cloud standards at every level, and HP does everything it can to remain part of those standards and to support them. The cloud industry is moving fast, and cloud is about openness. If you have a private cloud that can't burst to a public cloud, guess what, that’s not a viable offering. The cloud industry as a whole, because of the interoperability requirements, has to be inherently open.

Gardner: So it's not only important to pick the technologies, but it's very important to pick the partners when you start to get into these strategies?

West: That’s absolutely right. If the viewpoint of the company that you're getting your cloud from is to lock you in, then you're going to get locked in. But if the company is pushing hard to stay open, then you can see it, and there are plenty of materials available to show who is trying to do lock-in and who is trying to do open standards.

What do we need to think about here? Flexibility is obviously important. Interoperability, and I think Dana nailed that one on the head: being able to work across multiple standards is important. The cloud is about agility. Having resource pools and workloads that can move around those pools on demand means that you have to have great interoperability.

Data privacy and compliance issues come into play, especially if we move from a private cloud into public cloud or hybrid offerings. Those things are important, especially on the compliance side, where the cloud supports data being anywhere.

Some requirements, depending on the industry, actually restrict the data movement. Skill-sets are important. Recovery and performance management, all of these things can be managed with the right automation and the right tools in cloud as well as the right people.

Greatest flexibility

We've talked about moving forward, and now we're getting into the full IT service broker concept. This is where we have the greatest flexibility. One of the things you said very well was about dynamic sourcing. We can look at the workloads and push and share them internally and externally across multiple cloud providers, acting as a service broker and optimizing as we go.

You should have this even from a corporate point of view. You could be a service provider who takes those services and brokers and manages them across multiple delivery methodologies.

At this point, the organization has to get very good at service-level agreement (SLA) management. SLAs are very important when you're managing costs and workloads across providers. When we start talking about going across multiple clouds, advanced automation becomes very important as well.

As we start looking at the future data center, it is very business-driven. You have multiple ways of sourcing your IT services. So you have both physical and virtual services, in the appropriate mix, changing practically on a daily basis as business needs demand.

Let's talk about the physical side and the changes in the data center. One of the things that looks quite interesting, if we look at resiliency, is that a lot of data centers are looking at moving further up the resiliency levels, and each level brings a significantly increased cost, a practically exponential cost increase.

Once you implement cloud within your data center, you suddenly get a lot more flexibility, because instead of building a single Tier 4 data center, you could use the efficiency of cloud to build Tier 2 data centers and get greater resiliency and greater agility.
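The Tier 2 versus Tier 4 argument can be made concrete with a quick availability calculation, using commonly cited tier availability figures and the idealized assumptions of independent site failures and instant workload failover:

```python
# Sketch of the resiliency argument: two cheaper Tier 2 sites with
# cloud-style failover can exceed a single Tier 4 site's availability.
# Uses commonly cited tier availability figures and assumes independent
# failures plus instant failover -- idealized assumptions.

TIER_2 = 0.99741   # commonly cited Tier II availability
TIER_4 = 0.99995   # commonly cited Tier IV availability

def combined_availability(site: float, n: int) -> float:
    """Probability that at least one of n independent sites is up."""
    return 1 - (1 - site) ** n

two_tier2 = combined_availability(TIER_2, 2)
print(f"two Tier 2 sites: {two_tier2:.6f}  vs one Tier 4: {TIER_4:.6f}")
# -> the two Tier 2 sites come out ahead under these idealized assumptions
```

Real failover is never instant or perfectly independent, but the arithmetic shows why cloud mobility changes the facility-tier decision.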

The big change is in the way the data center's physical infrastructure is done, and the thing that's changing most rapidly is density. In a traditional data center, infrastructure is of reasonably low to moderate density.

In a cloud-enabled data center, high density is the norm. Greater efficiency in power, space, and cooling is typical of cloud-enabled data centers. It becomes a true IT resource pool where anything can run anywhere, and that is quite different.

The density change is radical; the power per rack and the cooling all change. And even in traditional data centers, things such as structured cabling and power have to have flexibility, the ability to change.

Orchestration also becomes important. In a cloud-enabled data center, everything needs to scale. All the cost factors should scale with the amount of business.

Standardization and efficiency

The standardization level changes as well. Standardizing configurations allows rapid redeployment of equipment. Finally, there's efficiency: dynamic power and cooling that follow the workloads.

These are pretty radical changes from traditional data centers. Data centers are evolving. If you look at traditional data centers, they were quite monolithic -- one large floor, one large building, that’s pretty well it.

They slowly moved up to multi-tiered data centers, followed by flexible data centers that share resources, where everything can change.

In most organizations, when you start looking at the different areas, categories, and types of culture, the technology is there. If you looked at a company today, it would have different levels of maturity in different areas. This maturity modeling is a scorecard, a grade card, that lets you understand where you are compared to the industry. The thing is, in this example, different areas have different levels of maturity.

The problem for cloud is that we need something a little different. We need an even playing field across all of the areas, so that the organizational maturity, the culture, the staff, and the best practices are all at an even level of maturity for cloud to work.

If you bought the best technology, but you didn’t upgrade your governance or the culture, and you didn’t implement the best practices, it won’t work. The best infrastructure without proper service portfolio management, for example, just isn’t going to work. For cloud to work properly, you must actually look at increasing maturity across all areas of your data center, both the people and the process.
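The even-playing-field point can be expressed as a tiny scoring rule: overall readiness is gated by the weakest dimension, not the average. The dimension names and scores below are hypothetical (1 = ad hoc, 5 = optimized):

```python
# Sketch of the "even playing field" idea: cloud readiness is capped by
# the least mature dimension, not the average. Dimensions and scores
# are hypothetical (1 = ad hoc ... 5 = optimized).

def cloud_readiness(scores: dict) -> int:
    """Overall maturity is gated by the least mature dimension."""
    return min(scores.values())

scores = {
    "technology":      4,   # best-in-class infrastructure...
    "governance":      2,   # ...but governance lags
    "process":         3,
    "people_culture":  2,
}
print("overall readiness level:", cloud_readiness(scores))  # gated at 2
```

A min() rather than an average captures West's argument: level 4 technology with level 2 governance still behaves like a level 2 organization.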

Some of the criteria for cloud include technology, consolidation, virtualization, management, governance, the people, process and services, and the service level. Managing the service level can often reduce your cost quite significantly in cloud.

On the process side, that means adopting ITIL and looking at process automation and process management. The organizational structure and roles are quite different in cloud.

Think services. Understand what you have. Decide on what your core and your context are. What is the foundation of your business, and what could you start considering moving into the public cloud?

Get your business units and your businesses on your side. Standardize, look at the automation of processes, and explore infrastructure convergence. Then look at introducing your portal and making sure you have chargeback. Start with non-critical or green-field areas; green-field areas are your new activities. Then slowly move into a hybrid approach.

Optimize further

Evolve, optimize, benchmark, cycle through, and optimize further. HP has been doing this for a while. We did a very large transformation ourselves, and out of that journey we've created a huge amount of intellectual property. We have a Transformation Experience Workshop that helps organizations understand what changes are needed. We can get people talking and get them moving, creating a vision together.

We have data-center services for optimization, the physical change of data centers. And then we have comprehensive data-center transformation services and a road map. So get some action going. Let's start doing the transformation.

A great way to do this is a one-day Cloud Transformation Experience Workshop. It's done in panels with key decision makers, and it allows you to start building a foundation for how to go through this transformation journey.

Gardner: Okay. Great. Well, we'll have to leave it there. I really want to thank our audience for joining us. I hope you found it as valuable as I did.

I also thank our guest, Glenn West, the Data Center and Converged Infrastructure Lead for Asia-Pacific and Japan in HP’s Technology Services Organization.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a special BriefingsDirect presentation, a sponsored podcast created from a recent HP expert chat discussion on best practices for cloud computing adoption and use.