Folks, here it is - the "all-in" step-by-step guide for setting up Site Recovery Manager with nothing more than two ESX servers (which you can even do with home-brew servers, as noted here) and the Celerra VSA.

If this is your first read, you can either just click on the document below or start the 101, 201, 301, 401 journey here.

This guide isn't the first one of these out there. I think my respected friend Adam Carter at Lefthand did it first (Adam, hope you're enjoying HP!). The NetApp one is pretty good too. What I like about ours is the extreme completeness. It's also nice that the Celerra VSA has no timeouts, no limits (except that you're not allowed to use it in production).

A quick shout out... This was absolutely yeoman's work by one of the newest members of the Global EMC VMware Specialist squad - Bernie Baker, with strong assists from one of the other newbies - Stephen Spellicy. Both of these guys started relatively recently, and this work is (IMHO) FANTASTIC - it highlights that this stuff can be EASY (and that they have what it takes to have an early impact). Bernie - you've set the bar high for the team and yourself - GREAT WORK!

Until the next major revision of the Celerra VSA (which will be linked to major feature releases), expect to see only one more "301"-level Celerra VSA post (downloading and using EMC Replication Manager for VMware for point-in-time, VM-consistent instant VM backup/restore) - and we're warming up a series on Avamar Virtual Edition in the batter's box....

December 08, 2008

I don't think company culture comes from the top down - or from the bottom up. I think it comes from how people act and what they try to do every day - at every level. I think people overestimate the ability of the people at the top to cause large-scale change (though it's certainly proportionately higher, and a great litmus test of leadership skill), and I think people underestimate the ability of individuals at all other levels to have an impact.

What's the connection to the post title? EMC ControlCenter is Storage Resource Management software - and also integrates with the VC APIs and ESX CIM APIs for an end to end view. There's a demo here.

So - we have a customer who isn't happy with the state of Storage Resource Management software in general, and ControlCenter quite specifically here. Ouch. But, the customer is always right (though StorageBod - I have heard that customers are much happier with 6.x than 5.x - but hold us to that).

What I really like is this response from the ControlCenter team - to me transparency is good, and facing the challenge is good. Specifically - I think it's EXACTLY an example of the opportunity to have an impact. Storage Resource Management is a challenge for our largest customers - and here you have someone taking the critique, and offering to take MORE - to take that customer input, and make the product the best it can be.

I run a ControlCenter VM, and Dave, I'll take you up on your challenge. Get that community open in January. BTW - our EMC/VMware Open Community Wiki should be up early in the year also....

November 27, 2008

Glad to see that people are having success downloading and using the Celerra VM on VMware Workstation based on this original post. I wanted to provide a quick "HOWTO" to help customers replicate between two VSAs, for many reasons - one of the most fun being VMware Site Recovery Manager.

Celerra Replicator (I hope the Celerra Product team doesn't read this) is a RIDICULOUSLY inexpensive (particularly when compared with the competition) but very sophisticated remote replication capability. It can support cool stuff like:

1:N and N:1 (fan-out/fan-in) replication relationships (as with any technology, there are operating envelopes and parameters)

It can support cascading topologies (i.e. site 1 replicates to site 2 at a given frequency, and then site 2 replicates the site 1 data on to site 3 at a different frequency)

It has pretty sophisticated QoS mechanisms (i.e. you can set up different bandwidth use for different parts of the day)

You can replicate all sorts of configs - CIFS/NFS/iSCSI - and it fully supports thin provisioning at the source or the target

And of course - most interestingly of all - it's fully integrated with VMware Site Recovery Manager (so you can use this to build your own SRM playground).
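To make the QoS idea concrete, here's a toy Python sketch of a time-of-day bandwidth schedule. This is just the concept, NOT Celerra Replicator's actual interface or syntax - the hours and caps below are completely made up:

```python
# Toy illustration of time-of-day replication QoS: pick a bandwidth cap
# based on the hour, so replication backs off during business hours.
# (Illustrative only - not the Celerra's real configuration mechanism.)

BANDWIDTH_SCHEDULE = [
    # (start_hour, end_hour, cap_in_kbps)
    (0, 6, 100_000),   # overnight: use most of the pipe
    (6, 18, 10_000),   # business hours: throttle hard
    (18, 24, 50_000),  # evening: something in between
]

def bandwidth_cap(hour: int) -> int:
    """Return the replication bandwidth cap (kb/s) for a given hour (0-23)."""
    for start, end, cap in BANDWIDTH_SCHEDULE:
        if start <= hour < end:
            return cap
    raise ValueError(f"hour out of range: {hour}")

print(bandwidth_cap(2), bandwidth_cap(12), bandwidth_cap(20))
```

The point is simply that the replication engine consults a schedule like this, so a WAN link shared with production traffic doesn't get saturated at 10am.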

Of course - SRM doesn't support NFS yet, so I focused this HOWTO on iSCSI - but it's very easy to see how you would do CIFS/NFS (just hit the other radio button in step 3).

Before you do this - make sure you have a working VSA - just follow the Celerra VSA HOWTO 101 here...

I've updated the OVF - tightened it up a bit. You can get the new one here

Ok - have you got it? Have you followed the 101? Want to add more storage - just follow the Celerra VSA HOWTO 201 here.

Ok... then read on...

Quickly then here - in this HOWTO:

Configuring NTP

Correcting the Replication Database (after a clone or deploy from OVF, the Celerra VM's serial number changes) - BIG thanks to Himabindu Tummala and Santosh PasulaReddy in Celerra Engineering for helping me figure this step out

Configuring an iSCSI LUN that will be used as a replication target

Configuring Celerra Replicator

One of our elite-delta squad VMware Specialists is putting the finishing touches on an "all in" doc and pair of Celerra VSAs to make setting up SRM with two Celerra VSAs a breeze, but you have every ingredient you need now with these HOWTOs (and SRM itself is a walk in the park).

One quick note - remember that if you are looking for support - post on http://forums.emc.com, in the Celerra, Celerra Simulator forum.

Next up to the plate is the HOWTO on Avamar Virtual Edition, and I'm also going to do a HOWTO post on how to use the super-cool Replication Manager with the VSA. Replication Manager - aka "RM" - can actually be used with all of EMC's primary storage platforms, for all sorts of replication/test and dev use cases - for Exchange, SQL Server, Oracle, and also VMware VMs and VMFS datastores - with NFS datastores being supported shortly.

This version is much faster - at least 2-3 times faster than the old one. In terms of speed, I think it would be hard to get faster than this - in our "playing around", it's at least 10x faster than the upcoming View Composer approach.

So - does that mean we think the array is the place to do this rather than the ESX server (aka VMware View Composer) long term or in all cases? NO.

There are so many things re: image preparation, handling and management that can be done better at the VMware layer than the storage array layer - and those are just as important as speed. All the approaches have similar cost savings.

We're embracing the upcoming stuff (stay tuned - another blog post coming on this in a couple weeks). Over time, we're going to work to merge them as much as possible (i.e. use the VMware View tools to manage, but where speed of mass distribution is the key, you can choose to do it at the array layer rather than the VMware layer - while it's still managed at the VMware layer).

November 11, 2008

Glad to see that people are having success downloading and using the Celerra VM on VMware Workstation based on this original post. I wanted to provide a quick "HOWTO" to help customers add more physical storage, so they have more to play with!

Before you do this - make sure you have a working VSA - just follow the Celerra VSA HOWTO 101 here...

I've updated the OVF - tightened it up a bit. You can get the new one here

Ok - have you got it? Have you followed the 101? Ok... then read on...

Quickly then here - in this HOWTO:

Adding physical storage to the VM

Configuring the additional storage to the Celerra VSA (and how to remove storage)

Next up to plate are the 301 (setting up Celerra Replicator and SRM), and the Avamar Virtual Edition.

November 09, 2008

Ok, here's the final wrap-up (a month late :-) - I FINALLY got the video of our keynote - it's below in its entirety. For the ultimate "being there" experience (HA!), watch the video with the slides at the same time (the camera work was weird - no slides behind us, they were off to the sides). But, kinda neat regardless. I liked the "how many folks here..." bits at the beginning, and you can see some of the physical demonstrations we did with the CX4 and the iomega StorCenter ix2 (our $329 NFS server/iSCSI target).

UPDATE (Feb 2, 2009) - The SS4200E is a fine choice, and so is the ix2. If you can wait a little bit... I would hold on. iSCSI work is in the lifeline codebase, but we're still pushing to make it available on all the platforms - which have varying CPUs, and varying iSCSI performance. For now, consider the shipping SS4200E and ix2 as NAS-only until I give a firm update. There are other reasons also to wait for about 1 month....

I also cleaned up the PDF (the export of the straight PPT was a bit mangled due to PowerPoint builds, and the one on the VMworld site is pretty weak quality) - consider this the one to use if you want to look at it. It's useful to look at this as you watch the video (see note above).

November 05, 2008

Folks - if you don't listen to these, I would highly recommend it.... They cover broad topics, and tend not to be the usual marketeering schlock, but rather a frank dialog between technical folks who work with VMware every day.

What we're working on next (i.e. taking SRM to the next level) around Disaster Recovery for VMware

Storage Best Practices for VMware - what we're seeing work well.

And most importantly... anything you want to ask me!!!

A couple "By the way" comments:

A really good post on single initiator zoning on Yellow-Bricks... This has been in all the EMC VMware Academy training sessions we've been doing for more than a year. While we run our "best of the best" through VCP, everyone who is a pre-sales technical resource needs to do the VTSP and these "Academies" which compress the "Install and Configure" course into 2 days (with EMC-specific content on storage, backup and DR you don't get in the VCP training). Glad to see VMware saying the same thing. Soft vs hard zoning is subjective (i.e. up to your preferences, and debatable), but single initiator zoning is not. Do yourself a favor and don't create a single zone for all your ESX servers and VCB proxies.
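To make "single initiator zoning" concrete, here's a small Python sketch that builds one zone per initiator. The zone names and WWPNs are invented for illustration (your switch vendor's tools do the real work) - the point is simply that each zone contains exactly one host HBA plus the array ports, never every ESX server and VCB proxy lumped together:

```python
# Single initiator zoning, as data: one zone per initiator (host HBA),
# each containing that initiator plus the target (array) ports.
# WWPNs and zone names below are made up for illustration.

def single_initiator_zones(initiators, targets):
    """Return {zone_name: [wwpns]} with exactly one initiator per zone."""
    zones = {}
    for i, init_wwpn in enumerate(initiators, start=1):
        zones[f"z_esx{i:02d}"] = [init_wwpn] + list(targets)
    return zones

hosts = ["10:00:00:00:c9:aa:00:01", "10:00:00:00:c9:aa:00:02"]  # ESX HBAs
array = ["50:06:01:60:41:e0:00:11", "50:06:01:68:41:e0:00:11"]  # array ports

for name, members in single_initiator_zones(hosts, array).items():
    print(name, members)
```

With this layout, a misbehaving HBA (or a fabric event like an RSCN storm) only disturbs its own zone, not every initiator on the fabric.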

I've also been up to my eyeballs over the last few weeks (last week alone I worked with customers who in aggregate have more than 20K VMs deployed - wow), but have some free time today, so am hoping to push out a couple of blog updates (including some long awaited Celerra Sim 201 stuff) I've been working on for a while.

Customers running VMware on NFS (on NetApp or Celerra) - make sure you read Scott's post on NFS.lock.disable (hint: leave it at the default of 0) here. Note that this isn't a NetApp technical issue per se - except in the sense that they published their best practices with this set incorrectly in the past. The same thing would occur on any NFS server (including the Celerra). Moral of the story to me?

Locking isn't inherently a bad thing (it's often incorrectly bandied about as a core "VMFS issue" when people are FUD'ing VMFS scaling). Locking (at the SCSI level on block devices, and at the file level for VMFS and NFS) is an intrinsic and important property of ESX - and critical for ensuring no split-brain condition occurs.

It's very important for all of us to prioritize best practices around production use cases, not ancillary use cases. This may make me a bit of a fuddy-duddy, but I think it's the "right thing to do". While the VMware issue that caused ESX snapshots to be slow on NFS datastores was VMware's - not NetApp's (fixed in ESX350-200808401-BG) - I personally think it's a mistake to recommend a workaround that compromises availability (since on NFS **and** on VMFS - where the file locks are maintained in the VMFS metadata area - the file locks are an important governor on VMs being booted on multiple ESX servers). Again, people in glass houses shouldn't cast stones (i.e. we've made that mistake too in the past!) - but it's one of those things to consider as your vendors all bend over backwards to compete.
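As a toy analogy (NOT ESX's actual locking code), here's how an exclusive file lock refuses a second "boot" of the same VM. This uses POSIX flock, so it assumes a Linux/Unix box:

```python
# Analogy for VM file locking: the first "host" to lock a VM's file gets
# it; a second attempt (as a second ESX host would make if the same VM
# were booted twice) is refused instead of causing split-brain.
# POSIX flock only - illustrative, not how ESX actually does it.
import fcntl
import tempfile

path = tempfile.mkstemp()[1]  # stand-in for a VM's lock file

host_a = open(path)                                  # "ESX host A"
fcntl.flock(host_a, fcntl.LOCK_EX | fcntl.LOCK_NB)   # host A boots the VM

host_b = open(path)                                  # "ESX host B"
try:
    fcntl.flock(host_b, fcntl.LOCK_EX | fcntl.LOCK_NB)
    result = "split-brain!"                          # this would be bad
except BlockingIOError:
    result = "second boot refused"                   # the lock did its job

print(result)
```

Disable the locks and that second attempt silently succeeds - which is exactly the failure mode the workaround risks.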

Did you see EMC and Cisco buy more VMware shares? For big companies - the amounts are small - but the message is clear - Cisco and EMC are doubling down here.

I'm pretty pumped with the Obama win last night. Some of my best friends and colleagues - smart people whom I respect - are Republicans. It is perfectly possible to disagree without being disagreeable. But that said, I do feel like it's a fresh start, and is a great testament to the United States' ability to remake itself and adapt.

October 06, 2008

It's been a bit, I know - one thing I do suck at with this blogging thing is "time to market" :-). Have been slammed - we had our EMC Technical Consultant/Solution Architect (and other various pre-sales technical roles) conference last week in Orlando, and I needed to show off some more distant future things.... Excuses, excuses.

I've gotten lots of requests for all the EMC sessions, including the keynote. I've attached them in the body of this post (video of keynote will be posted shortly)

Quick observations:

I think Paul Maritz did a great job on his keynote. He summed up VMware's vision (VDC-OS for the internal cloud, vCloud for a federated external cloud, and vClient for the desktop dilemma). Analysts hammered him the next day - but it was a weird bit of feedback: "right on message, what customers want, clear, and very differentiated. BUT delivering on the vision will take a couple of years, and more SG&A expenses." I don't know what to say to this. VMware is way ahead of their competition - but you always need to innovate to stay ahead of the pack. I think they were damned if they did, damned if they didn't. Had they just announced VMware.Next (whatever the next VI iteration will finally be called), they would have been slammed as tactical ("hypervisors are a commodity") and lacking vision. BTW - Joe Tucci got up in front of our thousands of pre-sales technical folks from around the world last week and outlined EMC's vision of the future - and suffice it to say we see the world similarly, we're holding hands, and we're working together as partners to make it happen - and VMware continues to be firmly independent, because we know that vision of the world requires them to be independent.

I'm no analyst, but what stood out to me was simple - every customer I talked to at the show loved VMware, loved the technology, and was investing for the future. That strikes me as a very, very good sign for VMware's long term prospects.

The hands-on-labs were good. I did the Virtual Datacenter tech preview and the vClient tech preview along with others, but gave up my seat to customers. Heck, I've got all the betas in the basement lab anyway :-) It would have been good in the SRM hands-on to do an actual failover, and demonstrate how to do a failback. This can be done (though not automated yet) in v1.0.

The whole vClient tech preview convinced me of something... While NetApp, NEC, and EMC were each trying to outdo the others demoing VDI at scale (BTW - NEC won, with 12K clients :-), the VMware View Manager (formerly known as VDM 3.0) and VMware View Composer (formerly known as SVI) are simpler. Also, as much as there is an economic challenge with storage for VDI (solved via deduplication or writeable snapshot technologies), the technical challenge (particularly at scale) is so much more... EMC demonstrated (forgive the lack of a voice-over; this video shows the scaling point, how the ESX clusters were configured, how many VC instances were needed, and the cluster utilization before the workload was applied) 10K clients at a 100:1 storage savings (5 VMs per source LUN, 100 snapshots off every source), but the other reality (beyond "wow, look at this scale!") is that the process of update, management, and pooling - even using Dan's award-winning PowerVDI script (and this doesn't take away from how cool that is) - is just more complex, particularly in the ongoing "lifecycle" of the desktops. I think the reality is that VMware themselves are in a better position to resolve this than us as the storage vendor. The round 1 exercise was worthwhile - we learned a ton about scaling up the vClient use case (all aspects: ESX, vCenter, storage, network). We're updating the solution lab as we speak to View Manager and View Composer, plus neat unannounced technology from EMC for the user data, for solution testing round 2. We also want to bring Cisco in as the 3rd leg from day one - there's loads of stuff on the vClient side that has to do with LAN and WAN optimization and compression.

I won't speak for any other vendors, but I was pretty happy with the traffic at the booth - lots of customers, lots of good questions and dialog. I think these events (while exhausting) are a great opportunity for direct customer/vendor communication - trust me when I say that we're listening!

I was very happy with our joint VMware/Cisco/EMC keynote. It was packed, standing room only - I think we were right around 1000 people. I attended most of the platinum sponsor keynotes, and I think we had the largest attendance, with NetApp a close second. The others (HP, Dell, IBM) were embarrassingly empty. I don't know what the deal is there.... I thought the IBM one was pretty good, for example. In part II of this post, I'll post the video recording of the EMC keynote session itself (VMware still hasn't gotten it to me) - but it exceeded my expectations, that's for sure. I wasn't sure originally how well the 3-party model would work (1 hour is not a lot of time for a lot of content), but I think it worked well. At the beginning, I started by asking how many people were EMC customers, then Cisco customers, then VMware customers. It was awesome - nearly 100% to every question - which affirmed why I decided to share our Platinum spot in the first place. Our customers want to see what their trusted partners are doing TOGETHER to solve their challenges. I'll say again what I said in the session: THANK YOU for being customers. You make our world go around.

I was very proud of the EMC team's execution to support the event. For those that wonder, it takes a gargantuan amount of work to pull something like that off - and that's just the platinum sponsor side: 9 speaking sessions, a booth with 8 stations and a constant theatre schedule, customer 1:1s, etc. VMware themselves - well, hats off to the team (I can speak for EMC during EMC World - and that's 10K people, not 14K). A special thank you to Chris Carrier and Cathy Cushman on my team - wouldn't have happened without you.

To give you a couple of views into the behind the scenes madness:

Imagine shipping 13 CX3s, 4 CX4s, 2 DMX 4-950s, and 3 NS40FCs to the event (some to Palo Alto for staging a couple of months beforehand). That's measured in TONS. Of course - when things are frantic (and shipped in racks!) some arrived with significant damage from all the transit. I got some panicked calls from the VMware folks about the gear supporting a couple HOLs and the VMware booth on the Friday before the show. A big thank you to the local Nevada EMC customer support team, who swapped out damaged parts in well under the 4-hour MTTR. I think it's fair to say that the VMware team was amazed when, an hour after discovering damaged LCC cards, the replacement parts showed up on the scene.

Without being too unkind about it - some of you may have noticed... well... inconsistent VMware naming in some of the sessions :-) The new VMware branding and framework was being updated short days before the event. Upside? I think they really nailed it. A good framework to have broader discussions with customers and between VMware and partners like EMC. It ain't (just) about the Hypervisor. But - the downside - core content was undergoing significant changes up to minutes before the sessions.

We got the vStorage API-enabled builds of... VMware.Next (I can't wait until I can call it something - it's a PITA).. (not the vStorage APIs for multipathing enabled build - we've had that for a while) on Wednesday before the show, and pulled off demonstrations (I think only EMC and Dell/EqualLogic were doing any vStorage API demos at VMworld - correct me if I'm wrong) days later. Great EMC effort from our platform engineering teams and also Scott Dougherty (one of our field VMware technical specialists) - thank you all!

There are some things I think we could have done better, and as the team starts planning for VMworld Europe 2009 (Feb in Cannes - my goodness, that's right around the corner!), we're working on it. What do you think we could have done better? I really want to know!

All the late night evenings before VMworld paid off, and all the late night evenings celebrating with colleagues, customers, and competitors were loads of fun.

Ok - for those of you looking for the EMC sessions read on - and if you have ANY questions on one of the sessions, don't hesitate to comment!

September 15, 2008

Pretty big day here at VMworld, meeting lots of people I know only online - which is great. My photo is in the about section of the blog, and if you'd like to meet me, I'd love to meet you!

So - the VDC-OS is the big news, let me fill you in on one part of it - vStorage.

vStorage has been something that VMware and EMC have been working together on for a long time - in fact, before it was called vStorage (which was recent) it used to be called VMAS, a name only engineers could love. Indeed, we've been working on this from almost right after the original acquisition, before the program even existed formally.

vStorage is pretty broad: it's 2 very new VMware things, and 2 existing VMware features

vStorage API for Multipathing

This was the first idea from eons ago. Customers know the pain VMware multipathing can be - setting up static paths, manual "load balancing" by rotating primary paths, and no automated path discovery.... you know what I'm talking about.... But we needed to find a way that we could get PowerPath, which is far and away the leader in this space, into the vmkernel, but still make it open for others to also innovate with VMware. It so happens of course that EMC are the ones way out in front here.

So, while native multipathing (NMP) continues to get better (round robin will likely soon no longer be experimental), this gives EMC customers another option - an option which is a quantum leap in multipathing/failover/scale and availability for block storage

It's also an avenue to deliver the other stuff PowerPath customers love in the physical world (path-based encryption, accelerated data migration)

It's also simple, and easy to use and install. It just works, like the best technology does.

Acceleration via storage offloads using SCSI driver primitives (we're demoing this in our booth and in the VMware/EMC/Cisco Keynote) - think "I/O" dedupe (not to be confused with Storage Dedupe). This can have a HUGE impact on storage network utilization - much more efficient, and very important for lower bandwidth storage networks like 1Gbps iSCSI.

Thin Provisioning Integration

While NetApp, EMC, and Dell/EqualLogic are all shipping snapshot VC integration today (we're demoing Replication Manager in our booth and in the VMware/EMC/Cisco keynote, and you can also see a demo of this here), vStorage will make this more transparent - but it adds something REALLY cool: the ESX server tells the array what BLOCKS make up a VM, so svmotion can be offloaded to the array as an alternative to the current file-copy method, and snapshots can potentially be done on individual VMs even on block devices. Again, this uses SCSI driver primitives

August 08, 2008

Folks, many of you have asked HOW we do the "thousands of desktops, instantly, consuming the space of one" that I demoed and described here.

Here are the step-by-step instructions and PowerShell scripts we use.

Note that these are provided openly and freely, but with no support, and no guarantees. That sounds freaky, but it's not. The APIs we use (on ESX, Virtual Center, Active Directory, Windows, and of course the array itself) are all standard and supported. The SCRIPT, however, always needs a little change here or there for each customer.

We do provide end-to-end support and a custom solution for any customer that wants it, as part of an overall engagement.

But, we wanted to be open and clear - and share the know-how. Questions posted as comment here will get best-effort support (again, customers who have had this professionally installed can just call 1-800-SVC4EMC).

PowerShell scripting with VMware is a fun new frontier, and the tool itself has evolved rapidly as we deploy this at customer after customer. It now does a very clean job of integrating with the connection broker.

It will continue to evolve as the next generation of VDM and other VDI elements come from VMware - exciting things to come there in the coming months. At EMC, our view is that every customer is unique. In some cases, mass array replicas integrated with VDM are the way; in other cases it's VI in the back and XenDesktop as the presentation/broker; and in some cases in the future it will be leveraging all VMware technology, and we'll happily be the fast, available storage. Customers are different. They come in every shape/size/color. BUT - we have the answer :-)

I can't claim any of the brains behind this - it's one EMCer, and one who deserves a lot of credit - Dan Baskette. Dan - I've said it before, I'll say it again:

While EMC may have 400 VCPs and 40K employees, people underestimate the power of individuals to make a massive difference - at a startup, or a huge enterprise. You are a difference-maker.

Oh, BTW - my earlier post, with all the servers? Here it is today. We're now up to 312 of those servers. What the hell are we doing? Answer: building a 500, 1K, 10K, 20K and 40K (if we can get it that high) client VDI reference architecture - this is the VI layer; there are also the connection broker and client layers (and the storage layer) you can't see in this shot. While we are aiming to have initial results for VMworld, this will remain indefinitely as part of the VMware center of excellence out there in Santa Clara for customers who want to try things that are hard to do at home :-) There are labs and solution centers with far more gear, but all in one place, dedicated to one purpose - it's something neat.

Read on for instructions and the script itself. Partners, Customers and EMCers - note that you can do this with the Celerra Sim, so you can easily build your own environment to give it a whirl (instructions on how to get that working here)

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Dell Technologies and does not necessarily reflect the views and opinions of Dell Technologies or any part of Dell Technologies. This is my blog; it is not a Dell Technologies blog.