“Today we announced HC3 Cloud Unity℠, a new partnership with Google that has been two years in the making. Both companies have committed significant resources and technology to make this happen, and we’re super excited to announce it.”

So what is it? Simply put, HC3 Cloud Unity puts the resources of Google’s cloud onto your local LAN. It becomes a component in your infrastructure, addressable locally, which your applications can interoperate with in exactly the same way they would with any local system.

The impact on operations is significant. For example, this takes the concept of cloud-based disaster recovery to a whole new level, because again, those cloud resources are part of the local LAN. This means the networking nightmare that is typically present in DR is gone, and an application which fails over to the cloud resource will retain the same IP address that it had before — and all the other systems, users, and applications will continue to communicate with the “failed” resource as though it never moved.

This also enables you to think about DR in a completely different way. Usually we think of DR as “site failure” — and certainly that could hold true here. But, in addition, we can now think of using this type of cloud-failover for individual apps and not necessarily entire sites. Again, since those apps failover into the same LAN, retaining IP addressing, they will work in either location.

Those are two concepts, simplified networking and DR, that we think customers will gain immediate benefit from. In addition, those examples should point you to something very new and exciting: true hybrid cloud. With everything on the same network, an application which may use several VMs can have those VMs spread across both their on-premises systems and the cloud, without any change in configuration or use. Furthermore, moving an application to the cloud is as simple as live migrating a VM between on-prem servers, because from a networking perspective, the cloud is “on prem.”

To accomplish this we have combined technology with Google, and both sides have also introduced new tech. On the Google side, this uses the resources of their cloud combined with newly launched nested virtualization technology. On the Scale side, we are using the HC3 platform with Hypercore and our SCRIBE SDS layers, and have now added SD-WAN capabilities to automatically bridge the networks together into the same LAN.

The end result, in line with all Scale products, is extreme simplicity. These cloud resources are right there on your LAN. Any VM can access them, use them, and move into or out of the cloud without reconfiguration or cloud awareness. We know our customers often run a wide mix of workloads, some of which may be older, legacy systems. Whether old or new, these apps can now run in the cloud with a simple click in the UI.

When we were first approached by Google two years ago, we both immediately saw the similarities between our platforms and approaches. From KVM to software-defined storage, there was a lot that was already “in alignment”, which enabled our platforms to work so seamlessly together.

Delivering this type of hybrid cloud functionality is the road we’ve been driving our customers down for a long time. From first coining the term “hyperconvergence” in 2012 to now bringing customers into this cloud-converged environment, we will continue to innovate to meet customer needs while maintaining the ease of use and interoperability that is fundamental to the Scale platform.

Google released a blog post detailing Compute Engine and its beta nested virtualization.

HC3 Cloud Unity promises to merge an on-premises HC3 virtual network with a virtual network on GCP, creating a single, ultimately flexible virtualization platform. This will allow (assuming sufficient internet bandwidth) virtual machines and applications running on site to migrate off site as needed, as well as letting you lease compute or storage capacity from Google at times of high load. That facilitates optimum utilization of on-site hardware, with maximum overprovisioning, knowing that GCP can pick up the slack when the in-house hardware can’t cope.
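Scale hasn’t published the internals of the SD-WAN layer that does this bridging, but conceptually, making a remote network appear on the local LAN resembles a VXLAN overlay bridged into the local segment. As a rough sketch (the interface names, the VNI, and the 203.0.113.10 endpoint are placeholder assumptions on my part), the standard Linux commands such a bridge would need could be generated like this:

```python
def vxlan_bridge_cmds(vni=100, remote="203.0.113.10",
                      wan_if="eth0", lan_if="eth1"):
    """Return shell commands that stretch one L2 segment across sites."""
    return [
        # VXLAN tunnel endpoint: encapsulates Ethernet frames in UDP 4789
        f"ip link add vx{vni} type vxlan id {vni} remote {remote} dstport 4789 dev {wan_if}",
        # A bridge ties the tunnel and the local VM-facing NIC together
        "ip link add br0 type bridge",
        f"ip link set vx{vni} master br0",
        f"ip link set {lan_if} master br0",  # local VMs join the same bridge
        "ip link set br0 up",
        f"ip link set vx{vni} up",
    ]
```

Frames from local VMs would be encapsulated and delivered to the peer site, which is why a VM that moves can keep its IP address and stay reachable at layer 2.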

This, for me, is a game changer: having the power, scale, and reliability of Google’s cloud in house changes the paradigm. Infrastructure as a service (IaaS) is here to stay; IT departments can forget about hardware and concentrate on what matters to clients and users: the application.

Installing a server on our hypervisor with vSphere

So, where are we?

We have a virtual server with the ESXi hypervisor running as the base for a virtual lab. Now, as an (out of touch) Windows domain admin, I think installing Windows Server to refresh myself is as good a place as any to start. Since the last time I administered a domain was when Server 2008 R2 was new and starting to get adopted, I figured I’d grab myself an evaluation copy of Windows Server 2016 and see what’s new. So I’ve grabbed an ISO of that and I’m ready to get started.

So here I am, at the vSphere front screen. Inventory, Administration, and Recent Tasks seem like reassuring fields to have when connecting to a hypervisor, especially when you can see power events in the tasks pane. Looks like I’m good to go.

Now, as a noob, I must confess it isn’t entirely obvious where to go to start creating virtual machines, but hey, I figure Inventory is the most likely of the options presented to me.

So, opening Inventory, I find:

After that brief moment of confusion, everything is much clearer: another MMC-like window with a nice explanation of what a virtual machine is, and a nice “Create a new virtual machine” link.

That’s more like it; now I’m happy again. So let’s do that and create a virtual machine for Windows Server 2016.

Up comes another great wizard, which prompts you for each configuration option. This is so quick and easy that I am basically going to accept the defaults at every stage where possible; after all, a vanilla 2016 install must be one of the most common tasks performed, and I want to see how it does.

Just calling it “testvm” for now.

Again, the default storage location.

I changed the guest OS from the default to Windows Server 2016 (64-bit).

The default network configuration looks good to me.

Again I accept the defaults, reducing the disk to 10 GB, assuming I can always grow it later if necessary.

Click Finish and, in the blink of an eye, it’s done. All I have to show for it is a new item under Inventory: testvm.
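As an aside, clicking through the wizard isn’t the only way: the same VM could also be created from the command line with govc, the CLI from VMware’s open-source govmomi project. This is a sketch only, not something I ran in this lab; the network name, ISO path, and defaults are my assumptions. I’ve built the command in Python so the pieces are easy to see:

```python
def govc_vm_create(name, cpus=1, mem_mb=4096, disk="10GB",
                   guest="windows9Server64Guest", iso=None):
    """Return the govc argument list for creating a VM like 'testvm'.

    Assumes govc is installed and GOVC_URL points at the ESXi host.
    """
    cmd = ["govc", "vm.create",
           f"-c={cpus}", f"-m={mem_mb}", f"-disk={disk}",
           f"-g={guest}", "-net=VM Network", "-on=false"]
    if iso:
        cmd.append(f"-iso={iso}")  # e.g. "[datastore1] iso/win2016.iso"
    cmd.append(name)
    return cmd

# e.g. subprocess.run(govc_vm_create("testvm", iso="[datastore1] iso/win2016.iso"))
```

Passing that list to subprocess.run would create the VM powered off, ready for install media to be attached.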

Looks like it worked. Let’s see if we can get Windows installing on it.

It’s nice to note that the “create virtual machine” task is registered in Recent Tasks. Let’s click on testvm and see what we can do.

The first thing I notice is that we get a toolbar with play, pause, and stop buttons. Hitting the play button does very little, apart from putting a play icon over the virtual machine and adding a “power on virtual machine” event in the task log.

OK, that makes sense: it’s for virtual servers, which are usually headless and remote, and from past playing with virtual machines at Cambridge I know I need to connect to the console of the virtual machine. Luckily for me, one of the icons in the new toolbar has a hover-over tooltip that says “launch the virtual machine console”, so I hit it:

I’m greeted by a failed PXE boot screen. Again, this makes sense: I haven’t given it any install media, and the VM is network booting by default, which is very useful for remote installs.

So we need to give the virtual machine some install media. I have an ISO and, as I’m too lazy to burn it, I will try to mount it. So, closing down the console and shutting down the VM, let’s see if we can mount the ISO to install from. Right-clicking on testvm, I find an option to edit its settings.

Now, under CD/DVD Drive, the device type is Client Device, with a message telling us to power on the VM and then click the Connect CD/DVD button in the toolbar.

Sounds sensible, let’s try that…

On the third reboot I worked out that pressing F2 enters the VM’s BIOS, which gave me enough time to mount the ISO so I could boot from it.
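If you’d rather not race the boot screen, VMware documents a VMX option, bios.bootDelay, that holds the BIOS screen for a set number of milliseconds. Editing the VM’s .vmx file for this is my suggestion, not something I did here; a small sketch of patching the config text:

```python
def add_boot_delay(vmx_text, delay_ms=5000):
    """Return VMX config text with a bios.bootDelay entry (milliseconds)."""
    # Drop any existing bios.bootDelay line, then append the new value.
    lines = [line for line in vmx_text.splitlines()
             if not line.startswith("bios.bootDelay")]
    lines.append(f'bios.bootDelay = "{delay_ms}"')
    return "\n".join(lines) + "\n"
```

With a five-second delay there is plenty of time to open the console and hit F2, or connect the CD/DVD, before PXE kicks in.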

Now, I’m not going to talk you through installing Windows, as I’m sure you are capable of that; in my next post I will mention any hurdles I hit.

Let’s get started….

So, I need to learn about VMware. What’s the best way to go about it, I asked myself? Jump right in, install VMware Workstation Pro (12), and have a play!

My desktop has eight 4 GHz cores, 24 GB of RAM, and a few terabytes of disk; I think I should be fine. So, off to vmware.com to download VMware Workstation 12 after creating a free account.

I’m running Windows, so I’ll grab the appropriate version. I have a key for version 12, so that’s what I’m getting. The 400-ish MB download is quick and slick, as is the installer that follows: the usual EULA fare and customise options. If you’re considering VMware, you have enough knowledge that I don’t need to walk through the installer process. The only thing of note is that you get a full-featured 30-day evaluation, so if you want to play and explore the possibilities, you can.

Frankly, installing VMware to this point has been simple, with a slick, professional installer, as is to be expected given VMware’s reputation and industry standing.

Now, for learning VMware, I figure I need a virtual environment, so I’m going to set up a virtual server within VMware Workstation, on which I will install a hypervisor (ESXi) and use that as the base for a home lab.

So, let’s see what we have and open Workstation for the first time…

It’s not unlike a Microsoft Management Console (MMC) window, and I’m immediately feeling at home, with the addition of some helpful shortcuts on the home screen.

It strikes me that VMware Workstation is very easy to install for anyone with basic computer skills, and that’s great: it’s accessible for people to learn, and it’s quick and easy for tech-heads like me to set up. I can only imagine that’s a plus in the enterprise world.

Phew, now that’s out of the way. I was expecting writer’s block, but hey, no!

Who is this fool, you ask?
Well, let me introduce myself. My name is Phil Marsden, and I have been invited by @rogerlund to start blogging my experiences, or at least my experiences in the world of cloud computing and virtualization. To that end, I think it’s only fair that you should be familiar with the guy currently occupying your eyeballs. I met Roger through a shared love of photography, and it is true, photography has become somewhat of a passion of mine, but that is unimportant for now.

What is important is that I love technology and what technology can do for us. I could use a photography analogy here: old-school photographers shot film and developed it themselves in a darkroom, patiently waiting for the developing and fixing chemicals to fix the image. They would then take that fragile negative and lovingly enlarge and process a print, a process which, even in the most professional of labs, might take an hour or more for “instant news”. Today we shoot digital, get a 15+ MB RAW data dump from every shot, and load and develop it in a digital darkroom in a matter of minutes. Not only can we replicate the old processes much faster, technology allows us to go much further than ever before. Part of my love for technology comes from my background. I’m nearly 40, and I grew up in a nice southern English market town, where I was always a bit of a nerd at school. I never enjoyed school very much, but I did well at it, and finally, at the ripe old age of 18, I shipped off to university. I studied chemistry, which again was fascinating, and I did well at it, understanding and growing in the process, but nothing really grabbed me and shouted “this is what you want to do!”
In the final year of my degree we had a module called computer-aided drug design, and I was hooked: hooked by the concept of a computer being able to model a chemical process. By calculating a chemical’s properties and its interactions with other compounds, we could virtually screen compounds in silico, or design the perfect compound to act on a specific drug target. Now, this process wasn’t perfect, but in a matter of hours, days, or weeks we could come up with something that, through traditional methods, would have taken years, with dozens of chemists synthesizing and testing compounds. In short, we could get faster and better, thanks to computers.

I was so hooked that it took me on to do my PhD at Cambridge in molecular informatics, where my project was very cool and I got to play with stereo 3D visualization long before 3D was even in the cinema. Even in the sea of intellectuality that was the theoretical and computational chemistry department at the University of Cambridge, I was still the nerd when it came to computers. I gamed, and as I was a poor student I built my own machines; I frequently overclocked them too far and broke components, so I was always fixing something. That carried over into my professional life, and I was frequently the person asked to help out when workstations did funny things. After (finally) writing up my PhD, I had nothing planned, and the head of the group asked me if I wanted to stay on as a computer officer (now we are getting somewhere). Very quickly I realised it was the challenge of learning things, coupled with the use of technology, that gets me going!

When I joined the group, IT was basically firefighting. All users had root or admin on their workstations, and frankly it was a nightmare. There were network policies that defined whether personal machines were allowed on the network, and this department of 2,500 registered machines and around 10,000 registered users had a team of seven computer officers managing everything: OS installs, upgrades, application installs, machine meltdowns, network infrastructure maintenance, and socket patching. It was all firefighting; there was no manpower for development or departmental infrastructure.
Now, I came on as an assistant to the computer officer in the Unilever Centre (hello Charlotte!), and Charlotte, to my eternal gratitude, instead of using me as a personal slave assistant in a very hectic work environment (which, for future reference, contained a “training area” of 25 identical workstations for us to host events and workshops), decided she wanted me to manage the training area. The training area was frequently reinstalled and had various packages installed for particular workshops. Indeed, one of my first tasks was to install Office 2000 on each machine; amazingly, we only had one CD, and it involved going to each machine in turn, powering up, logging on, inserting the disk, and installing Office, 25 times!

Now, I’m not the brightest, but even to me it seemed there must be a better way. Charlotte sat me down and said she wanted me to go on a training course for Active Directory, as she had done one “a while ago” and was “fairly sure it had the answer”. And boy was she right!

The next couple of years of my life were spent implementing an AD infrastructure in our little sub-department. Our carrot to entice users into letting us manage their machines was a little fileserver I knocked together out of recycled bits kicking around; IIRC it had a 6 TB RAID 6 array, on which I gave each user space mapped as a network drive, along with an assurance that three disks would need to die simultaneously for their data to be lost.
Over time this grew, and when Charlotte left to follow her passion for marine biology and her job became available, I was successfully recruited, and the domain slowly grew, research group by research group. It also grew thanks to a cohesive strategy emerging within the department, which resulted in departmental resources that we spent on servers, partly because the domain had proved its worth in terms of network security, machine maintenance, and the resiliency of user data, and so now justified that investment. When we first set up the domain we used old workstations, probably 1 GHz Celerons, as domain controllers, and made sure we had enough of them for replication to keep us safe.

Now, with investment, we moved on to virtualized servers, where each virtual server was hosted on a pair of real servers with network-mirrored hard drives, automatic failover, and so on. System uptimes from that point on were just great: 99.9% and upwards.
Now, whilst at the university I met a lovely Brazilian doctor; we fell in love and got married, and in time my eldest son was born. Circumstances then led us down a path which resulted in our moving to Brazil and my becoming a stay-at-home dad.
Fast forward seven years: I was talking to my family at lunch about the opportunity to be blogging this today, and my son ended up asking, “what is cloud computing?”

That’s pretty much where I am. I was a sysadmin seven years ago; I’m familiar with the concept of virtualization, and I’ve run virtual machines in the real world. I’m a bit out of touch at the moment, but I am now at a point in my children’s lives where I have more time to develop, and a brain that still wants to learn.
I eagerly accept Roger’s invitation to learn about the VMware platform and blog about my experience.