
2014-12-31

60 blog posts in 2014 – sometimes I don’t understand how that happens. Is that a lot? A little?

I have always said that I do not blog for the sake of blogging, but to share information and my thoughts. It is good to see that people find this useful – and do take an interest in what I have to say.

The 5 posts that received the highest number of visitors in the past year were:

What did I blog about? I calculated this with the tags I attached to each of the posts:

OpenStack (30)

VMware (19)

Cloud (12)

DevOps (7)

VMworld (6)

Automation,

Administration,

Architecture,

Design and Docker (5)

(Ok I lied – that was more than 5 topics)

If you would like more of a graphic presentation – here you go.

For me it is interesting to see that my focus has changed and is no longer VMware-centric. I guess that is to be expected – given my current role, and the fact that I have a more overall solution in mind during my daily work – which is so much more than virtualization.

I hit a milestone – on December 03, 2014 I surpassed 2,000,000 pageviews on my blog.

It took me 5 years to achieve my first million; the second was achieved in only two years. I have been blogging for 7 years now, and it is always good to share my thoughts, my insights, and sometimes my rants. I hope you all benefit, and will continue to make use of the articles I write.

Here’s looking to a great 2015, filled with opportunities, community and exciting things ahead.

And how did 2014 turn out for you? Please feel free to leave your comments and thoughts below.

This is not a VMware-bashing post (even though it might be perceived as such).

I hold all three of the authors in very high regard.

Here goes.

When reading this document I was hoping to hear something new, something refreshing, something that VMware customers have been asking and verbally complaining about for a very long time.

Alas – this is not the case.

vCenter is a single point of failure. There, I have said it. I have said it before, and I will continue to say it in the future until this is fixed.

In the following article I will be taking statements directly from the text, providing my thoughts as I go along.

Great start – This document will discuss the requirements… After re-reading that statement, I understood what VMware did. VMware has not provided us with a method of providing HA for vCenter – but rather has explained what they think should be defined as High Availability for your vCenter server.

The authors then go into explaining all about MTBF and MTTR – they did a great job. I will not go into the details here – you should read the document.
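As a back-of-the-envelope sketch of how MTBF and MTTR combine (the MTBF figure here is an illustrative assumption, not a number from the document; the ~5 minute MTTR is the vCenter recovery time discussed further on), the standard formula is availability = MTBF / (MTBF + MTTR):

```shell
# Illustrative availability calculation.
MTBF_MIN=43200   # assume one failure every 30 days (in minutes) - an assumption
MTTR_MIN=5       # ~5 minutes for HA to restart vCenter, per the document
awk -v mtbf="$MTBF_MIN" -v mttr="$MTTR_MIN" \
    'BEGIN { printf "availability: %.4f%%\n", 100 * mtbf / (mtbf + mttr) }'
```

Even a single 5-minute outage a month keeps you just under "four nines" – which is exactly why the SLA discussion that follows matters.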

SLAs are extremely important – and for each and every environment an SLA means something different. Yours may differ from your neighbor’s, so it is important to understand what you need to achieve.

They then go into describing the tests that were run in order to measure the amount of time it would take for a vCenter server to recover. Fair enough.

Here is where it starts to get interesting. Let us look at this in a picture.

Bottom line is – that once a vCenter server has gone down – it will take a little over 5 minutes until it is fully functional.

This part of the document states that having vSphere HA – and having vCenter running as a virtual machine actually provides some level of protection.

A dedicated management cluster is of course advised – that way you have a dedicated environment to run your management components without having to worry that the client workloads will interfere.

Also putting the database in the same management cluster is recommended – seems logical.

I then noticed that the only SQL version that is supported for vCenter 5.5 is Enterprise and up – which was news to me. I gather this is a documentation bug – because the VMware Product Interoperability Matrix says that Standard is supported.

So how do you protect vCenter?

It would really be great if they would explain exactly how that would be possible and how it should be done. Is it still possible? How exactly? In order to protect vCenter, I need another vCenter? What about licensing? What are the implications?

Emergency Restore was a new one to me – but it is only available in vSphere Data Protection Advanced Edition – something that was left out – which is approximately $1,500 (list price) per socket. As a result of the feedback received in the comments, I have amended this: it seems that Emergency Restore is also available in all editions of VDP – not only Advanced (more information here).

OK, enough copy and paste. This piece above is what set me off.

Essentially, what VMware is saying is the following:

Use a separate management cluster

Run vCenter in a VM

Run the Database in a VM

No matter what happens – if your vCenter crashes then it will be down for 5 minutes.

Your workloads are safe because they are running on your ESXi hosts, are protected by HA, and can continue running without a vCenter server.

Points 1-4 - I totally agree. With point #5 I also agree.

But there are environments that cannot afford to have a 5-minute outage. VMware might say that having vCenter go down for five minutes is not really an outage per se, but I would very much like to disagree here.

If I cannot provision a new VM because my vCenter is not available – that is an outage.

Where would this be an issue?

VDI environments – what if a user logs in and their desktop is not provisioned because vCenter is down? How about when that happens to 100 or 1,000 employees?

vCenter is probably the most crucial part of your virtual infrastructure, and all that you can expect from an availability perspective is to accept as a given that vCenter might go down for 5 minutes at a time.

There are environments that will accept this - I would actually say that the large majority are fine with this – but what about those who are not? Those who cannot afford having this kind of outage? What do they do?

There used to be a product called vCenter Server Heartbeat – which was retired.

Where are those promised options? When will they be available? What do companies do in the interim? Pray that their vCenter does not crash?

The scenario on which VMware based their whole presumption was that the host on which vCenter was running would crash, HA would kick in, and the VM would be restarted on another host within 5 minutes.

The whole scenario of having a problem with your database, or a vCenter service problem (and believe me, it happens), was not covered.

Take the following scenario. You have a vCenter appliance. For some reason the vCenter service stops responding on the VM. There is no automatic restart. Eventually you get a call, something is not right. You try and restart the service, nothing happens. You restart the VM, nothing happens.

Now what? Open a call with VMware? Deploy another vCenter appliance and hope that nothing goes wrong? I can guarantee you that will take a hell of a lot longer than 5 minutes.

Why does the document even go into providing a clustered solution for the MSSQL database? Because it might fail? Yes, that could happen. But guess what – the whole system is only as strong as its single weakest link. So providing a clustered database solution might give some peace of mind – but it will not protect you from an outage – because there is no way to cluster a vCenter server.

In conclusion – yes, there are considerations. I would definitely not say that VMware have a High Availability solution for vCenter. They have done their best to minimize the impact when vCenter crashes – but that is not HA!

What do you think? Am I making a mountain out of a molehill? Or is this a real and valid concern? Please feel free to leave your comments and thoughts below.

2014-12-08

OpenStack is a living product – and because it is community driven, changes are being proposed almost constantly.

So how do you keep up with all of these proposed changes? And even more so why would you?

The answer to the second question is that if you are interested in the projects then you should be following what is going on. In addition, there could be cases where you see that a proposed blueprint could break something that you currently use, or is in direct contradiction to what you are trying to do – and you should leave your feedback.

OpenStack wants you to leave your feedback – so please do!

About the first question - the answer is here – http://specs.openstack.org. This is an aggregate of the new blueprints (specs) for each of the projects as they are approved.

I use the RSS feeds available for the blueprints, which help me keep up to date as soon as a new blueprint is added.

I have compiled an OPML file with all the current projects that you can add to your favorite RSS reader. You can download it in the link below.
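For anyone curious what such a file looks like, here is a minimal OPML sketch. The feed URLs below are placeholders, not the real ones – substitute the actual feed URL for each project from specs.openstack.org:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>OpenStack specs feeds</title>
  </head>
  <body>
    <!-- xmlUrl values are placeholders - replace them with the
         real per-project feed URLs from specs.openstack.org -->
    <outline type="rss" text="Nova specs" xmlUrl="https://example.org/nova-specs.rss"/>
    <outline type="rss" text="Neutron specs" xmlUrl="https://example.org/neutron-specs.rss"/>
  </body>
</opml>
```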

2014-12-02

In my previous post I showed you how to get your OpenStack git environment up and running by using a container.

In this post we will go through the steps needed to actually contribute code. This will not be a detailed tutorial on git and gerrit and their functionality, but rather a simple step-by-step tutorial on getting your code submitted for review in OpenStack.

First we start up the container.

Since playing around with real OpenStack code is not a good idea when you are just learning – there is a sandbox repository where you can perform all your tests.

First things first, we need to clone the repository so that we have a local copy of the files:

git clone https://github.com/openstack-dev/sandbox

What this does is copy all the files in the repository to a folder of the same name under your current working directory. Depending on the size of the repo, this could take seconds or minutes.

Enter the directory and look at the files.

cd sandbox
ls -la

You will see the files are the same as those in the repository on the web.

With the exception of one item – the .git folder – which is not visible on the GitHub repository. This link will give you some more explanation as to what is in that folder.

Now make sure you have the latest code from GitHub.

git checkout master
git pull origin master

Create a branch to work in, from which you will make your commits.

git checkout -b MYFIRST-CONTRIBUTION

Now we get to the changes.

I am going to create a folder named maish with two files inside, with the structure shown below.

Here I just created empty files – but whether it is correcting someone else’s code or adding new code, the process is the same.

Once you have completed your work you will need to add all the changes and push them back up to the original branch.

Add all the files and changes by running

git add .

Next, you commit your changes with a detailed message (and you should really understand how to write proper commit messages) that will be displayed on review.openstack.org, creating a change set.

git commit -a

A vi editor will open where you can now add the reasons for your change and mention any closed bugs. Follow the conventions for git commit messages, giving a good patch description: a summary line first, followed by an empty line, descriptive text, backport lines and bug information:
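An illustrative commit message following those conventions might look like this (the wording and the bug number are made up for the example):

```
Add sample files for first sandbox contribution

This change adds a folder with two placeholder files as a first
test of the Gerrit review workflow against the sandbox repository.

Closes-Bug: #0000000
```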

Save the file by typing :wq, and you will see that your files and changes were added.

Set up the Gerrit Change-Id hook, which is used for reviews, and run git review to run a script in the /tools directory which sets up the remote repository correctly:

git review

You might be prompted to accept the SSH key – type yes.

If all goes well you will see something similar to the output below.

Looking back at the GitHub repo, you will not see any changes. You might ask yourself – where did my code go?

The reason you do not see any change is that before any code is accepted into the master branch it has to be reviewed, both by an automated set of tests and by humans.
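If a reviewer asks for changes, the usual Gerrit flow (sketched here from the standard workflow, not covered in this post) is to amend the same commit and push a new patch set:

```
# edit the files the reviewers commented on, then:
git add .
git commit --amend   # keeps the same Change-Id, so Gerrit updates the existing review
git review
```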

What Is VIRL?

VIRL is a comprehensive network design and simulation platform. VIRL includes a powerful graphical user interface for network design and simulation control, a configuration engine that can build complete Cisco configurations at the push of a button, and Cisco virtual machines running the same network operating systems as Cisco’s physical routers and switches – all running on top of OpenStack.

How Does VIRL Work?

VIRL uses the Linux KVM hypervisor and OpenStack as its virtual machine control layer, with a powerful API enabling the creation and operation of VMs in a simulated network topology. Users design their network using the VM Maestro design and control interface, with network elements such as virtual routers, switches and servers. The design is translated into a set of virtual machines running real Cisco network operating systems.

The Cisco VIRL Personal Edition annual subscription license provides students with a scalable, extensible network design and simulation environment for several Cisco network operating systems. This includes IOSv, IOS XRv, NX-OSv and CSR1000v, as well as third-party images such as Ubuntu Linux.

Educational pricing is available for this product for college students, parents buying for a college student, teachers, homeschool teachers and staff of all grade levels – limited to one purchase.

2014-11-27

One of the most daunting and complicated things people find when trying to provide feedback and suggestions to the OpenStack community, projects and code – is the nuts and bolts of actually getting this done.

That is why I embarked on providing a really simple way to start contributing to the OpenStack code. I was planning on writing a step-by-step guide on how exactly this should be done – but Scott’s post was more than enough, so there is no need to repeat what has already been said.

Despite that, there are still some missing pieces, which I would like to fill in in this post.

Before we get started there are a few requirements/bits of information that you must have, and some things that you need to do beforehand, in order for this process to work.

--name="git-container" – this is just to identify the launched container easily

-e GIT_USERNAME="\"Maish Saidel-Keesing\"" – the quotes have to be escaped with \"

-e GIT_EMAIL=maishsk@XXXX.com – don’t forget to put in your real email address!
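Putting those flags together, the full launch command would look something like this (the image name is a placeholder – use the image from the previous post):

```
docker run -it \
    --name="git-container" \
    -e GIT_USERNAME="\"Maish Saidel-Keesing\"" \
    -e GIT_EMAIL=maishsk@XXXX.com \
    <image-name>
```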

Once the container is launched – provided you have followed all the steps correctly and the variables are correct – you will see some output printed to the screen with the SSH key that was just created, and you will also be able to see that key in the Gerrit web interface.

You can see that the comment on the web is the same as the hostname of the container.

Embedded below is a screencast of the launching of the container.

In the next post – I will show you how to actually contribute some code.

If you have any feedback, comments or questions, please feel free to leave them below.