Today I would like to touch on a slightly sensitive topic – because it is a double-edged sword (so bear with me please).

VMworld is the place where everyone comes to – it is the VMware show of the year. Attendance grows each and every year, and sometimes it makes me wonder if something new – something that has not been done before – will happen at this year's show. Any and every vendor that is connected with VMware in some way will probably be there.

Have you ever noticed how many people change companies after VMworld? It is not something that I have performed a scientific study on – I do not have exact numbers.

It dawned on me – after a few conversations in the hang space last year – that several people were there looking for ways to advance their careers. Some of them had specifically set up meetings during the show in order to interview candidates or to apply for a position.

And that is quite natural. I think that many of the vendors at the show are looking to recruit talent – talent they have the opportunity to meet in person, get to know, and see if they are a “match made in heaven”.

I will not be naming any names (of companies or individuals), but I distinctly remember at least two vendors that had recruiting agents at their booths, on the Solutions Exchange floor.

One individual (who asked to remain anonymous) answered my question.

Q: Do you know of any other people who changed companies as a result of attending VMworld?

A: Not specifically because of VMworld. I think the changes were already underway, or people were already thinking about it before VMworld. It just happens that a lot of key people end up being at VMworld. So I think it's more a result of everyone being together at one place that helps make things happen. I'm sure the other big trade shows are the same as well.

Matthew Brender also got back to me with this answer (and he was kind enough to offer that you contact him if you have additional questions).

Q: Did you change companies as a result of an offer you got during / after VMworld?

A: My job shift didn't have to do with VMworld directly, but it is a direct result of attending for the last few years. It's connected me into the pieces of the tech community that led me down a path. I began looking at VMworld and received incredible mentorship there.

.. I'm not sure of anyone who can trace their transition right back to VMworld. What I appreciate about that point is something I think you believe as well: your social good will does not come to a climax at VMworld. Success in our space is about *continually showing up.* Dots are connected at shows, though bonds are built over time. You can't leapfrog the required effort.

I do think it opens up quite a few possibilities – but that does not mean that just because you attend VMworld you will get a great job somewhere else. It takes time, a lot of effort, and patience to build those relationships and maintain them. Sometimes those connections will advance you personally and professionally – and sometimes they will not.

The reason I mentioned the double-edged sword earlier is that a manager reading this post might say,

“I don’t want to send my employees to VMworld – they will get snatched away”.

This could be a valid concern. I would like to stress that this is not the purpose of this post, and also explain why this concern should not affect your decision to send members of your team to VMworld.

If you are afraid that you might lose your staff as a result of them being snatched up – then they were not yours to begin with. A satisfied employee will stay where they are – and that usually means they are happy with what they do, have security, are compensated well, and have a career path in the company. If that is not the case – then they would probably leave at some point anyway – so VMworld was just an opportunity that presented itself, and not the reason.

Your employees will learn SO MUCH at the show, which I think outweighs (by a ton) the concern that they might go somewhere else. The benefits you will gain from them attending, the benefits your business will gain, and the personal benefits your employees will gain – the satisfaction, the energy, the motivation to try the new things they learned at the show – far outweigh your concerns.

So for those attending VMworld this year – my advice to you would be: go out there and meet people, talk to people, eat lunch (if it's edible) with people, and chat with others even if you do not know them. You never know how the person next to you might help you advance your career.

2014-08-11

There has been a lively thread on the openstack-dev mailing list these past few days, largely to do with GBP (Group-Based Policy). I don't want to go into the intricacies of what exactly sparked the discussion, but rather to discuss one of the by-products that came out of it.

OpenStack is now four years old, and on its ninth cycle of development. A huge amount of innovation has gone into the product, and I think the community is now coming to a stage where the growing pains are starting to show.

I think it is part of human nature to like the next shiny thing. We are always looking for the next best thing.

Let me give you a classic example. What was wrong with the iPhone 4? Of course, there will be those who say the new iPhone 6, due to be released in the not-too-distant future, is better, faster, more powerful, nicer looking, etc. etc.

But if you look at it from another perspective, it is just a nice and shiny new toy that basically does the same things as your old phone.

Did you really need to get a new one? Probably not. Is it nice to have a new one? I would definitely say so.

But have those annoying things from your old phone ever been fixed? Does your battery last longer or shorter with your new phone? Are there annoying bugs that have been around forever and have never been fixed?

Here is another example. There has been a cosmetic, but annoying, bug in the vSphere client since forever. I am sure you have all come across it. When provisioning a new VM, the window focus always jumps to the location field instead of staying in the first field, which is the VM name. Annoying as hell! In the time this bug has been around, VMware has developed the dvSwitch, SIOC, NIOC, VSAN, vCloud, VCHS and more. But that bug has never been fixed.

I see this again as a question of innovation (the next bright and shiny thing) versus fixing those annoying bugs. Of course, innovation will always trump the mundane work of maintenance. I have done it myself, more than once – focused on new projects instead of taking care of the stuff that needs to be fixed.

I do think that it is very hard to keep a proper balance between the two. Again human nature.

This is no different if you are a developer. Do I stabilize my code and improve it so that it works in a more efficient manner, or do I let it chug away in the same old manner and add a new feature that improves the product as a whole? It is a serious dilemma.

OpenStack is currently at a stage where there are some fundamental issues with the current state of the software components. There are a number of issues preventing full adoption in the enterprise market – and yet a large portion of the development process is dedicated to the new and shiny stuff, and not to the stability of the product. Quite a while ago I wrote a post about release cycles – and why we are chasing our tails – and with OpenStack, which has a release once every six months, this is amplified ten times over.

I would like to share with you something that Thierry Carrez (Chair of the Technical Committee and Release Manager for OpenStack) wrote as part of this discussion.

Hi everyone, With the incredible growth of OpenStack, our development community is facing complex challenges. How we handle those might determine the ultimate success or failure of OpenStack.

With this cycle we hit new limits in our processes, tools and cultural setup. This resulted in new limiting factors on our overall velocity, which is frustrating for developers. This resulted in the burnout of key firefighting resources. This resulted in tension between people who try to get specific work done and people who try to keep a handle on the big picture.

It all boils down to an imbalance between strategic and tactical contributions. At the beginning of this project, we had a strong inner group of people dedicated to fixing all loose ends. Then a lot of companies got interested in OpenStack and there was a surge in tactical, short-term contributions. We put on a call for more resources to be dedicated to strategic contributions like critical bugfixing, vulnerability management, QA, infrastructure... and that call was answered by a lot of companies that are now key members of the OpenStack Foundation, and all was fine again. But OpenStack contributors kept on growing, and we grew the narrowly-focused population way faster than the cross-project population.

At the same time, we kept on adding new projects to incubation and to the integrated release, which is great... but the new developers you get on board with this are much more likely to be tactical than strategic contributors. This also contributed to the imbalance. The penalty for that imbalance is twofold: we don't have enough resources available to solve old, known OpenStack-wide issues; but we also don't have enough resources to identify and fix new issues.

We have several efforts under way, like calling for new strategic contributors, driving towards in-project functional testing, making solving rare issues a more attractive endeavor, or hiring resources directly at the Foundation level to help address those. But there is a topic we haven't raised yet: should we concentrate on fixing what is currently in the integrated release rather than adding new projects ?

We seem to be unable to address some key issues in the software we produce, and part of it is due to strategic contributors (and core reviewers) being overwhelmed just trying to stay afloat of what's happening. For such projects, is it time for a pause ? Is it time to define key cycle goals and defer everything else ?

On the integrated release side, "more projects" means stretching our limited strategic resources more. Is it time for the Technical Committee to more aggressively define what is "in" and what is "out" ? If we go through such a redefinition, shall we push currently-integrated projects that fail to match that definition out of the "integrated release" inner circle ?

The TC discussion on what the integrated release should or should not include has always been informally going on. Some people would like to strictly limit to end-user-facing projects. Some others suggest that "OpenStack" should just be about integrating/exposing/scaling smart functionality that lives in specialized external projects, rather than trying to outsmart those by writing our own implementation. Some others are advocates of carefully moving up the stack, and to resist from further addressing IaaS+ services until we "complete" the pure IaaS space in a satisfactory manner. Some others would like to build a roadmap based on AWS services. Some others would just add anything that fits the incubation/integration requirements.

On one side this is a long-term discussion, but on the other we also need to make quick decisions. With 4 incubated projects, and 2 new ones currently being proposed, there are a lot of people knocking at the door.

Thanks for reading this braindump this far. I hope this will trigger the open discussions we need to have, as an open source project, to reach the next level.

Cheers,
-- Thierry Carrez (ttx)

So I go back to the question – the title of this post.

Innovate or Stabilize?

I do not have a clear and definitive answer to this dilemma. On the one hand, if you do not innovate – then you will get left behind and your competition will beat you – because they have the next bright and shiny thing, and you are the dinosaur.

But if you only innovate and do not fix things that are broken – you will not be seen as a trustworthy company – because you always let the broken things “stay broken”.

It is – like most things in life – a delicate balance that you need to find. You cannot only do one or the other. It will be a mixture of both; sometimes one will take precedence over the other, and there will be times when that is reversed. In order to stay relevant – this mixture should be evaluated on a regular basis and the focus changed when need be.

Specific to OpenStack – I personally think the time has come to move to a different mindset. One option that comes to mind is to dedicate one in every four OpenStack releases to only fixing what is broken – no new features would be added unless everything that was supposed to be fixed has been dealt with. It could be that one in four is too often – or maybe not often enough – but running at the current pace is not good for the community, not good for the operators, and in the end will not be good for those who use OpenStack.

(Even Red Hat – “the mother of all open source” – only releases a major version once every 3-4 years.)

Just by the way – I assumed that the OpenStack projects were run with a more Agile-oriented mindset – evidently they (as do we all) have a great deal still to learn.

I would be very interested in hearing your thoughts and suggestions on this subject, please feel free to leave them in the comments below.

I love that OpenStack is an open-source, community-developed system which, when leveraged properly within an organization, can have a tremendous impact on every aspect of how that company does business. The effects of OpenStack on business operational efficiency and agility are incredible to me.

Lack of cohesiveness between projects is one of the biggest problems that I see facing OpenStack. Features are sometimes developed without consideration of other OpenStack projects' implementations of the same or similar features.

More cooperative efforts between projects to develop features with parity.

The "open" nature of OpenStack means that anybody can get involved, and anybody can make it do what they need it to, if they are willing to put in the work. The possibilities are endless, and I'm passionate about that.

I'm not sure there's much that I "dislike" exactly, though there are some things I wish worked better, or were easier to use. Deployment could be a little easier, of course.

Public perception. :)

Complete convergence so that hybrid and multi-cloud are not just normal but transparent.

It is quite interesting to see that some of the sessions are targeted at how you can migrate your workloads away from VMware and onto OpenStack, something that I think people will be looking into a lot more in the near future.

I also have a few sessions that you can cast your vote for – if you so choose.

In July 2014, the OpenStack Foundation brought twelve members of the OpenStack community together at VMware HQ in Palo Alto, California to produce the OpenStack Design Guide in just five days. This panel brings many of these authors together for an open discussion about how to architect an OpenStack cloud.

Bring your real-world questions and be prepared to talk OpenStack architecture with a panel of experts from across multiple disciplines and companies. We'll be drawing on real architecture and design problems taken from real-world experience working with, and developing solutions built on, OpenStack. Following a brief introduction, panelists will field questions from both the moderator and audience members and provide ongoing discussion of the design process for architecting cloud solutions based on OpenStack.

This session will go over how the Video Service provider group has added focus to the deployment of its platform to support OpenStack – how this has evolved over the past year, the challenges that came up along the way, and how these challenges were addressed and solved.

I have been designing VMware clouds and architectures for the past four years, and have now moved my focus to OpenStack. The change was not a simple one. There are terminology differences, architectural differences, and differences in use cases – differences in considerations regarding storage design, networking, automation, and deployment, across almost every single aspect of the solution.

In this session you will learn what kind of change in mindset is needed, how to adapt to different architectural constraints, requirements and technical decisions.

The Cisco OpenStack Installer provides automated deployment of OpenStack core components, as well as monitoring, storage, and high availability components. The release schedule of Cisco OSI parallels the community release. Where possible, Cisco OSI provides unmodified OpenStack code. Every new release of Cisco OSI follows the latest community stable release; however, in some cases Cisco might provide more recent patches that have been accepted into the OpenStack stable branches, but have not yet become part of an OpenStack stable release.

The Cisco OSI code update policy is to contribute code upstream to the OpenStack project and absorb patches into Cisco OpenStack Installer after they have been accepted upstream. Cisco deviates from this policy only when patches are unlikely to be reviewed and accepted upstream in time for a release or for a customer deadline (in such cases Cisco applies the patches to the repositories, submits them upstream, and replaces the local change with the upstream version when it becomes accepted). Cisco also uses and contributes to modules from other upstream sources including Puppet Labs on StackForge.

In this hands-on lab you will learn how to deploy OpenStack with the Cisco OSI – knowledge that you will be able to take back to your organization and utilize for your own deployments.

This session proposal came as a result of a conversation on a blog post with Stefano Maffulli regarding the acceptance of non-developers into the OpenStack world.

What tools are needed to interact with the developers – and with the "developly challenged" people who are now starting to interact with the wider OpenStack community? Because of the plethora of tools and the substantial on-ramp and learning curve needed to adapt, non-devs are finding it hard to contribute, voice their concerns, or help.

This session will go over the tools used today, which tools should be used and when – and what we can expect in the future.

Like the others, I was also skeptical about whether such a process was even possible – but it was, and I found it was actually a great success.

Everyone I have spoken to since the sprint was surprised that you can actually write a book in five days. It just shows that with a group of dedicated, task-driven individuals who have a deadline and a common goal, it is possible.

So how did it actually work?

VMware (thanks to Scott Lowe) was kind enough to host us for these five days.

It was the first time I had actually been to the VMware campus – so this also was a first for me.

The diversity of the people involved was – I think – a good mix. There were networking people, OpenStack people, architects, storage architects, writers, infrastructure administrators, project managers – a bit of everything. Each of us had input, from a different angle, into the content that was going to go into the book and how it would be written.

The first day was mostly dedicated to the book structure: what the content should be about, who the audience should be, layout, and such.

A good amount of brainstorming and discussion – and getting to actually know each other, because not everyone was acquainted with everyone else.

The graph in the picture above is actually better explained here.

Each of the vertical lines is a day. As you can see, the concept of what actually goes into the book is mainly settled on Day 1 and a bit on Day 2. On Day 1 you also start creating the content, most of which is done on Days 2-4 – by the end of Day 4, almost all of the content is done. Revision starts on Day 3 and continues all the way to the end. And that is exactly how it went.

We broke up into groups that would do the writing according to chapters. First came discussions in each group about what should go into the chapter, then high-level chapter points, and after that, churning out content.

We had some problems with the software we used, mainly because the majority of us are used to tools that let you collaborate simultaneously on the same document (Google Docs or Etherpad), and here we were limited to one person on a section at a time. We found a middle ground of working with all of the above and synchronizing content, which allowed us all to work efficiently and keep the flow of the sprint going.

I expected there to be some bottlenecks along the way – because in order for the book to come out as though it was written in a “single voice”, it needed to go through what Adam (our moderator) called a “filtering process”. That meant it had to pass through one or two people who would organize the content with the same narrative, line of thought, and style. And evidently that is what happened towards the end.

Obviously we had different writing styles – so adaptations needed to be made along the way.

And so we trudged on – writing, editing, creating diagrams, and re-editing.

The combination of a constant supply of caffeinated soft drinks, M&M’s, and other sugar-saturated stuff was just about enough to get us through the sprint.

Getting to the end of Friday with checkmarks across the board was a very satisfying feeling.

I had a great time and a wonderful experience. Out of all the participants I had only ever met Scott Lowe in person; I knew all the others through interaction on Twitter or other means, but not face to face.

It was an enlightening experience, very satisfying, and something I would definitely do again if I have the opportunity.

I hope my co-authors can forgive me for the kosher food they had to eat during the sprint – I must say home-made cooking (especially my wife’s) is a lot better than what we all got. So whenever you guys are in Israel for a trip – I will be happy to invite you all for a home-cooked meal.

Automation is an essential component of the Software Defined Datacenter; without an automation solution it is destined for failure. We want to automate it all: the deployment of the hardware, the hypervisor, the operating systems, and the applications. This session will go through a customer story inside Cisco where an automation solution was implemented using PowerUCS, PowerCLI, Razor, and Puppet to ensure a successful deployment from end to end. The session is technical, will provide the detailed architecture and methodology used with this customer, and will show how the solution reduced the deployment time from a number of days to a matter of hours.

You can expect a deep-dive session here on Wednesday afternoon at 14:00-15:00. Mine will be one of only four Cisco sessions at the show, with demos and some awesome integration between a great number of technologies.