OpenStack Nerd, CCIE, DevOps Junkie (http://www.colinmcnamara.com)
Changing the world, one person at a time

Look up from your phone – a lesson to us in “Social Media”
http://www.colinmcnamara.com/look-up-from-your-phone-a-lesson-to-us-in-social-media/
Fri, 13 Jun 2014 15:23:04 +0000

Last week I did something that is unheard of in today’s world. I went a whole week without taking my phone out of my bag. I went a whole week without access to Twitter, Facebook, the Internet, my email.

And you know something – IT WAS AMAZING

I learned to dance, I learned to hula hoop. I can even cartwheel now. I wrote poetry; I performed at a poetry slam! I hiked, I meditated, I read, I wrote. Every meal I had was savored; in every conversation I made new friends. I truly listened, and spoke from my soul. I wasn’t distracted by my phone, I was engaged by this crazy thing we call real life.

This morning, I was going through some videos on TinyBuddha.com and came across a spoken word piece titled “Look Up”. I would really recommend taking four minutes of your day to watch it. It is awesome, almost as if it were written for those of us who blog or are heavy on social media. It starts out with a powerful statement:

I have 422 friends, yet I am lonely…

Myself, right now I am going to head to the local coffee shop and sit at the counter (this is turning into a morning tradition for me). I am going to write a little, and chat. My phone is in my pocket, not in my mind.

I am going out tonight to celebrate a friend’s birthday. I am going to dance, and have a great time. I am going to live in that moment, be present in today, and savor life for the beautiful thing that it is.

So be brave, step away from the phone. Look up, connect with the people around you, not just the people on Twitter. Be present, be here, connect. Be comfortable with yourself. It is a powerful thing. Put away that phone for a bit and live.

I think the biggest challenge with OpenStack right now is finding a healthy balance between all user archetypes that contribute code.

These include (but are not limited to):

Manufacturer

Integrator

Operator

Educational Contributor

Individual Contributor

Right now, there is a risk that the sometimes conflicting interests of these archetypes will create situations that stall the forward progress of this great project. We need the Neutron PTL to hit this issue head on before it turns toxic.

Why Kyle is the right person for Neutron PTL

First, let me say that Mark is an amazing PTL. His leadership has been great, and I’m sure it will be in the future. As an employee of an operator (Yahoo), he is as close to neutral ground as we can get.

That does mean, however, that he may not always be fully aware of all the ways manufacturer politics and angling can manifest themselves in a project.

This is where Kyle comes in. Kyle works for Cisco (specifically an open source spin-in called Noiro), but still Cisco. I have known Kyle since 2012, and in that time I have seen him, at every turn, do what is best for the community, even when that may not have been “completely the best thing” for his employer.

Day in and day out, I see Kyle act the way I feel (and, hopefully, the way I act): exemplifying the “Badges Away” philosophy, and using open source to build a bridge of collaboration between parties that often compete with each other. This leadership style is demonstrated in OpenStack, Open vSwitch, and OpenDaylight.

Kyle will be under scrutiny

The fact is that with a manufacturer employee taking a leadership role in a project, there is always a risk that a manufacturer’s agenda will be pushed.

That, however, is a great thing: Kyle will be under the highest levels of scrutiny. His taking this role will require a new level of transparency around manufacturer dev reviews and approvals, and will result in increased health for the project as a whole.

Kyle’s biggest task as PTL

OpenStack is maturing from a project full of beards and tattoos to a place where suits are as common as Birkenstocks. What is happening is that the same market manipulation tactics that are rampant in the IETF, T11, and IEEE are making their way into OpenStack (they were a heavy undercurrent at the Hong Kong summit).

Kyle’s biggest task, as I see it, is setting up transparent policies that discourage standards body manipulation techniques, and ensure that contributors outside of the manufacturers have an equal chance both to share the burden of contribution and to enjoy its benefits.

At the end of the day, in the few years that I have known Kyle, he has stood out as an individual who loves this community, who is connected to us as a whole, and who strives to find “balance in the force.”

This is why I am eager to see Kyle elected as the Neutron PTL. If you feel the same way as I do, I encourage you to make your opinion known too.

While I actually disagree with the characterization of Washington as Classic IT (I think the myth of Washington is much different than the historical record of a man drawn into a struggle by duty), I’ll stick with Mark’s use of George Washington as Classic IT, and Donald Trump as Shadow IT.

In his post he describes some interesting alignments of Trump to fulfill the business need through “Any means possible” and GW (the founder of our country, not the “Other” one) as the keeper of the status quo.

I feel that Mark may be implicitly communicating one assumption, that Shadow IT teams are Rogue, and less aligned to the goals of the business as a whole.

Let me share my take.

Pressure from C Level (Executives) to Dev teams

What most people in “Classic IT” organizations don’t see is the extreme pressure to release quality code that is put on a modern software development organization.

As far back as 2009, executive publications such as the Harvard Business Review have been discussing Agile. By 2012, it was commonly communicated among senior executive teams as the de facto standard for high speed, high quality product development.

The result of this has been a changing expectation of performance from a development team. In the past you may have had one to two years to develop a product. Now the expectation is that you have 90 days (at most) for the first release, and a release every month after.

Dev Team Alignment to biz

This expectation has been core (in my opinion) to the creation of “Shadow IT”. In a company, software development efforts are normally funded out of requests by the line of business (normally approved by the exec team). The work done is heavily skewed toward the introduction of new features that improve the competitiveness of the company and/or its employees.

The key concept here is that the VP of software development is normally more closely aligned to the goals of the business than any other technology organization in the company. That organization is tasked with, and accountable for, delivering these products in a very short time, with low defects, all while not drastically growing headcount.

IT Alignment to Dev and Line of Biz

IT organizations, more and more, are reporting up through the CFO (CEO >> COO >> CFO >> CIO >> VP-IT). The exec focus and leadership mandates are generally going to be about containing the costs of delivered services, as well as following established management guidelines.

I hope you see the rub in this organizational structure. Classic IT ops is most commonly managed by, well, accountants. The pressure of this executive direction is pretty consistent, and it drives management behavior in organizations.

The sad fact is that this reporting structure drives a disconnect between IT and the line of business when the business is changing (while optimizing for efficiency when the business is in a steady state). I believe this is core to the current disconnect between Classic and Shadow IT.

Classic IT – What most miss

Let’s expand on this disconnect. A well run IT organization has five areas that it focuses on:

Service Strategy

Service Design

Service Transition

Service Operation

Continual Service Improvement

Inside each of these areas are a number of functions that are critical to a healthy, thriving IT/business relationship. However, when optimized for minimized Opex/Capex (primarily Opex, by the way), IT organizations tend to focus on minimal areas within elements 2, 3, and 4. They mostly ignore Service Strategy, significantly neglect Service Design, and rarely take a serious stab at Continual Service Improvement.

The Manufacturers role in this failure – “Is It Enterprise Ready”

The sad fact is that in the most common implementation of this failing IT strategy, Service Strategy, Service Design, and Continual Service Improvement are effectively outsourced to the manufacturers selling them stuff.

IT organizations have been trained, incented, manipulated (whatever you want to call it) to rely so heavily on outside forces to tell them what to do and/or buy, when to do it, and how to create services to expose to the business.

This classic IT dependency on outside consulting and influence for service strategy, design, and improvement has created a vacuum. Few of these classic sources have real solutions that serve modern software development. Furthermore, many of them are threatened by the “next generation” of IT services, and actively slow down the adoption or creation of those services.

DevOps is just good ops

Shadow IT is not evil, though many times it exists outside of IT governance initiatives. It generally occurs when an IT organization fails to attain a level of maturity in identifying new offerings that the business needs and integrating them into the IT service catalogue.

Specific to DevOps (Close collaboration between Development, and IT Operations, commonly using modern software tools and methods to manage infrastructure and services), this is just GOOD IT OPS.

What makes good IT operations?

1. Understanding how your org can support the goals of the business

2. Supporting these goals in a high velocity manner (quickly)

3. Providing these services in a high quality manner

4. Maintaining proper governance of these efforts, so that you don’t break things (laws, kill people, etc.) while doing the above

I could substitute IT Operations with DevOps or Shadow IT in the above statement. Each would still be a true statement.

Shadow IT/DevOps and ITIL/ITISM can live together

Here is a dirty little secret: when you get past the political and emotional barriers to change, you will come upon a simple truth.

DevOps(Shadow IT) and IT Ops (ITIL/ITSM) are not mutually exclusive. They can both operate together.

More specifically, each element of the ITIL change and release management process can be accomplished using DevOps tools and methodologies. Heck, most of the time, as a classic IT shop (ITIL) implements the same tools and methods that are in Shadow IT, it will notice that its ability to provide high quality services quickly, while maintaining governance, actually improves.
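As a minimal sketch of this idea, an ITIL-style change record can itself be managed as code and pushed through the same automated gates a DevOps team already uses. All field names and roles here are illustrative, not any real tool's schema:

```python
# Hypothetical sketch: an ITIL change record modeled as data, with the
# release gate enforced in code rather than in a manual meeting.
# Field names, roles, and the change ID are all illustrative.

REQUIRED_APPROVALS = {"change_manager", "service_owner"}

def is_releasable(change: dict) -> bool:
    """A change may release only when tested, fully approved, and backed by a rollback plan."""
    return (
        change["tests_passed"]
        and REQUIRED_APPROVALS.issubset(change["approvals"])
        and bool(change["rollback_plan"])
    )

change = {
    "id": "CHG-0042",
    "description": "Roll out new load balancer config via automation",
    "tests_passed": True,
    "approvals": {"change_manager", "service_owner"},
    "rollback_plan": "Revert to previous config version in source control",
}
```

The point of the sketch is that the governance step does not disappear; it becomes a checkable function that a deployment pipeline can run on every release.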

Stop the insanity, and start sharing

Let’s stop the insanity, and start getting along. At the end of the day, if you have shadow IT in your organization, it is probably because the business requires a service that “Classic IT” is not providing.

On the flip side, DevOps / Shadow IT teams that create islands upon themselves are not serving the complete goals of the business either.

What can you do to change this? I personally think that change like this needs exec sponsorship, which can be a huge challenge at times, especially if you are on the “Classic IT” side. It is possible, though.

Here is my short list:

1. Take ownership of your IT service creation and continuous improvement processes.

2. Score your own performance, and have your line of business score it too. They are your customer; listen to them.

3. Use lean tools like value stream mapping to quantify the areas where IT is impacting the business.

4. Stack rank areas of improvement. You will probably see dev services at the top if you have Shadow IT and cloud consumption.

5. Embrace these Shadow IT/DevOps teams. Leverage them as skill centers. Use them to create dotted lines into your classic IT org. This allows you to learn and grow together.

6. Share what you learn. This change is huge for most organizations; you can help others by sharing your own story.

Thanks to Susie Wee re: DevNet
http://www.colinmcnamara.com/thanks-to-susie-wee-re-devnet/
Wed, 12 Mar 2014 17:13:45 +0000

Public discussion around features and deficiencies is an important piece of this modern world of open source and social media. It is easy to start a public discussion around deficiencies. It is also important to have a public discussion around success.

I want to use this opportunity to acknowledge Susie Wee and her teams at Cisco.

I’m going to tell you something that’s probably really hard to hear. It is near impossible to use DevNet to find the information we need to achieve our goal. And I think that is your goal too: helping guys like us integrate our software with gear like yours.

Now all the product specific links have been removed. I got to API docs on my first try. I have to say, I am happy with this, and am looking forward to seeing further improvements and integrations in the future.

As I said at the beginning, it is important to discuss both issues and successes. Thank you very much, Susie, for taking feedback and improving my team’s experience and mine. I appreciate it.

All of us bearded (and virtually bearded) OpenStack hippies are at it again, trying to change the world through training and enablement, and a little bit of code. In this current experiment we are biting off a big chunk and creating something very similar to the Amazon Architect Course.

The goal for this course (Building on the work done in OpenStack Training for the Associate and Operators Course – http://docs.openstack.org/training-guides/content/ ) is to have a developer armed with understanding and experience building applications on the core components of OpenStack.

User Archetype – Web Application Developer, current or future developer of Apps to be run on Amazon Web Services (AWS).

Why I do this crazy stuff

I’m a core reviewer on OpenStack docs, and co-founder of OpenStack Training. My hippy streak runs strong, and I believe that by lowering barriers to adoption of platforms we will continue to get more operators involved. Teaching developers to consume OpenStack is as important as teaching engineers to operate and install it. While this burns many nights and weekends, I absolutely love being a small part of this amazing community. And every once in a while, what I do makes a difference. That is an epic win.

It is my belief that classic network engineering skills (with the current revisions of the CCIE being the pinnacle) are currently in the process of losing value. My thoughts supporting this belief, as well as recommended actions, are below.

I believe that the shift to programmatic control of infrastructure, led by shifts in the tools, processes, and methodologies used in modern application development, will shrink the available job pool for classic engineering skill-sets.

I believe that a prudent course of action for any engineer looking to gain the CCIE value proposition, or anyone who currently benefits from their number, is to diversify and enhance their current skills by learning software development. My views on this are completely transparent and open. This unfortunately triggers emotional reactions in people who have invested huge amounts of time, money, and focus in getting to the top tier of their career (the CCIE).

In the future, the CCIE may evolve to take into account programmatic skills. This may or may not happen on an unknown timeline. I am not sure that even the evolution of the CCIE track alone will be sufficient to support the current salary premium in the marketplace.

Certification Industry Influence

For many years there has been an entire industry selling candidates on the idea of a job for life, of the golden ticket, your CCIE number. This is a good business for the Cert industry, and also can be very good for the candidate as it allows them to grow their salary (value) immensely in a very short period of time.

What people fail to do, however, is fully understand why their cert or skill has value. They only look at the current state of the market. The value of a certification is derived from a simple supply/demand relationship between certified individuals and available work. When there is more work to do than people to do it, salaries rise. When there are more people than work, salaries fall. It actually is as simple as that (there are other external factors that contribute to CCIE salaries, such as vendor programs, but they aren’t as significant as the supply/demand dynamic discussed here).

Simple Rules for a Career in Tech

Can you have a successful career in technology? Yes, you can; there are some simple concepts to follow.

I have taken these topics from a book we used to give people before they got re-organized, or laid off years ago (2004 I think) at a previous global enterprise service provider with thousands of employees. That company got caught in the bind of change, the employees paid the price. This book was given to the ones that got saved from the chopping block to help them see that change is inevitable, and that given the right attitude and perspective it can be the best thing ever.

Free resources are available so you can read the very short book “Who Moved My Cheese”.

1. Change happens

The only constant in our industry is change. I’ve been in this industry for 17 years now. Rates of change ebb and flow. Over the past 10 years much of our industry has been in a steady state. While change (innovation) constantly occurs, it stays within the same skill dimensions that define our “Value”.

In my opinion, this steady state was brought on unnaturally by the mass implementation of market management strategies popularized by Geoff Moore (crossing the chasm). The implementation of a common strategy has provided the illusion of safety in the IT Ops / Engineering job roles over the past 10-15 years.

And now for a scary truth: times are changing. The mass adoption of market stabilization strategies, which resulted in many people flocking to certifications and specialization as a method of fast tracking their careers, is giving way to a mode where innovation and change is the norm.

Specifically, there are many people who have been enjoying immense value from investments made long, long ago. Those who believe the luxury of steady state technology will last risk experiencing what happens when an industry evolves but they don’t.

2. Anticipate Change

There is a huge mental leap that a person has to go through to succeed in a tech shift. You have to go from assuming that your value of today will always be your value, to identifying your ability to adapt, move and learn as your only persistent value.

When you identify mobility and the ability to grow as your value, you start a very productive habit: constantly evaluating whether whatever is making you money (your CCIE, EMCTA, VCDX, etc.) is still viable. Part of that is clearly understanding why it gives you value, and what might change that equation. This method allows you to anticipate changes before they occur.

3. Monitor Change

Once you have decomposed the value of your certification, job, or skill into WHY it is valuable (why people pay you for it), you have a clear “non marketing” view of what challenges that value.

In some cases, lowering barriers to entry for a specific skill or cert increases the hiring pool. This increased hiring pool can hit an inflection point where the number of people with that skill or cert goes from being less than the available job pool (fewer people and more jobs = higher pay) to the inverse, where there are more people than jobs (which causes pay to drop).

Many things can drive this very simple ratio (supply and demand). One example is increasing competition for a fixed number of jobs. This happened to the MCSE years ago. Early on, these were the CCIEs of their time. However, because of easy access to software to learn on, the value of a person with an MCSE dropped.
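The supply/demand argument above can be made concrete with a toy model. The threshold and the example numbers are purely illustrative, not market data:

```python
# Toy model of the supply/demand effect on certification salaries.
# The single inflection point at ratio = 1.0 mirrors the argument in the
# text; all numbers plugged in are made up for illustration.

def salary_pressure(certified_people: int, open_jobs: int) -> str:
    """More jobs than certified people pushes pay up; the inverse pushes it down."""
    ratio = open_jobs / certified_people
    if ratio > 1.0:
        return "rising"
    if ratio < 1.0:
        return "falling"
    return "flat"
```

For example, 1,000 cert holders chasing 3,000 jobs gives "rising" pressure, while 5,000 holders chasing 2,000 jobs gives "falling". The point is that neither the cert nor the skill changed; only the ratio did.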

Another example of the Supply/Demand ratio changing is when a technology shift lowers the need for skilled workers in a segment.

I personally have seen this countless times in our industry. Manufacturers, and certified individuals are sitting high on the horse, printing money and then very quickly a tipping point happens and entire segments are wiped out.

In technology, the mainframe operator is one example. In the past this was the pinnacle of IT ops. A friend’s wife worked as one at Chevron years ago. She was on a team of 60 operators, the ninjas of IT. Within 12 months she was the only one left. Change happened, and most of the team didn’t believe it until it was too late.

Another example from more recent history is the Novell administrator. There was a time, not so long ago, when the Master CNE was the CCIE of its day. Huge salaries were granted to “certified” individuals to run this IT infrastructure.

What happened? A little company up in Redmond decided to rip off Novell Directory Services (NDS) and package it with some workgroup file sharing and messaging/calendar services. Within 36 months of Microsoft releasing Active Directory, the market for Novell Master CNEs had completely dried up.

Many were left behind, but the best Microsoft architects I know are former Master CNEs. They clearly understood that their current value was not their future value. Instead of being afraid of Microsoft, they learned about it, and when the time was right they leveraged this new knowledge and expertise to define their NEW value.

4. Adapt to change quickly

At some point, while monitoring for changes in their ability to earn money, these individuals saw the market, tech, whatever hit a tipping point.

Once at that point, their focus switched from monitoring the rate of change in their current skill set, to prepping to flip to the new cheese, the new cert, the new skill that will provide for them and their families for the next couple years.

Did they focus on why the old cert or skill was decreasing in value? Did they cry and argue about it? No, they understood that change is a constant in our industry. They understood that the only way to ensure success and survival is constant forward motion. They understood that the key to success is change.

5. Change

Once you have mentally shifted from static to dynamic, from enjoying the rewards of work, to knowing that there is much more work, but huge benefits ahead you have to enact change.

One interesting fact about change: the first people to change are known as leaders in the industry. The value of those certs, skills, etc. is very high once you “cross the chasm” of about 10% market adoption. Living and working in the first part of the adoption curve is a great way to earn a living. The only catch is that you have to be in a constant state of flux. Your value at that point is the ability to identify and exploit change for your benefit (versus change destroying your value in static modes).

6. Enjoy the Change

Change is great, we are all in this industry for many different reasons. I myself entered the technology industry by pure chance 17 years ago. I have had many different certs, skills and roles over those years. Luckily, early on I found that I gained great satisfaction in learning new things. I also found that people will give you money when you know something new. This has led to a habit throughout my career, of always looking for something new and transformational. And to take joy in that transformation. The joy of learning and sharing is immense. It fuels the soul. It also doesn’t hurt that you can make quite a good living out of it.

Once you learn to enjoy the act of change, you will find that your perspective during tech transitions changes from fear, to wonder. That this world and industry is an amazing place.

7. Be ready to change quickly and enjoy it again

One thing to remember, however: you must always be ready to change quickly and enjoy it again. This is not the first, nor will it be the last, change that happens. Change is a constant in this industry. Embrace it, love it, live it.

My perspective – Starting poor

In the book Who Moved My Cheese, they call out two groups: mice, who instinctively are always looking for new cheese, and littlepeople, who believe that the room their cheese is in will always exist.

I am the mouse who is always looking for food. I grew up in a smaller town; my Dad left my Mom and me when I was 2. My Mom worked at a rug factory, then got a job as a secretary typing reports, and then moved up to the front desk at the local police department.

I never understood the concept of a job for life, or of always having the things other people had. In my world, work in any way possible was the reality of getting anything I wanted (or felt I needed). I did a range of odd jobs as a kid to earn money: washing cars, mowing lawns, waxing airplanes, clearing debris, waiting tables, cold calling for surveys. This, combined with some really awesome people (who didn’t have to be so awesome; they chose to be), gave me the same opportunity to succeed that most people around me enjoyed as a privilege of being born into a stable family.

Learning Technology Changed My Life

As a young adult (18), I was two weeks from being homeless, living off cold 25 cent hamburgers from McDonalds and Top Ramen when I got my first job in tech. I was literally walking up to apply at Taco Bell when I ran into an old friend that gave me a chance to start on the midnight shift building PC’s.

I took the opportunity that was given to me and learned everything I could from that job. I learned Windows, FreeBSD, SCO, Linux, routing/switching, wireless backhaul, key systems (phones), Novell, and voicemail systems, all in a two year period. Within six months I had been promoted from the midnight shift to the day shift doing more advanced support. Six months later I got busted hacking the sales floor PCs to make them pop up stupid messages, and instead of getting fired, I got promoted to running the ISP side (DKAonline).

Since that moment I have spent at least one hour a day for the past 17 years learning something new. I will never go back to being poor and hungry. I know that the great life that my family and I have is dependent on skills and industry constructs that are transient. The only way I can guarantee my ability to provide is to be able to constantly change.

How this affects you

Today, there are major shifts going on. Network Engineering to Network Development(SDN), Enterprise Virtualization to cloud platforms, etc etc. Development organizations are changing how they operate, IT organizations are feeling more and more pressure every day.

I fully believe that the progression of technology will affect the hiring pool. I believe that the value the CCIE currently holds will be replaced by the concept of “Network Developer”. As I have gained more and more experience using software dev tools to do the job that my fingers and brain used to do in the past, I am left with one single clear concept:

The only thing that provides safety is constant forward movement.
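To make the “Network Developer” shift concrete, here is a minimal sketch of the kind of thing that replaces typing at a console: generating switch interface configuration from data. The config text mimics IOS-style CLI, and the port names and VLAN plan are made up for illustration:

```python
# Minimal sketch: rendering switch interface config from a data structure
# instead of typing it by hand. The syntax mimics IOS-style CLI; the ports
# and VLAN assignments below are illustrative.

def interface_config(port: str, vlan: int, description: str) -> str:
    """Render an access-port configuration block for one interface."""
    return "\n".join([
        f"interface {port}",
        f" description {description}",
        " switchport mode access",
        f" switchport access vlan {vlan}",
    ])

# A "VLAN plan" as data: port -> (vlan, description)
vlan_plan = {
    "GigabitEthernet1/0/1": (10, "web tier"),
    "GigabitEthernet1/0/2": (20, "app tier"),
}

configs = [interface_config(port, vlan, desc)
           for port, (vlan, desc) in vlan_plan.items()]
```

Once the plan lives in data, the same code scales to two ports or two thousand, and the plan itself can live in version control, be reviewed, and be tested, which is exactly the point of the shift.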

I hope that those that are afraid of this change in the industry will notice the opportunity in front of them, and use it for growth vs letting the world move past them.

The business of OpenStack

The business of open source can be dominated by some strong personalities. Many companies are trying to use their first mover advantage to create sustainable business models. The reality is that, statistically, many will fail, as a second to market strategy combined with early experimentation in R&D allows a diversification of monetization vehicles while maintaining cash flow (e.g. most of these small guys are fighting for a small pie, and I believe they are likely at minimum to receive buyout offers, and depending on their cash flow may be likely to accept them). This fact of a maturing market may cause people to miss what I believe is the secret sauce of the open source community.

The beautiful truth of Open Source

What may be a bit harder to see is that there is alignment between the R&D organizations of vendors, operators, and integrators. There is a beautiful truth that emerges when you are dealing with some of the smartest people in the industry, who don’t really care about pushing gear with a certain logo. Their real goal is to come together as a community to drive experimentation and innovation in a shared domain.

For me, open source communities are a place where I don’t have to deal with vendor alignment. Where I can state my beliefs, thoughts, and experiments and have them analyzed by people much smarter than I am. The intellectual (and for me emotional) support of being part of something bigger, with mostly altruistic goals of improving the cutting edge of technology, is one of the things that keeps me sane.

How the project is being influenced

Of course, over the past couple of years the OpenStack community has transformed. When I first got engaged with the community (in late 2011/early 2012), it was a community where I could feel safe and, frankly, anonymous. I could be a small fish in a big pond, learning and growing from everyone around me. I was surrounded by people with beards, tattoos, and t-shirts who, in their spare time, made software products that rivaled the best commercial R&D groups in the world.

Over the past year, however, it has become the battleground for vendors to establish a common reference in support of the same strategies used in the IETF for the past decade. The politics and business models that are an everyday experience in the IETF have changed my experience of OpenStack from a place where I felt I identified completely with a community of like minded individuals, to a place where those same individuals still exist (and I gravitate toward them to continue driving technology forward as a community), but the larger message and tone is becoming dominated by business interests. It is a necessary and expected evolution of the project, but it introduces all sorts of drama into something that I have grown to love.

Changing face of the community

This changing balance of power, from innovators to corporations (and in some cases the innovators themselves have changed) is affecting my experience in the community. In some cases, such as the ability to monetize the R&D investments of my day job, this is a good thing. This shift allows me to put even more resources into really amazing transformative community education projects like OpenStack Training, as well as expand development resources that have OpenSource contributions as part of their 9-5 jobs.

In other cases, this larger shift from innovation to monetization has driven me mad. The various corporations can be expected to act in their self interest, in some cases battling each other and catching those of us with beards (and most of the time not in suits) in the middle. I do believe there is a strong middle ground, where we as a community, inclusive of Operator, Integrator, Educator, and Vendor, can all work together in a sane way. I do believe we can preserve and protect this beautiful community we have built. My personal belief is that we need to help Operators and Educators (researchers in EDU) integrate into the community and stand as an independent voice of reason.

My Hope

I hope that we as a community work together to provide balance to the force; to protect and accelerate the innovation forge that is centered around OpenStack, but that also extends to all sorts of adjacent platforms and projects. I think we can do it; hell, I think we must do it. And while I'm not always the most popular person for voicing my opinions, my goal is simple: to accelerate the pace of innovation in the new norm, where Vendor, Integrator, Educator and Operator all share the burden and benefits of invention. As a community working together, I think we can get there.

It's time to vote for presentations for the OpenStack Summit in Atlanta. I've submitted a few presentations, and am co-submitting some with people I really respect in the community. I've stack-ranked what I feel are the priorities to discuss in each area, and bolded the critical presentations. If you find these valuable, please let your voice be heard and vote.

Now, a great presentation may not be listed here. I'm under a bit of a time crunch right now, and took the short list of presentations I've discussed with people. As I go through and do my own voting, I'll add others that are of value and merit to the community.

An Open Letter to Cisco about DevNet

What I want to do today is show you some of the challenges my teams and I are having interacting with DevNet.

This is the developer portal that you guys put up to help teams like mine find access to API documentation, software development kits, and both closed and open source code.

My perspective

I'm a CCIE. I have automated some large data centers using a wide range of APIs. I'm not a great programmer; luckily, I hire people who are. My software development teams and I are building our own application stacks.

They run on Amazon right now. Our goal is to make sure that they easily and automatically deploy onto UCS platforms and Nexus 9000s, 5000s and 7000s, and that cool open source projects like OpenStack and OpenDaylight get integrated with them.

Feedback – DevNet is impossible to use

I'm going to tell you something that's probably really hard to hear. It is near impossible to use DevNet to find the information we need to achieve our goal. And I think that's your goal too: helping people like us integrate our software with gear like yours.

Example – Finding SDKs and API Docs

The first, really easy example, a basic thing you'd want a developer or a team to be able to accomplish, is finding the API documentation and the software development kit that let us hook into your gear.

However, when I or anyone from my teams attempts to navigate through DevNet to find APIs or SDKs, the default is to drop us onto a product page that has nothing to do with software development.

Instead of having my first and easiest navigation option go to the software for the product I've already decided to integrate, I get sent to a product page that is trying to influence me to buy a server.

I'm looking for information on how I can integrate it. Why another product pitch? It makes no sense, absolutely no sense.

The hardware is not important, the APIs and SDKs are

I don't care what hardware data sheets there are. They're not important. I'm looking at programmatically integrating your gear with my systems.

Sorry if that's very aggressive, but this is really, really frustrating. I get this feedback as I bring new developers onto my team; they ask me the same thing: "You told me to integrate this stuff, but where is this stuff?" First question: where's my SDK?

Normally what would happen is I would open up my local git clone and do a git pull (a clone is a linked copy of a source code repository owned and hosted somewhere else). If I had found the SDK before, I wouldn't navigate to a website to look for an update. I would have that repo integrated not only with my local dev setup, but also with any tooling that I have (listed as a dependency, and auto-built by Jenkins).
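That clone-once, pull-forever workflow can be sketched end to end. This is only an illustration: the "vendor" repository here is a throwaway local repo standing in for a real SDK URL, and the file names are made up.

```shell
set -e
tmp=$(mktemp -d)

# A local stand-in for the vendor's SDK repository
# (in real life this would be a hosted URL, e.g. github.com/<vendor>/<sdk>)
git init -q "$tmp/vendor-sdk"
(
  cd "$tmp/vendor-sdk"
  git config user.email sdk@example.com
  git config user.name 'Vendor'
  echo '# example SDK' > README
  git add README
  git commit -qm 'initial SDK release'
)

# One-time: clone the SDK instead of downloading zips from a portal
git clone -q "$tmp/vendor-sdk" "$tmp/my-sdk"

# Later, the vendor publishes an update upstream...
(
  cd "$tmp/vendor-sdk"
  echo 'link_test() { :; }' > linktest.sh
  git add linktest.sh
  git commit -qm 'add link-test helper'
)

# Day to day: a plain fast-forward pull picks up the update;
# a Jenkins job can run exactly the same command automatically
( cd "$tmp/my-sdk" && git pull -q --ff-only )
ls "$tmp/my-sdk"    # README and linktest.sh are now both present
```

The point is that once the repo is a dependency, "checking for updates" is one command that tooling can run, not a trip back through a portal.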

An example of what to do – Insieme and Webex on Github

One place that I love to go is http://www.github.com/CiscoSystem. This is where the WebEx team and some of the guys from NOSTG push a lot of information. Kyle and Mark put a lot of stuff on GitHub too. This is a place where developers collaborate.

This allows me, as someone integrating applications on their stack, to go beyond a web page and see some genuinely interesting stuff.

Very specifically, a great example is the Insieme team. Mike, an awesome guy on that team, has posted some interesting SDK information for the Nexus 5000.

It’s easy for me to find. I could Google it. I could search for it on GitHub. I can do a lot of cool things, things that are important to me, in my software development work flow.

Nexus 9k API docs on Github

In this example I talk about Mike Cohen’s code for Nexus 9K. This is how it should be done. Example –

Over the years I've had to hack together these stupid expect scripts and NetConf interactions to do shut/no-shuts and run network tests. Once you get 1,000 or 2,000 nodes with four links each, you're going to have transceiver failures and cable failures all over the place.

You end up writing these scripts to exercise the network and start tests on either end. One, it takes forever. Two, it's a giant pain in the rear, especially when you have diverse platforms.

I was poking around, and I see, “Oh, wow, someone wrote some interesting code, which, if you look at it, goes in and provides a link test.” This is really, really cool. This is code that I can use. I can integrate it into my own stuff, my own operational procedures. Not so bad.

Use of Developer Toolchains for collaboration

As a developer, Git gives you the set of tools used to track, communicate and collaborate. We use it to answer questions like "who the heck wrote this?" The first thing we do is check the log.

Who wrote this stuff? OK, I see Mike Cohen. I’ve got his email address.

I can even go so far as to see when he did it. OK, that was at the end of December, December 19th.

I can see exactly what was added each day. There's a log for the entire repository; in this case, Mike's Nexus repository.

I can see that there’s another developer who was contributing early on to the repository.

I can see that if I need to get clarity or have a question about utilization of this code, I can ping Mike Cohen.
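Those attribution questions are each a one-line Git command. The sketch below builds a throwaway repo so it runs anywhere; the identity and date mirror the story above but are otherwise made up.

```shell
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git config user.email mike@example.com     # illustrative identity
git config user.name 'Mike Cohen'
echo 'print("link test")' > linktest.py
git add linktest.py
GIT_AUTHOR_DATE='2013-12-19T12:00:00' git commit -qm 'add Nexus link test'

# Who wrote this file, and when? (author, email, author date)
git log --pretty='%an <%ae> %ad' --date=short -- linktest.py
# → Mike Cohen <mike@example.com> 2013-12-19

# Who contributes to the repo overall, and how much?
git shortlog -sne HEAD
```

In a real clone you would skip the setup and just run the two `git log` / `git shortlog` lines against the files you care about.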

Collaborating on code via Pull Requests

Say I improve the code. I find something unique and novel that I can use in my own networks, but I’m not in the business of maintaining a small patch to this.

I can go ahead and make my patch. Then I can submit a pull request back to Mike. Mike just gets an email that says, “Hey, please merge this. I found this useful. Inspect the code, test it, validate it. If it’s good, merge it.”

That makes it something that all the community can use. I’m not in the business of maintaining these minor patches. They distract from the effectiveness of me and my team. I want to be able to submit that.
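A minimal sketch of that patch-and-pull-request flow, using a local throwaway repo; the actual push to a fork and the pull request against GitHub are shown only as comments, and all file and branch names are invented for illustration.

```shell
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git config user.email me@example.com
git config user.name 'Contributor'
echo 'link_test() { :; }' > linktest.sh
git add linktest.sh
git commit -qm 'upstream link-test code'

# 1. Make the improvement on its own topic branch
git checkout -qb fix/retry-on-timeout
echo 'retry_link_test() { link_test || link_test; }' >> linktest.sh
git add linktest.sh
git commit -qm 'retry failed link tests once before reporting'

# 2. Publish and ask the maintainer to merge. Against GitHub this would be:
#      git push <your-fork> fix/retry-on-timeout    (then open a pull request)
#    The change the maintainer reviews, tests and merges is simply the patch:
git format-patch -1 --stdout | head -n 20
```

Once the maintainer merges it, the patch lives upstream and you are out of the business of carrying it yourself.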

My experience trying to do this with the UCS SDK on Communities

I downloaded it from some community's page. I don't know who wrote it. I don't know who to contact. I can extend it, but only privately. It's a top-down community. It doesn't support the community development that's been so key to the growth of open source.

Collaboration between Manufacturer, Integrator and Customer in OpenSource

We saw open source projects move from SourceForge to GitHub. It's been real. GitHub has enabled the interaction between us that's key to our development collaboration on OpenStack and the OpenDaylight SDN controller.

My team is one use case. Our company is not huge, but not small. We do about half a billion in revenue, and we're big enough that we've been contributing to OpenStack for two years now and to OpenDaylight since mid-2013.

We can push code back. We can contribute into the community. Where we find things that aren't our unfair market advantage, our secret sauce, we can share the burden; we can be good stewards.

We collaborate with Cisco, with Red Hat, with IBM, with HP in the open domain. What I’m pointing out here is there is no method enabled within DevNet to support that collaboration.

Cisco DevNet needs to support that collaboration

There’s a huge opportunity to do that. I’m hoping you take that chance. It’s not about tools. It’s about the back-end. It’s about the collaboration in the community.

My Recommendations

Don’t point me to product.

If I’m already looking for software integration information, I have made my purchasing decision. Shoving product in my face just gets in the way of my job.

If I'm so frustrated that I can't find the API documentation and, very specifically, the SDKs as a developer, I'm going to go to the competition and try to find theirs. It's all about the software integration, not the tin can that we wrap a CPU in. We call that a server, right?

Please, please consolidate information onto GitHub

In the open source domain right now, GitHub is not only where we collaborate, but a toolchain that lets us collaborate. It is the standard for open source communication and collaboration.

Linus Torvalds invented Git for a reason, and the folks at GitHub have done a great job enabling this innovative community. It breaks down the barriers between manufacturer, integrator, and customer.

Set up an IRC channel in this community

Every day, OpenStack, OpenDaylight and my own internal development teams use IRC channels to collaborate. You can see all my teams on the public open source channels. Get a Cisco channel up there. Make it so it's integrated with DevNet. It's not that technically challenging. That's a huge opportunity.

What you should take away from this

Fundamentally, you need to communicate and collaborate. DevNet should not only be a place where we can be pointed to the software, but a tool that encourages that collaboration.

Make sure that we know very clearly who is maintaining each repo, and that there's someone dedicated to evaluating incoming pull requests. Use us as a resource.

There are a lot of people out there that use Cisco gear: Cisco routers, phones, servers. And there's a small number of people who, for many, many years, have been writing the code that configures it.

If you can enable DevNet to be what it can truly become, a collaboration point integrated with the common open source collaboration tools and platforms like GitHub... maybe I'm dreaming of the day when I don't have to re-factor my code every three years.

You can really start to create that community. Frankly, there are a lot of us just waiting for it to happen. I'm looking forward to you starting that conversation and seeing DevNet become what it could be.

Are you an OpenStack Active Technical Contributor – Register for the design summit now (Tue, 28 Jan 2014)

Stefano just sent out the registration codes for the spring design summit in Atlanta. If you are like me, you probably went right to the code, clicked the link to the website and went to register with Eventbrite. The danger is, if you are like me, you might have almost accidentally charged $600 to your credit card.

What you need to click (thanks Stef for sending this out)

Pay close attention to the ENTER PROMOTIONAL CODE link.

OpenStack needs beards on the board (Mon, 13 Jan 2014)

Yes, we need more beards on the board. We need big full beards that represent the interests of the user community. There needs to be a balance against the Manufacturer/Vendor interests that dominate OpenStack today. The great/sad fact is that 93% of commits to OpenStack come from corporate interests.

Why more beards on the OpenStack Board?

What better exemplifies the interests of the open source user than a full, majestic beard attached to a guy who cares as much about the future of OpenStack as you do?

[me and some amazing people hanging at one of the after parties in OpenStack Summit Hong Kong]

I am part of OpenStack for many reasons. It reminds me of joining the Linux movement in the '90s. The fate of the technology is in our hands, the Operators. We have the ability to drive the direction and execution of OpenStack in a way that benefits the end user as well as the corporate contributors.

What this beard has done for the community so far

I’m not just a giant hippy. I’ve spent the last couple years executing on a passion for this great community and code base we call OpenStack.

I am an OpenStack ambassador.

I am an Active Technical Contributor for Folsom, Grizzly and Havana (though my contributions pale in comparison to many others)

I’ve helped many OpenStack user groups get off the ground

I’ve been the hairiest booth babe ever at many OpenStack conference booths

I’ve partnered with other users/operators just like me to start OpenStack training

I’ve used my visibility in social media to influence many others to become users and contributors

My full Application

Vendor employees on the list

I want to be completely fair to the vendor employees on the list. I know many of them personally, and they do an AMAZING job resisting the pressure to represent their corporations' interests over those of the community.

I am humbled every day by the amount of effort and contribution they give to OpenStack and the community. People like Kyle Mestery, Rob Hirschfeld, Monty Taylor make contributions to this community every day that I wish to someday rise to.

Vote in the OpenStack individual directors election

You received an email on Sunday night prompting you to vote for the 2014 individual directors of the OpenStack foundation. If you search for “OpenStack Foundation – 2014 Individual Director Election” you should find it.

This email gives you the chance to have your voice heard and, in my opinion, to provide balance to the force (between Vendors and Users).

If you want a voice on the board who has the needs of the community firmly entrenched in his beard, then find your voting link email, scroll down to the bottom of that list, and vote for Colin McNamara.

Fixing git rebase failure on Mac OS X

For those that haven't made the jump from network engineering to network development: a rebase is when you take a number of commits (saves) on your local working branch and squash them together into one large commit to push up to your Git or Gerrit server.
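A scripted sketch of that squash, using a throwaway repo. `GIT_SEQUENCE_EDITOR` stands in for the editor that `git rebase -i` would normally open, and the todo rewriter uses `fixup` (squash that discards the extra messages) so no commit-message editor pops up; everything here is illustrative.

```shell
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git config user.email me@example.com
git config user.name 'Me'

# One base commit plus three work-in-progress commits
echo base > f; git add f; git commit -qm 'base'
for i in 1 2 3; do echo "$i" >> f; git add f; git commit -qm "wip $i"; done
echo "before: $(git rev-list --count HEAD) commits"   # before: 4 commits

# Rewrite the rebase todo list: keep the first 'pick', fixup the rest
cat > "$repo/fixup-editor.sh" <<'EOF'
#!/bin/sh
awk 'NR==1 { print; next } /^pick/ { sub(/^pick/, "fixup") } { print }' \
    "$1" > "$1.new" && mv "$1.new" "$1"
EOF
chmod +x "$repo/fixup-editor.sh"

# Interactively rebase the last three commits, scripted
GIT_SEQUENCE_EDITOR="$repo/fixup-editor.sh" git rebase -i HEAD~3

echo "after:  $(git rev-list --count HEAD) commits"    # after:  2 commits
```

The three "wip" commits collapse into one, giving a single clean commit to push for review.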

Apparently what is happening (as highlighted on StackOverflow – http://stackoverflow.com/questions/5074136/git-rebase-fails-your-local-changes-to-the-following-files-would-be-overwritte ) is that the Mac file revision tracker, revisiond, is changing the timestamps of local files. This causes git to think that you have changed a file that has no changes in its content.

Solution to git rebase error on Mac OS X

In your terminal execute the following command telling git to stop relying on file system information alone –

git config --global core.trustctime false

Hope this helps others save a bit more hair than me.

–Colin

Fixing your WordPress Blog – MySQL failing due to corrupted tables (Mon, 30 Dec 2013)

It is 6am, and I am fixing my self-hosted WordPress blog. All I wanted to do was post a simple note about my first merge into OpenDaylight. Instead I'm fixing WordPress and MySQL…

What do I see this morning – Error establishing a database connection

My site keeps going down because the MySQL instance on the backend keeps failing.

When I log in, I check MySQL status, see that it is locked up, and restart it to get things working again.

Why is it failing? Because the cobbler's son has no shoes. In real-world terms, my personal Linux instances aren't exactly well taken care of. Specifically, I had a WordPress backup job that ran my system out of space. This caused tables in MySQL to get corrupted, as evidenced by tailing /var/log/mysqld.log:

[cmcnamara@www2 ~]$ sudo tail -10f /var/log/mysqld.log
[sudo] password for cmcnamara:
131230 14:23:54 [ERROR] /usr/libexec/mysqld: Table './wpblog_cmcnamara/securewp_fs_visits' is marked as crashed and should be repaired
131230 14:23:54 [ERROR] /usr/libexec/mysqld: Table './wpblog_cmcnamara/securewp_fs_visits' is marked as crashed and should be repaired
131230 14:23:54 [ERROR] /usr/libexec/mysqld: Table './wpblog_cmcnamara/securewp_fs_visits' is marked as crashed and should be repaired
131230 14:23:54 [ERROR] /usr/libexec/mysqld: Table './wpblog_cmcnamara/securewp_fs_visits' is marked as crashed and should be repaired
131230 14:23:54 [ERROR] /usr/libexec/mysqld: Table './wpblog_cmcnamara/securewp_fs_visits' is marked as crashed and should be repaired
131230 14:23:54 [ERROR] /usr/libexec/mysqld: Table './wpblog_cmcnamara/securewp_fs_visits' is marked as crashed and should be repaired
131230 14:24:29 [ERROR] /usr/libexec/mysqld: Table './openchimpphpbb/phpbb_sessions' is marked as crashed and should be repaired
131230 14:24:29 [ERROR] /usr/libexec/mysqld: Table './openchimpphpbb/phpbb_sessions' is marked as crashed and should be repaired
131230 14:25:56 [ERROR] /usr/libexec/mysqld: Table './openchimpphpbb/phpbb_sessions' is marked as crashed and should be repaired
131230 14:25:56 [ERROR] /usr/libexec/mysqld: Table './openchimpphpbb/phpbb_sessions' is marked as crashed and should be repaired
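Since the root cause was a full filesystem, a tiny guard script can warn before the disk fills and tables start corrupting. This is only a sketch: the default path and the 90% threshold are assumptions; in real use you would pass your MySQL datadir (e.g. /var/lib/mysql) and run it from cron.

```shell
#!/bin/sh
# disk_guard.sh – warn when the filesystem holding a directory is nearly full.
# Pass the MySQL datadir as $1 (e.g. /var/lib/mysql); defaults to / so the
# sketch runs anywhere. The 90% threshold is an arbitrary assumption.
dir="${1:-/}"
# df -P guarantees one line per filesystem; column 5 is the use percentage
usage=$(df -P "$dir" | awk 'NR==2 { gsub(/%/, ""); print $5 }')
if [ "$usage" -gt 90 ]; then
    echo "WARNING: filesystem holding $dir is ${usage}% full"
else
    echo "OK: filesystem holding $dir is ${usage}% full"
fi
```

Wired into cron alongside the backup job, this would have flagged the shrinking disk before MySQL ever started crashing tables.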

Fixing the MySQL crash

Taking a snapshot of your system

Ok, let's fix this crap. First things first: when working on systems that are non-ephemeral (that have state), I always like to trigger a snapshot. In this case I use RimuHosting for my blog (and a bunch of other blogs), so I am going to log into the console and trigger a snapshot.

That takes a couple minutes to run; get some coffee, bang your head into the table, whatever makes you feel good. When it is done, move on to the next step.

Using mysqlcheck

Now let's get down to the root of the problem: corrupted MySQL tables. There are a couple of ways to fix them. Some are super complicated, some are super easy. In this case we are going to check our MySQL databases using mysqlcheck --all-databases.

[cmcnamara@www2 ~]$ sudo mysqlcheck --all-databases

[sudo] password for cmcnamara:

actionsoscom.address_book OK

actionsoscom.address_format OK

actionsoscom.administrators OK

.... (removed a bunch of redundant messages)

status : OK

wpblog_cmcnamara.securewp_fs_visits

warning : Table is marked as crashed

warning : 25 clients are using or haven’t closed the table properly

error : Checksum for key: 2 doesn’t match checksum for records

error : Checksum for key: 4 doesn’t match checksum for records

error : Corrupt

wpblog_cmcnamara.securewp_in_series_3_0_11_auth OK

wpblog_cmcnamara.securewp_in_series_3_0_11_entries OK

As you can see in the bold items, I have some corrupt tables. What to do?

You can drop the table if it is from a plugin that you don’t care about.

You can fix the table

Let’s fix the table –

[cmcnamara@www2 ~]$ sudo mysqlcheck --repair --all-databases

[sudo] password for cmcnamara:

actionsoscom.address_book OK

actionsoscom.address_format OK

actionsoscom.administrators OK

openchimpphpbb.phpbb_search_wordlist OK

openchimpphpbb.phpbb_search_wordmatch OK

openchimpphpbb.phpbb_sessions

warning : Number of rows changed from 20137 to 20197

status : OK

As you can see, the number of rows in openchimpphpbb.phpbb_sessions changed from 20137 to 20197, and when I run

sudo mysqlcheck --all-databases again, everything shows up as OK.

Next, I'm going to restart MySQL. This is not necessary to repair the DB; however, sometimes errors creep up on restart, and it's best to observe those when you have just made changes and they are fresh in your mind. So let's restart MySQL and verify everything still works.

[cmcnamara@www2 ~]$ sudo /etc/init.d/mysqld restart

Stopping mysqld: [ OK ]

Starting mysqld: [ OK ]

[cmcnamara@www2 ~]$

WIN!!

There you have it: if your WordPress blog keeps crashing, it may be because of a borked MySQL instance. You can use mysqlcheck to check and repair corrupted tables in your database, and get back to blogging.


The Holy POM.XML of Antioch (Fri, 27 Dec 2013)

I just came down from the mountains after a couple of days hacking on a toolchain I'm proposing for inclusion in OpenDaylight OVSDB for the Hydrogen release.

This toolchain takes AsciiDoc as a source, generates DocBook XML (the OpenStack docs standard), and then generates HTML, PDF and Slidy from there. It is all run by Maven, which, while powerful, can have hidden and obtuse errors buried in the plugins it calls.

After all of this headache, I have christened Maven the holy hand grenade of open source. The Mavenized script from Monty Python is below -

Cleric: [reading] And Saint Attila raised the POM.XML up on high, saying, “O Lord, bless this thy POM.XML, that with it thou mayst blow thine enemies to tiny bits, in thy mercy.” And the Lord did grin. And the people did feast upon the lambs and sloths, and carp and anchovies, and orangutans and breakfast cereals, and fruit-bats and large chu…

Cleric: And the Lord spake, saying, “First shalt thou take out the Holy Maven. Then shalt thou count to three, no more, no less. Three shall be the number thou shalt count, and the number of the counting shall be three. Four shalt thou not count, neither count thou two, excepting that thou then proceed to three. Five is right out. Once the number three, being the third number, be reached, then lobbest thou thy Holy POM.XML of Antioch towards thy foe, who, being naughty in my sight, shall snuff it.

Reader Question – Should I get a second CCIE or focus on SDN + Cloud (Thu, 19 Dec 2013)

I've been having lots of conversations recently with CCIEs about where their future lies, and how best to leverage the skills and value they have created in the past in this new and emerging world of Cloud and SDN. These conversations started with a few people's interest being piqued by what I have been personally working on, and have recently increased to a dull roar as many network engineers in this industry are faced with an unmistakable fact – the world of network engineering is changing.

Reader Question – On 12/17/13 3:54 PM, [Redacted] wrote:

——————–

Hey Colin, it’s been awhile and hoping to get some professional advice as you’re definitely looked up to as a thought leader and in the mix when it comes to the cloud. I’m basically at a cross road. I’ve started taking steps toward the 2nd CCIE in DC but with the proliferation of articles implying that SDN may make an impact into the enterprise considering private clouds, I want to make a careful decision here and am considering a learning path towards Linux, Programming Languages like Python and eventually Openstack. Is this a premature thought or paranoia? To me, all of this is news as I’ve been stuck in my virtualization bubble as well as the typical day-to-day networking technologies. But, now with VMware NSX, Cisco Insieme and the open source Cloud frameworks, I want to prepare for the steep learning curve now, if possible. You’ve been directly involved in these technologies for the past few years so I’m hoping you can provide some foresight. I know it will be your professional opinion, but to me, it’s valuable. Thanks in advance, [Redacted]

My response to [Redacted]

[Redacted], you are thinking in the right direction. The value you have as a single CCIE in service provider is effectively maximized right now. Continuing with another vendor-specific certification during a time of rapid market transition [commoditization] is not the best way to increase your value in the market. Think back to the core value proposition of network engineering. In my opinion, it is to take business policy, combine it with application requirements, and design/implement a system that allows both to be achieved within the operational requirements of your business.

Five years ago, a reasonable next step to increasing your value (your ability to achieve the above goals) would have been to add another silo of expertise, validated by an expert-level certification. This might be realized as adding another CCIE, getting your VCDX, JNCIE, etc. By adding another area of focus, and understanding at a very deep level all the elements required to deliver it, you increase your capabilities and therefore your value as an engineer.

Now, for you to continue to add value to the market as you have in the past as a CCIE, you have to understand how to translate that knowledge and experience into the new integration space (open Cloud and SDN platforms). This requires, at minimum, attaining a baseline of skills in Linux systems administration (which, by the way, is what most positions mean when you read "DevOps" in a job description). You then have to explore the true value of a CCIE with DevOps (Linux sysadmin) skills. Past this minimum level of skill in installing and configuring open Cloud and SDN platforms, you also have to learn how to be a software developer.

As you polish your software development skills, I believe you will start to realize the full role of the modern network engineer. Not only will you have an understanding of Cloud and SDN platform operations, but you will have the skills to consume and improve them. Let me repeat -

I believe the role of the modern network engineer is to Architect, Consume and Improve Open Cloud and SDN platforms.

Which Cloud / SDN platforms should you start with?

In my opinion, OpenStack and OpenDaylight are great places to start. These are the new areas of integration, the new "IETF protocols." In the past you would learn OSPF, BGP, MPLS, etc., and be able to create complex networks to support complex business policies. All of these multi-vendor integration points have moved up into OpenStack, OpenDaylight, etc. In the grand scheme of things, both projects are pretty early in their life cycles; however, they are both rich in functionality and contributed to and supported by an incredibly diverse community.

[Redacted], I hope I answered your question. More importantly, I hope I left you with some questions you didn't have before. These questions may be "What is my role in this new world of Cloud and SDN?", "What unique skills and perspectives can I bring to the community?" and "Now that I have seen a new way of doing things, what can I do to help my friends?" I've asked these questions myself many times over the years. In the past couple years, leaders in the community such as Kyle Mestery @mestery and Brent Salsbury @networkstatic (as well as many others) have helped me answer them. Hopefully I can return the favor by helping others down that same path.

Resizing PDF files natively on your Mac (Tue, 03 Dec 2013)

I have a headache that I face almost every day on my Mac. When I share a presentation externally, I can't always share the native PPT. I have to distill out a PDF, and I use the native print utility within Mac OS X to create the PDF.

The challenge is that I use LOTS of images and graphs in my presentations. This results in 50+ megabyte files that are impossible to send via email. Resizing these down to a manageable size is not something natively available within OS X.

Solution to compressing PDF files natively with OSX

The solution is to use Quartz filters and an Automator script to create an "application" that compresses your files. Below is the procedure to create this on your Mac:

2. Unzip the output and copy the qfilter files to your ~/Library folder

3. Open Automator, choose CREATE APPLICATION, select "Apply Quartz Filter to PDF Documents" and drag it to the right window. Then choose the DPI from the drop-down and save as an application.

4. Save it as an application (.app), then drag the PDF you want to resize onto that app file. In a couple of seconds your PDF will be shrunk and available on your desktop.

Application for OpenStack individual election – Colin McNamara (Mon, 02 Dec 2013)

Read the Q&A below and see if you want to nominate Colin in this election.

Q

What is your relationship to OpenStack, and why is its success important to you? What would you say is your biggest contribution to OpenStack’s success to date?

A

1. I am an OpenStack ATC, with contributions to Folsom, Grizzly and Havana, though I feel my most impactful contributions have been in the community, not the code base.

2. I am active in the SFBay OpenStack meetup group that Sean Roberts leads.

3. I have helped three OpenStack meetup groups get off the ground (Denver, Minneapolis, and RTP) and am in the process of getting a fourth one started in Alpharetta.

4. In two of those groups, Denver and Minneapolis, I flew out to give the initial talk for the kickoff meetup.

5. I developed and delivered multiple OpenStack presentation decks educating and enabling the community, including "Surviving your first commit," delivered at the San Diego summit (and many user groups and events), and most recently "OpenStack for VMware admins," delivered at the #vBrownBag hall at VMworld San Francisco.

6. I've helped multiple storage vendors understand why and how to develop and release integrations into Cinder.

7. Sean Roberts and I are the founding members of the training-manuals blueprint in openstack-manuals. I have been focused on increasing core and community membership, getting the CI system integrated, and ScrumBan (project management).

8. I’ve increased visibility and understanding of OpenStack through social media and my blog – www.colinmcnamara.com

9. I’ve volunteered at the OpenStack booth at multiple conferences.

10. Most importantly of all, I’ve had the pleasure of helping others join the community and get their first contribution in to OpenStack.

Q

Describe your experience with other non profits or serving as a board member. How does your experience prepare you for the role of a board member?

A

The non-profits that I am active in and support are:

1. Silicon Valley Food Bank

2. St. Baldrick’s

I am a member of the following working groups and boards

1. Entertainment Technology Council (a working group in the National Association of Broadcasters)

Q

What do you see as the Board’s role in OpenStack’s success?

A

The board has served multiple key purposes, but fundamentally, having a separate group of individuals focused on all the items that surround, but are not, the technical direction of OpenStack has been key to the project’s success. The ability to create and sustain a successful governance model has been key to the growth and success of the project so far.

Q

What do you think the top priority of the Board should be in 2014?

A

Last time I checked, 92% of contributions to OpenStack came from corporate contributors. As we grow and mature as a project, I believe that we have to increase the individual and educational contributions to OpenStack. The board has a key role in achieving this goal.

Diverging from IETF and IEEE specifications at the northbound AS boundary of OpenStack
http://www.colinmcnamara.com/diverging-from-ietf-and-ieee-specifications-at-the-northbound-as-boundary-of-openstack/
Thu, 07 Nov 2013 08:50:40 +0000

I have a couple of concerns that I don’t feel I clearly communicated during the L3 advanced features session. I’d like to take this opportunity to both communicate my thoughts clearly and start a discussion around them.

Building to the edge of the “autonomous system”

The current Neutron implementation is functionally the L2 domain and simple L3 services that are part of a larger autonomous system. The routers and switches northbound of the OpenStack networking layer handle the abstraction and integration of these components.

Note: I use the term “autonomous system” to describe more than the notion of a BGP AS. I use it more broadly to mean a system that is controlled within a common framework and methodology, and that integrates with a peer system that does not share that same scope or method of control.

The components that compose the autonomous system boundary implement protocols that map to IETF and IEEE standards. The reasoning for this is interoperability. Before vendors utilized IETF standards for interoperability at this layer, the provider experience was horrible (this was my personal experience in the late ’90s).

Wednesday’s discussions in the Neutron Design Sessions

A couple of the discussions, most notably the extension of L3 functionality, started the process of extending Neutron with functionality that will (eventually) result in the ability for an OpenStack installation to operate as its own autonomous system.

The discussions that occurred to support L3 advanced functionality (the northbound boundary) and the QoS extension functionality both fell into the scope of the northbound and southbound boundaries of this system.

My comments in the session

My comments in the session, while clouded by jet lag, were specifically around two concepts that are used when integrating these types of systems:

1. In a simple (1-8 tenant) environment, integration with a northbound AS is normally done in a PE-CE model that generally centers on mapping dot1q tags into the appropriate northbound L3 segments, and then handling the availability of the traversed L2 path with port channeling, MLAG, STP, etc.

2. In a complex environment (8+ tenants, for discussion), the different Carrier Supporting Carrier (CSC) methods defined in IETF RFC 4364 Section 10, type A, B or C, are used. These allow segregated tenant networks to be mapped together and synchronized between distributed systems. This normally extends the tagging or tunneling mechanism and then allows BGP to synchronize NLRI information between ASes.

These are the standard ways of integrating between carriers, but components of these implementations are also used to integrate and scale inside a single web-scale data center. Commonly, when you scale beyond a certain physical port boundary (1,000-ish edge ports in many implementations, much larger in current ones), the same designs used for carrier-to-carrier integration are used to create network availability zones inside a web-scale data center.
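To make option 1 concrete, here is a minimal sketch of the PE-CE style dot1q mapping on an IOS-style northbound router. All interface names, VLAN IDs, VRF names and addresses below are hypothetical, for illustration only:

```
! One sub-interface per tenant L2 segment; the dot1q tag selects the tenant
interface GigabitEthernet0/1.101
 encapsulation dot1Q 101
 ip vrf forwarding TENANT-A
 ip address 192.0.2.1 255.255.255.252
!
interface GigabitEthernet0/1.102
 encapsulation dot1Q 102
 ip vrf forwarding TENANT-B
 ip address 192.0.2.5 255.255.255.252
```

Availability of the L2 path below these sub-interfaces would then be handled by port channeling, MLAG or STP, as noted above.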

Support of these IETF and IEEE standard integrations is necessary for brownfield installations

In a greenfield installation, diverging from IETF and IEEE standards on the northbound edge, while not a great idea, can still result in a functional implementation. In a brownfield installation, however, OpenStack Neutron will be integrated into an existing network core. This boundary layer is where we move from a controlled system into a distributed system, and to integrate cleanly into that system, IETF and IEEE protocols and standards have to be followed.

When we diverge from this standards-based integration at the north edge of our autonomous system, we lose the ability to integrate without introducing major changes (and risk) into our core. In my experience this is sufficient to either slow or stall adoption. This is a major risk, but one that I believe can be mitigated.

My thoughts on mitigating this risk

We need to at least map and track the relevant IETF RFCs that define the internet standards for integration at the AS boundary. I know that many of the network vendor developers who contribute to Neutron have access to people who both have deep knowledge of these standards and participate in the IETF working groups. I would hope that these resources could be leveraged to, at minimum, give a sanity check and, at best, ensure a compliant northbound interface to other systems.

Side benefit of engaging IETF members in this discussion

The other side benefit is that inventions inside of Neutron can also be communicated as standards to the rest of the world in the form of net-new RFCs. In OVS this has already happened: as OVS has emerged as a common component in many network devices, the need to establish and reference a common standard has reared its head. I would expect inventions within Neutron to follow this same path.

My goals & schedule for the OpenStack Icehouse Design Summit in Hong Kong
http://www.colinmcnamara.com/my-goals-schedule-for-openstack-icehouse-design-summit-in-hong-kong/

1. OpenStack Ambassador Program – I’m one of the initial OpenStack Ambassadors. I have a focus on doing whatever is necessary to grow and scale the program. This started with a 7 a.m. meeting on day one of the summit. Needless to say, I am tired.

2. OpenStack Neutron Object Affinity – Policy management, abstraction and implementation in the network is super important to me. We are engaging in a bit of this at work: stacks of router -> switch -> firewall -> switch -> load balancer -> switch -> host (and eventually all the stacks behind the host) can be translated and implemented through Neutron, mapping to implementation plugins below it.

3. OpenStack Training – This is the project that Sean and I started, and it has consumed a massive amount of my nights and weekends. This project has dependencies across many OpenStack projects. Sean and I are splitting up to cover as many relevant meetings as possible.

4. Connect with my peers in the community – Twice a year we get a chance to gather with each other and connect. It is important to take this opportunity. Breakfast, lunch, dinner and nights are all great opportunities for meeting and aligning. I’m taking all of these opportunities to increase engagement.

Contributing to OpenStack Training Guides
http://www.colinmcnamara.com/contributing-to-openstack-training-guides/
Mon, 07 Oct 2013 22:52:09 +0000

OpenStack training is gaining steam and momentum. We are almost halfway through the work to finish the associate guide. The most important item for maintaining and accelerating the Training Guides is to get more people to contribute.

Last week at the SFbay OpenStack Hackathon we got a couple of individuals through their first commit. It started with a live demonstration of the process for creating a new section in the training guide. Since we were holding the meetup via a live Google Hangout, we had the entire thing recorded. I edited the content, so now there is also a video walkthrough of the contribution process.

I hope this work continues to lower the barriers to entry to contributing. If it helps, please tell me on twitter – @colinmcnamara

OpenStack Training Guides Project Overview and Status 10-1-2013
http://www.colinmcnamara.com/openstack-training-guides-project-overview-and-status-10-1-2013/
Tue, 01 Oct 2013 17:14:50 +0000

Over the past couple of years, those of us involved with OpenStack have had the goal of creating a lasting, sustainable project by increasing contribution to and adoption of OpenStack. Over this time, there have been many efforts at increasing contribution. Most of these efforts have targeted large corporations like HP, Cisco, IBM, Dell and Rackspace, as well as smaller focused companies like Mirantis, Piston and Cloudscaling, getting them to jump on board with developers.

These efforts have been largely successful, measured by both number of developers and lines of code.

While there has been a continued increase in both contributors and contribution, the bulk of contributions come from larger corporations, not individuals or operators. Also, the focus is generally on the functional elements that tie into each corporation’s products. Many times, items like training, enablement and documentation are left to a very small group of people. In my opinion this is a challenge we must overcome.

Enter OpenStack Training Guides, a project housed under the OpenStack-Docs project. The founding members are Sean Roberts from Yahoo, and Colin McNamara from Nexus IS. Over the past couple years Sean and I have been running experiments in the user group community on how to increase contribution.

My perspective has always been based on my experience with Linux in the late ’90s. By focusing on lowering barriers to adoption of OpenStack, a small percentage of adopters will become contributors. This also builds a higher quality user base (contributing bugs). This increased user base also continues to incent the large corporate members to increase their focus on and integration into OpenStack, as they see an increased market to sell to.

Project Goals

The goals of OpenStack Training Guides are simple -

Provide a structured training program to enable skill development for maintaining, consuming, and contributing to OpenStack

Align to the OpenStack Foundation certification program

Increase accuracy and usability of documentation and training by engaging user groups and community members across the world

Increase the number of skilled engineers and developers in the hiring pool for OpenStack operators and developers

Project Progress

Work on OpenStack training informally started in early 2012 with experiments in the SFbay OpenStack meetup group. These experiments created a couple of key contributors to OpenStack, most notably the Neutron contributor who runs OpenStack SDN at a highly visible Silicon Valley OpenStack user.

Further unofficial work continued in early 2013, as the foundation voted to approve an OpenStack Foundation-managed certification program. With that milestone achieved, Sean and I officially started the OpenStack Training Guides project under OpenStack Docs on June 18, 2013, with a goal of releasing with the Havana release.

In the ninety days since the blueprint was approved, a significant amount of work has occurred. Continuous integration systems (Jenkins / Zuul) were configured to support quality review and publishing.

Current Status – 10/1/2013

Our current sprint was established in September, starting with the international sprint day. This week’s progress in the repo has been a little light; however, we are waiting on a merge from Prahnav of content and scripts that are currently housed in the Aptira repo. Since many contributors to this project contribute on their own time (nights and weekends), we tend to see weeks with lots of activity followed by light weeks. This last week was a light week.

Configuration scripts for single-node resources have been created (need to be merged)

Blocking Items

Adding new contributors to increase our burndown rate from 2 to 4, which will bring us in line with the Havana release schedule

The most critical item is the RST -> DocBook translator. I (colinmcnamara) have made progress, estimated at 70% complete, using the pandoc tools to convert between the formats, though there are still build issues using the Maven plugins. If I can’t find a solution to this, we will move forward with creating smaller include files with the source doc path as an XML comment to maintain forward progress.
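For reference, the pandoc conversion at the core of this work can be sketched as a one-liner (file names here are placeholders, not the actual repo paths):

```shell
# Convert one reStructuredText source file to DocBook XML.
pandoc --from=rst --to=docbook training-section.rst -o training-section.xml
```

Depending on the pandoc version, the resulting DocBook output may still need massaging before a DocBook v5 / Maven toolchain accepts it, which is consistent with the build issues mentioned above.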

Priority Items for this week

The priority items for this week are the following -

Merge in the content Prahnav created

Resolve the reStructuredText to DocBook v5 issue

Create Training Guides presentation for delivery at Minneapolis OpenStack users group on 10/22

Have SFbay OpenStack MeetUp group validate the VirtualBox install and configuration documentation on Thursday at Yahoo

GeekDad Project – Making Jumbo Tumbling Towers
http://www.colinmcnamara.com/geekdad-project-making-jumbo-tumbling-towers/
Mon, 30 Sep 2013 18:00:20 +0000

I’m a #GeekDad. That means I do lots of cool projects with my kids. These projects generally involve making things, or more specifically, making things fly and blow up. This weekend I chose to teach my son Chris some basics of woodworking by building a Jumbo Jenga set on our own.

OVERVIEW

What is Jumbo Jenga (or Wobbly Towers, or Tumbling Timbers)? It is a GIANT fun-sized version of the classic game Jenga. Specifically, the pieces I built are roughly 2.5 times the size of normal Jenga pieces. This puts the stacked tower at 3.3 feet tall with a 7.5 x 7.5 inch footprint, using blocks of 2 x 3 x 7.5 inches.

This results in BIGGER fun, with BIGGER crashes, and a Jenga tower 5-6 feet tall before it falls. It is kind of fun. If you were to buy this online, you would pay $149.95 (and get free shipping). However, you would give up the chance to make something with your children, and to save a bunch of money. Luckily, this project is quick, easy and cheap to do.

MATERIALS

The materials to build your Jumbo Tower (I’ll use that name to avoid any legal issues) are very simple. You need to go to your local building supplier and pick up five 2x3x8 studs. I was able to pick these up at the Home Depot around the corner for $1.98 each, for a total of about $10. That is quite a deal compared to $149.95 if you ordered the game online.
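For the arithmetically inclined, here is a quick sanity check on the cut list and cost. This is a sketch that assumes nominal 2x3 studs (actual faces roughly 1.5 by 2.5 inches) and ignores saw kerf beyond the leftover stub:

```python
# Cut-list and cost math for the Jumbo Tower project.
STUD_LENGTH_IN = 96        # one 8 ft stud
BLOCK_LENGTH_IN = 7.5      # target block length
STUDS = 5
PRICE_PER_STUD = 1.98      # quoted Home Depot price

blocks_per_stud = int(STUD_LENGTH_IN // BLOCK_LENGTH_IN)  # 12 blocks, ~6" scrap per stud
total_blocks = blocks_per_stud * STUDS                    # 60 blocks
total_cost = round(STUDS * PRICE_PER_STUD, 2)             # $9.90

print(blocks_per_stud, total_blocks, total_cost)
```

A classic tower is 54 blocks (18 layers of three), so five studs leave a few spares, and three 2.5-inch faces side by side give the 7.5 x 7.5 inch footprint.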

TOOLS

The bare minimum tools you need for this project are a saw and some sandpaper. A good hand saw and a sanding block will set you back around 20 dollars. You will, however, spend a couple of days sanding.

I chose, however, to use a sliding compound miter saw and a belt sander. You can do perfectly fine with a handheld circular saw (30 bucks or so at Harbor Freight). I would, however, recommend getting the belt sander. It saves LOTS OF TIME.

Process

Step 1 - Have your kids get the lumber out of the truck. They need to learn what hard work is.

Step 2 - Have your kids sand down all four sides of the 2x3x8 studs. Remember to round the edges a bit, too.

Step 3 - Cut the studs into 7.5” pieces. I used a compound miter saw with a block clamped on so I could burn through the lumber. I didn’t make Chris do this step, though I walked him through a couple of cuts so he learns tool safety.

Step 4 - Make a bench sander. I don’t have a bench sander. I do, however, have a wealth of redneck ingenuity. With that gift, my handheld belt sander, and a couple of clamps, I made a redneck bench sander.

Step 5 - Sand the ends, and round the edges of your Jumbo Tower blocks. The easiest way is to sand the cut ends first, then lay each side nearly flat against the belt while lifting the end up. This puts a nice smooth edge on each side, leaving some sharp corners. Roll each corner against the belt to smooth it out, and you have a nicely sanded block.

Play your new game!

You are now the proud owner of a Jumbo Towers game. It cost you $10 in materials and three hours in time if you are slow like me. What you have is a really fun party game for your game room, and a great experience with your kids. Enjoy!

Just over a month ago, the OpenStack Foundation announced the OpenStack Ambassador Program. The aim of this program is to create a framework of community leaders to sustainably expand the reach of OpenStack around the world.

While filling out the application, I realized that I was a bit more active over the past couple of years than I had realized. I also figured that this is something I should share on my blog, as social media is one of the key assets I can bring to the program.

Below is my application to the program –

Q: Why are you applying to be an OpenStack Ambassador?

I want to increase my impact on the community by lowering barriers to adoption and increasing contribution to OpenStack. I feel there is a parallel between my OpenStack experience and my experience with Linux in the ’90s. Back then, I found a platform that changed how I ran my ISP, but I wasn’t able to really get it going until we started organizing into groups and lowering the barriers to entry as a community. I feel I can use the lessons learned then and since to increase visibility, adoption and contribution to OpenStack.

Q: How have you participated in the OpenStack community to date?

I am an OpenStack ATC, with contributions to Folsom, Grizzly and Havana, though I feel my most impactful contributions have been in the community, not the code base.

I am active in the SFbay OpenStack meetup group that Sean Roberts leads.

I have helped three OpenStack meetup groups get off the ground (Denver, Minneapolis, and RTP) and am in the process of getting a fourth one started in Alpharetta.

In two of those groups, Denver and Minneapolis, I flew out to give the initial talk for the kickoff meetup.

I developed and delivered multiple OpenStack presentation decks educating and enabling the community, including “Surviving your first commit,” which was delivered at the San Diego summit (and many user groups and events), and most recently “OpenStack for VMware admins,” delivered at the #vBrownBag hall at VMworld San Francisco.

I’ve helped multiple storage vendors understand why and how to develop and release integrations into Cinder.

Sean Roberts and I are the founding members of the training-manuals blueprint in OpenStack-Manuals. I have been focused on increasing core and community members, getting the CI system integrated, and ScrumBan (Project Management)

I’ve increased visibility and understanding of OpenStack through social media and my blog – www.colinmcnamara.com

I’ve volunteered at the OpenStack booth at multiple conferences.

I co-chaired the Strategy Track for the upcoming OpenStack Summit in Hong Kong

Most importantly of all, I’ve had the pleasure of helping others join the community and get their first contribution in to OpenStack.

Q: What ideas do you have for your community that you haven’t had time or resources to implement?

Holding consistent community delivered OpenStack training, lowering barriers to adoption of and contribution to OpenStack while increasing the pool of qualified engineers and developers.

Extending this same concept to intellectually gifted but economically disadvantaged individuals and regions around the world.

Spreading the word (separated from corporate FUD and occasional instances) about how to participate, extend, and replicate successful items that we are doing in the OpenStack community today.

Formalizing a support and mentoring program for getting OpenStack meetups started in new communities.

Q: How will you work with others to achieve your goals?

I feel the best way to impact a community is to spark and fan flames, and I want to continue and increase this. How I will do this is best demonstrated by how and what I have been doing. Over the past two years I have worked with many individuals around the world to achieve the goals of increasing visibility, knowledge, excitement, usage of and contribution to OpenStack. I have worked with the following people most actively (sorry if I miss anyone) to achieve that goal -

Cisco enters the storage market by acquiring Whiptail
http://www.colinmcnamara.com/cisco-enters-the-storage-market-by-acquiring-whiptail/

This morning Cisco announced its intention to buy Whiptail, a company that produces a series of high-performance, scale-out, unified (Fibre Channel, iSCSI, InfiniBand and NFS) flash-based arrays. The intention is to integrate the product into their Unified Computing System (UCS).

In their press release, however, Cisco is positioning this acquisition as a high-performance flash memory system. The described purpose of this system is to accelerate key applications, similar to Fusion-io.

Cisco’s statement from Blogs.cisco.com -

Cisco is evolving UCS to keep pace with the changes brought about by the Internet of Everything and the App Economy. Today, Cisco is announcing its intent to acquire WHIPTAIL. Based in Whippany, New Jersey, WHIPTAIL builds the highest performing and most scalable solid-state memory systems available today. Scalable from one node to up to 30 nodes, WHIPTAIL systems can deliver over four million IOPS and 360 Terabytes of raw capacity – a truly staggering amount of solid-state performance capable of providing the workload optimization required in the App Economy.

By making this acquisition, Cisco is enhancing the Unified Computing System (UCS) by bringing solid-state memory acceleration into the compute tier as a managed subsystem. WHIPTAIL is a perfect architectural fit for UCS because together the two combine a clustered architecture with fabric-based acceleration – all of which is automatable via the UCS Manager and UCS Director. The end result is to deliver optimized performance on top of UCS for emerging and business critical applications, such as virtualized, Big Data, database, High Performance Computing and transcoding workloads.

What this really is is Cisco entering the storage market

While the press release tries to skirt the subject of storage by positioning WHIPTAIL as an app accelerator, those in the industry know that WHIPTAIL arrays are targeted at the all-flash storage market, accelerating mission-critical, high-performance workloads.

By acquiring WHIPTAIL and integrating it into the Data Center Business Unit (DCBU) that makes UCS, Cisco can provide systems, storage and networking from a single source. Combining WHIPTAIL with UCS Director (Cloupia), storage profiles (features added within the last 18 months) and other new announcements, Cisco can basically provide integrated storage, systems and networking management, all through the UCS management console.

How this affects NetApp, EMC and VCE

First, let me make an important statement. I have many friends at EMC, VCE and NetApp. They are all well-run companies with good products. The opinions written about their reactions are based on my interpretation of dynamics in the field, and are not a representation of the individuals at their respective companies.

EMC (and by proxy, VMware) -

Like it or not, Cisco and VMware are now in a fight for the network edge because of VMware’s (and parent company EMC’s) purchase of Nicira. The relationships in the field have been tense at best, hostile at worst. I expect EMC to continue to push toward the Software-Defined Data Center vision and try to remain in control of large enterprise storage sales through its dominance with VMAX. I expect them to pump up their messaging around XtremIO, all-flash VMAX and all-flash VNX as they compete for hearts and minds in the field.

NetApp -

This morning many people at NetApp will cry a single tear into their coffee as they realize that Cisco buying them is no longer an option. Years of relationship building and success in the field around FlexPod with Cisco will get thrown out the door as the Cisco sales force is provided with a unified storage option that is on the Cisco price list.

Sadly, I think this may signal a tipping point for NetApp. They are an amazing company with amazing people and great products. However as the last true non integrated storage vendor, taking Cisco off the prospective suitors list is not a good sign for their future.

VCE -

Awkward. Awkward. Awkward. VMware, Cisco and EMC are the parent companies. Effectively, the recent product announcements would be the same as if your mom was cheating with your neighbor, whose husband was cheating with your dad, while your sister just opened up a tattoo and piercing parlor.

The parent companies are fighting and in a divorce. The next couple of months will be a test of how independent VCE is, and whether they can survive their parents’ divorce. They have great people and good products. I hope they succeed, but it will be hard.

How this affects the customer

The cliché answer is that each company is focused on customer success. The reality is that each of these companies has individuals who represent them in social media, as well as teams in the field. My gut feeling is that we will see the current DC networking debate (Cisco vs. Nicira) expand into storage.

I think this is likely to drive a lot of confusion into the customer conversation as engineering and field teams start to understand how storage fits into Cisco’s product set, and companies that end up as competition start generating competing messaging.

In short, FUD, Confusion, and other craziness is likely to dominate in the upcoming months.

Learn More -

OpenStack for VMware admins
http://www.colinmcnamara.com/openstack-for-vmware-admins/
Mon, 09 Sep 2013 18:00:48 +0000

At this year’s VMworld (2013) in San Francisco I had the pleasure of recording a couple of videos. The first video recorded was for #vBrownBag. The topic I decided to cover was a primer on OpenStack for VMware admins. In this short 10-minute video I cover the key drivers and applications that use OpenStack vs. VMware, as well as a comparison of the technical components of OpenStack in terms a VMware administrator would be used to.

Recorded #vBrownBag from VMworld 2013

Higher Resolution SlideShare of Presentation

Thank you

PuppetConf 2013 videos are online
http://www.colinmcnamara.com/puppetconf-2013-videos-are-online/
Thu, 05 Sep 2013 17:00:42 +0000

Last month I encouraged everyone to sign up for PuppetConf, a DevOps conference run by Puppet Labs. For those of you who weren’t able to attend in person, or are like me and were interested in a bunch of different sessions that conflicted, Puppet Labs is kind enough to post the larger sessions to their YouTube channel, with the master list housed here – http://puppetlabs.com/resources/puppetconf-2013

Must See Keynotes

Jez Humble is THE MAN. His book Continuous Delivery (paired with the Poppendiecks’ books on agile team leadership) is the bible for running a DevOps team. Jez discusses how poaching from the hiring pool is an organizational failure, and what you can do to grow great people in your organization.

Puppet in production at Webex – Reinhardt Quelle, Cisco/WebEx

Reinhardt was behind the OpenStack cloud at Cisco WebEx, and is speaking at PuppetConf to share how WebEx uses Puppet to automate their infrastructure at scale.

This talk, which was repeated at VMworld, features Nick Weaver, the creator of Puppet Razor. He talks about how Puppet is extended through a new open source project, Project Zombie, which is the secret sauce behind the VMware Hybrid Cloud Service.

My goal over the past three OpenStack Design Summits has been to speak on topics that lower the barrier to entry to using and contributing to OpenStack. Topics like training, community and tools have all been top of mind to help enable the community.

This summit I have teamed up with some brilliant people to submit some presentations and workshops that I think will benefit the community immensely.

The OpenStack Associate engineer course is a two day course that will equip the trainee with the knowledge and skills necessary to install, deploy and manage a three node OpenStack installation following the RefStack architecture as defined in the OpenStack Basic Install Guide.

The course is delivered as a mix of instructor-led and lab training, enabling the student to create working virtualized lab instances running on their local machine, or on infrastructure in their own environment as connectivity allows.

This course was generated by the OpenStack user community, and is intended to be delivered and improved by both the user community and commercial entities in both self paced, and instructor led formats.

Continuing on the theme of OpenStack User Group coordination from the last OpenStack Summit, this session will be an interactive brain storming session to think about how all the user groups all over the world can collaborate together. We’ll discuss ideas like the following:

Common repository for hosting slides/recordings.

Coordinating joint user group meetings.

Joint sharing and awareness of talks from other user groups.

Guest speaker arrangements.

The idea is to discuss these in a panel format with audience participation, and to continue to grow interest in all of these areas as more and more user groups come online across the world. We want to ensure all the user groups are successful, and this is one way to help make that happen.

The idea of training people of all skill sets who attend the user group meetings has been around since the OpenStack user groups started. We discussed training in great detail at the last summit. There has been significant progress on the community side of training since then. We want to socialize what has been completed so far and what is next. Join us to contribute to the discussion and find out how you can get involved.

The OpenStack Associate Engineer course is a two-day course that will equip the trainee with the knowledge and skills necessary to install, deploy and manage a three-node OpenStack installation following the RefStack architecture as defined in the OpenStack Basic Install Guide.

The course is delivered as a mix of instructor-led and lab training, enabling the student to create working virtualized lab instances running on their local machine, or on infrastructure in their own environment as connectivity allows.

This course was generated by the OpenStack user community, and is intended to be delivered and improved by both the user community and commercial entities in both self paced, and instructor led formats.


Getting Started with DevOps Reading List
http://www.colinmcnamara.com/devops-reading-list/
Mon, 05 Aug 2013

One of the common questions that comes up in my DevOps conversations is “What are some books I can read to learn more about this?”. Each time someone asks me that question, I rattle off my top four titles, and sometimes send a follow-up email with the top books that they should read.

This weekend, I took a page from the DevOps process optimization handbook, automated my repeated manual process of creating reading lists, and just wrote down the most impactful books on my shelf into a reading list. (The entire expansive reading list can be found here – Colin’s Reading List.)

That list, however, is very expansive. What I’d like to share here is a concise set of books that will get someone started in their DevOps transition.

The Goal: A Process of Ongoing Improvement is a classic book, told in the Socratic method, that covers the Theory of Constraints and its application to manufacturing environments. This mathematical theory is one of the key concepts behind the systems view and the optimization approach called Kaizen that is used in Lean, TPS (the Toyota Production System) and DevOps.
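The core idea of the Theory of Constraints is easy to sketch: in a linear pipeline, overall throughput is capped by the slowest step (the constraint), so optimizing any other step buys you nothing. Here is a toy illustration in Python; the step names and rates are made up for the example:

```python
# Toy Theory-of-Constraints illustration: a pipeline's throughput
# is limited by its slowest step (the constraint), so improving a
# non-constraint step does not change overall output.

def throughput(steps):
    """Units per hour the whole pipeline can sustain."""
    return min(steps.values())

def constraint(steps):
    """The step that limits the system."""
    return min(steps, key=steps.get)

# Hypothetical rates, in units per hour
steps = {"dev": 30, "test": 10, "deploy": 25}

print(constraint(steps))   # "test" is the bottleneck
print(throughput(steps))   # 10 units/hour overall

steps["deploy"] = 100      # optimizing a non-constraint step...
print(throughput(steps))   # ...still 10: no improvement

steps["test"] = 20         # elevate the constraint itself
print(throughput(steps))   # now 20
```

In DevOps terms, this is why heroic work in development is wasted if testing or deployment is the constraint: you elevate the bottleneck first, then look for the new one.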

Leading Lean Software Development

Leading Lean Software Development: Results Are Not the Point, Mary and Tom Poppendieck’s latest book, shows software leaders and team members exactly how to drive high-value change throughout a software organization, and make it stick. They go far beyond generic implementation guidelines, demonstrating exactly how to make Lean work in real projects, environments, and companies.

Kanban / Scrum making the most of both

Kanban and Scrum – making the most of both. Scrum and Kanban are two flavors of Agile software development – two deceptively simple but surprisingly powerful approaches to software development. The purpose of this book is to clear up the fog, so you can figure out how Kanban and Scrum might be useful in your environment.

Learning to see – the Lean Institute

Learning to See: Value Stream Mapping to Add Value and Eliminate MUDA. Manufacturing optimization techniques like Value Stream Mapping are key tools for a successful DevOps transition. In plain language and with detailed drawings, this workbook explains everything you will need to know to create accurate current-state and future-state maps for each of your product families, and then to turn the current state into the future state rapidly and sustainably.

If you are curious about what other books I recommend (a much more expansive list than this one), feel free to browse on over to my full reading list.

As always, if you found this helpful please tell me about it on Twitter!

Are you automating using Puppet? Join me at PuppetConf to learn how
http://www.colinmcnamara.com/are-you-automating-using-puppet-join-me-at-puppetconf-to-learn-how/
Thu, 01 Aug 2013

One of my favorite learning events of the year is right around the corner – PuppetConf!

What is PuppetConf?

I think it is best to start with what Puppet is. In short and simple terms, Puppet is a tool used by system, application, and network administrators to automate the tasks of IT operations.
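For a flavor of what that automation looks like, here is a minimal sketch of a Puppet manifest using the classic package/file/service pattern; the resource names and paths are illustrative, not from any particular environment:

```puppet
# Ensure the ntp package is installed, its config file is in
# place, and the service is running and enabled at boot.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  owner   => 'root',
  mode    => '0644',
  require => Package['ntp'],   # install the package first
}

service { 'ntp':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ntp.conf'],  # restart on config change
}
```

You declare the state you want rather than scripting the steps; applied with `puppet apply`, Puppet converges the node to that state and does nothing on later runs if the state already matches.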

PuppetConf is a place where all types can come and learn about the following

Continuous Delivery

Case Studies of people using Puppet

DevOps

Cloud Automation

Getting Started with Puppet

I’ll be at PuppetConf in San Francisco August 22-23rd. I hope you will too.

To that end, the fine folks at Puppet have provided a $200 discount off the registration fee to anyone who reads my blog, using the discount code: colinmcnamara

Puppet syntax highlighting using TextMate2
http://www.colinmcnamara.com/puppet-syntax-highlighting-using-textmate2/
Mon, 03 Jun 2013

Do you use Puppet to manage your infrastructure? Are you like me and absolutely love TextMate2? If so, you are probably pulling your hair out trying to get syntax highlighting working with TextMate2.

The original version of TextMate had a very wide array of bundles (called tmbundles) that extended its functionality. When TextMate2 was released, it did not have the ability to import older bundles written for version 1.

It does ship with a large number of built-in bundles; however, Puppet and Chef bundles are not included by default. So instead of pretty highlighting that calls out the syntax errors you have made (or shows how awesome a job you have done), you get the bland white output shown below.

Speaking at IPMA tomorrow on OpenStack, DevOps and Continuous Delivery
http://www.colinmcnamara.com/speaking-at-ipma-tomorrow-on-openstack-devops-and-continuous-delivery/
Tue, 21 May 2013

I’m hopping on a plane to Seattle this afternoon to speak at the 2013 IPMA Forum at Saint Martin’s University.

I’ll be continuing my habit of tilting at windmills (ask @jdooley_clt if you are curious), with the goal of getting state and local agencies to adopt modern software development methodologies (Agile) on OpenSource platforms such as OpenStack.

What is IPMA

The Mission of the IPMA is to help maintain Washington State’s position as the nation’s premier IT state by continuing to advance the quality and professionalism of the Washington State Government IT community. The IPMA is dedicated to:

Promoting professional networking among state government IT managers, business leaders and IT industry leaders

Enhancing the IT community’s teamwork, collaboration and communications across Washington State agencies

Providing professional development opportunities for the state’s IT leaders, managers and technical staff that focus on:

Developing and enhancing key IT skills

Expanding leadership and managerial competency

Providing visibility to important IT technologies and their successful application

Colin McNamara, Nexus IS, is a seasoned professional with over 15 years’ experience with network and systems technologies

Session description:

In the transition to cloud computing, a new class of application design patterns have emerged. These application structures closely rely on public and private cloud platforms such as Amazon and OpenStack. In this session we will discuss the key drivers causing this shift, the platforms to support it, as well as key technical, organizational and cultural changes to support these cloud computing applications and platforms.

Lessons to be learned:

Understand how private cloud platforms like OpenStack fit in your e-commerce, mobile and “cloud” application strategy. Understand the importance of Continuous Delivery tools in this ecosystem. Understand the organization and project management structures necessary to deliver and support next-generation cloud applications.

How can you get hold of me while I’m out here

The best way of getting hold of me while I’m at a conference is pinging me on Twitter – @colinmcnamara – or swing by the Nexus booth on the show floor and ask someone to hunt me down.

Shaving my head for Cancer – Join the fight
http://www.colinmcnamara.com/shaving-my-head-for-cancer-begging-for-donations/
Thu, 14 Mar 2013

For the third year in a row I’m having my head shaved to stand in solidarity with kids fighting cancer, but more importantly, to raise money to find cures.

Come learn with me – Network Field Day 5
http://www.colinmcnamara.com/come-learn-with-me-network-field-day-5/
Wed, 06 Mar 2013

One of my favorite times of the year has come. It’s time for Network Field Day!

What is Network Field Day? It is a group of independent network experts that blog, tweet, podcast and occasionally do incredibly useful things with complex network environments. We get to grill the technology companies on their networking technologies, and share our thoughts with the world. (via blogs and the live stream available below).

Ethan Banks, CCIE #20655, is a hands-on networking practitioner who has designed, built and maintained networks for higher education, state government, financial institutions, and technology corporations.

Over the last twenty odd years, Greg has worked Sales, Technical and IT Management but mostly he delivers Network Architecture and Design. Today he works as a Freelance Consultant for F100 companies in the UK & Europe focussing on Data Centres, Security and Operational Automation.

Tom Hollingsworth, CCIE #29213, is a Senior Solutions Architect for a small VAR focusing primarily on K-12 education.

OpenStack presentations to vote for – Shameless pandering
http://www.colinmcnamara.com/openstack-presentations-to-vote-for-shameless-pandering/
Fri, 22 Feb 2013

It’s that time of year again (voting is open until Feb 25) – time to figure out what presentations we want to see at the OpenStack Summit. And yes, your votes help ensure that I’ll have a chance to present. Here are a couple of presentations that I am in, as well as others that I think should be on the docket.

If you think these will be of value to the community, please use your vote to ensure that they will get on the schedule (click on the linked titles to get to the voting pages).

Let me tell you a dirty little secret. While OpenStack is a great project, it is extremely complicated for an individual with an engineering/operations focus, rather than a programming focus, to get to their first code contribution.

My name is Colin, and I am an engineer. Although I initially got involved with OpenStack in the context of operations, I was quickly drawn into actually contributing code to the project. What I found is that many of the tools and workflows used to contribute to OpenStack are completely foreign to those (like me) with an operations focus.

In this session I will go over the biggest challenges that I faced as an engineer contributing, and review the tools and techniques that I used to get past them. This information will be presented with the goal of arming engineers just getting involved with the knowledge and tools necessary to get to their first successful contribution and beyond.

Learning objectives

1. The importance of community – Leveraging the power of the meeting

2. Talking your employer into supporting OpenStack and the CLA

3. Setting up your dev environments – getting beyond Devstack

4. Getting git, using the git repository for those that don’t code for a living

Kyle Mestery — Technical Leader at Cisco

Topic(s)

Community Building

Abstract

This session will be a panel where we will discuss how to build and run your own OpenStack Meetup. The panelists have varying experience around this, from Sean running one of the largest and longest-tenured OpenStack Meetup groups, to Mark starting a brand new group in the RTP area. We will discuss what makes a successful group, strategies for forming and running your own group, and how community organizers can work together to share content and enable others to be successful with their own group.

This session is a 201 level technical deep dive on the VMware/Nicira Network Virtualization Platform (NVP). NVP is a virtual networking platform powering many OpenStack production environments as the networking engine behind Quantum. In this session we’ll explore the distributed systems architecture of the NVP Controller Cluster, the core functionality and behavior of NVP’s primary system components, and the logical networking devices and security tools NVP produces for consumption. High availability deployments, and packet flows for common scenarios will be discussed. And finally, we’ll take a look at how the physical network fabric can be architected for NVP deployments.

Boris Renski — Co-Founder and EVP at Mirantis

Boris from Russia here… We do much OpenStack at Mirantis. Much customer ask us to make cloud controller is highly available. Also much customer is cheap and ask only free, open source stuff in their cloud. At Mirantis we like make customer happy, so we make puppet recipe to make very highly available OpenStack for free. In this talk I make simple demonstration that even a goat that had a lot of vodka can understand how is use open puppet recipes to make highly available OpenStack and pay zero rubles to anyone. Also, a goat.

Rainya Mosher — SDLC Manager at Rackspace Hosting

This presentation will address the unique challenges of working with software development teams who are dedicated members of the OpenStack Community. We’ll discuss:

Balancing review days with the development cycle

Delivering a product to customers on time while still meeting your OpenStack milestone commitments

Communicating to senior leaders and project managers that “delivery date” is a fuzzy term when working with the OpenStack review cycle

Keeping your software developers, engineers, and architects engaged with the Next Big Thing in OpenStack while maintaining the needs of a production system

What is my value to the OpenStack community, and why you should vote for me in this week’s elections – UPDATED
http://www.colinmcnamara.com/what-is-my-value-to-the-openstack-community-and-why-you-should-vote-for-me-in-this-weeks-elections/
Mon, 14 Jan 2013

First things first – search your email for an email titled “OpenStack Foundation – 2013 Individual Director Election” from secretary@openstack.org. It will have a click-through link, with your user ID and password, to the OpenStack Election voting page. This is the email with your link to vote in the OpenStack elections. As with any democratic process, voting is your chance to have your voice heard.

#### UPDATE ####

It has come to my attention that some people are having problems voting. If you haven’t received your voting email from the foundation please contact secretary@openstack.org and they will rectify the situation.

#### End UPDATE ####

As I am writing this article, I am sitting on a plane headed back home from Denver. I was out here helping Scott Lowe (EMC) and Shannon McFarland (Cisco) kick off [and present at] Denver’s OpenStack meetup group.

Colin Presenting at Minnesota OpenStack Meetup Group

The presentation I gave is one that I have given quite a few times now. It is titled “Surviving your first commit – an engineer’s guide to contributing to OpenStack”. It is a collection of lessons learned in my own journey with OpenStack. It is an attempt to remove barriers to contributing, and also to encourage community participation and adoption. The first time I presented on this subject was at the OpenStack design summit in San Diego. The original purpose was breaking down barriers to utilizing and contributing to OpenStack. However, as I was developing my message and collecting the lessons learned, I found that the most important element of success was organizing communities.

This was a lesson that I learned in the late 90’s when I first got introduced to Linux. I remember the first time I received Linux install media. I tried to install it and failed. A couple of months later, a few of us friends got together and got through our first install. Shortly after, I had the web hosting at the ISP I ran (DKAOnline) migrated to Apache on Linux. By coming together as a community of geeks, we were able to lower the barriers of entry to Open Source software and get to a point where I could use Linux and other Open Source software to power my business. (While I moved to the SF Bay long ago, the roots of that users group still live on in Fresno as the Fresno Open Source Users Group.)

I have carried that lesson forward throughout my entire lifetime in technology. Helping others helps yourself. Contributing to, and encouraging the creation of, active user groups around emerging technologies is key to the success of both you and the project you are working on. Increasing the use of a project creates a cycle that results in more companies with real-world problems contributing back patches and fixes, improving the project in a way that is increasingly relevant to all.

All of this user community talk centers around a key thought –

“What value can I bring to the OpenStack community and the foundation if I am elected to the individual board”

There are lots of things I CAN do and want to do. But it is important to consider WHAT I AM and HAVE BEEN doing. Over the past year or so I have been doing the following –

1. Evangelizing and increasing the adoption of OpenStack in user community through social media and the communities that I participate in.

2. Teaching people and companies about how to best contribute to OpenStack

3. Supporting and encouraging individuals in my own company (Nexus) to contribute to OpenStack as well as using it in our platforms.

4. Using my influence to educate hardware manufacturers on why they should ensure that their products are cleanly integrated with OpenStack, as well as educating and encouraging them to commit development resources to the project.

5. Providing feedback from an Operators perspective to the board and developers.

6. Playing “Johnny Appleseed” helping people set up OpenStack meetup groups throughout the Nation

Right now I am an active member of the community, and wish to contribute more. I believe that Open Software and Standards are key to our technology industry being successful in the long run. The success of OpenStack and the OpenStack Foundation is key to making that happen.

I am doing what I can as an individual member of the foundation, as well as a leader in my own business to make this happen. My ask to you, is to help me take my contributions to the community to the next level by electing me to the individual board.

The week of January 14-18 you have the chance to help me make this happen by voting for me to be an individual member of the OpenStack Foundation Board. If you believe in what I am currently doing, and want to help me do more, please do me a favor and cast your vote for me in the election.

Colin is a candidate in the 2013 Board Election.

Read the Q&A below and see if you want to Nominate Colin in this election.

Q

What is your relationship to OpenStack, and why is its success important to you? What would you say is your biggest contribution to OpenStack’s success to date?

A

I first got involved with OpenStack late last year as part of a cloud platform evaluation process, with the goal of bringing back certain workloads from Amazon. Over the course of 2011 and through 2012 I have been accelerating my involvement, including:

Contributing to Storage QoS (should land in grizzly-2)

Currently working on DHCPv6 in Nova

Speaking at the OpenStack Summit – Surviving your first check-in, an engineer’s guide to contributing to OpenStack

Bi-Weekly hackathons with SFBay OpenStack

Helping kick off, and speaking at, the Minnesota OpenStack Meetup users group

Q

Describe your experience with other non profits or serving as a board member. How does your experience prepare you for the role of a board member?

A

My experience in other non-profits has primarily been in driving awareness of, and participation in, certain charitable causes such as Wounded Warriors, Valley Food Bank, and others.

Board member experience includes a few technical advisory boards and CTO boards.

Relevant experiences in both of those include ensuring balanced viewpoints from all members, as well as advocacy for user communities (my perspective is weighted more toward Ops than Dev).

My experience running a $90+ million line of business in a past role also allows me to contribute both advocacy for users of OpenStack and help in raising funds and awareness.

Q

What do you see as the Board’s role in OpenStack’s success?

A

The OpenStack board has to manage the balance between the development community that has added so much value to the project and the corporate interests who are now both involved and committing development resources. This all has to be balanced with an eye on providing the best product for the community as a whole.

Q

What do you think the top priority of the Board should be in 2013?

A

I think the top priority for the board should be establishing a tone of collaboration between all parties participating, whether they are manufacturer, developer, or user. This will keep the foundation from going down the rat hole of posturing and stalling that the IETF has become.

If this is accomplished, I am positive that OpenStack will become the dominant Open Source cloud platform in the industry.

EMC and VMware launching the Pivotal Initiative – The larger picture
http://www.colinmcnamara.com/emc-and-vmware-launching-the-pivotal-initiative-the-larger-picture/
Wed, 05 Dec 2012

Last night some of you may have received an announcement from EMC discussing their Pivotal Initiative. Many of you reading this may not be exactly clear on what EMC is doing, or how and why. Let me take this opportunity to clear this up.

Changing IT consumption models – what is driving the Pivotal initiative

First things first: we are in the middle of a couple of SIGNIFICANT transitions in our industry. Many people call this cloud computing, but it is much larger than that.

What we are truly in the middle of is a shift in the consumption model of IT resources. For the past ten to fifteen years, consumption and spend have been centered in and around IT organizations, generally under the command and control of a CIO. These organizations would work with the business, and sometimes the application teams, to capture requirements and create infrastructure to service their internal customers’ needs.

Over the past ten years or so (really since SOX), the focus of many IT organizations has shifted from Information Technology to Infrastructure Technology. Significant amounts of time, focus and budget (commonly measured at north of 60-70%) of most IT organizations are spent just keeping the lights on. This creates limited additional value for the companies they are part of, and, most importantly, is now a key limiter on application development and release cycles (the true lifeblood of many companies – their ACTUAL product).

The rise in power and maturity of software development

This slow old IT worked in the past, when software development life cycles were also old and slow. Classic SDLC-based development shops would sometimes take years to release code. In that world, a slow and bloated infrastructure operations organization (classic IT) was not the roadblock. Everything was fine.

Over the past few years, however, things like Agile software development have taken the world by storm. In this world, development teams release code weekly, if not daily. This code is of higher quality, and more feature-rich, than ever before.

What this has caused is an imbalance between the needs of software development organizations and the capabilities of the IT organizations that support them. To meet that need, many development organizations looked to companies that could provide “Agile infrastructure”. Amazon Web Services (AWS) is one of these companies, and has seen extreme growth because it can serve this need.

Cloud Computing is the first shift – DevOps / NoOps is the next

Cloud computing, defined as the programmatic, fractional consumption of elastic services (Amazon Web Services being ONE of the providers of this), is the first mega-growth area that has benefited from this development shift.

It is, however, still not super simple for a developer to write code for Amazon, or for free Open Source Amazon “clones” like Eucalyptus and OpenStack. They have to have a level of knowledge of how to consume, manage, monitor and scale the services offered by these providers.

Remember, the goal of any development organization is to abstract complexity from the underlying layers of the App. IaaS tools and platforms with good web services interfaces are only the first step in making this happen.

Enter the PaaS stacks – the cloud for cloud

PaaS (Platform as a Service) is a layer that runs on top of cloud platforms. Generally, PaaS stacks abstract all the unique features of any cloud platform, allowing a developer to just focus on writing good, functional code. The PaaS stack then handles connections to database services, autoscaling, and so on (examples are Cloud Foundry and BOSH).

This abstracts the cloud provider, and sets the foundation for true cloud mobility. This I believe is the future, and will be the path to large scale cloud adoption.

Where does Greenplum and Analytics fit into this?

Data Science is the next great application consumer. Organizations that employ data science as part of their business process create and consume IMMENSE amounts of data. Needless to say, companies like EMC love this. You do eventually have to put that data somewhere…

What does EMC / VMware’s Pivotal initiative mean in this context

Open software and PaaS stacks are clearly the future consumption target for the evolved development team. The challenge is that Open Source software is also free software (at least partly), and big publicly traded companies have a real problem both developing free software and integrating it into their sales models.

In some companies, like EMC, I hear it is actually impossible to release Open Source code written in certain business units. The problem with this is that releasing certain software into the wild for free actually drives consumption of big, expensive, blinking enterprise hardware.

Creating a new company, filled with talent focused on PaaS and Open Source is Brilliant

1,400 employees are supposed to join this new company (Pivotal for now), focused on PaaS, analytics at scale and Open Source. This positions Pivotal (or whatever it will be called) to be a dominant player in this shift in consumption models. It allows EMC to stay relevant as classic storage becomes passé and compute can live in any location, including your own data center.

Remember when EMC bought VMware

Nobody understood what was going on when EMC bought VMware. What they were really doing was ensuring that a new consumption model that would grow demand for their product sets would thrive. It also created a landing spot for some of the brightest minds in the industry (remember, Chad and the vSpecialist teams came out of there). The Pivotal initiative provides the same play in the “new world” of IT.

What does this mean to you

This industry is changing at an amazing pace. The classic world of IT is evaporating, and re-forming into something amazingly interesting and relevant. EMC has gone all in now, and committed an amazing number of resources to making true cloud application consumption “normal”. This will provide each and every one of us with new tools and techniques to use, and to learn.

With all shifts, some people and teams will rise, some will get left behind. My advice, pick up the old programming books, and start to think like a developer… because that is where our world is going.

]]>http://www.colinmcnamara.com/emc-and-vmware-launching-the-pivotal-initiative-the-larger-picture/feed/2Thanksgiving live blog at the McNamara house with @colinmcnamara and @netbbqhttp://www.colinmcnamara.com/thanksgiving-live-blog-at-the-mcnamara-house-with-colinmcnamara-and-netbbq/
http://www.colinmcnamara.com/thanksgiving-live-blog-at-the-mcnamara-house-with-colinmcnamara-and-netbbq/#commentsThu, 22 Nov 2012 15:28:13 +0000http://www.colinmcnamara.com/?p=1446 Every year I get into multiple conversations about grilling and smoking on my big green egg, and its Internet attached/twitter enabled draft fan/temperature control system @netbbq. This Thanksgiving I figure I might as well document the process via a live blog (way more exciting than a conference, huh?)

11/21 5:00PM turkey is brined and then thrown in a 5 gallon water cooler to soak up all the goodness. For the brine I used Alton Brown's recipe.

11/22 7:41am removed the turkey from the brine. Filled a bag full of ice and threw it on the breast of the turkey. This is a dirty little trick to allow the legs and breast to come to temperature at the same time.

11/22 8:00am the stoker Internet connected BBQ controller is now in command of the oxygen levels entering the BGE.

11/22 8:56am our 22lb turkey is now sitting in its new home for the next 5 hours. I have noticed that the stoker isn't updating to @netbbq. I will have to troubleshoot that now that the turkey is roasting.

11/22 11:30am turkey temp alarm went off. Apparently the turkey will be done early. Time to let the bird rest for a 1/2 hour then carve.

11/22 12:12 pm Turkey carved and ready to go.

11/22 12:19pm quick video conference with family abroad and then down to eat our meal

11/22 2:16pm Started the day with Johnny Walker Blue, ended it with Blueberry Moonshine and football

]]>http://www.colinmcnamara.com/thanksgiving-live-blog-at-the-mcnamara-house-with-colinmcnamara-and-netbbq/feed/0Cisco rounds out its portal and automation portfolio by acquiring Cloupiahttp://www.colinmcnamara.com/cisco-rounds-out-its-portal-and-automation-portfolio-by-acquiring-cloupia/
http://www.colinmcnamara.com/cisco-rounds-out-its-portal-and-automation-portfolio-by-acquiring-cloupia/#commentsThu, 15 Nov 2012 17:37:21 +0000http://www.colinmcnamara.com/?p=1438What the heck is Cloupia?

Cloupia is a company built around making converged data center infrastructure easy to build and manage. They accomplish this goal by delivering three major product sets -

Yup, you can control your virtualization and storage infrastructure management via your iPad. If you have seen Cloupia at the NetApp booth at a trade show, there is a 99% chance that this was how they demo’d FlexPod for you. This allows you to provision and manage virtual machines as well as manage certain functions like resizing data stores on your primary storage.

Cloud Ignite – Build a FlexPod or VSPEX in minutes

Let me share a dirty little secret. Data centers can be complicated places, full of blinking, spinning things that are connected together in weird and interesting ways. That being said, if you use converged reference architectures like FlexPod or VSPEX you can automate the heck out of it. Some of us use open source tools like Puppet and Razor to decrease the time it takes to build converged infrastructure. Others use tools like Cloud Ignite to capture the different variables necessary to make your Pod “unique” and then click a button to make infrastructure happen.

My question is whether Cisco will keep this reseller / integrator only, which is the current status of Cloud Ignite. Or… will they integrate it closely with UCSM, allowing the simplicity and scale of configuring a UCS to extend out to converged infrastructure like FlexPod and VSPEX?

What does this mean for CIAC for Cloud and Compute

Honestly, I am a little unclear on this. Currently, a few of the largest Cisco Intelligent Automation for Cloud and Compute integrations actually use Cloupia as an element manager. There is quite a bit of overlap in functionality, however they are VERY DIFFERENT PRODUCTS under the covers.

One thing that was interesting, however, is who has been sending communications about this. The communications about acquiring Cloupia that I got did not come from the Automation and Cloud groups; they came from the leadership at SAVTG (the makers of UCS and UCSM).

My gut feeling is that we will see Cloupia more tightly integrated into UCSM and the UCSM manager of managers, very closely tied to the UCS platform. I think I need to have a discussion with Rodrigo Florez over a couple of beers and drag it out. Rodrigo, can you ping me on Twitter?

Will this work with OpenStack

A good friend of mine is the VP of Technical Operations at Cloupia (now Cisco, I guess). I have been bugging him about this for a while, as well as bugging him to get VSPEX support in the product to match the FlexPod offering. I haven’t seen a working implementation from them in front of OpenStack, however I FULLY EXPECT CISCO TO FUND DEVELOPMENT TO MAKE THIS HAPPEN. This would be a brilliant move by Cisco: putting a consumable interface in front of OpenStack for the mid market, similar to what Rodrigo is doing on the large scale side with CIAC.

Colin’s thoughts

I like the products that Cloupia makes, as well as the people who make them. Cloupia simplifies the management of not only the virtualization layer of your converged infrastructure, but also the hardware elements underneath. This, combined with a simple self-service portal, will provide a pretty awesome solution when integrated on top of UCS and other platforms.

Want to learn more?

]]>http://www.colinmcnamara.com/cisco-rounds-out-its-portal-and-automation-portfolio-by-acquiring-cloupia/feed/5Setting up Cobbler PXE auto-deployment for Ubuntu Server 12.04 Precisehttp://www.colinmcnamara.com/setting-up-cobbler-pxe-auto-deployment-for-ubuntu-server-12-04-precise/
http://www.colinmcnamara.com/setting-up-cobbler-pxe-auto-deployment-for-ubuntu-server-12-04-precise/#commentsMon, 12 Nov 2012 18:20:07 +0000http://www.colinmcnamara.com/?p=1415For those that don’t use it, Cobbler is a PXE installation manager for automating the deployment of systems and packages. It is an order of magnitude simpler than creating a custom PXE environment by hand.

In this case I am setting up my Cobbler environment to automatically deploy a base operating system and then hand off to Puppet for further configuration. After handoff Puppet will configure the systems in a multi-node OpenStack setup which will rebuild nightly.

The purpose of that entire system is to do development testing of the Quantum networking service for the spring Grizzly release. I will document that process in a later blog post.

What we will do today is get the base system up and running for deploying your server operating systems using Cobbler.

Installing Cobbler

The first thing you need to do is install your base operating system. In this case, since we are building out a lab environment to test OpenStack builds, we should download and install Ubuntu Server 12.04 (Precise).
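On 12.04 a static address lives in /etc/network/interfaces; a minimal sketch, with all addresses assumed (substitute your own lab values):

```
# /etc/network/interfaces -- static addressing on Ubuntu 12.04
# (the addresses below are assumptions; use your own lab subnet)
auto eth0
iface eth0 inet static
    address 10.0.76.10
    netmask 255.255.255.0
    gateway 10.0.76.1
    dns-nameservers 10.0.76.1
```

Restart networking (or reboot) after editing, then confirm the address with ifconfig eth0.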

Once this is up and configured with a static IP address, we need to install and configure cobbler

sudo apt-get install cobbler cobbler-web

This will install Cobbler, and the Cobbler web interface. Next we will run a sanity check of cobbler

sudo cobbler check

You may get some notifications of items that need to be addressed. Address them as needed and re-run the check command to verify.

Next, you can set the username and password that you will use to manage the Cobbler web interface. Note that “Cobbler” in the command below is the authentication realm and should be left as-is; the final argument is the username (cobbler in this case), and you will be prompted for the password.

htdigest /etc/cobbler/users.digest "Cobbler" cobbler

After you have successfully run cobbler check, you will need to synchronize Cobbler by running the cobbler sync command

cobbler sync

Importing Ubuntu Server ISOs

The next thing we need to do is grab the ISO that we used to install the server we are on, and import it into Cobbler. In this case we are making a folder to mount an NFS export called VMwareISO on a NAS at 10.0.76.2

sudo mkdir /mnt/VMwareISO

sudo mount 10.0.76.2:/volume1/VMwareISO /mnt/VMwareISO

Next we have to create an ISO mount point and mount the Ubuntu 12.04 ISO

sudo mkdir /mnt/iso

sudo mount -o loop ubuntu-12.04-server-amd64.iso /mnt/iso

You will get the following message, and that is all right.

mount: warning: /mnt/iso seems to be mounted read-only.

Next, we will import the ISO we just mounted into Cobbler.

sudo cobbler import --name=ubuntu-server --path=/mnt/iso --breed=ubuntu

Configuring DHCP to point to your PXE server

This part will vary based on your lab setup. If you already have a DHCP server set up, then you need to set the “next-server” option to the IP address of your Cobbler server.

If you want to run DHCP on the same server you are using for Cobbler you need to install and configure a DHCP server.

sudo apt-get install isc-dhcp-server

Next we have to edit the configuration file /etc/dhcp/dhcpd.conf

sudo vim /etc/dhcp/dhcpd.conf

Now you need to add a statement configuring a DHCP scope on this server. In this case I am using the following options –
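A minimal scope for a lab like this one might look as follows (the 10.0.76.0/24 subnet matches the NAS seen earlier, but the Cobbler server address, range and router values are assumptions):

```
subnet 10.0.76.0 netmask 255.255.255.0 {
    range 10.0.76.100 10.0.76.199;         # pool handed out to PXE clients (assumed)
    option routers 10.0.76.1;              # assumed default gateway
    option domain-name-servers 10.0.76.1;  # assumed DNS
    next-server 10.0.76.10;                # IP of your Cobbler server
    filename "pxelinux.0";                 # bootloader served from Cobbler's TFTP
}
```

The two lines that matter for PXE are next-server (where clients fetch the bootloader) and filename (what they fetch).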

Once this is added, restart the DHCP server to pick up your configuration

sudo /etc/init.d/isc-dhcp-server restart

Creating a custom seed file and pointing Cobbler to it

FYI, this step may not be completely necessary in the future. However, there is currently a bug open with Cobbler where, when it is used with an Ubuntu 12.04 image, the client installation will bomb out. You will get an error stating “Bad Archive Mirror: An error has been detected while trying to use the specified archive mirror”.

If you dig through /var/log/syslog you will find a more descriptive error.
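The usual workaround at the time was to pin the installer's mirror to the Cobbler server inside the preseed. A sketch of the relevant debian-installer directives ($http_server and $install_source_directory are Cobbler template variables; treat the exact option set as an assumption):

```
d-i mirror/country string manual
d-i mirror/http/hostname string $http_server
d-i mirror/http/directory string $install_source_directory
d-i mirror/http/proxy string
```

This tells the installer to use the Cobbler server itself as the archive mirror instead of trying to auto-detect one.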

Next, in the Cobbler web interface, we need to navigate to PROFILES and click on EDIT to edit the ubuntu-server-x86_64 instance

You will see a screen with a bunch of form fields. You need to navigate down to the “Kickstart” option and change it from the default

/var/lib/cobbler/kickstarts/sample.seed

to the custom seed file we created:

/etc/cobbler/ubuntu-server.openstack.preseed

Click save, and now you should be ready to PXE install your first Ubuntu server. (if you use this file your username / pass will be ubuntu/ubuntu)

What's Next?

In the next blog post in this series we will configure Puppet Master on our server and do a super dangerous thing – optimize our seed files to blow away our file system without any user interaction necessary. Needless to say we will all need to use this next one with caution….

]]>http://www.colinmcnamara.com/setting-up-cobbler-pxe-auto-deployment-for-ubuntu-server-12-04-precise/feed/9Simplifying scale out DataCenter design with UCS Manager 2.1http://www.colinmcnamara.com/simplifying-scale-out-datacenter-design-with-ucs-manager-2-1/
http://www.colinmcnamara.com/simplifying-scale-out-datacenter-design-with-ucs-manager-2-1/#commentsThu, 01 Nov 2012 19:18:14 +0000http://www.colinmcnamara.com/?p=1401I’ve been designing and deploying UCS since the product was released a couple years ago (technically I was involved in the pre-release so we will say since UCSM v 0.8). From the start I was constantly pushing up against scalability and design constraints of UCS. The benefits of the system outweighed the challenge, but these design constraints created some challenges in creating external systems to meet the needs of large UCS customers.

Don’t get me wrong: out of any server platform, I prefer UCS. That being said, there are a few areas that have really caused headaches for me over the years.

Headaches solved with the release of UCSM 2.1

Headache #1 – Once I scale past a certain number of servers, I have to establish a new UCS domain

This has been a huge challenge for both large single data center instances and multi data center instances (such as DR). In both these cases I would have to utilize tricks like placing MAC address pools, WWN pools and other “unique” identifiers into a CMDB (Configuration Management Database) outside of UCS. And even when utilizing external CMDBs, there was still a bit of design necessary to lay out UCS domains in a fashion that would support eventual integration in the future without overlapping configuration elements.

All of this work was done to ensure that if two servers were instantiated in two different UCS domains that they wouldn’t have conflicts if they wound up on the same segment. Handling this logically by bit swapping the UCS domain ID in certain resource pools wasn’t terribly complicated, but in my opinion unnecessary (though integration with CMDB’s can be very complicated).
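As a sketch of the bit-swapping idea, here is a hypothetical shell helper that stamps the UCS domain ID into the fourth octet of a MAC pool; 00:25:B5 is Cisco's recommended UCS OUI, but the byte layout is purely illustrative:

```shell
# Mint per-domain MAC pools: two UCS domains can never collide because
# the domain ID is embedded in the fourth octet of every address.
mac_pool () {
    domain=$1   # UCS domain ID, 0-255
    size=$2     # number of addresses to generate
    i=0
    while [ "$i" -lt "$size" ]; do
        printf '00:25:B5:%02X:%02X:%02X\n' "$domain" $((i >> 8)) $((i & 255))
        i=$((i + 1))
    done
}

mac_pool 1 2   # domain 1: 00:25:B5:01:00:00, 00:25:B5:01:00:01
mac_pool 2 2   # domain 2 gets a disjoint range automatically
```

The same trick extends to WWN pools and UUID prefixes; the point is simply that embedding the domain ID makes the pools provably non-overlapping.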

This got even more complicated if you wanted to have a DR site. Making something simple happen, like having a boot-from-SAN server boot from the DR site's SAN during an outage, involved using external tools or scripts. In my opinion this is something that should be handled by UCSM or a manager of UCSMs.

Headache solved – UCS Central Manager of Managers

For those in the know, this has been in the works for a VERY long time. In fact, the early install (1,000+ servers) that I mentioned above, where we had to use external CMDBs to glue UCS domains together in the first year of UCS, generated this feature request.

UCS Central is in a sense a manager of managers. This allows you to aggregate pools and policies of multiple independent UCS domains into one central management platform.

It solves the problems of:

- resource conflicts across pools
- mobility of service profiles between UCS domains
- centralizing access logs
- centralizing access to console servers

Headache #2 – Even when Cisco released code to manage c-series 19″ rack mounts under UCSM it still required a bunch of extra cables and equipment to make it work.

70% of the world's x86 servers are in a 19″ rack mount form factor. Recently Cisco enabled them to be managed under UCSM and to have a data path that exits through the fabric interconnects. This allowed a couple of key things to happen. First, it allowed a unified view of systems for a data center's administrative staff. Second, it allowed a clean data path from, say, a storage caching engine running on a B250 blade to a compute node housed on a C240 rack mount. All of this communication would be contained within the fabric interconnects, and not have to exit northbound as it had in the past.

I was happy with this release. It allowed the c-series servers to be managed under UCSM with the same tools, techniques and APIs that we manage the blades with. However, the code was not updated to allow all that magic to happen over a single wire.

You would end up with beautiful cabling on the backs of your blade centers, and a giant mess of cables coming out of your rack mounts, since you needed separate cables to support the data path vs the management plane. Call me a neatnik, but I like my racks to be pretty and clean (and not to have to buy extra switches, cables and adapters).

Headache solved – Single wire management for ALL UCS servers

With the 2.1 release, all you need is a single Cisco Virtual Interface Card in your UCS 19″ rack mount (two if you want redundancy) to get the full feature set that you have available on a UCS blade. For me this not only simplifies my designs, but also allows flexibility in things like designing Hadoop and OpenStack Swift object storage clusters, where redundancy is done at the application level and dual 10 gig interfaces are not needed.

Here is a dirty little secret. Even though you can abstract a bunch of storage functions into UCS, most server guys are still a bit impatient with their peers on the storage teams. There are many times when the server guys want to consolidate a bunch of boot disks into an array and connect them directly to the fabric interconnects.

Over time Cisco has been releasing support for additional protocols connected in this way, however it was not ubiquitous. This created problems because you could not create a standard topology that supported flexible protocol consumption in your network. You would end up with two to three variants of supported topologies. In my opinion this creates issues with operational procedures and leads to extended outages and generally inefficient designs.

Headache solved – Flexible and consistent storage topology options no matter what protocol is being used.

With UCS 2.1, no matter what protocol floats your boat, you can implement it in a consistent manner. This may include directly connecting Fibre Channel storage to your fabric interconnects and zoning them. Or it may include utilizing multi-hop FCoE (I’ll leave the argument about whether you SHOULD use this till later).

Either way, the most important thing to me is that no matter what the design requirements are, you now have the tools available to meet them in a consistent fashion without changing your entire network and systems topology.

Colin’s Thoughts

Quite often there is lots of glitz and glamor when a new product is released. Press conferences are held where everybody looks at the shiny blinky things and oohs and aahs. However, when new software comes out that makes the things you already use every day work better, or allows them to do new things, nobody notices.

In this case the 2.1 release of UCSM takes a product that many people already have (Unified Computing System) and makes it do more. There aren’t going to be press conferences about this, but it is worth taking a closer look at. It will make my life easier, and I hope it does the same for you.

]]>http://www.colinmcnamara.com/simplifying-scale-out-datacenter-design-with-ucs-manager-2-1/feed/35Candy Apple Onions in the Nexus Breakroom – Yes, I am evil http://www.colinmcnamara.com/candy-apple-onions-in-the-office-breakroom-yes-i-am-evil/
http://www.colinmcnamara.com/candy-apple-onions-in-the-office-breakroom-yes-i-am-evil/#commentsWed, 31 Oct 2012 18:25:50 +0000http://www.colinmcnamara.com/?p=1383Have you ever done something completely evil to your coworkers? I just did. I was inspired by this post – http://www.lolriot.com/2011/11/11/lol-pics-28-37-images/onion-candy-apples/ Where a dad tricked his kids into biting into a luscious candy ONION. Needless to say the kids were appalled.

Which one of these is the candy apple onion?

*********UPDATED *********

It WORKED! Matt Jenson took a bite of a slice of Candy Covered Onion. Thankfully he was a great sport about it.

Happy Halloween, everybody. And remember, it is TRICK or TREAT. You don’t always get a treat….

We are in Day 4 of the OpenStack Design and User Summit for Fall 2012. Earlier this week I was bouncing between design sessions (figuring out what blueprints we want to work on next), speaking at my session, and learning in the user sessions.

Today my focus will be on building a swift object storage cluster (actually building it in the lab) with the swiftstack guys. Hopefully the swift specific stick time will get swift object storage more properly cemented into my brain.

If you are in person or streaming, please feel free to join my session at 2:40 on Surviving your first checkin. Right now, it is the second most attended session (second only to Dan’s Quantum discussion) for that time slot.

2:40pm

3:40pm

4:30pm

6:00pm

]]>http://www.colinmcnamara.com/openstack-summit-day-1-where-is-colin/feed/2Cisco announces it’s own OpenStack Distribution – What will the effect be on VMware?http://www.colinmcnamara.com/cisco-announces-its-own-openstack-distribution-what-will-the-effect-be-on-vmware/
http://www.colinmcnamara.com/cisco-announces-its-own-openstack-distribution-what-will-the-effect-be-on-vmware/#commentsMon, 15 Oct 2012 00:26:00 +0000http://www.colinmcnamara.com/?p=1348Cisco Releases their own Edition of OpenStack

Over the past year OpenStack has transformed from an interesting open source project used by some large web service providers to the emerging standard for on-premises open source cloud platforms.

There are many reasons for this rapid transformation from interesting project to cloud standard. In my belief, the primary reason is that the major networking and systems hardware manufacturers have figured out that by supporting the development of OpenStack they now have a way to fight back against Amazon EC2.

Why is Cisco releasing their own Distribution of OpenStack

A couple weeks ago Paul Perez (CTO of Cisco’s Server and Virtualization Business Unit) was speaking to Cisco’s top Data Center partners and revealed that Cisco has saved 40% on licensing costs in their development environments by utilizing OpenStack.

Cisco has realized dramatic cost savings already by utilizing OpenStack in places where they would have had to use VMware in the past. These places are extraordinarily large, and quite complex.

What does this mean for the VMware relationship?

“Will we compete against VMware as it relates to networking? Absolutely,” Chambers told CRN. “And when we compete, we don’t lose.” – John Chambers, CEO Cisco Systems

John Chambers made some strong statements in an interview with CRN last week about how Cisco will compete with VMware. It is no secret that tensions have been high ever since the Nicira acquisition, and they probably won’t relax in the short term.

The reality is however that Cisco and VMware still have much to gain in remaining strong partners. Taking Paul Perez’s statements as an example – while they saved 40% of licensing costs in development, 60% of those dollars were still spent on something like VMware.

The future of our customers' data centers will most likely include multiple hypervisor stacks. Some of these stacks will highlight commercial platforms like VMware, Citrix and Hyper-V, while development, QA and “cloud” environments will most likely feature OpenStack.

It makes complete sense for Cisco to maintain positive relationships with all of these software vendors since they are very likely to install right on top of Cisco’s UCS platform. And last time I checked, everyone at Cisco is quite happy every time a UCS ships out the door.

Who should you watch – Lew Tucker – VP and CTO of Cloud Computing for Cisco

As the CTO of SAVTG, Paul Perez may be the one who talks a lot about OpenStack as it relates to Cisco in the classic press and at events. However, the man behind the curtain that everyone should be paying attention to is Lew Tucker.

Lew, a former CTO at Sun Microsystems, has quietly assembled a powerhouse team at Cisco that has been writing functional code allowing OpenStack to closely leverage key Cisco networking technologies.

Not only have Lew and the team been writing great code, but they have also been taking both public and private leadership roles in the OpenStack Foundation.

Case in point: one of the developers on Lew's team was the one who pushed through my OpenStack Individual Contributor License, allowing me to submit my first piece of code (a storage QoS enhancement) to OpenStack.

What should you do next?

Frankly, you should do what I have been doing for the better part of a year: focus on becoming knowledgeable about OpenStack. The most dangerous thing I have seen over this past year of escalating excitement is people talking, influencing, and making decisions without the proper knowledge.

I recommend getting it running in your internal lab environments, and possibly finding a small corner of production to run it in (only if that corner is appropriate). Figure out how to deploy it, manage it and extend it. The more you poke around, the better prepared you will be.
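If you want the quickest lab on-ramp, DevStack is the usual choice: clone the devstack repository, drop a localrc next to stack.sh, and run it. A minimal localrc sketch (every value here is a placeholder, not a recommendation):

```shell
# localrc for a single-node DevStack lab install (placeholder values)
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=changeme
```

Then run ./stack.sh from the checkout and it will build a single-node cloud you can poke at, break, and rebuild.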

]]>http://www.colinmcnamara.com/cisco-announces-its-own-openstack-distribution-what-will-the-effect-be-on-vmware/feed/1Come join me at Networking Field Day 4http://www.colinmcnamara.com/come-join-me-at-networking-field-day-4/
http://www.colinmcnamara.com/come-join-me-at-networking-field-day-4/#commentsWed, 10 Oct 2012 21:10:49 +0000http://www.colinmcnamara.com/?p=1319This week should be pretty fun for me. I was invited by Stephen Foskett (@sfoskett) to participate as one of 12 independent delegates going through deep dives from the following technology vendors in the networking space.

You can participate too – all these presentations are available to you live in HD

If you are interested in any of these sessions, you can follow along by checking out the livestream found here – http://livestre.am/49BHg

A bit about Networking Field Day –

Gestalt IT’s fourth datacenter networking-focused Field Day event will be held on October 10 through 12 in Silicon Valley! This unique event brings together innovative IT product vendors and independent thought leaders, allowing them to get to know one another. It is a forum for engagement, education, hands-on experience, and feedback. This is Gestalt IT’s third year of Tech Field Day events – learn more at the Tech Field Day site!

Session Details -

Tuesday, October 16 – 2:40pm

Let me tell you a dirty little secret. While OpenStack is a great project, it is extremely complicated for an individual with an engineering/operations focus (vs a programming focus) to get to their first code contribution.

My name is Colin, and I am an engineer. Although I initially got involved with OpenStack in the context of operations, I was quickly drawn into actually contributing code to the project. What I found is that many of the tools and workflows used to contribute to OpenStack are completely foreign to those (like me) with an operations focus.

In this session I will go over the biggest challenges that I faced as a contributing engineer, and review the tools and techniques that I used to get past them. This information will be presented with the goal of arming engineers who are just getting involved with the knowledge and tools necessary to get to their first successful contribution and beyond.

Learning objectives

1. The importance of community – Leveraging the power of the meeting

2. Talking your employer into supporting OpenStack and the CLA

3. Setting up your dev environments – getting beyond Devstack

4. Getting git, using the git repository for those that don’t code for a living

5. Testing your code – what do you mean it doesn’t build?

6. How to give back, and get other people involved in the community.

]]>http://www.colinmcnamara.com/come-to-my-session-on-surviving-your-first-commit-at-openstack-summit/feed/2Breaking the 200 nanosecond barrier with Algo Boost on the Nexus 3548http://www.colinmcnamara.com/breaking-the-200-nanosecond-barrier-with-algo-boost-on-the-nexus-3548/
http://www.colinmcnamara.com/breaking-the-200-nanosecond-barrier-with-algo-boost-on-the-nexus-3548/#commentsWed, 19 Sep 2012 07:20:47 +0000http://www.colinmcnamara.com/?p=1299“We have 600 silicon designers at Cisco, but are not religious about merchant silicon. There are many products at Cisco that utilize it. However as a company we must control our own destiny” – Paul Perez, CTO of Cisco’s Server and Virtualization Technology Group (SAVTG)

There has been a lot of discussion in the past couple of years regarding networking manufacturers using merchant silicon vs developing ASICs in house. While there are valid arguments on both sides of the table, Cisco just made a strong argument for creating custom ASICs and controlling its own destiny.

Nexus 3548 – 190 nanosecond latency in a 48 port 10 gig switch

This argument comes to life with the Cisco Nexus 3548 low latency data center switch. Coming in a 1RU form factor with 48 10 gig SFP+ ports, it resembles the Nexus 5500 line, but the guts are packed with some shiny new bits that make trades execute faster than the competition and HPC clusters sing.

Performance Numbers – RFC2544 testing

The punchline of Cisco’s statement is a 48 port switch that operates (in a non-optimized mode) at 250 nanoseconds across all frame sizes. That is 653 nanoseconds faster than the closest 48 port low latency switch on the market, and 333 nanoseconds faster than the 24 port option. This is roughly 60% faster than the closest current competition in the market. All of these numbers are for the switch operating in normal (full function) mode. There is a second mode, called “Warp” mode, that focuses the resources of the switch on lowering latency down to 190 nanoseconds.

Technology powering the Nexus 3548 – Algo Boost

The key technology behind the ASICs in the 3548 is Algo Boost. Algo Boost adds another dimension to HPC network design, which historically has been defined by latency and bandwidth. This new dimension, which can be critical to HPC system performance is buffer management. Not only does the Nexus 3548 have very large (18MB) buffers, but it also has intelligent buffer management features that allow for monitoring and reporting of utilization and more importantly buffer congestion across the switch, increasing application performance.

Summing it up

Cisco is making many bets in the ASIC development space. The most recent bet, with the Nexus 3548, allows Cisco to present an option to the market that is 60% faster than any of the current competition, and is likely to retain a speed advantage even against the upcoming Alta chipset from Intel. This, combined with the proven performance of NX-OS, should result in Cisco regaining a leadership position in the HPC and algorithmic trading markets.

]]>http://www.colinmcnamara.com/breaking-the-200-nanosecond-barrier-with-algo-boost-on-the-nexus-3548/feed/0VMware approved as a Gold member of OpenStack Foundationhttp://www.colinmcnamara.com/vmware-approved-as-a-gold-member-of-openstack/
http://www.colinmcnamara.com/vmware-approved-as-a-gold-member-of-openstack/#commentsSat, 08 Sep 2012 17:53:17 +0000http://www.colinmcnamara.com/?p=1287The last couple of weeks have been interesting for how VMware will evolve its relationship with OpenStack. Many people’s eyebrows were raised when they heard OpenStack mentioned during the keynote at VMworld. This made the discussion about open source virtualization, and how it impacts VMware, top of mind for many.

One of the side discussions that came about was how the OpenStack board had put VMware’s application to become a gold member on hold. Many people quickly jumped to thoughts of politics and scheming, though from what I hear the main reason was simply that everyone was busy writing up by-laws and getting ready for the Folsom release.

Now it is time to put the drama to rest. VMware, as well as Intel and NEC, were all approved as gold members on Friday by the board.

Welcome to the OpenStack Foundation, VMware. We all look forward to seeing the contributions from Nicira as well as the rest of the company.
]]>http://www.colinmcnamara.com/vmware-approved-as-a-gold-member-of-openstack/feed/0Scale Computing HC3 – Hyper-Converged Server, Storage, Virtualizationhttp://www.colinmcnamara.com/scale-computing-hc3-hyper-converged-server-storage-virtualization/
http://www.colinmcnamara.com/scale-computing-hc3-hyper-converged-server-storage-virtualization/#commentsMon, 03 Sep 2012 21:21:44 +0000http://www.colinmcnamara.com/?p=1275Companies using Open Source virtualization in their unified products targeting the mid market

I have been saying that we will start seeing Open Source virtualization technologies in two different areas of the market (signs the world is changing): web service providers and small to medium business (SMB).

The web service providers long ago chose open source hypervisors for their platforms (Amazon = Xen, many others KVM, etc.). Where Open Source virtualization hasn’t shown up yet is small business (250 users and below).

Scale Computing started as a storage company with scale out unified storage. They have been shipping a consumer friendly implementation of GPFS (a horizontally scalable file system that is notoriously hard to configure) since 2009, nipping at the heels of Dell and HP’s low end product lines with a competitive product at a fraction of the price.

Now they are lowering the technical barrier to entry for open source virtualization by putting a pretty, accessible wrapper around open source virtualization and scale out storage software with their new hyper-converged virtualization platform, HC3 (hypercube, for the nerds out there).

What is a hyper-converged platform

Quite simply, a hyper-converged platform is a server with storage, virtualization and networking software all rolled up into one. Think of a FlexPod or VSPEX all squished into one box.

Open Source Virtualization used in Scale HC3

Guess what: you can run a virtual machine on many technologies, including but not limited to VMware. One widely used alternative is the Kernel-based Virtual Machine (KVM). Here is the kicker: it is Open Source, which means it is free. Being Open Source also generally means it is hard to configure for the common admin. That is where the next piece comes in.
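To give a feel for that configuration burden: defining even a minimal KVM guest by hand with libvirt means writing domain XML like the abbreviated fragment below. The VM name, disk path and bridge name are illustrative placeholders; this is exactly the sort of detail a product like HC3 hides behind its GUI.

```xml
<!-- Minimal, abbreviated libvirt domain definition for a KVM guest.
     Names and paths are illustrative placeholders. -->
<domain type='kvm'>
  <name>smb-windows-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/smb-windows-vm.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
    </interface>
  </devices>
</domain>
```

And that is before you have created the disk image, set up the network bridge, or thought about backups and replication.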

An actual usable portal in front of Open Source virtualization

This is the missing piece in most Open Source or free server and virtualization implementations. KVM is awesome, GPFS works well. Both are extremely hard to implement for the average do-everything IT guy (desktop, phone, network, servers) found in small companies.

The portal in HC3 puts a nice GUI wrapper on this technology and makes it consumable in a click-click-next fashion. Is it vSphere? Absolutely not. What it is, is a way to handle simple things like running a Windows VM, taking a backup, or replicating your data somewhere else.

Why am I talking about Scale?

Full disclosure: I met their CEO and CTO at an industry experts discussion a while back. The discussion was wide ranging, but generally focused on the evolving cloud computing and virtualization markets. Even though they run a software/hardware company, and I manage a line of business at a systems integrator, many of our views lined up.

One of the views we share is that small business will start consuming hyper-converged (server, storage, network all in one box) virtualization solutions based on open source or free technology. The key words to pay attention to here are open source or free.

Fast forward to the week of VMworld. I was invited by Stephen Foskett to participate in the #techfieldday round table at VMworld. Lo and behold, there were the guys from Scale, ready and armed to have a nuts and bolts technical discussion of their product internals and go to market. Needless to say, I was able to get my nerd on.

The reason I think Scale is worth talking about is that they have already penetrated the low end of the commercial market (SMB) with their scale out storage product, and are now shipping a product that takes free or open source technology and packages it in a way that the common man can implement a small infrastructure on. Not only do I think the direction they are taking their company is right, they are shipping products to make it happen.

Who should be worried about Scale?

Dell has made a business out of making low cost servers. In the past few years companies like SuperMicro and SGI (Rackables) have taken that title as their own. Dell has evolved by acquiring low cost storage and networking solutions and selling them into the SMB market.

I think the fact that Dell acquired (rather than developed) its low cost storage offerings will paint it into a corner as Scale and companies like it compete in the field with a solution that is equivalent in function at a fraction of the cost. Eventually, as these hyper-converged solutions gain acceptance, other manufacturers like Cisco and HP may start to see a bit of competition as well.

]]>http://www.colinmcnamara.com/scale-computing-hc3-hyper-converged-server-storage-virtualization/feed/0My name is Colin, and I support OpenStackhttp://www.colinmcnamara.com/my-name-is-colin-and-i-support-openstack/
http://www.colinmcnamara.com/my-name-is-colin-and-i-support-openstack/#commentsTue, 21 Aug 2012 04:08:37 +0000http://www.colinmcnamara.com/?p=1241Since April of this year (2012) I have been living a secret life. During the day I have been living my normal life, running the Data Center practice at a national VAR driving sales and integration of converged infrastructure built on Cisco, VMware, Citrix, EMC and NetApp.

My nights and weekends, however, have been spent doing something completely different. It started with casual reading and exploration. Brand new VMs running odd little hypervisors started sprouting up in my labs. Next thing you know, I was spending late nights at strange people’s offices hacking away at something that I think represents a major shift in our industry.

My name is Colin, and I am working on OpenStack

Shift happens…

The major shift I am talking about has been called many things, including “cloud computing”, possibly the single most ill-defined and overused term of all time. What I am really talking about is CIOs voting with their wallets. They are voting to move significant workloads to low cost, fractionally consumed Infrastructure as a Service (IaaS) offerings that most people call “Cloud”.

Not all “Cloud / IaaS” offerings are the same, but many of them share the following features. They generally utilize open source hypervisors such as Xen or KVM. Many of them utilize open source VM and object storage platforms such as Gluster and Swift. Almost all public clouds are extensive users of systems automation (DevOps), and expose SOAP or RESTful APIs (most commonly Amazon’s EC2 API). All of this significantly lowers the capital and operational expense of operating a cloud environment.
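As a concrete taste of what “exposing an API” means: Amazon’s EC2 Query API drives everything through HTTP requests built from simple name/value parameters. The sketch below assembles the query string for a RunInstances call with the standard library only. The AMI ID and API version string are placeholders, and a real request also carries authentication/signature parameters, which are omitted here.

```python
from urllib.parse import urlencode

# Parameters for an EC2 Query API RunInstances call. The AMI ID and
# Version are placeholders; real requests also carry AWS auth/signature
# parameters, omitted from this sketch.
params = {
    "Action": "RunInstances",
    "ImageId": "ami-12345678",
    "InstanceType": "m1.small",
    "MinCount": "1",
    "MaxCount": "1",
    "Version": "2012-06-01",
}
query = urlencode(sorted(params.items()))
url = "https://ec2.amazonaws.com/?" + query
print(url)
```

When launching a server is one HTTP call, automation tooling can treat infrastructure as just another programmable resource, which is precisely what makes these clouds cheap to operate.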

Why are most Public Clouds (IaaS) cheaper than on-premise Clouds (IaaS)?

These public IaaS offerings share one significant trend when compared to the IaaS platforms commonly deployed on customer premises today: almost exclusively, you will find Open Source (free) software used instead of commercial software and hardware packages. At an enterprise customer it is not uncommon to find up to 50% of the cost of a server spent on virtualization software licensing.

Not only do current on-premise IaaS solutions carry a high cost for virtualization software, it is also normal for 300% (3x) the price of the servers to be spent on shared storage. This is normally done to accomplish the goals of Live Migration (or vMotion) as well as supporting a high number of IOPS.
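Putting the two ratios above together (roughly 50% of server cost on hypervisor licensing, roughly 3x server cost on shared storage) gives a quick sketch of the capital stack. The dollar figure is illustrative; only the ratios come from the text.

```python
def iaas_capex(server_cost, license_pct=0.50, storage_multiple=3.0):
    """Rough on-premise IaaS capital cost using the ratios cited above."""
    licensing = server_cost * license_pct       # hypervisor licensing
    storage = server_cost * storage_multiple    # shared storage for vMotion / IOPS
    return server_cost + licensing + storage

# For every $100k of servers, the traditional stack runs ~$450k all-in;
# dropping the licensing line (open source hypervisors) saves $50k of that.
traditional = iaas_capex(100_000)
open_source = iaas_capex(100_000, license_pct=0.0)
print(traditional, open_source)
```

The bigger lever is clearly the storage multiple, which is why the public clouds also replace shared arrays with scale-out open source storage.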

These costs are unnecessarily high, but the Open Source tools necessary to lower them are not accessible by most customers

This is why I am working on OpenStack. The goal of OpenStack is to provide a high quality, Open Source cloud operating system to the world. It lowers the barrier to entry for using these open source technologies, allowing customers to create private clouds that are price competitive with the “Amazons” of the world. I not only believe in that goal, I am actively organizing teams and contributing code to make it a reality.

Herb Kelleher, co-founder of Southwest Airlines (one of my favorite companies), made a difference in this world by executing a vision of “Democratizing the skies”. The people contributing to OpenStack are executing on a similar vision: “Democratizing access to the Cloud”. I have been doing this privately up till now, but consider this my public statement.

My name is Colin, and I support OpenStack

]]>http://www.colinmcnamara.com/my-name-is-colin-and-i-support-openstack/feed/0VMware’s Acquisition of Nicira – VMware confirming the hypervisor is deadhttp://www.colinmcnamara.com/vmwares-acquisition-of-nicira-vmware-confirming-the-hypervisor-is-dead/
http://www.colinmcnamara.com/vmwares-acquisition-of-nicira-vmware-confirming-the-hypervisor-is-dead/#commentsTue, 24 Jul 2012 00:19:01 +0000http://www.colinmcnamara.com/?p=1225VMware recently announced its intent to acquire Nicira for 1.2 billion dollars. This acquisition is interesting because it clearly signals VMware abandoning the hypervisor as a differentiating platform.

Yes, the hypervisor is a commodity now, whether it is ESXi, Hyper-V, Xen or KVM. On all of these platforms you can virtualize multiple operating systems on one CPU, share memory, and do live migrations between hosts.

Not only have the commonly used features listed above become available across many platforms, in some cases open source technologies such as KVM can be more extensible than VMware’s product.

What is VMware thinking?

For the past couple of years, VMware has been investing in technologies outside of the hypervisor. These include Cloud Automation, Virtual Desktop, Application Security and Control, and Operations and Management.

These investments are smart, because they transform the conversation from a discussion about hypervisors (a hard one to win on its own merits) to a discussion about the value add of all of the products that VMware offers.

Bundling of products and services is business 101 when your product becomes commoditized. Cisco does it by integrating all aspects of their product lines, EMC does it by offering a range of integrated Tier 1-4 storage and backup solutions, and VMware does it by offering significantly more than a hypervisor.

I believe that this is the first step in VMware integrating their services into NON-ESXi hypervisors. Here is why –

A few key concepts have emerged to be true in the virtualization market

2. Customers are willing to utilize “free” hypervisors such as Hyper-V, KVM or Xen for at minimum their second or third tier applications. In some cases, such as cloud service providers, they are using these (specifically KVM and Xen) for their tier one virtualization.

3. The Cloud Service provider market is booming. It is not possible to compete on cost in that market while running on anything but Open Source hypervisors.

What the acquisition of Nicira gives VMware is an integration point between all of the rich applications that have been developed for ESXi and the Hyper-V, Xen and KVM (OpenStack) platforms.

Nicira already has plugins for each of those platforms, providing not only a richly developed code base, but a conduit for linking disparate clouds together.

Does this signal an end to the honeymoon between VMware and Cisco?

The partnership between VMware and Cisco has been one based on mutual benefit, not mutual exclusivity.

The fact of the matter is that Cisco has been working with multiple hypervisors for a long time now. ESXi, Hyper-V, Xen and KVM are all supported platforms on their Unified Computing System (UCS), just the same as VMware was working with HP and Dell long before Cisco started making servers.

What this shows is that Cisco is presenting itself as a network and systems platform for virtualization software manufacturers to leverage. Nicira has networking functions, but then so does the distributed vSwitch. At the end of the day, logical switching still needs physical devices. This is still a place where Cisco can and will play: as an arms dealer to software manufacturers.

Summing it all up

The hypervisor is dead; for VMware to stay relevant it has to continue innovating higher up the stack. The acquisition of Nicira gives VMware a play in the service provider market and lets it do just that.

It also provides an avenue for VMware to port its higher level applications to run on multiple hypervisors. This has the potential to slow the flow of customers to free platforms like Hyper-V and OpenStack.

While this isn’t the best thing for the Cisco relationship, the reality of the matter is that VMware and Cisco have an “Open” relationship where both companies work with many others. I don’t see this relationship changing in the long run.

]]>http://www.colinmcnamara.com/vmwares-acquisition-of-nicira-vmware-confirming-the-hypervisor-is-dead/feed/3Netflix Pinterest Instagram outage is not Amazon’s faulthttp://www.colinmcnamara.com/netflix-pinterest-instagram-outage-is-not-amazons-fault/
http://www.colinmcnamara.com/netflix-pinterest-instagram-outage-is-not-amazons-fault/#commentsSat, 30 Jun 2012 17:59:19 +0000http://www.colinmcnamara.com/?p=1217Amazon had an extended outage last night in their Virginia data center. The outage was a result of power interruptions due to storms in the mid-Atlantic area.

This outage had a very visible impact on public applications such as Netflix, Pinterest and Instagram. The first reaction of many on the Internet, of course, is to point out the flaws in a cloud computing model, specifically relying on Amazon EC2 and S3.

The funny thing is that these applications would have gone down whether they were hosted with Amazon, or delivered from a private infrastructure. The reason they went down is because key infrastructure needed to deliver these apps was all running in one place. Amazon would call this place an “availability zone”, in a private cloud you would call this a data center.

The key flaw here is not that these companies are utilizing a cloud provider. It is that where and how these applications are delivered is abstracted from development teams. In a private cloud offering, operations teams and sysadmins normally take into account segmentation of fault domains and do their best to ensure that a single outage will not take down a business critical application.

Nowadays, more and more application teams are requesting resources through an API. The same checks and balances that existed in the past, where a third party thought about how the system survives an outage, do not always exist. The result is systems that can scale programmatically to millions of users in minutes, but cannot survive a simple power outage in one of the many data centers in which the applications reside.

What is the lesson here? The lesson is not to avoid deploying apps in public clouds. The lesson is that the same application architecture flaws that caused Netflix, Pinterest and Instagram to go down are becoming prevalent in privately managed infrastructure (private clouds) also.

It is very easy to let your development teams’ interaction with the IT infrastructure provider devolve completely into interaction with an API as instances are created. What is needed is training and education of development teams on high availability application design, and/or the integration of systems architects into these teams, to ensure the applications will survive simple infrastructure outages like the one that hit Amazon yesterday.
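One simple discipline behind that advice, spreading instances across fault domains instead of letting them pile up in one, can even be encoded at the point where resources are requested. A minimal sketch (the zone names are illustrative placeholders):

```python
from itertools import cycle

def spread_across_zones(instance_names, zones):
    """Round-robin instances across availability zones so that a single
    zone (or data center) outage cannot take down every copy of the app."""
    zone_cycle = cycle(zones)
    return {name: next(zone_cycle) for name in instance_names}

placement = spread_across_zones(
    ["web-1", "web-2", "web-3", "web-4"],
    ["us-east-1a", "us-east-1b"],  # illustrative zone names
)
print(placement)  # alternates instances between the two zones
```

Netflix, Pinterest and Instagram did not go down because they used Amazon; they went down because too many critical pieces landed in one availability zone, which is exactly what a rule like this prevents.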

]]>http://www.colinmcnamara.com/netflix-pinterest-instagram-outage-is-not-amazons-fault/feed/1Three incredibly handsome men from Cisco, Nexus and EMC talk about VSPEXhttp://www.colinmcnamara.com/three-incredibly-handsome-men-from-cisco-nexus-and-emc-talk-about-vspex/
http://www.colinmcnamara.com/three-incredibly-handsome-men-from-cisco-nexus-and-emc-talk-about-vspex/#commentsFri, 29 Jun 2012 04:33:46 +0000http://www.colinmcnamara.com/?p=1208It is not often that you can get three great looking guys to sit down together and talk without the ladies hurling themselves towards them. Luckily this was filmed at Cisco Live, where there are practically none present. Without this distraction Josh Atwell, Fred Nix and I (Colin McNamara) were able to sit down and have a short discussion about Cisco and EMC’s joint reference architecture, VSPEX.

In all seriousness it was a blast to spend time chatting with these guys, and it highlights the value that VSPEX brings to our customers. Just like on the couch, VSPEX can only be delivered when Cisco, EMC and their mutual channel partners (in this case Nexus) jointly deliver a flexible, reference architecture based solution.

So what is this Open Networking Environment? Is it just another marketecture slideware play, or is there more meat to it? Lucky for us, there is actually some meat to this release. So much so that two of the three areas can actually be seen working deep down in the Cisco booths at the World of Solutions.

The first, onePK, is something that I am personally very happy to see (Rick Davis and I proposed this to Cisco’s Routing Group during the Partner Technology Advisory Board in 2008). onePK is a unified SDK (Software Development Kit) across IOS, IOS-XR and NX-OS. What this will allow is simplified deployment of configurations, changes, and operations and maintenance flows across a diverse suite of network products.

Personally, this is a close tie with the 1000v on OpenStack as my favorite part of this release. Over the past few years I have built some of the largest data centers on earth. Automating the deployments for these installations requires entirely too much Expect scripting, and still leaves a lot of custom coding to push and pull changes and statistics out of running equipment. onePK should drastically simplify this process, and allow me personally to consolidate a large amount of deployment code into one simplified interface standard.

Cisco’s OpenFlow Controller and OpenFlow Agent

It is official: Cisco is creating an OpenFlow controller. What is even more interesting, I heard a rumor that if you ask nicely and go deep into the Cisco booth at the World of Solutions you may actually get a sneak peek at one in action.

Cisco Virtual Overlay Solutions – 1000v on everything

This gets really interesting. One thing many people may not have noticed is that Cisco has been contributing code like crazy to the Quantum networking stack for OpenStack. Anytime I see a manufacturer committing that amount of code, you know something is up.

Well, now it is public, official, and we can all talk about it. Cisco is taking the Nexus 1000v and making it WAAAAY more useful. They are developing hooks for Xen, KVM and Hyper-V, as well as continuing VMware support. That isn’t even the cool part, though. What they are doing is utilizing the network hypervisor to control and redirect flows across many provider environments.

In the end what this will allow is the extension and linkage of cloud environments across disparate network and virtualization vendors. In short, linking your clouds through software only.

My thoughts

This release is a big milestone for Cisco. The Open Networking “movement” is significant to Cisco in a similar way that Linux was significant to Sun. Sun went out of business (acquired by Oracle) because they didn’t embrace “open” movements and layer their unique value add on top. I believe what Cisco announced today is a very smart move, and will allow Cisco to stay meaningful as networking, systems and clouds take on a more Open Source / Stack / Flow flavor.

Want to learn more?

OpenStack Quantum

http://wiki.openstack.org/Quantum
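For a feel of what Quantum exposes: networks are created by POSTing JSON to its v2.0 REST API. The sketch below just assembles the request body and endpoint with the standard library; the hostname and network name are placeholders, and a real call also carries an X-Auth-Token header from Keystone.

```python
import json

# Request body for creating a network via Quantum's v2.0 REST API
# (9696 is the service's default port). Endpoint host and network name
# are placeholders; a real call needs a Keystone auth token header.
endpoint = "http://quantum.example.com:9696/v2.0/networks"
body = json.dumps({
    "network": {
        "name": "demo-net",
        "admin_state_up": True,
    }
})
print(endpoint)
print(body)
```

That one API surface is what vendor plugins (Cisco's included) sit behind, which is how the same network request can be realized on very different switching back ends.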

Omar Sultan (Cisco) should be posting interesting content on the Cisco DC Blog

]]>http://www.colinmcnamara.com/cisco-open-networking-environment-onepk-openflow-and-openstack/feed/4Cisco Live Charity Fun Run – Benefiting Wounded Warriors + Product Give Aways!http://www.colinmcnamara.com/cisco-live/
http://www.colinmcnamara.com/cisco-live/#commentsFri, 08 Jun 2012 17:00:43 +0000http://www.colinmcnamara.com/?p=1166Are you attending Cisco Live? Or do you live in the San Diego area and want to join us?

We are getting together for an easy 5k (3.1 mile) run on the San Diego Water Front. We would love for you to join us at 8:00 am on the morning of July 12th at Cisco Live in San Diego.

Once you have liked and commented (so we can track who is who), you can gain additional entries by donating here.

Each dollar donation is equal to an additional entry.

There will be a random drawing at 5 PM on June 14th, 2012. More entries increase your chances of winning! GOOD LUCK!

]]>http://www.colinmcnamara.com/cisco-live/feed/0DON’T Thank the veteran you know todayhttp://www.colinmcnamara.com/dont-thank-the-veteran-you-know-today/
http://www.colinmcnamara.com/dont-thank-the-veteran-you-know-today/#commentsMon, 28 May 2012 19:22:21 +0000http://www.colinmcnamara.com/?p=1151Thanking the veteran you know is a nice gesture, but it misses the mark. The veteran you know has most likely found gainful employment. He has a stable home life, a wife and children who love him. He isn’t worried about next month’s rent check bouncing, or about skipping meals because there isn’t enough money for food for the last couple days of the pay period.

A former Marine’s Perspective

When someone thanks me for my service it is appreciated, but unnecessary. Out of an 8 year contract, I spent 2 years and 8 months on active duty. This included being activated after 9/11 and the last time we decided to go play in the Iraqi sandbox. It was horribly hard for me and my family, and I will carry scars and shrapnel with me for the rest of my life. However, I had sworn an oath to serve, and that is what I did.

Now however life is much different. While I carry forward many USMC traits, the hardships that I faced back then are far and distant memories. When someone “thanks me for my service” I appreciate the gesture, however it misses the mark by many years.

When I really needed someone to thank me for my service

When I got back from my initial training in the mid 90’s, I was so hungry I would flirt with the girls at Taco Bell to get free food, and couldn’t even pay my rent. I needed thanks and support then.

When I told the owner of the reseller I worked at after 9/11 that it was likely I would be activated, he laid me off for my honesty. (Yes, I know that is illegal; however, suing your employer never works out well.) Sadly this situation happened to many of my friends in the reserves. We all went from six figure jobs in Silicon Valley to nothing.

When the landlords tried to evict my wife from our one bedroom apartment while I was deployed, I needed the thanks and support.

When Bush hung a banner saying “Mission Accomplished,” and six months later I got to come home to no job, mounting bills, and a wife and small child looking to me to figure out what to do, I could have really used the thanks and support.

A few people giving thanks and support when it was needed made a HUGE difference.

When I got back from my first activation in ’97 I was starving, about to be homeless, and down to my last straw. I was walking to Taco Bell (where I used to flirt with the girls to get free food) to put in an application so I wouldn’t get evicted from my apartment. As I was walking up to the store, a car pulled into the drive-through, and the driver recognized me. We had worked together in high school; he was in the Army National Guard and I had joined the Marines. I told him about my situation, and it turned out he managed the systems build team at a local Value Added Reseller. Two days later I had a job on the midnight shift building PCs. Him thanking me for my service resulted in me not living on the streets. More so, without that CHANCE to prove myself I would not be in the technology field today.

The second person who gave thanks and support when it was drastically needed is Ed Chen. I had done a bit of consulting with his group at Openwave Systems over the years before my activation for Iraq. When the owner of the VAR I was working for laid me off just prior to activation, Ed was a great friend and kept his ear open for a permanent opening at Openwave during the year I was gone. Once I was back and adjusted to the real world, he had teed up an opening with his boss for me running the VOIP systems worldwide. His thanks and support made a huge difference in my and my family’s life.

What you should do to show thanks

As we wind down our military from the wars of the past decade, there are a large number of veterans re-integrating themselves into civilian life. For a hardened military veteran this is possibly one of the hardest transitions to make successfully. When you see that vet living on the street, that is a person who didn’t make it.

The single most valuable thing you can do to say thanks to a veteran is to give one a chance. We all have openings in our mail rooms, pulling cables, racking servers and switches. These entry level jobs are a perfect starting point for new veterans integrating back into the civilian world. Given the chance, many of them will work extra hard to learn and grow, and pay you back ten-fold on your investment. Someone gave thanks and gave me a chance. I encourage you to do the same and thank that veteran that you don’t know.

What is VSPEX? There has been a bit of speculation floating around the industry the past couple of weeks regarding what EMC will be releasing. I have heard rumors ranging from EMC creating a new super version of VPLEX, to EMC creating a product wholly and completely competitive with Vblock (mentioned here by CRN last week, talking about a VMAX based VPLEX). A few people, notably the folks at The Register (VSPEX article here), have come quite a bit closer to the truth by calling out a new reference architecture positioned to address the mid-market in a similar fashion to NetApp’s FlexPod offering.

The truth about VSPEX

VSPEX is a pre-validated set of reference architectures currently focused on server virtualization (cloud computing, if you want to church it up) as well as end-user computing (VDI/VXI/application presentation solutions). It fills the gap in EMC’s solution offerings between design-and-build-your-own solutions and their converged infrastructure product (VCE Vblock).

The idea behind VSPEX is to provide customers a complete virtualization solution, while allowing for flexibility in the selection of the compute, virtualization and data protection (backup) technologies that best fit their needs and current environments.

VSPEX is designed to allow for that flexibility of choice while removing risk and lowering deployment times by delivering pre-validated, tested reference architectures, as well as logical build guides similar to what is used to deploy and manage Vblock. The end result is a solution that scales to known capacities and can be deployed in a rapid, reliable manner with build guides created by EMC and its ecosystem partners.

What components make up VSPEX

Networking – VSPEX will be featuring Cisco’s Nexus 5548 Unified Port switch as the networking component. This is the standard 10 gig switch used in Cisco’s other Validated Designs and is the standard aggregation layer platform for most Cisco based data centers.

Compute – The Cisco Unified Computing System will be featured as the compute platform for VSPEX. For the mid market solutions that feature VNX storage, Cisco B-Series blade chassis will be recommended. For SMB focused solutions featuring VNXe storage, Cisco’s C-Series servers will be featured.

Server Virtualization – Validated and tested designs for both VMware vSphere and Microsoft Hyper-V will be released. This is very interesting, as both NetApp and EMC have now included Hyper-V as a supported, scalable hypervisor. I personally have been a huge fan of VMware over the years, but signs like this show that Hyper-V is increasingly penetrating both EMC’s and NetApp’s customer bases.

End-User Compute (VDI) – Two solutions will be initially released: VMware View and Citrix XenDesktop. Both of these products are featured in Cisco’s VXI architecture and will now be featured in VSPEX. Is this the year of VDI? I would say yes.

Data Protection – The often forgotten, but always needed, backup and recovery solution tied into any virtualization project will be EMC Avamar and/or Data Domain.

How flexible is VSPEX

VSPEX is surprisingly flexible. The components specified in the architecture are explicitly called out as starting points to design around. You are allowed to substitute components as long as you stay within the sizing parameters and guidelines specified in each architecture document.

Personally, this solves a big challenge that comes up when selling a Vblock solution. Vblocks tend to be very rigid in what you are allowed to do with them, and in what can be installed and how. This presents a challenge when a customer wants the speed of installation and single-number support of a Vblock, but has requirements that drastically diverge from what Vblocks were designed for. With VSPEX these customers can still have the safety and speed of delivery of a tested and validated solution, while benefiting from the flexibility of a reference architecture based solution.

Is VSPEX competitive with Vblock

There is technically a bit of overlap in the solutions; however, the two offerings are really targeted at completely different customers and workloads. While there are some smaller Vblocks out there, the majority of Vblocks sold have been focused towards higher end solutions at enterprise and service provider customers.

VSPEX is targeted to fill the gap that Vblocks tend to miss: the mid-market and SMB space. That mid-market space is the same space where NetApp has had so much success with the FlexPod offering. I expect VSPEX to give these mid-market and SMB customers options that were not available to them from a Vblock solution.

What is my verdict on VSPEX

EMC has brought together a strong reference architecture offering for the mid market with VSPEX. They are moving the ball one step forward by providing many of the benefits of Vblock in a flexible reference architecture delivered to mid-market customers. This is needed by EMC, Cisco and their channel partners, and I fully expect it to be embraced by their joint customers.

What is VSPEX? There has been a bit of speculation floating around the industry the past couple of weeks regarding what EMC will be releasing. I have heard rumors ranging from the extreme of EMC creating a new super version of VPLEX, to EMC creating a product wholly and completely competitive with Vblock (mentioned here by CRN last week in a piece about a VMAX based VPLEX). A few people, notably the folks at The Register (VSPEX article here), have come quite a bit closer to the truth by calling out a new reference architecture positioned to address the mid-market in a similar fashion to NetApp’s FlexPod offering.

The truth about VSPEX

VSPEX is a pre-validated set of reference architectures currently focused around server virtualization (cloud computing if you want to church it up) as well as end-user computing (VDI/VXI/application presentation solutions). It fills the gap in EMC’s solution offerings that exists between design-and-build-your-own solutions and their converged infrastructure product (VCE Vblock).

The idea behind VSPEX is to provide customers a complete virtualization solution, while allowing for flexibility in the selection of the compute, virtualization and data protection (backup) technologies that best fit their needs and current environments.

VSPEX is designed to allow for that flexibility of choice while removing risk and lowering deployment times by delivering pre-validated, tested reference architectures, as well as logical build guides similar to those used to deploy and manage Vblock. The end result is a solution that scales to known capacities and can be deployed in a rapid, reliable manner with build guides created by EMC and its ecosystem partners.

What components create VSPEX

Networking – VSPEX will be featuring Cisco’s Nexus 5548 Unified Port switch as the networking component. This is the standard 10 gig switch used in Cisco’s other Validated Designs, and is the standard aggregation-layer platform for most Cisco-based data centers.

Compute – The Cisco Unified Computing System will be featured as the compute platform for VSPEX. For the mid-market solutions that feature VNX storage, Cisco B-Series blade servers will be recommended. For SMB-focused solutions featuring VNXe storage, Cisco’s C-Series servers will be featured.

Server Virtualization – Both VMware vSphere and Microsoft Hyper-V based validated and tested designs will be released. This is very interesting, as both NetApp and EMC have now included Hyper-V as a supported, scalable hypervisor. I personally have been a huge fan of VMware over the years; however, I think signs like this show that Hyper-V is increasingly penetrating both EMC’s and NetApp’s customer bases.

End-User Compute (VDI) – Two solutions will initially be released: VMware View and Citrix XenDesktop. Both of these products are featured in Cisco’s VXI architecture and will now be featured in VSPEX. Is this the year of VDI? I would say yes.

Data Protection – The often forgotten, but always needed, backup and recovery solution tied into any virtualization project will be EMC Avamar and/or Data Domain.

How flexible is VSPEX

VSPEX is surprisingly flexible. The components specified in the architecture are explicitly called out as starting points to design around. You are allowed to substitute components as long as you stay within the sizing parameters and guidelines specified in each architecture document.

Personally, I think this solves a big challenge that comes up when selling a Vblock solution. Vblocks tend to be very rigid in what you are allowed to do with them, and in what can be installed and how. This presents a challenge when a customer wants the speed of installation and single-number support of a Vblock, but has requirements that drastically diverge from what Vblocks were designed for. With VSPEX these customers can still have the safety and speed of delivery of a tested and validated solution, while benefiting from the flexibility of a reference architecture based solution.

Is VSPEX competitive with Vblock

There is technically a bit of overlap in the solutions; however, the two offerings are really targeted at completely different customers and workloads. While there are some smaller Vblocks out there, the majority of Vblocks sold have been focused on higher-end solutions at Enterprise and Service Provider customers.

VSPEX is targeted to fill the gap that Vblocks tend to miss, which is the Mid-Market and SMB space. That Mid-Market space is the same space where NetApp has had so much success with the FlexPod offering. I expect VSPEX to provide these Mid-Market and SMB customers options that were not available to them from Vblock solutions.

What is my verdict on VSPEX

EMC has brought together a strong reference architecture offering for the mid market with VSPEX. They are moving the ball one step forward by providing many of the benefits of Vblock in a flexible reference architecture delivered to Mid-Market customers. This is needed by EMC, Cisco and its channel partners, and I fully expect it to be embraced by their joint customers.

I woke up shaved bald and with one eyebrow… (Thu, 05 Apr 2012)

And it was all for a great cause.

This is the second year that I have been a “shavee” for the St. Baldrick’s Foundation. This volunteer-driven charity funds more in childhood cancer research grants than any organization except the U.S. government.

This year I was able to not only involve myself and my friends in this charity, but I was also supported by the owners of Nexus who sponsored the bar tab at the Irvine St. Baldrick’s event.

I want to not only thank everyone who supported this great charity, but of course invite anyone else who is passionate about supporting cancer research to join in the fight and donate through my participant page linked below.

I used to be fat – How I beat the bulge (Fri, 13 Jan 2012)

I used to be obese. I tipped the scales at 290 pounds, and could barely walk up a flight of stairs without getting short of breath.

The picture on the left was taken back then. I was staying in a hotel 110 nights a year and on a flight at least twice a week. Living the jetset life was bringing my life to a quick end.

While I can talk all day about the journey from fat to skinny(er) I’d like to take the chance to share a couple key tools that helped me shed the pounds.

Getting back on the bike

I didn’t get to be 290 pounds in one day. I gained it one day at a time, while sitting at a desk typing on a computer (while eating something incredibly yummy). The human body is an amazing machine that reacts well to physical activity.

I personally found that getting back onto a bicycle provided a way for me to burn some calories while doing something that is very enjoyable. It also provided a physical activity that wasn’t as hard on my joints (which at 290 is a big risk) as running.

I ended up dragging one of my old racing bikes out of the garage; however, that is not necessary for everybody. You burn just as many calories on a 150 dollar Walmart bike as you do riding a 5000 dollar custom bike. What is important is that you are out being active, not what you are being active on.

What can’t be measured can’t be improved

Nobody wants to hear that you need to track your calories and weight to lose weight. But here it is – you have to track your calories and weight. Sorry, I know it sucks, but you have to do it. Losing weight is simple math. Take in fewer calories than you need each day and you lose. Eat more than you need and you gain. How do you find out what your magic number of calories per day is? It is simple. You have what is called your Basal Metabolic Rate (BMR), plus the calories burned during daily activity. Put those together and that is your calorie budget for the day. Now you just have to find some tool to track it.
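If you like seeing the math spelled out, here is a minimal sketch in Python. It assumes the Mifflin-St Jeor equation for estimating BMR (one common formula among several), and every number in it is purely illustrative, not advice:

```python
def bmr_mifflin_st_jeor(weight_kg, height_cm, age, male=True):
    """Estimate Basal Metabolic Rate (kcal/day) via the Mifflin-St Jeor equation."""
    s = 5 if male else -161
    return 10 * weight_kg + 6.25 * height_cm - 5 * age + s

def daily_calorie_budget(bmr, activity_kcal):
    """BMR plus calories burned through daily activity = the day's budget."""
    return bmr + activity_kcal

# Illustrative example: a 290 lb (~131.5 kg) rider, ~183 cm tall, age 30
bmr = bmr_mifflin_st_jeor(131.5, 183, 30)
budget = daily_calorie_budget(bmr, 500)  # plus a 500 kcal bike ride
deficit = budget - 2200                  # eating 2200 kcal that day
print(round(bmr), round(budget), round(deficit))  # → 2314 2814 614
```

Eat under the budget and the deficit is positive; do that consistently and the scale moves.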

I have used a tool called the DailyBurn tracker for a couple of years now. There is a free option that allows you to track via a webpage, and also a low-cost iPhone app that allows you to look up foods and log them throughout the day. This allows me to keep an eye on my food intake, and make sure that calories aren’t sneaking up on me.

After measuring the calories you put into yourself, it is important to measure the results. When I started losing weight I just used a spreadsheet to track my progress. As time moved on I got introduced to the Withings scale.

I have to say, this scale is awesome for the inner geek in you. It measures your weight, your body fat, and your BMI (Body Mass Index), all automatically. Not only does it do that, but it uploads your statistics via WiFi to a private personal account on www.withings.com. Once your data is there you can set it up to sync to other services (such as DailyBurn, mentioned before), or to Twitter if you are up for some public support and/or embarrassment.

One other item that Withings makes, and that I use, is the blood pressure monitor.

This plugs into your iPhone or iPad and automatically takes your blood pressure and resting pulse. The readings are uploaded to the same interface that you use to view your weight and fat percentages. I find that it provides yet another window into the state of my health, and also provides a great feedback loop for when I am training too hard (my resting heart rate in the morning will be elevated).

You have to find balance and enjoy yourself

It is easy to become myopic in focus and become consumed with hitting a calorie goal each and every day.

While it is good to be focused, it is important to remember that becoming fat didn’t happen in a day; it took time. The same is true of getting skinny. It is a long road, and it is ok to have fun for a day, enjoy some drinks and a good meal in moderation, and have a good time. Just remember to get back on track the next day, capture those calories, and continue on the road to the skinny you.

Hadoop is an open source framework for processing and querying big data on clusters of commodity hardware. It was originally developed at Yahoo in 2006 as a clone of the Google File System (GFS) and MapReduce framework, used to store web search indexes and crawl data for the search engine Nutch.

In the last few years, however, developers have embraced MapReduce (mapping key/value pairs and reducing them into small, bite-sized computing chunks distributed across hybrid storage/processing nodes) and have begun developing a vast array of applications that can utilize the distributed storage and compute capacity.
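The map/shuffle/reduce idea is easiest to see in the canonical word-count example, sketched here in plain Python (no Hadoop required; a real cluster distributes each phase across nodes):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (key, value) pair for every word in the document
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each key's values into a single result
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big clusters", "big data"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts)  # → {'big': 3, 'data': 2, 'clusters': 1}
```

Swap in different map and reduce functions and the same skeleton covers sorts, greps, and index generation.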

My Background with Hadoop

Back in 2006 I was working for a startup in San Diego that did high-dimensional mathematical analysis of financial transactions to quantify identity theft risk. Over the time I was there we went from a scale-up batch system serving 10,000 transactions a day to a scale-out web service (today you would call it a cloud) serving millions of transactions a day, each in under 250 milliseconds.

To scale to that size under such strict latency requirements, it was necessary to experiment with and implement some pretty cutting-edge open source technologies. I cheated off the notes of Jeremy Zawodny at Yahoo almost daily (thanks Jeremy, your knowledge and tools totally saved my butt many times). Around that same time, Jeremy’s team started doing some interesting work around distributed computing with Hadoop. Needless to say, this was a technology I had to try. Hadoop was extremely young at the time; however, for certain analytics workloads I was able to use 10 PCs to outperform a half million dollars in compute and Fibre Channel storage.

Flash forward 6 years – Hadoop is all grown up

Over the past six years, not only have Hadoop’s file system (HDFS) and processing (MapReduce) capabilities matured, but a suite of applications has been developed around them. These include tools to manage Hadoop clusters, large-scale log analysis tools, scale-out analytics packages and large-scale distributed database applications.

The list of clients using Hadoop has grown too. It ranges from Yahoo, eBay and Facebook to enterprise customers like Fox, T-Mobile, Equifax and the New York Stock Exchange using Greenplum (Project R running on Hadoop). No longer is Hadoop a tool for a select few; it is now the next logical extension of the standard web service LAMP stack, and increasingly useful for Data Warehouse workloads.

Tuning the foundation – Hadoop and MapReduce

Many times when people talk about tuning parallel compute clusters like Hadoop, SunGrid or LSF, they forget the obvious. They forget that squeezing out performance is about managing the delicate balance between applications and infrastructure. When tuning that balance, you first have to segregate the applications that directly access hardware resources from the applications that access those apps. For a frame of reference, think of the relationship between Apache, MySQL and disks in a LAMP architecture.

When dealing with the Hadoop Distributed File System (HDFS) and the MapReduce jobs that run on it, there are three primary dimensions of tuning. These dimensions are –

1. Optimizing NameNode and Job Tracker server performance

2. Optimizing transfers between nodes in the HDFS cluster

3. Balancing I/O systems in slave nodes such as memory, server-side flash, and spinning disk.

Optimizing NameNode and Job Tracker server performance

The NameNode in a Hadoop cluster is used to track the locations of the different file shards distributed across all slave nodes in the cluster. It is also used to house metadata for certain applications that reside in the Hadoop cluster. This puts specific strain on CPU, Memory and Network interfaces.

CPU / Network Interface

Certain processes inside the NameNode do not take advantage of the multitude of cores available on today’s servers. The biggest offender in this case is the RPC server, which processes network requests in a serial manner. This makes it worth utilizing the fastest CPU possible, in conjunction with low-latency network adapters such as the Mellanox MNPH29D-XTR 10 Gig NIC and low-latency fabric switches such as the Nexus 5548. Optimizing the CPU and network interface has a significant effect on minimizing bottlenecks due to the serialization delay of RPC requests.
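Beyond hardware, the NameNode's RPC server thread pool is itself tunable. A minimal hdfs-site.xml fragment might look like the following (the property name is as used in Hadoop 1.x-era releases, and the value is illustrative only; check your distribution's defaults before changing it):

```xml
<!-- hdfs-site.xml: raise the NameNode RPC handler thread count -->
<property>
  <name>dfs.namenode.handler.count</name>
  <value>64</value> <!-- illustrative; size relative to cluster node count -->
</property>
```

More handler threads let the NameNode field concurrent DataNode and client requests, but they do not fix the serial hot paths described above, which is why the fast CPU still matters.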

Memory

NameNodes can use a lot of memory when servicing HDFS alone. The addition of layered applications on top of HDFS that utilize the NameNode as well as the increase in file numbers in HDFS only increase the importance of sufficient amounts of high speed memory.

Optimizing transfers between nodes in the HDFS cluster

Certain types of jobs such as sorts and greps (the basis for index generation) move significant amounts of data between nodes in the Hadoop cluster. Since the inception of Intel’s Nehalem processor family, single gigabit interfaces have presented bottlenecks when transmitting and receiving data. This inserts “slack time” into the cluster, minimizing the time that a slave node is actually processing data. The net result is either slower job completion/response times or the unnecessary addition of nodes to the cluster (increasing your cost per job/transaction).

Impact of server bandwidth on job completion time

To illustrate this point, please reference this test done by Intel on their own Hadoop cluster with a first generation Nehalem processor. Even then, a single gigabit interface was not sufficient to service a node. In this case, doubling the bandwidth to two gigabits by bonding interfaces together rebalanced the node. However, if you follow Moore’s law, nodes utilizing Sandy Bridge CPUs (due for release some time in 2012) will need four-plus gigabits of network bandwidth during a data transfer to avoid unnecessary wait times. Luckily, this generation of servers will have 10 Gig adapters built into the motherboard.
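The back-of-the-envelope math here is simple enough to sketch in a few lines of Python. The 500 MB/s shuffle-consumption rate below is an assumed number for illustration, not a measured figure from the Intel test:

```python
def required_gbps(bytes_per_sec):
    """Convert a node's sustained shuffle rate into link bandwidth in Gbit/s."""
    return bytes_per_sec * 8 / 1e9

def slack_fraction(link_gbps, needed_gbps):
    """Fraction of transfer time a node sits idle waiting on the network."""
    return max(0.0, 1.0 - link_gbps / needed_gbps)

# Assumed: a node that can consume shuffle data at 500 MB/s
needed = required_gbps(500e6)
print(needed, slack_fraction(1.0, needed))  # → 4.0 0.75
```

At that assumed rate, a single GigE link leaves the node waiting three quarters of the transfer time, which is exactly the slack time described above.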

Network bandwidth and design

HDFS and many of the applications that reside on top of it have the notion of a Rack ID. This can be used for fault isolation. For example, if you had A/B racks on different power feeds, you could ensure that redundant data shards are stored on nodes in different racks, thereby increasing the system’s tolerance of faults.
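Hadoop learns those Rack IDs from an admin-supplied topology script (wired up via the topology.script.file.name property in that era's core-site.xml). A minimal sketch, with a made-up IP-to-rack table for illustration, might be:

```python
#!/usr/bin/env python
# Minimal Hadoop topology script sketch: Hadoop invokes it with one or
# more node IPs/hostnames and expects one rack path per line on stdout.
import sys

# Hypothetical mapping for illustration; real scripts usually derive the
# rack from a subnet plan or a site-specific host inventory instead.
RACKS = {
    "10.1.1.11": "/dc1/rack-a",
    "10.1.1.12": "/dc1/rack-a",
    "10.1.2.11": "/dc1/rack-b",
}

def rack_for(node):
    # Hadoop's conventional fallback rack for unknown nodes
    return RACKS.get(node, "/default-rack")

if __name__ == "__main__":
    for node in sys.argv[1:]:
        print(rack_for(node))
```

With that in place, HDFS can keep replica shards on separate racks, and schedulers can see which nodes share a rack.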

This Rack ID can also be queried by higher-level applications to ensure that jobs requiring high-bandwidth data transfers are localized within, say, a pair of Nexus 5500’s with 10 gig fabric extenders. This would minimize the utilization of typically oversubscribed uplinks north of the access layer, ensuring again that nodes are not si