So in the spirit of friendly competition, @ShanleyKane and I are going to see who can get the most people to sign the ‘Meatcloud Manifesto’, take a picture with it, post it to Flickr (or the photo-sharing site of their choosing), and tweet a link with the tag of the conference they are at (#structure10 or #velocityconf) plus #meatcloud just for good measure.

It should look something like this:

We hold these truths to be self-evident

Or this:

So if you are a builder of things and you love some APIs, show your support for Velocity Conf and sign the manifesto.

We have a few interesting developments coming and a couple of big projects we can’t quite speak freely about yet. Broadly, we provide strategic consulting and implementation assistance, especially for large organizations looking to invest in internal IaaS resources or to differentiate themselves as public IaaS providers.

So far, I’ve been getting up to speed on our projects and tools, in addition to learning some things I’ve typically been somewhat removed from, like layer 2 networking and other details most developers (and even many sysadmins) take for granted in their day-to-day work.

The bottom line is Cloudscaling is working on pushing the boundaries of ‘Infrastructure is Code’. We can agnostically evaluate and implement solutions using the best tools and track the evolution of the space. We have a team with both breadth and depth up and down the technology, from the datacenter to virtualization, from hardware to APIs.

I’m really excited to be part of the team (although there are some great people not on that page, like Lew Tucker, ex-Sun Cloud CTO, who just joined our board of advisors) and I’m expecting big things and a great year from us.

Look for some systems management and cloud related thoughts from me on the cloudscaling blog…

The nice thing about standards is that there are so many of them to choose from.
–Andrew S. Tanenbaum

standard -noun

something considered by an authority or by general consent as a basis of comparison; an approved model.

an object that is regarded as the usual or most common size or form of its kind: We stock the deluxe models as well as the standards.

a rule or principle that is used as a basis for judgment: They tried to establish standards for a new philosophical approach.

an average or normal requirement, quality, quantity, level, grade, etc.: His work this week hasn’t been up to his usual standard.

SQL was first developed at IBM in the early 1970s.

Many of the first database management systems were accessed through pointer operations and a user usually had to know the physical structure in order to construct sensible queries. These systems were inflexible and adding new applications or reorganizing the data was complex and difficult.

ANSI adopted SQL as a standard in 1986, after a decade of competing commercial products using SQL as the query language.

SQL became ‘the standard’ because it was open, straightforward, relatively simple and helped solve real problems.

TCP/IP emerged as the standard after a proliferation of competitive networking technology for largely the same reasons.

(another interesting story of emergent standards is POSIX, but apparently no one posts about it in any detail online, and you can only read about it if you are willing to part with $19… you know, the marginal cost of producing a PDF and all.)

People often compare cloud computing to a utility like electricity, one big happy grid of computational resources. Often those same people champion the call for ‘standards’, which makes me wonder if they have traveled much.

The call for standards is usually trumpeted alongside a need for ‘interoperability’ and avoiding lock-in. We all know how well SQL standards prevent vendor lock-in for databases.

In discussing the evolution of standards with @benjaminblack, I pointed out that TCP/IP was more ‘standardized’ than SQL. His perspicacious response noted that with TCP/IP ‘if you don’t interop you are useless’ and ‘if databases had to talk to each other, they’d interop, too’.

Interoperability arising from a standard is a lie. The order is wrong. Interoperability comes because everyone adopts the same thing, which becomes the standard. Don’t confuse a ‘specification’ with a ‘standard’. SQL became the de facto standard long before it was ‘officially’ a standard. SQL implementations will never be fully interoperable, and truth be told, there are often real advantages in proprietary extensions to that standard. TCP/IP became the de facto network standard, and interoperable, because that’s just the natural order of things. Interoperability will happen because it must, or else it won’t. Interop cannot come from a committee.

Interoperability is even more of a lie when it comes to cloud computing. If we are talking about IaaS (Infrastructure as a Service), then the compute abstractions for starting, stopping, and querying instances are almost trivial compared to the work of configuring and coordinating instances to do something useful. Sysadmin as a Service isn’t part of the standards. This is so trivial that you can find open source implementations that abstract the abstractions to a single interface. (Seriously, libcloud is just over 4K lines of Python to abstract a dozen different clouds. At this point, supporting a new cloud with a halfway decent API is a day or two of work at most.) The storage abstractions are in their infancy, and networking abstractions are nearly non-existent in the context of what people consider cloud infrastructure. The APIs and formats are a distraction from the real cloud lock-in, which is all the data. You want to move to a new cloud? How fast can you move terabytes between them? Petabytes?
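
To make the point concrete, here is roughly what that abstracted compute interface looks like through libcloud (the imports follow the current libcloud module layout; the credentials and provider choice are placeholders):

```python
# The same handful of calls works against any provider libcloud supports;
# this is nearly the whole IaaS compute abstraction people want to standardize.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

Driver = get_driver(Provider.EC2)           # swap in another Provider here
conn = Driver('ACCESS_KEY', 'SECRET_KEY')   # placeholder credentials

nodes = conn.list_nodes()                   # query running instances
image = conn.list_images()[0]               # pick an image (illustrative)
size = conn.list_sizes()[0]                 # pick an instance size
node = conn.create_node(name='web1', image=image, size=size)  # start
node.destroy()                              # stop
```

The data, on the other hand, moves at the speed of the network: a petabyte over a dedicated 10 Gbps link is 8 × 10^15 bits at 10^10 bits per second, about 800,000 seconds, or more than nine days at perfect line rate.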

Which brings me to PaaS (platform as a service), otherwise known as ‘locked in’ as a service. PaaS has all the same data issues, but without any common abstractions whatsoever. I mean sure, you could theoretically move a non-trivial JRuby Rails app from Google App Engine to Heroku, but let’s be honest, sticking your face in a ceiling fan would probably be more fun and leave less scarring. That’s an example that is possible in theory, but in most cases, PaaS migration will mean a total rewrite.

Finally, there’s SaaS (software as a service), which I love and use all the time, but I can’t convince myself that every web app is cloud computing. (Sorry, I just can’t.) Again, data is the lock-in; please expose reasonable APIs, but standards don’t make any sense here.

Committee-driven specifications get some adoption because most people like it when someone else will stand up and take responsibility for leading the way to salvation. CORBA and WS-* aren’t the worst ideas ever (I give that prize to Enterprise JavaBeans), but they aren’t always simple or straightforward in comparison to other solutions. Adopting an official standard is good for three things: first, providing some semblance of interoperability; second, stifling innovation; and finally, giving power to a standards body. For cloud computing, a standard in the name of interoperability is essentially solving a non-problem and calcifying interfaces prematurely.

Frankly, I’d rather double down on more innovation. Standards will emerge.

You want to make a cloud standard? Implement the cloud platform everyone uses because it is simple, open and solves real problems.

(Thanks to Ben Black for his feedback and for telling the same story a different way last year.)

There are only two mistakes one can make along the road to truth: not going all the way, and not starting.

–Buddha

Andi Mann posted ‘Myopic DevOps Misses the Mark’ earlier today, and after reading it I wanted to put my thoughts out there, particularly since I had hoped some of what I consider his misconceptions would have been cleared up before this post.

To be fair, Andi does ask some good questions and has clearly spent his share of time thinking about ops in general, so hopefully I can make some attempt to address them as well.

To start with, Andi asserts that DevOps is mostly about developers. I’m not entirely certain what makes him think that, but it is patently false; the majority of people involved come from an operations background. That said, I do believe semantics matter, and it might just be the name itself that leads people to that conclusion.

Maybe NeoOps, or KickassOps would have been better… but it is probably too late for that now.

I may be mistaken, but I believe the credit for the term DevOps belongs to Patrick Debois when he organized the first DevOpsDays last year.

Patrick is a bit of a Renaissance man, playing many roles in the process of software delivery along the way. I’m not particularly a fan of labeling people, but Patrick has self-identified as a sysadmin on more than one occasion. I’m also not particularly a fan of certification, but Patrick’s CV lists certifications like ITIL and SANS that I’d wager are almost exclusively taken by people in Ops/admin roles. The glaring exception is SCRUM, and I know for a fact Patrick has fought tooth and nail to get the Agile community to recognize the role of systems administrators in the process of delivering value.

Of anyone involved in what has apparently transitioned from ‘a bunch of good ideas’ to ‘a movement’, I probably have the most dev-centric background.

Kris Buytaert – Another Belgian Renaissance Man and a system administrator

I’m sure I’m missing lots of people; sorry, maybe we need a poll.

Andi keeps saying DevOps is developer-centric, and I think the problem (besides maybe the name) is the fact that there is code involved in automation that isn’t a shell script. Of course, I’m only speculating, because he doesn’t actually articulate what makes him think this, but let’s move on to his questions.

Andi makes assertions about lack of control, process, compliance and security. This is ludicrous, bordering on negligent. I’ve seen Puppet deployments on thousands of machines in what can only be classified as ‘the enterprise’, and I will guarantee those machines are more tightly controlled, compliant and secured than 99% of the machines in most organizations claiming to embrace ITIL. A solid Puppet installation is closer to a functional CMDB than anything I’ve seen in the wild, with the advantage that it is both auditing and enforcing the configuration on an ongoing basis. DevOps automation and ITIL are not mutually exclusive and can coexist. (I’m not going to really get into what I think about most of ITIL… but this should help.)
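
As a toy illustration of what ‘auditing and enforcing on an ongoing basis’ means (this is a sketch of the convergence loop tools like Puppet implement, not Puppet itself; the resources and stand-in functions are hypothetical):

```python
# Sketch of the audit-and-enforce convergence loop a configuration
# management agent runs on an interval: compare declared state to
# actual state, record the drift, and converge.
desired = {
    '/etc/ntp.conf': {'owner': 'root', 'mode': '0644'},
    'sshd': {'ensure': 'running'},
}

def inspect_system(resource):
    # Stand-in: a real provider would stat the file or query the
    # service manager. Returning {} makes every property look drifted.
    return {}

def apply_change(resource, prop, value):
    # Stand-in: a real provider would chown/chmod or restart the service.
    print('enforcing %s: %s -> %s' % (resource, prop, value))

for resource, spec in desired.items():
    actual = inspect_system(resource)
    drift = {p: v for p, v in spec.items() if actual.get(p) != v}
    for prop, value in drift.items():   # the drift record is your audit trail
        apply_change(resource, prop, value)
```

Every run produces both a report of what drifted and a system converged back to the declared model, which is exactly the control and compliance story ITIL shops say they want.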

More Specific Questions (most of which are predicated on the misconception that ops somehow goes away, but there are some other bits worth addressing):

Who handles ongoing support, especially software update for the unrestrained sprawl of non-standard systems and components?

Ops. ‘Unrestrained sprawl of non-standard systems’ is a bad assumption. First of all, the slow-moving, ITIL-loving enterprise tends to have as many problems with heterogeneous systems as anyone, if not more; second, when you start to model and automate systems, it makes the heterogeneity both more apparent and more manageable. No one I know advocates anything but pushing towards simple, homogeneous systems whenever possible. No one is pretending support and software updates go away.

Ops of course, but with the added benefit of an automated infrastructure with semantics relevant to the questions being answered.

Who handles integration with common production systems that cannot be encapsulated in a VM, like storage arrays (NAS, SAN), networking fabrics, facilities, etc.?

Yep, Ops. VMs are nice because they are typically only an API call away, but there are tools for API-driven provisioning on bare metal, and they will only get better. VMs are just the bottom of abstraction mountain. The API-driven abstractions of storage and networking fabric are coming. That isn’t the reality today, but it will happen, and relatively soon.

Who handles impact analysis, change control and rollback planning to ensure deployment risk is understood and mitigated?

This is a good one, because frankly I don’t think Ops can do this in isolation anyway. This is a cross-cutting concern involving Ops, Dev, Product Management and the other business stakeholders, but change control and rollback are orders of magnitude easier to reason about and accomplish with a DevOps approach.

Who is responsible for cost containment and asset rationalization, when devops keeps rolling out new systems and applications?

Similar to the last question, but with the added misconception that DevOps means rolling out random stuff just because. I know I’ve personally made this point explicitly: the whole point is to enable a business, and cost containment and asset rationalization are obviously cross-cutting concerns of that business.

Who ensures reporting, compliance, data updates, log maintenance, DB administration, etc. are built into the applications, and integrated with standard management tools?

Ops doesn’t really do this now. What is the definition of ‘ensure’? Ask nicely? Write up documents? Beg? Get mad? At worst, attempts to do this are often at the root of ‘the wall of confusion’ between Ops and Dev. Again, I’m not sure where Andi got the idea that DevOps = ‘cowboys without any concern for anything but deploying stuff as fast as they can’. What are the ‘standard management tools’? As much as anything, maybe that is what DevOps is replacing, because most of them are embarrassingly poor. The best way to accomplish everything on this list is to expose sensible internal APIs. When we can get to the point that we have reasonable conventions, integration with the next generation of ‘standard management tools’ will be trivial. That might strike you as a dev-centric perspective, but really it just means that the present isn’t evenly distributed.

DevOps for the win, with the help of tools that can actually model, audit and enforce all those things programmatically.

I’m sure Andi means well, but I’m not clear why he got the impressions he did of what DevOps means or is trying to accomplish. I did the best I could. (Twitter ‘lives in the now’ so that link will probably only be useful for a few days.) I guess if you use the word ‘API’ people won’t process anything further because you are obviously a cowboy developer. C’est la vie…

Finally, Andi finishes with a list of things he would like to see. The irony here is everything on his list is DevOps:

Including ops during the design process, so applications are built to work with standard ops tools.

Devops!

Taking ops input on deployment, so applications will go in cleanly without disrupting other users

Devops!

Working with ops on capacity and scalability requirements, so they can keep supporting it when it grows

Devops!

Implementing ops’ critical needs for logging, isolation, identity management, configuration needs, and secure interfaces so the app can be secure and compliant

Devops!

Giving ops some advance insight into applications, especially during test and QA, so they can start to prepare for them before they come over the wall

Tear down the wall! DevOps!

Allowing ops to contribute to better application design, deployment, and management; that ops can do more for the release cycle and ongoing management than just ‘manipulating APIs’

See, there is hope for Andi yet! (I just hope he has a good sense of humor about the title… and would be willing to discuss this over a nice meal if he comes through Salt Lake or we end up in the same city soon.)

There’s something happening here
What it is ain’t exactly clear
There’s a man with a gun over there…
–Buffalo Springfield

Alrighty then, what is this DevOps stuff and what does it mean to me…

First off, I came to Ops as a developer (and to be honest, I came to be a developer because I didn’t like my prospects or the pay rate for doing pure mathematics, but that’s a long story for another day).

If you are going to work with computers at all and have some curiosity and aptitude, chances are you are going to learn a bit about how they work. At the first place I was paid to program, I was a one-man wrecking crew in every sense of the word. I was in charge of everything from server configuration to all the programming. I was just out of school with a degree in mathematics and a minor in computer science. I did everything wrong, but I made it all work with what I knew and force of will. I solved problems with books, Google and tinkering. There were mailing lists and forums, but they were often insular, dismissive, and reluctant to answer questions. Pain is an excellent teacher, and that was over a decade ago.

There was a short period where my path could have gone down either road, sysadmin or developer, but as fate would have it, my choices and circumstances took me through grad school, and from there I became more and more enculturated into the developer tribe.

At some point, working as a developer for a SaaS ecommerce platform startup, through arrangements I had little control over, I got to experience firsthand a dysfunctional relationship with an operations team, and essentially found myself taking responsibility for details that would traditionally belong to the other side of the ‘wall of confusion’.

In my time working with Luke and people from the Puppet community, I learned a ton. I learned more about the work and culture of system administrators, and I also learned a lot about being a developer (in addition to more lessons than I care to enumerate about business, relationships, marketing, sales, venture capital and spinning plates, but I digress).

In my journey, I was also fortunate to make the acquaintance of a number of interesting and talented people at the Salt Lake Agile Roundtable, and this got me in the habit of thinking about technology in terms of people and workflows.

I began thinking about Puppet and the systems tools ecosystem, in the context of the people and the processes. Some of those thoughts were recorded in this blog. I started articulating, sharing and experimenting with those ideas. I found others in the communities of practice that had similar ideas. We all started talking and sharing and building infrastructure and making things happen and now we are here.

To me, DevOps is two distinct things that feed back on each other, and then a third that I think is really different.

First, there is the recognition that developers and operations can and should work together. In my opinion, this is being driven by the rise of web-delivered business value. When the servers aren’t up, the nifty application doesn’t exist. Too many teams have too much turbulence on both sides of that divide. This is a serious problem and costs companies millions of dollars every year. I like to think I have made more contributions to solving this problem than I ever made to causing it. This cooperation seems to be the main focus of what I read other DevOps people talking about. Communication, community of interest, managing flow, boundary objects, yada yada… great stuff!

Second, infrastructure and system administration are evolving. The explosion in the open source tool ecosystem is awe-inspiring. From provisioning to virtualization, from configuration to orchestration, something has undeniably accelerated in the last few years. More and more, from end to end, infrastructure is code: APIs driving and manipulating systems from bare metal to running services. That process looks more and more like software development, split off from the undifferentiated physical labor in the datacenter. The ‘sysadmin’ no longer has to rack and stack and cable, in addition to being an expert in every OS, application stack, the VoIP phones and the printers.
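
A minimal sketch of what ‘infrastructure is code’ looks like end to end, using libcloud’s deployment helpers (the provider, credentials, and bootstrap script are illustrative placeholders, not a prescription):

```python
# Provision a machine and bring up a configured service in one piece of
# code: the path from 'no machine' to 'running service' is all API calls.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
from libcloud.compute.deployment import ScriptDeployment

# Hypothetical bootstrap: install the config management agent and converge.
bootstrap = ScriptDeployment('apt-get -y install puppet && puppet agent --test')

Driver = get_driver(Provider.EC2)           # any supported provider works
conn = Driver('ACCESS_KEY', 'SECRET_KEY')   # placeholder credentials

# deploy_node creates the instance, waits for SSH, and runs the script.
node = conn.deploy_node(name='app1',
                        image=conn.list_images()[0],
                        size=conn.list_sizes()[0],
                        deploy=bootstrap)
```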

People are arguing that this is not new. That’s somewhat true, and similar positions could be supported for nearly any aspect of computer science, programming or technology, but I think that misses the point a bit. While some of this might not be new in principle or practice, the acceleration is real, and those people have to recognize this is not how most people think about and manage their systems. The infrastructure is an application. The sooner more people think like that, the happier they will be. I’m not advocating forgetting what it means to be a system administrator. Own that domain and know where you come from, but recognize and leverage all the applicable tools and lessons from software development without concern for notions of tribal identity. In my opinion, there is more to this than being ‘just good at their jobs’, because I still meet system administrators who haven’t heard of Puppet or don’t see why anyone would want or need something like that. The past and present aren’t evenly distributed any more than the proverbial future. We take for granted that things that are obvious to us are obvious to everyone.

Telling someone a truth they aren’t ready to understand is the same as lying to them.

Which finally brings me to the big thing that I think DevOps represents: a community of practice. There might not be anything technically new; what is new is a lot more people talking and sharing. People may have automated system administration tasks forever, but they also hard-coded lots of specific details and assumptions about their infrastructure, and they mostly did their work in secret. Lessons were learned and forgotten because the details weren’t transmitted beyond a generation of implementation. There wasn’t (and to some degree isn’t) a common language for patterns of common problems and solutions, but we’re working on it. This community is emerging globally, perhaps appropriately coming together through the very medium which they support, nurture and protect with their hearts and minds. A global community of peers empowering itself to improve the craft through learning and teaching. People with a passion for infrastructure. The difference is not that we can automate systems and work together with other people; the DevOps difference is we want you to be able to do it too.

Open Source, Cloud Computing, Agile, Systems Administration, a perfect storm of ‘nothing new’ with DevOps in the middle of it.