This blog comments on a variety of technology news, trends, and products and how they connect. I'm in Red Hat's cloud product strategy group in my day job although I cover a broader set of topics here. This is a personal blog; the opinions are mine alone.

Grace Hopper - Wikiquote - RT @monkchips: "It's easier to ask forgiveness than it is to get permission" - that was grace hopper too? no way. #c…

A VC: Return and Ridicule - "This notion also plays into Clayton Christensen's framework for disruptive innovation. Many of the most disruptive technologies started out as what Clay calls "toys". The PC is a great example of that."

Thursday, April 25, 2013

Platform-as-a-Service offerings, such as Red Hat's OpenShift, provide a great way for government agencies to provide separation of roles (so that the scope of contractors, for example, can be narrowly defined). Red Hat public sector technology strategist Gunnar Hellekson also discusses how overall cloud adoption in government is proceeding, Red Hat Enterprise Linux FIPS 140-2 certification, and cloud in the Department of Defense.

Gordon:
Lots of exciting things happening in the government space around cloud
computing in general and PaaS specifically. Why don't you tell us a little bit
about what's happening around DISA and the DoD?

Gunnar:
Yeah, this is exciting for us. I know that you've talked about OpenShift a
great deal on your show and certainly I've been talking a lot about OpenShift
and the value of having a platform as a service, the value of having an open
source platform as a service. DISA, the Defense Information Systems Agency,
acts as the IT organization for the DoD. DISA stood up a program called STAX,
and STAX is a platform as a service.

A little while ago they stood up their first attempt at a platform as a
service. They wrote it themselves, and it functioned pretty well. Then we had
a meeting
with them and we gave them the roadmap for OpenShift, and they said, well,
that's our plans for the next two or three years, so let's see about getting OpenShift
in our organization. Indeed that's what they did. Now STAX is available to
anyone in the DoD and it's really, really exciting, actually, to have OpenShift
now available to anyone with a CAC card. It's pretty cool.

Gordon:
Maybe you could tell our listeners a little bit more about OpenShift and why
it's particularly interesting in the government.

Gunnar:
Sure. The first problem OpenShift solves is making it really easy for
developers to stand up a development environment. That is often a huge
problem in the DoD. If you can imagine not just a Fortune 500 company but a
Fortune 1 company trying to get anything done, the bureaucracy is just
unimaginable. It's not unusual for it to take six months for someone to stand
up a server. In that environment, if you're a developer, especially a young
developer working with some of the new languages like Node.js or Ruby or
Python, waiting six months for a server is, well, out of the question.

What
OpenShift allows them to do is basically have a pre‑certified, pre‑approved,
standardized platform that they can stand up, letting them just get on with
their work. That would be great if it were only helping the
developers, but in the DoD you have this huge demand on the operational side of
the house. You have security standards that you need to meet, you have
contractual and procurement standards that you need to meet, and OpenShift
actually helps on that side as well.

That's because OpenShift allows the operations folks to define, OK, here's
what Python looks like, here's what Ruby looks like, here's what Node looks
like. They can very
quickly stamp out a new version whenever a developer wants it. Really it keeps
both parties really, really happy. The operational guys have the
standardization that they need and the developers don't have to wait six months
for a server.

Gordon:
One of the ways I look at a platform as a service like OpenShift is as this
really nice abstraction layer, in that it keeps the stuff the developers care
about separate from the things that operations, the architects, and arguably
even the procurement people care about. That kind of firewall, or level of
abstraction, between the two can sometimes be very useful.

Gunnar:
Oh, that's exactly right. That's especially true in government where almost all
work is done by a contractor at one point or another. When you're a contractor
it's very tempting to get your hooks into as much of the system as possible
because that ensures subsequent work in later contracts. Right?

If
I'm the guy who builds the system from everything from the plug and the wall up
to the keyboard, well then I'm uniquely qualified to go take care of that
system for the rest of its useful life. That's great for the integrators and
great for the contractors. Not so great for the procurement folks and the
government folks who really want a more competitive environment for their IT
systems.

As you're saying, it helps to have this abstraction layer, a division between
what's mine (the OpenShift platform) and what's yours (the code that's
running inside it). That division makes life a lot easier for the procurement
folks, who actually seem to like it the most. They can now write the platform
into their contracts and say, great, you can give us this capability, but
you're not allowed to give us anything that plugs into the wall, because
we're going to provide that in a platform that we've already built, already
approved, and already certified.

It's neat to see. One of the reasons I think OpenShift is so disruptive in
the DoD is that it's a tool that actually allows the government to change the
way it interacts with its contractors, in a way that something as old and
boring as Linux doesn't necessarily do.

Gordon:
In a way I think this is a little bit funny, because a lot of the early talk
around PaaS, particularly the online services, was around this whole idea of
DevOps and not needing to separate responsibilities. You look at something
like Netflix and you don't really have much in the way of dedicated, separate
operations staff. Yet here we have PaaS being adopted in government precisely
because, with an on-premises product like OpenShift Enterprise, it can
enforce that separation of layers if you want it to.

Gunnar:
Yeah, that's right. A lot of the DevOps work being done today is outstanding,
and it provided a huge inspiration for OpenShift, obviously. But a lot of
that work is around what I want to call single-purpose enterprises. It was
developed in environments where building one application is all the company
does. In that context it becomes relatively simple to take a DevOps approach.
It's different once you reach something with the sophistication and the
number of moving parts of, say, the DoD.

The DoD is a hugely complex organization: a whole bunch of competing
missions, competing contractors, competing procurement shops, competing
program offices. Suddenly, being able to say, this is operations, this is
development, let's stay out of each other's swim lanes, becomes a lot more
valuable. We can take all the lessons learned from the DevOps world and apply
them to a more enterprise-y context, I guess. That's how I think about
OpenShift.

Gordon:
Maybe we can talk briefly about adoption of cloud in government in general.
Periodically you read, seemingly whenever the tech press needs an exciting
headline to get people to click, that adoption of cloud in government isn't
going as quickly as the former federal CIO mandated. What's your perspective
on the ground?

Gunnar:
A lot of people don't realize that when Vivek Kundra first put down the
cloud-first policy, it became the policy of the federal government to put
workloads into a cloud first; only if you couldn't possibly put something
into an existing cloud were you allowed to go buy new hardware. That rule has
been in place for a number of years and has helped drive the grand federal
data center consolidation that's underway. By 2015 they're actually going to
shut down 800 data centers around the country.

Obviously if you're going to do that you need to adopt cloud. But actually
the big winner from the cloud-first policy was virtualization: I'm
consolidating a data center, so I need to virtualize to take good advantage
of my hardware. That's where we were for a while, as people looked askance at
public clouds and tried to figure out what kinds of workloads were allowed to
be in there.

Since then, the government has actually developed a set of rules for how and
when you're allowed to use a public cloud, called FedRAMP. That process has
actually been mildly successful because it provides a set of relatively
unambiguous rules along with a process for approving a particular public
provider for a government workload. That was absolutely necessary. Without it
I don't think we'd see the kind of cloud adoption that we see today.

You have FedRAMP in place now, and what's interesting is, I'm trying to
remember who the analyst was, I think it was Simon Wardley, who talks about
cloud adoption being something that moves very, very slowly and then all at
once. Just last week we had Terry Halvorsen, the CIO for the Navy, put out
policy guidance to his deputies saying that not only are we going to go
cloud first, we're actually going to go public cloud first.

That is, unless you have a super-good reason for not putting your workload
out on Rackspace or Amazon or another public cloud provider, you'd better do
it, because the Navy can't afford to keep buying servers. Then, for an
internal publication called CHIPS, he actually wrote almost a case study on
how they moved the Navy's website up to Amazon. With folks like the Navy
adopting public cloud at this pace, you can imagine that many of the other
agencies are going to follow right behind.

As soon as I say that, I'm going to add my traditional caveat, which is that
trying to describe the US government as a single entity is a fool's errand.
We're talking about literally thousands of IT shops, and they certainly don't
move in lockstep. While the Navy may be reaching out in front in the adoption
of cloud services, you've got other agencies that are still trying to figure
out the best approach to virtualization. There's a broad spectrum, and it's
going to be a multi-year story.

Gordon:
Maybe you could tell our listeners about some of the new things that Red Hat
is doing, or that are coming down the pike?

Gunnar:
Yeah, so this is actually exciting news. A lot of people, at least folks in
the government space, know about this. We are huge supporters of the FIPS
process; FIPS stands for Federal Information Processing Standard. There's one
standard in particular, FIPS 140-2, which tells everyone how they are
supposed to implement cryptography. If I'm trying to keep something secret on
a machine, I can't just write any software I want. I have to take that
software and have it scrutinized by a third party to make sure that when I
say I'm using the SHA-256 algorithm, that's in fact the algorithm I'm using.
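Gunnar's description of third-party scrutiny boils down to known-answer testing: checking that a module's output matches the test vectors published in the standard. As a small illustration (my sketch, not the actual FIPS 140-2 validation process, which is far more involved), here is Python's hashlib checked against the SHA-256 vector for "abc" from FIPS 180:

```python
# Illustrative known-answer check (a tiny slice of what a FIPS validation
# lab does): confirm an implementation's SHA-256 digest of "abc" matches
# the vector published in the FIPS 180 standard.
import hashlib

FIPS_180_ABC_VECTOR = (
    "ba7816bf8f01cfea414140de5dae2223"
    "b00361a396177a9cb410ff61f20015ad"
)

digest = hashlib.sha256(b"abc").hexdigest()
assert digest == FIPS_180_ABC_VECTOR  # the claimed algorithm is the real one
```

A real validation covers many more vectors, modes, and operational states, and is performed by an accredited lab against a specific build of the crypto module.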

We have actually been certified under FIPS a number of times with RHEL, and
just recently we rounded out the FIPS certifications for RHEL 6. Now people
can have encrypted SSH sessions, encrypted networking, and encrypted disk,
and be assured that it's actually meeting the federal standards. We're
super-excited, and not a little relieved, to finally have those
certifications in our pocket. It's really great.

Gunnar:
No, no, this is great. Maybe the last thing I'll leave you with is this: back
in 2008 there was a lot of talk about open government. When the Obama
administration came in, everyone was talking about open government and how
open source could help it. People were skeptical, maybe, and just this week
we got two proof points showing folks how successful open source has been in
government.

The first is that Black Duck released its annual industry survey, with more
than 800 respondents, folks at the director and CIO level. Government
actually came out this year, for the first time, as the number one adopter
of open source software, which I think is super cool. The second was the
government actually using open source to improve its mission. NASA ran a
hackathon for the International Space Station last week with over 9,000
participants around the world, which I think is just staggering, and a great
example of what the government can do when it not only uses open source but
actually adopts open source methods to accomplish its missions. It's really
exciting.

Gordon:
Well that sounds great, Gunnar. Thanks for spending time with us.

Thursday, April 18, 2013

Debate continues over the relative merits of private, public, and community clouds for enterprises. The debate (thankfully) has mostly shifted from dogma to deeper discussions around factors such as costs, bridging of legacy application types, data gravity, and service levels. And, in general, there's a widespread (if not quite universal) recognition that cloud and IT broadly will be hybrid in one or more respects.

By contrast, many of us who have been following and working in the cloud space for a number of years haven't ever expended a whole lot of cycles mulling whether and how cloud computing (in the sense of public cloud services, mostly SaaS) would be adopted by smaller businesses (SMBs).

The case seems compelling. For myself, I find it almost a ritual when writing about SMBs to preface any comments with the observation that most SMBs (with the caveat that there's no single definition and the term covers a broad range) have relatively little in the way of dedicated, much less specialized, IT staff, and therefore value simplicity and integration over functionality. Which is why the SMB market has been a traditional area of Microsoft strength, for example.

Public cloud services seem tailor-made for these types of organizations. We can debate the relative costs of a large bank operating its own servers versus letting Amazon Web Services do it. We can reasonably ask whether an organization with a five- or six-figure-sized sales force might not be better off running its own CRM system rather than using Salesforce. A 200-person services firm? Not so much.

Therefore, it's a bit surprising that the data doesn't really back up these assumptions.

Consider first a presentation by Chris Chute and Ray Boggs of IDC at this year's Directions 2013 conference. Entitled "The SMB Cloud Story: An Unexpected Journey" (yes, cute title), it showed off counterintuitive data from IDC's 2012 SMB study. Consider these three findings that grabbed my attention:

The smallest businesses are still cautious about public clouds

Small businesses are even more concerned about security than a year ago

Micro businesses (fewer than 5 employees) are the most resistant to BYOD

To (over-)generalize: the smaller a business is, and therefore the less capable it typically is of putting in place systematic policies and procedures around backup, security, and so forth, the less likely it is to let someone else take care of those things. One can reasonably argue about how the BYOD finding fits in, but there's no disputing the overall direction of the data. For me, the real money shot is the IDC slide I posted above.

Other data shows similar results. Just this morning, writing on the GigaOm Pro Blog, David Linthicum notes that: "According to Smart Company, an Australian-based publication, cloud computing gives small businesses a 106% productivity boost." So far so good. However, the same study notes that only 16 percent of the businesses surveyed said that they use cloud computing in business. While optimistic about cloud computing use by SMB going forward, Linthicum suggests that "the issues around cloud computing adoption by small businesses include a lack of understanding of cloud technology, and which cloud computing flavors (IaaS, PaaS, or SaaS) are right for them."

IDC's Boggs also noted that lack of knowledge was a problem. He suggested that small business owners can be control freaks in some cases and therefore unwilling to relinquish perceived control. From a less psychological perspective, he observed that "their industry is where their identity is" and that vendors therefore need to embrace vertical thinking. As someone who worked with many VARs (value-added resellers) in the 1990s, the importance of approaching SMBs from an industry perspective rings true to me. And it's not something that's happened systematically in the cloud space to date. Perhaps community clouds and SaaS offerings will shift things more in this direction.

Ultimately, as I put it in a 2004 research note about an IBM SMB initiative: "[T]hat market presents a challenge for large IT vendors because its needs are more diverse. It has far fewer financial resources and technical skills, and it proceeds at a less consistent pace in its IT projects than the Fortune 500." It's not a slam dunk to move SMBs rapidly to cloud, even if it once seemed that way.

Why I've Left the Media Business - "Instead of inventing a new business model, media companies keep trying to tweak the old one. By that I mean they keep trying to invent new kinds of advertising. It’s a pointless exercise. They’re like blacksmiths who are responding to Henry Ford and his automobile by trying to create a better horseshoe."

The NYSE Community Cloud Success Story Points to Another Cloud Model @BizTechMagazine - "Community clouds aren’t new; Red Hat cloud evangelist Gordon Haff was praising the model as the future in a CNET article back in 2010. Some people consider community clouds a subset of private or hybrid clouds, but Hollis argues in a post on his blog that the community cloud deserves recognition as its own distinct model of cloud computing, and that other industries should consider a community approach before going solo:"

Wednesday, April 17, 2013

I've frequently said and written that cloud computing isn't about a single point product or solution. It's about delivering capabilities across hybrid infrastructures in open and portable ways. Red Hat's coming at this vision from a number of different angles with a broad portfolio. The downside is that it's a rapidly evolving area and we probably haven't been as good as we could have been at explaining how the various moving parts connect and otherwise interact.

I took the occasion of the OpenStack festivities in Portland, OR this week to put together a fairly long blog post that delves into how the "cloud-specific" parts of our portfolio mesh. (This isn't the whole story; for example, Red Hat Enterprise Linux provides important foundational technologies for our cloud offerings in addition to providing a consistent runtime across a wide range of infrastructure.) I encourage those interested to check out the original post on our press web pages, but I wanted to hit some of the highlights here. I've touched on some of these topics previously but it's time to revisit them as projects, products, and my thinking about their relationships have all evolved.

A huge amount of activity is taking place within the infrastructure layer as it evolves to support hybrid application models spanning both traditional enterprise applications and new-style cloud workloads. Red Hat Enterprise Virtualization is focused on the former; OpenStack is focused on the latter, with a massive developer community helping it become the best platform for stateless, modular, cloud-oriented applications. I'm working on a paper, intended for publication in a couple of months, that discusses these different workload styles in much more detail.

Hybrid cloud operations management

Enterprises also need features like chargeback, policy, orchestration, reporting and automation. And, they don't want OpenStack to become a stand-alone silo, isolated from their existing and future enterprise virtualization platforms and public clouds. Red Hat CloudForms is focused on solving both of these problems.

Consider first the need for cloud operations management tools. CloudForms tools can discover, automate, monitor, measure and govern virtualization and cloud infrastructures. Operations management is fundamentally about service lifecycle management, which provides for provisioning, intelligent workload management, metering, cost transparency and the retirement of resources when they are no longer required. In December 2012, Red Hat acquired ManageIQ with the aim of rapidly bringing these capabilities (shipping in the ManageIQ EVM product today) to the CloudForms platform.

The second need—avoiding new silos—requires hybrid cloud management, which spans heterogeneous platforms, whether on-premise or public clouds, and maintains application portability across those environments. Providing such capabilities was a guiding principle of CloudForms from the beginning: Cut across islands of technology, preserve existing IT investments, and create common resource pools across a broad set of infrastructure.

Our plan is to aggregate this functionality and deliver it within a single product later this year.

Platform-as-a-Service

We also plan on integrating CloudForms with OpenShift Enterprise for operational management of Platform-as-a-Service (PaaS) environments. OpenShift provides secure multi-tenancy within operating system instances using a variety of "container" technologies: security containment (SELinux), resource management (cgroups), and global namespaces. CloudForms can augment these capabilities with the ability to provision, monitor, and scale the nodes themselves on OpenStack, Red Hat Enterprise Virtualization, and other platforms.

Application lifecycle management

We also intend to continue our work to tightly marry application lifecycle management to other aspects of open hybrid cloud management. Red Hat Network Satellite has a long track record of helping customers manage large-scale Red Hat Enterprise Linux deployments using standard operating environments for efficiency and consistency. As operational management evolves to meet the needs of hybrid cloud infrastructures, the application lifecycle management provided by Satellite will evolve to handle both today's and tomorrow's applications.

In closing

I've given you something of a whirlwind tour. Others at Red Hat and I are working on making this information available in a variety of forms and levels of depth, but it's a topic I get asked about frequently enough that I thought it merited this relatively brief update.

Tuesday, April 16, 2013

I'm developing a theory that you're breaking some sort of union rule if you try to hold a cloud computing event without putting Netflix' Adrian Cockcroft on the agenda. (Though when I mentioned this theory to Adrian, he assured me it was OK so long as he was at least mentioned in a presentation or two.) In any case, he was on stage at the Linux Collaboration Summit in San Francisco this week to talk about "Dystopia as a Service."

Most of Adrian's talks examine various aspects of Netflix' computing architecture. It's an architecture that's both massive and almost entirely based on Amazon Web Services. It also offers a great example of what a cloud architecture should look like. Some specifics are doubtless unique to Amazon. And others unique to Netflix. But it's also true that many of the basic patterns and approaches that Netflix follows are useful study points for any "cloud native" application architecture. (Hence Adrian's ubiquity at cloud events.)

These patterns include things like making master copies of data cloud-resident, dynamically provisioning everything, and making sure that all services are ephemeral. This contrasts with the traditional IT pattern of having mostly heavyweight, monolithic services that you individually protected with all manner of reliability and availability mechanisms from N+1 power supplies to failover clusters. Bill Baker (then at Microsoft) wryly put the contrast between the traditional scale-up IT pattern and the scale-out cloud pattern thusly: “In scale-up, servers are like pets. You name them and when they get sick, you nurse them back to health. In scale-out, servers are like cattle. You number them and when they get sick, you shoot them."

However, for this post, I'm going to focus on one particular point that Adrian raised that hasn't been so widely discussed. That's the tension between efficiency and robustness (or anti-fragility as Adrian called it).

The basic idea is this. Maximizing efficiency typically involves doing things like replicating "the best" as patterns and minimizing variability. You standardize ruthlessly—one operating system variant, unified monitoring, "copy exact" (to use Intel terminology) from one region to another, common configurations, and so forth. The problem is that an environment that has been ruthlessly standardized is also a monoculture. And monocultures can be catastrophically affected by singular events such as security exploits, software bugs triggered by data or a date, and DNS or certificate issues of various kinds.
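To make the tradeoff concrete, here's a toy probability model (my illustration, not anything from Adrian's talk): a fully standardized fleet exposes every server to the same stack-level event, while splitting the fleet across independent stacks shrinks both the blast radius and the odds of a total outage.

```python
# A toy model of the efficiency/robustness tradeoff: a ruthlessly
# standardized fleet is a monoculture, so one stack-level event (an
# exploit, a date-triggered bug, a bad certificate) can take out
# everything at once. Splitting the fleet across independent stacks
# trades some efficiency for a much smaller worst case.
from fractions import Fraction

def whole_fleet_down(p_event, stacks):
    """Probability that one class of event downs the entire fleet, assuming
    each of `stacks` independent stacks is hit with probability p_event."""
    return p_event ** stacks

def worst_single_event_loss(stacks):
    """Fraction of an evenly split fleet exposed to any one stack-level event."""
    return Fraction(1, stacks)

p = Fraction(1, 100)  # assume a 1% chance a given stack is hit in some period

# Monoculture: one event can take down 100% of servers, with probability p.
assert worst_single_event_loss(1) == 1
assert whole_fleet_down(p, 1) == Fraction(1, 100)

# Two independent stacks (e.g. two monitoring systems): any one event
# touches at most half the fleet, and a total outage needs both hit at once.
assert worst_single_event_loss(2) == Fraction(1, 2)
assert whole_fleet_down(p, 2) == Fraction(1, 10000)
```

The catch, of course, is that each additional stack must be built, patched, and monitored separately, which is exactly the efficiency cost being traded away.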

Although the specifics vary, we see the tradeoffs associated with maximizing efficiency in other domains as well. For example, it's generally recognized that today's highly tuned and lean supply chains are also highly vulnerable to disruption. Writing after the Japanese tsunami, the Chicago Sun Times wrote:

“When you’re running incredibly lean and you’re going global, you become very vulnerable to supply disruptions,” says Stanley Fawcett, a professor of global supply chain management at Brigham Young University.

The risks are higher because so many companies keep inventories low to save money. They can’t sustain production for long without new supplies.

Subaru of America has suspended overtime at its only North American plant, in Lafayette, Ind. Toyota Motor Corp. has canceled overtime and Saturday production at its 13 North American plants. The two companies are trying to conserve their existing supplies.

There are techniques in every field to intelligently reduce the impact of various types of events. However, there remains something of a tradeoff between efficiency on the one hand and robustness on the other, given the need to get away from monocultures as much as possible. Adrian described Netflix as using "automated diversity management" and "diversifying the automation as well" (by using two independent monitoring systems).

Of course, every organization will have to decide for itself just where and how to introduce diversity. (Famously, Netflix is all-in on a single cloud provider, Amazon Web Services, however much it introduces diversity elsewhere, and this has contributed to outages at times.)

Some diversity will arise naturally as organizations introduce new technologies, such as new virtualization platforms, that they will continue to run alongside existing ones. Similarly, most IT departments today, for better or worse, don't ruthlessly standardize to the degree that cloud providers do. Thus, a certain degree of "organic diversity" comes naturally.

However, it's worth remembering, as organizations increasingly adopt some of the practices in use by public cloud providers, that the ultimate goal isn't necessarily complete standardization even when it's practical. Today, IT is hybrid just because that's the way it evolved. But even as organizations transform toward a much more architected-for-cloud world, hybrid IT can also be a good architectural practice for keeping bugs and other shocks from becoming epidemics.

Are small private colleges in trouble? - Magazine - The Boston Globe - "In an analysis of the financial records of 1,700 US colleges and universities, the Boston-based consulting firm Bain & Company estimated that one-third of them were on an unsustainable financial path, with operating costs increasing faster than endowment returns and other revenues could cover them. This is a problem the colleges can no longer solve, as they once did, by simply increasing tuition."

Wednesday, April 03, 2013

Diane recently joined Red Hat as our Cloud Ecosystem Evangelist, initially focusing on our OpenShift Origin Platform-as-a-Service (PaaS). In this podcast, Diane discusses why open source has become so important for PaaS (although PaaS didn't get started that way), how community-driven innovation benefits even those who never look at a line of source code, and what's in the works for OpenShift Origin. Diane also touches on some of the development efforts underway such as making OpenShift Origin easier to install and easier to use in concert with Infrastructure-as-a-Service.

Diane Mueller:
Thank you very much. I'm very, very pleased to be here and to be leading the
community development initiatives around Origin, the upstream project that
feeds Red Hat products such as OpenShift.com, our online public PaaS, and
OpenShift Enterprise. I think there are lots of possibilities here, and I'm
really, really pleased to be part of the Red Hat team now. I'm looking
forward to doing lots of interesting things with all of our other open-source
communities as well. The OpenStack folks will have a very big presence
shortly at the OpenStack Summit, coming up April 15th through 18th in
Portland, Oregon. We're going to be hosting our very first OpenShift Origin
open-source community day and mini hackathon there. If you're around, please
join us on April 14th; go to Eventbrite and register, because I think that'll
be pretty exciting. We'll have some of our customers, a number of the
contributors, and the Red Hat engineers. Dan Walsh, one of the SELinux guys,
will be speaking. We've got a couple of really cool folks coming to do some
hacking and to create some V2 cartridges. It's just a very exciting time to
be part of the open-source team here at Red Hat.

Gordon:
Great. That sounds like a great event. Sign up quickly, because I
understand it is filling up in a big hurry. Diane, you've been in the PaaS
space for a while. As you know, it's interesting that PaaS started as these
very non‑open‑source online services. Why does open source matter in the
context of PaaS? Why are people so interested in having an open source based
PaaS?

Diane:
I think that's a really awesome question. Proprietary, closed-source
systems, whether they're PaaS offerings or other types of applications or
operating systems, really have limited value when you're working with such a
large ecosystem of providers, whether they're cloud providers, SaaS
providers, or other folks. The idea that you can get true interoperability
with a closed-source or proprietary offering is really from a bygone era.
What we've seen is collaboration in community development: we all understand
that open source is creating the cogs in the wheel that we all need to use.
When you build a platform as a service, which is an integral part of any
cloud strategy, whether public or private, you don't want to be reinventing
the wheel at every new company you go to or every new vendor you work with.
That's the value of a ubiquitous, interoperable, community-driven project
such as Origin, which feeds not just the Red Hat community but many, many
other companies that are using it in their production environments.

Where closed source fails is that regardless of how big you are, whether you're Apple or Google, you can never hire all of the best and the brightest to work on your project and keep them interested in maintaining that code base forever, or for the life of that technology.

Where open source is great is that you have people who are committed, who are interested, who volunteer their time. Some of them are sponsored by their companies to work on it. Some of them are sponsored by vendors, because the vendors need this very important cog in order to have a marketplace to make money in and to build applications in.

Gordon:
Of course, for Red Hat's enterprise customers, the ability to look at source code is a benefit as well. Because we have OpenShift Enterprise, which is our enterprise on-premise offering, as well as our online service, the community development model that contributes to OpenShift Origin feeds into those other offerings too. All of our customers, even if they don't directly care about open source, benefit from this collaborative development model.

Diane:
I think what a lot of people are seeing Red Hat do is make all the right moves in terms of creating a converged, open cloud play. When I say converged, I mean we're now playing a very major role in supporting OpenStack development; I think we're probably the number two contributor to OpenStack, maybe approaching number one. Origin gives us a platform-as-a-service layer. We're becoming a major player in providing cloud infrastructure, and that infrastructure, combined with a platform as a service, lets us deliver to our enterprise customers, and to the open source community of collaborators, a service that is truly open. It really is the next generation of open cloud, which is going to be, in my humble opinion, the convergence of both PaaS and IaaS.

I would suspect, and this is my theory and I'm sticking with it for a while, that in another year or two you won't hear me saying PaaS or platform as a service anymore, because it will become part and parcel of an open, hybrid cloud service model. You will see Red Hat deliver on that promise and on that vision, because we are now doing so much work in the open source world and collaborating with our brethren and sisters at all of the other companies that are working toward that vision too.

I think that's one of the amazing things about being here at Red Hat at this moment in time: we're seeing that tipping point where everybody has realized that an open cloud, a federated cloud, an interoperable cloud is really where we're going to be for the foreseeable future.

There's
no room for proprietary cloud anymore.

Gordon:
If somebody wants to get involved in OpenShift Origin development,
working with the code, thinking about how they can stand things up at their own
companies or just for their own avocation, what's the best way to proceed?

Diane:
Well, I mentioned it at the beginning. We are having a community day in
Portland on April 14th. If you can get yourself there, especially if you're
coming to the OpenStack summit, we're going to be doing deep dives into the
architecture of OpenShift, into the security, into OpenStack and OpenShift
integration. We'll be doing some Origin internals work. We're going to take a
deep dive with a SELinux guru from Red Hat engineering team, Dan Walsh. We'll
be teaching everybody how to extend OpenShift Origin by building their own
cartridges.

We've got a new architectural model for cartridges, so we're going to be diving into that. If you cannot come on April 14th, come visit us at github.com/openshift, where you can find lots of information and documentation, or come to redhat.com and just do a search on OpenShift and Origin, and you will find all of the resources you need to get started.

There's a great wiki out there on how you can become a collaborator with us on this project, and we're happy to have you come on board. Just give me a buzz or follow me at pythondj, which is my Twitter handle, and I'll get right back to you.

Gordon:
Diane, I know you've been at Red Hat for all of about a month or so.

Diane:
A month, yes, a whole, exciting month. Yes.

Gordon:
What's coming up? What are your plans over the next four months, six
months, year?

Diane:
Well, the OpenStack and OpenShift convergence is one of the core things I'm focusing on. The two of them play so nicely together, and they deliver an open cloud in a way that Red Hat will obviously support with enterprise offerings. But there's also the ability to take Origin to the next level, to truly drive and create an engaged developer community around Origin and make it stand on its own as well. That's one of my tasks, and it's near and dear to my heart as someone who has been an open source person for many years.

One of the things I'm trying to do is lower the barrier to entry to using Origin on its own, as well as to bring in people who are currently using OpenShift Enterprise or openshift.com and have them realize that there are really easy ways for them to contribute, extend openshift.com, create cartridges, work right on the core with us, and be part of the community.

That, I think, is the crux of it. One of the things about platform as a service, which is very near and dear to my heart, is that it scratches an itch I've had for many years, to paraphrase Eric Raymond. This is one of those projects that helps developers get ops out of their way and get back down to the business of coding.

There's a lot of community development there. There are cloud architects and technologists who love to work on PaaS, but there's also a huge community of users who want to extend and create cartridges, which is our metaphor for something similar to Heroku buildpacks: the languages and extensions you need to plug and play your projects.

There's lots of opportunity to work together and build out the next generation of PaaS platforms. Origin became an official open-source project in mid-2012, so we're about a year into the game now, and we're about ready to do our first official release, though it's already been used by many people, including OpenShift.com.

There's just such momentum now in terms of getting the community together and making this into a very vibrant, collaborative community, where people feel like they can actually make a contribution and become committers. They can take the things their enterprises need, add them into the core or as a cartridge, and help extend and build this into a truly robust PaaS with lots of great applications for any enterprise or any small to medium-size business that needs to get its apps to the cloud.

Gordon:
Thank you, Diane, and welcome aboard. It sounds like you're going to be
busy over the next year.

Diane:
Oh, yeah. You can find me pretty much everywhere; I'll be doing a lot of talking at conferences, a lot of community day work, and hosting a lot of mini-hackathons. So if you're interested in Origin, open source, and Red Hat OpenShift, give me a buzz. I'd be happy to talk to you about it, and it's always a pleasure to be with you here today. Thanks, Gordon.

I'll be spending the rest of the week at Cloud Connect Santa Clara (at which I'll be speaking on Wednesday and Thursday). Today was devoted to the Deploycon companion event, which Krishnan Subramanian, Principal Analyst at Rishidot Research, and friends kicked off last year in response to their perception that there wasn't enough specific focus on Platform-as-a-Service (PaaS).

I'm not really going to report on the event in this post. Probably a #deploycon twitter search is the best place to get an overall sense of what went down. Rather, I'd like to hit on a few topics that piqued my interest or that surfaced questions that I'd like to dig into more deeply in the coming weeks or months. Think of these not as complete thoughts, but as interlocking fragments and threads that could benefit from further examination and teasing apart.

What is PaaS anyway? (Definitions. Shudder. I Know.)

The thought of revisiting PaaS definitions kicked off exasperated consternation among various panelists and audience members. But bear with me.

Without turning things into an academic definitional debate, there are some legitimate questions here. What is a PaaS itself as distinct from APIs being consumed? What of something like Kinvey's "backend-as-a-service" for mobile? Is that a mobile PaaS? What of something like Force.com, which analyst Judith Hurwitz describes as a PaaS anchored to a SaaS environment?

My tentative answer to these questions is that a PaaS is a PaaS—something that provides an abstraction for developers—while services (including backend services) are, well, services that a PaaS application can consume. (That said, I'm open to the idea that PaaS environments might be constructed in ways that are optimized for specific vertical or horizontal (e.g. mobile) application types.)
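In code terms, the distinction looks roughly like this: the PaaS runs your application, while a backend service is just something the application discovers and calls. PaaS platforms typically inject connection details through environment variables; this is a minimal sketch under that assumption, and the `MYAPP_*` variable names are hypothetical, not any platform's actual convention.

```python
import os

def service_url(name, default_host="localhost", default_port="8080"):
    """Build the base URL for a named backend service from environment
    variables injected by the platform. The MYAPP_<NAME>_HOST/PORT
    naming scheme here is illustrative only."""
    host = os.environ.get("MYAPP_%s_HOST" % name.upper(), default_host)
    port = os.environ.get("MYAPP_%s_PORT" % name.upper(), default_port)
    return "http://%s:%s" % (host, port)
```

The point of the indirection is that the application code stays identical whether the database or cache it consumes lives inside the PaaS or is an external service; only the injected configuration changes.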

PaaS doesn't do enough

On the one hand, this was an unsurprisingly PaaS-friendly crowd. And there was little disagreement with the contention that "PaaS is 100% relevant to enterprise IT now," as one person put it.

At the same time (with the caveat that most enterprises aren't really ready to absorb more than today's PaaS offers), there's an opportunity to do a lot more. Sinclair Schuller of Apprenda made this point in the vein of making it easier to write cloud-aware applications: service-oriented, stateless, modular, and so on.

An interesting question to me, apropos backend services and so forth, is how to make it easier to write applications that are more fundamentally based on consuming services from all over. Is this a function of the PaaS or of client-side tooling? A bit of both, I suspect, although the two are not unrelated. (At a minimum, a PaaS needs to support popular client-side and other tooling, as Red Hat's OpenShift does with Eclipse, Jenkins, Maven, and so forth.)

Polyglot? Open?

So last year. OK. That's a bit flip. But even if the details could be debated, no one was about to stand up on stage and claim such things don't matter. Every PaaS is trending more multi-language and multi-framework (even if they didn't start that way). See my post about OpenShift and polyglot here. And, to whatever degree a given PaaS is actually open, you're sure to find that word in its marketing literature.

Operational models and visibility

I suspect my recent post about PaaS as an abstraction would have been controversial among some panelists throughout the day. From my perspective, part of the value of PaaS is as a bright-line abstraction: what's above the line is yours; what's below the line is ours.

However, to give one example, Carsten Puls of Engine Yard noted that: "At first, customers want to get going. Understanding what's going on under the hood isn't that important. As the application grows, they want more control and go under the hood. Managing that balance through the lifecycle is important."

I suspect that part of the disconnect is specifying who the "customer" here is. I'd maintain that for most Web/Java developers, the above-the-hood view is fine throughout the lifecycle. But, if you expand "customer" to mean "the IT organization," all you're really saying is that just having a hosted PaaS isn't enough; you need a private or hybrid PaaS that allows ops to get as far under the covers as needed.

How high level can we make things?

Larry Carvalho asked me if PaaS could make it so business users (not IT) could develop useful applications. I'm a bit skeptical; it's an idea with a generally unsuccessful history (unless you count spreadsheets). But maybe, to the degree that we can make services more easily consumable and more easily interconnected, and to the degree that we can package even higher levels of abstraction. Something like OpenShift's cartridge system could possibly evolve into such a mechanism.

Enough for now

All of these thoughts (and more) need more fleshing out. But those were some of my top-of-mind takeaways from the day.

About Me

I'm a technology evangelist for Red Hat, the leading provider of commercial open source software. I'm a frequent speaker at customer and industry events. I also write extensively on, and develop strategy for, Red Hat's hybrid cloud portfolio.

Prior to Red Hat, as an IT industry analyst, I wrote hundreds of research notes, was frequently quoted in publications such as The New York Times on a wide range of IT topics, and advised clients on product and marketing strategies. Among other hobbies, I do a lot of photography and enjoy the outdoors.