
There’s no point in sugar-coating this: selecting API and code sections for core requires making hard choices and saying no. DefCore makes this fair by 1) defining principles for selection, 2) going slooooowly to limit surprises and 3) being transparent in operation. When you’re telling someone that their baby is not handsome enough, you’d better be able to explain why.

The truth is that from DefCore’s perspective, all babies are ugly. If we are seeking stability and interoperability, then we’re looking for adults not babies or adolescents.

Explaining why is exactly what DefCore does by defining criteria and principles for our decisions. When we do it right, it also drives a positive feedback loop in the community because the purpose of designated sections is to give clear guidance to commercial contributors about where we expect them to contribute upstream. By making this code required for Core, we are incenting OpenStack vendors to collaborate on the features and quality of these sections.

This does not lessen the undesignated sections! Contributions in those areas are vital to innovation; however, they are, by design, more dynamic, specialized or single vendor than the designated areas.

The seven principles of designated sections (see my post with TC member Michael Still) as defined by the Technical Committee are:

Should be DESIGNATED:

code provides the project external REST API, or

code is shared and provides common functionality for all options, or

code implements logic that is critical for cross-platform operation

Should NOT be DESIGNATED:

code interfaces to vendor-specific functions, or

project design explicitly intended this section to be replaceable, or

code extends the project external REST API in a new or different way, or

code is being deprecated

While the seven principles inform our choices, DefCore needs some clarifications to ensure we can complete the work in a timely, fair and practical way. Here are our additions:

8. UNdesignated by Default

Unless code is designated, it is assumed to be undesignated.

This aligns with the Apache license.

We have a preference for smaller core.

9. Designated by Consensus

If the community cannot reach a consensus about designation then it is considered undesignated.

Time to reach consensus will be short: days, not months

Excepting obvious trolling, this prevents endless wrangling.

If there’s a difference of opinion then the safe choice is undesignated.

10. Designated is Guidance

Loose descriptions of designated sections are acceptable.

The goal is guidance on where we want upstream contributions, not a code-inspection police state.

Guidance will be revised per release as part of the DefCore process.
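The decision rules above are simple enough to sketch in code. This is a minimal illustration, not any real DefCore tooling; the boolean flag names describing a code section are hypothetical:

```python
# Hypothetical sketch of the ten designation principles as a decision rule.
# The flag names are illustrative placeholders, not real DefCore tooling.

def is_designated(section, consensus_reached=True):
    """Apply the designation principles to a described code section.

    `section` is a dict of boolean flags describing the code;
    `consensus_reached` models principle 9 (no consensus means undesignated).
    """
    # Principle 9: without community consensus, the safe choice is undesignated.
    if not consensus_reached:
        return False

    # Principles 4-7: any disqualifier makes the section NOT designated.
    disqualifiers = (
        section.get("vendor_specific", False),
        section.get("designed_replaceable", False),
        section.get("extends_external_api", False),
        section.get("deprecated", False),
    )
    if any(disqualifiers):
        return False

    # Principles 1-3: any qualifier argues FOR designation.
    qualifiers = (
        section.get("provides_external_rest_api", False),
        section.get("shared_common_functionality", False),
        section.get("critical_cross_platform_logic", False),
    )
    # Principle 8: undesignated by default.
    return any(qualifiers)

# A section implementing the project's external REST API is designated;
# an empty description falls through to undesignated-by-default.
print(is_designated({"provides_external_rest_api": True}))  # True
print(is_designated({}))                                    # False
```

Note how principle 8 falls out naturally: with no qualifying flags set, the function returns `False` without any special casing.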

In my next DefCore post, I’ll review how these 10 principles are applied to the Havana release that is going through community review before Board approval.

The OpenStack Core definition process (aka DefCore) is moving steadily along and we’re looking for feedback from the community as we move into the next phase. Until now, we’ve been mostly working out principles, criteria and processes that we will use to answer “what is core” in OpenStack. Now we are applying those processes and actually picking which capabilities will be used to identify Core.

While you will want to jump directly to the review draft matrix (red means needs input), it is important to understand how we got here because that’s how DefCore will resolve the inevitable conflicts. The very nature of defining core means that we have to say “not in” to a lot of capabilities. Since community consensus seems to favor a “small core” in principle, that means many capabilities that people consider important are not included.

The Core Capabilities Matrix attempts to find the right balance between quantitative detail and too much information. Each row represents an “OpenStack Capability” that is reflected by one or more individual tests. We scored each capability equally on a 100-point scale using 12 different criteria. These criteria were selected to respect different viewpoints and needs of the community, ranging from popularity and technical longevity to quality of documentation.

While we’ve made the process more analytical, there’s still room for judgement. Eventually, we expect to weight some criteria more heavily than others. We will also be adjusting the score cut-off. Our goal is not to create a perfect evaluation tool – it should inform the board and facilitate discussion. In practice, we’ve found this approach to bring needed objectivity to the selection process.
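The equal-weight scoring described above is easy to picture in code. The following is a rough illustration only: the criterion names are shortened versions of the ones discussed later in this post, and the ratings are placeholders, not actual DefCore scores:

```python
# Illustrative sketch of equal-weight capability scoring on a 100-point scale.
# Criterion names mirror the twelve criteria in this post; ratings are
# hypothetical placeholders, not real DefCore data.

CRITERIA = [
    "widely_deployed", "used_by_tools", "used_by_clients",
    "future_direction", "stable", "complete",
    "discoverable", "documented", "core_in_last_release",
    "foundation", "atomic", "proximity",
]

def score_capability(ratings):
    """Score a capability 0-100 from per-criterion yes/no ratings.

    With 12 equally weighted criteria, each satisfied criterion
    contributes 100/12 points; the total is rounded to one decimal.
    """
    points_each = 100 / len(CRITERIA)
    return round(sum(points_each for c in CRITERIA if ratings.get(c)), 1)

# A capability meeting half the criteria scores 50 points.
half = {c: True for c in CRITERIA[:6]}
print(score_capability(half))  # 50.0
```

Weighting some criteria more heavily, as the post anticipates, would simply replace the flat `points_each` with a per-criterion weight table.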

So, where does this take us? The first matrix is, by design, old news. We focused on getting a score for Havana to give us a stable and known quantity; however, much of that effort will translate forward. Using Havana as the base, we are hoping to score Ice House ninety days after the Juno summit and score Juno at K Summit in Paris.

These are ambitious goals and there are challenges ahead of us. Since every journey starts with small steps, we’ve put our feet on the path while keeping our eyes on the horizon.

Specifically, we know there are gaps in OpenStack test coverage. Important capabilities do not have tests and will not be included. Further, starting with a small core means that OpenStack will be enforcing an interoperability target that is relatively permissive and minimal. Universally, the community has expressed that including short-term or incomplete items is undesirable. It’s vital to remember that we are looking for evolutionary progress that accelerates our developer, user, operator and ecosystem communities.

How can you get involved? We are looking for community feedback on the DefCore list on this first pass – we do not think we have the scores 100% right. Of course, we’re happy to hear from you however you want to engage: we intentionally named the committee “defcore” to make it easier to cross-reference and search.

We will eventually use Refstack to collect voting/feedback on capabilities directly from OpenStack community members.

These categories summarize critical values that we want in OpenStack and so make sense to be the primary factors used when we select core capabilities. While we strive to make the DefCore process objective and quantitative, we must recognize that these choices drive community behavior.

With this perspective, let’s review the selection criteria. To make it easier to cross reference, we’ve given each criterion a shortened name:

Shows Proven Usage

“Widely Deployed” Candidates are widely deployed capabilities. We favor capabilities that are supported by multiple public cloud providers and private cloud products.

“Used by Tools” Candidates are widely used capabilities: should be included if supported by common tools (RightScale, Scalr, CloudForms, …)

“Used by Clients” Candidates are widely used capabilities: should be included if part of common libraries (Fog, Apache jclouds, etc.)

Aligns with Technical Direction

“Future Direction” Should reflect future technical direction (from the project technical teams and the TC) and help manage deprecated capabilities.

“Stable” The test must have been stable for more than two releases because we don’t want core capabilities that do not have dependable APIs.

“Complete” Where the code being tested has a designated area of alternate implementation (extension framework) as per the Core Principles, there should be parity in capability tested across extension implementations. This also implies that the capability test is not configuration specific or locked to non-open technology.

Plays Well with Others

“Discoverable” Capability being tested is Service Discoverable (can be found in Keystone and via service introspection)

“Doc’d” Should be well documented, particularly the expected behavior. This can be a very subjective measure and we expect to refine this definition over time.

“Core in Last Release” A test that is a must-pass test should stay a must-pass test. This makes core capabilities sticky from release to release. Leaving core is disruptive to the ecosystem.

Takes a System View

“Foundation” Test capabilities that are required by other must-pass tests and/or depended on by many other capabilities

“Atomic” The capability is unique and cannot be built out of other must-pass capabilities

“Proximity” (sometimes called a Test Cluster) selects for Capabilities that are related to Core Capabilities. This helps ensure that related capabilities are managed together.

Note: The 13th “non-admin” criterion has been removed because Admin APIs cannot be used for interoperability and cannot be considered Core.
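To make the “Discoverable” criterion concrete, here is a minimal sketch of checking whether a capability’s backing service appears in a Keystone service catalog. The catalog below is a hard-coded, hypothetical sample; a real check would query Keystone’s v3 services API with an auth token:

```python
# Minimal sketch of the "Discoverable" check: a capability's backing
# service should be findable via Keystone. The catalog here is a
# hard-coded hypothetical sample, not a live Keystone response.

SAMPLE_CATALOG = [
    {"type": "compute", "name": "nova", "enabled": True},
    {"type": "image", "name": "glance", "enabled": True},
    {"type": "volume", "name": "cinder", "enabled": False},
]

def is_discoverable(service_type, catalog=SAMPLE_CATALOG):
    """Return True if an enabled service of this type is in the catalog."""
    return any(
        entry["type"] == service_type and entry["enabled"]
        for entry in catalog
    )

print(is_discoverable("compute"))  # True: nova is listed and enabled
print(is_discoverable("volume"))   # False: cinder is disabled in this sample
```

A capability whose service fails this kind of lookup would lose the “Discoverable” points in the scoring matrix.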

We heard overwhelmingly at the Hong Kong summit that defining core should be a major focus for the Board.

The good news is that we’re doing exactly that in DefCore. Our challenge is to go quickly but not get ahead of community consensus. So far, that means eating the proverbial elephant in small bites and intentionally deferring topics where we cannot find consensus.

This meeting was primarily about Joshua and me figuring out how to drive DefCore quickly (go fast!) without exceeding the community’s ability to review and discuss (build consensus!). While we had future-post-worthy conceptual discussions, we had a substantial agenda of get-it-done items in front of us too.

Here’s a summary of key outcomes from the meeting:

1) We’ve established a tentative schedule for our first two meetings (12/3 and 12/17).

We’ve started building agendas for these two meetings.

We’ve also established governance rules that require members to do homework!

2) We’ve agreed it’s important to present a bylaws change to the committee for consideration by the board.

This change is to address confusion around how core is defined and possibly move towards the bylaws defining a core process not a list of core projects.

This is on an accelerated track because we’d like to include it with the Community Board Member elections.

3) We’ve broken DefCore into clear “cycles” so we can be clearer about concrete objectives and what items are out of scope for a cycle. We’re using names to designate cycles for clarity.

The first cycle, “Spider,” was about finding the connections between core issues and defining a process to resolve the tension in those connections.

This cycle, “Elephant,” is about breaking the Core definition into bite-sized pieces.

The next cycle(s) will be named when we get there. For now, they are all “Future”

We agreed there is a lot of benefit from being clear to the community about items that we “kick down the road” for future cycles. And, yes, we will proactively cut off discussion of these items out of respect for time.

4) We reviewed the timeline proposed at the end of Spider and added it to the agenda.

The timeline assumes a staged introduction starting with Havana and accelerating for each release.

We are working the timeline backwards to ensure time for Board, TC and community input.

5) We agreed that consensus is going to be a focus for keeping things moving

This will likely drive toward a smaller core definition

We will actively defer issues that cannot reach consensus in the Elephant cycle.

6) We identified some concepts that may help guide the process in this cycle

We likely need to create categories beyond “core” to help bucket tests

Committee discussion is needed but debate will be time limited

7) We identified the need to start on test criteria immediately

Board member John Zannos (in absentia) offered to help lead this effort

In defining test criteria, we are likely to have lively discussions about “OpenStack’s values”

8) We identified some out of scope topics that are important but too big to solve.

We are calling these “elephants” (or the elephant in the room).

The list of elephants needs to be agreed by DefCore and clearly communicated

We expect that the Elephant cycle will make discussing these topics more fruitful

9) We discussed (but did not resolve) the possibility of having people run RefStack against public cloud endpoints and post their results

We agreed that RefStack needs to be able to run locally or as a hosted site.

10) We identified a lot of missing communication channels

We created a DefCore wiki page to be a home for information.

Joshua and I (and others?) will work with the Foundation staff to create a “what is core” video to help the community understand the Principles and objectives for the Elephant cycle.

We are in the process of setting up mail lists, IRC, blog tags, etc.

Yikes! That’s a lot of progress priming the pump for our first DefCore meeting!

* We picked “DefCore” for the core definition committee name. One overriding reason for the name is that it has very clean search results. Since the word “core” is so widely used, we wanted to make sure that commentary on this topic is easy to track against the noisy term core. We also liked 1) the reference to DefCon and 2) that the Urban Dictionary defines it as going deaf from standing too close to the speakers.

While the current thinking of a testing-based definition of Core adds pressure on expanding our test suite, it seems to pass the community’s fairness checks.

Overall, the discussions lead me to believe that we’re on the right track because they jump from process to impacts. It’s not too late! We’re continuing to get community feedback. So what’s next?

These discussions are expected to have online access via Google Hangout. Watch Twitter when the event starts for a link.

Want to discuss this in your meetup? Reach out to me or someone on the Board and we’ll be happy to find a way to connect with your local community!

What’s Next? Implementation!

So far, the Core discussion has been about defining the process that we’ll use to determine what is core. Assuming we move forward, the next step is to implement that process by selecting which tests are “must pass.” That means we have to both figure out how to pick the tests and do the actual work of picking them. I suspect we’ll also find testing gaps that will have developers scrambling in Ice House.

Here’s the possible (aggressive) timeline for implementation:

November: Approval of approach & timeline at next Board Meeting

January: Publish Timeline for Roll out (ideally, have usable definition for Havana)

March: Identify Havana must pass Tests (process to be determined)

April: Integration w/ OpenStack Foundation infrastructure

Obviously, there are a lot of details to work out! I expect that we’ll have an interim process to select must-pass tests before we can have a fully community-driven methodology.

There is still confusion around the idea that OpenStack Core requires using some of the project code. This requirement helps ensure that people claiming to be OpenStack core have a reason to contribute, not just replicate the APIs.

It’s easy to overlook that we’re trying to define a process for defining core, not core itself. We have spent a lot of time testing how individual projects may be affected based on possible outcomes. In the end, we’ll need actual data.

There are some clear anti-goals in the process that we are not ready to discuss but that are clearly going to become issues quickly. They are:

Using the OpenStack name for projects that pass the API tests but don’t implement any OpenStack code. (e.g.: an OpenStack Compatible mark)

Having speciality testing sets for flavors of OpenStack that are different than core. (e.g.: OpenStack for Hosters, OpenStack Private Cloud, etc)

We need to be prepared that the list of “must pass” tests identifies a smaller core than is currently defined. It’s possible that some projects will no longer be “core.”

The idea that we’re going to use real data to recommend tests as must-pass is positive; however, the time it takes to collect the data may be frustrating.

People love to lobby for their favorite projects. Gaps in testing may create problems.

We are about to put a lot of pressure on the testing efforts and that will require more investment and leadership from the Foundation.

Some people are not comfortable with self-reporting test compliance. Overall, market pressure was considered enough to punish cheaters.

There is a perceived risk of confusion as we migrate between versions. OpenStack Core for Havana seems too specific, but there is concern that vendors may pass in one release and then skip re-certification. Once again, market pressure seems to be an adequate answer.

It’s not clear if a project with only 1 must-pass test is a core project. Likely, it would be considered core. Ultimately, people seem to expect that the tests will define core instead of the project boundary.
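That expectation, that the tests rather than project boundaries define core, can be sketched with a hypothetical test-to-project mapping (the test names and projects below are illustrative placeholders, not the actual must-pass list):

```python
# Hypothetical sketch: derive the set of "core" projects from the
# must-pass test list instead of declaring projects core directly.
# Test names and the project mapping are illustrative placeholders.

MUST_PASS_TESTS = {
    "test_server_create": "nova",
    "test_image_list": "glance",
    "test_token_issue": "keystone",
}

ALL_PROJECTS = {"nova", "glance", "keystone", "trove", "heat"}

def core_projects(must_pass):
    """A project is core if at least one must-pass test exercises it."""
    return set(must_pass.values())

core = core_projects(MUST_PASS_TESTS)
print(sorted(core))                 # ['glance', 'keystone', 'nova']
print(sorted(ALL_PROJECTS - core))  # projects left outside core
```

Under this model, a project with only one must-pass test is still core, which matches the expectation above, and dropping all of a project’s tests would silently move it out of core.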

Of course, this will also be a topic at the summit (Alan and I submitted two sessions about this). The Board needs to move this forward in the November meeting, so NOW is the time to review and give us input.