Posted by timothy on Thursday July 16, 2009 @07:37PM from the those-slides-are-a-bit-dense dept.

An anonymous reader writes "Twelve years ago OpenBSD developers started engineering a release process that has resulted in quality software being delivered on a consistent 6-month schedule — 25 times in a row, exactly on the date promised, and with no critical bugs. This on-time delivery process is very different from how corporations manage their product releases and much more in tune with how volunteer-driven communities are supposed to function.
Theo de Raadt explains in this presentation how the OpenBSD release process is managed (video) and why it has been such a success."

No foaming at the mouth tantrums that someone is using your code and not kissing your fat ugly ass in reverence.

Over the years I've learned that BSD developers are engineers while GPL developers are ideologues, i.e. wackos and nutcases.

Thank god BSD is well on its way to ridding itself of GCC and already has the amazing LLVM compiler tech building the system. The efforts the GNU crowd has made to keep open source developers locked into their compiler are sickening to anyone who likes to believe the open source world is some sort of technological marketplace of ideas compared to the Microsoft world.

Every BSD project I've followed or participated with has been a positive experience due to those types of licensed projects attracting engineers who just want to write good code and want their code to be available and free to everyone to make good use of it.

No foaming at the mouth tantrums that someone is using your code and not kissing your fat ugly ass in reverence.

You definitely don't know Theo de Raadt.

The efforts the GNU crowd has made to keep open source developers locked into their compiler are sickening to anyone who likes to believe the open source world is some sort of technological marketplace of ideas compared to the Microsoft world.

Yeah, how dare they make a superior product, couldn't they have made GCC suck a bit more so the alternatives wouldn't look so bad?

Every BSD project I've followed or participated with has been a positive experience due to those types of licensed projects attracting engineers who just want to write good code and want their code to be available and free to everyone to make good use of it.

Same here, and the same goes for GPL'ed projects. In all cases, it's the users (and the occasional Slashdot troll) who make them look bad. Well, except for Theo's yearly foaming-at-the-mouth, but he's such a talented engineer we're ready to let that one pass.

You do realise we're discussing an article about Theo de Raadt, right? From what I've read, he's just as much of a BSD kook and idealist as RMS is. Just he gets less flak for it because he (currently, at least) is much more respected for his *engineering* contributions than RMS is.

Over the years I've learned that BSD developers are engineers while GPL developers are ideologues, i.e. wackos and nutcases.

Wow, so I'm a wacko and a nutcase, and not a real engineer? Sorry, not buying it.

Thank god BSD is well on its way to ridding itself of GCC and already has the amazing LLVM compiler tech building the system. The efforts the GNU crowd has made to keep open source developers locked into their compiler are sickening to anyone who likes to believe the open source world is some sort of technological marketplace of ideas compared to the Microsoft world.

Yes, because everyone is just so completely *required* to use gcc. You can't use icc, Sun cc, MSVC, or anything else.

How exactly does GCC "lock in" anyone into using "GNU tech"? How exactly is GCC "not truly free"? The only way in which GCC limits you in licensing your code is that you can't build your own shiny new proprietary compiler on top of GCC, and that is a good thing. The "document format" in question here is C. GNU is not claiming ownership of C.

And yes, LLVM is nice, for many things. By the same token, it's no panacea, either.

Wow, just wow. First, that is not a tantrum. Second, he is 100% correct. Trying to alter someone else's copyright notice is a gigantic legal fuckup. Third, all he asks is a lack of modification of copyright notice, no ass-kissing. Fourth, you are a troll.

That video can serve as a lesson to others on how to manage a project for an extended period of time and keep things consistent and predictable.

I'd say to limit this "lesson" to an Open Source project, not just any project. His points are good strategic choices, which are well reasoned, even if some of them are incomplete or not fully explained in the length of the talk. He makes caveats throughout the presentation to exclude traditional choices found in commercial enterprise-level development due to the

This is not just the kernel, it's all remote holes in the default installation. Meaning there have only ever been two (known) vulnerabilities whereby a vanilla install of OpenBSD can be compromised. With the exception of those two holes, any version of OpenBSD is still totally secure today.

Even so, it looks better than something like a typical Linux distro (only two remote kernel vulnerabilities in the last six months, plus however many application vulnerabilities you got from all of the extra stuff that's installed by default)

Obviously I agree it will be better, but most distros have ssh disabled by default, so the only remote vulnerabilities to count are kernel ones.

only two remote kernel vulnerabilities in the last six months

No security-conscious distro ships with a vanilla kernel; typically it's an older (2-3 versions, 6-9 months back) hardened kernel that has had most vulnerabilities ironed out. Obviously some vulnerabilities come out that affect all recent versions, or all 2.6.x versions, but generally counting vulns on the vanilla kernel is a very bad measure. TBH that's why I asked; I have no idea how much

Linux (1991--present): The code base has never forked. The release process has remained largely in the hands of Alan Cox and Linus Torvalds throughout its history, and except for some cosmetic differences, patch submission and integration has been handled the same way. Most people consider the two head developers and various major contributors to be, on the whole, pretty nice guys, though the snafu with loading binary blobs, and the driver architecture supporting 'non-free' elements in kernel-space was notable for the high level of frustration on all sides.

OpenBSD (1994--present): Forked from NetBSD (1993--present), which forked from 386BSD (1992--1994), which originally derived its codebase from BSD4 (1977--1995). The history of BSD is a blood-bath of politics leading to forks; most of the developers of the *BSDs are variously referred to as "difficult, abrasive, etc.," although Theo, to his credit, has had a major change in reputation over the past several years.

Historically, the BSD variants have enjoyed a smaller uptake in the market, and casual open source contributors find it difficult to get involved because of cultural/political differences. They also tend to fragment, as noted by the number of variants, which further weakens their position. Linux, on the other hand, likely enjoys a much broader userbase and more contributions due to its more relaxed community standards and the general approachability of its core team. I would say the "release process works", but by feature count, contributions, and hardware support, the process is full of fail. Does that mean it's a failed project? No--I'm just saying that the differing priorities and political/cultural values held by the core developers have had an overwhelming impact. Businesses might appreciate the consistency of the release schedule and the relatively bug-free nature of those releases, but looking at market share it's pretty clear those are not the priorities for most businesses.

Alan Cox hasn't really been an important figure in Linux for like 10 years.

10 years? I disagree, it hasn't been that long; I'd say 5 or 6 years, since 2.5 started and akpm became Linus' right hand. And while he has not been as active as he used to be, he still contributes quite frequently (50 changes in 2.6.30, 1032 in the last 10 versions), and he is quite active in the mailing lists. And the kind of work he does is not exactly easy; in the last year he has been fixing the tty locking, a long-overdue task that not many hackers (if any) dared to do. He has also been a quite active libata/ide contributor (including new drivers), maintains the 8250 serial driver and edac-related things, and sends patches that touch many other places of the tree. He doesn't have the responsibility he used to have, but I wouldn't say he is not an important figure.

They have different philosophies. I really don't know where you're going with that post because it isn't very accurate. You can't compare the "Linux Kernel" with OpenBSD's whole. A kernel is pretty much useless without a "userland." OpenBSD, FreeBSD, NetBSD are all operating systems. Linux, sorry to say, is not.

If you want to compare BSD versions to Linux versions, then you'd have to compare with (in no particular order): Gentoo, Debian, Ubuntu, Xubuntu, Xandros (how many more are there?), Slackware, RedHat, Ubuntu... because I can't even keep track.

So, you have a million confusing projects going on based on the same code, all called "Linux". How many versions of "OpenBSD" are there out there? Umm, ONE. Sure, someone could go and make their own userland and such, but it cannot be called OpenBSD. So, before you go on a rant about how many times BSD has been forked, please get your facts straight.

They have different philosophies. I really don't know where you're going with that post because it isn't very accurate.

You just said it: They have different philosophies. I'm answering the question of why, and what's come out of those approaches historically.

OpenBSD, FreeBSD, NetBSD are all operating systems. Linux, sorry to say, is not.

I think you're confusing the terms "operating system" and "distribution".

So, you have a million confusing projects going on based on the same code, all called "Linux".

No, I believe they call themselves things like "Redhat" or "Gentoo", etc.

So, before you go on a rant about how many times BSD has been forked, please get your facts straight.

Sir, a full exploration of all of the facts and an exhaustive comparison between all the Unix variants has been the subject of many books, panel discussions, conventions, and academic discourses, and has yet to be fully explored. I think that a high-level overview is both more productive, and better suited, for a humble posting on an electronic forum.

He's not confusing operating system and distribution. The D in BSD is for "Distribution", they practically invented the term.

It's a new idea, though, to have a distribution that wasn't responsible for the kernel. And several terms, like platform and operating system, were invented to differentiate from distribution as a result. IMHO, companies (corporate clients) do not confuse platform, operating system, or distribution; they can only evaluate (assign a value to) a distribution, a set of software tested t

Boot the Linux kernel and nothing else. What can you do with it? Not very much, therefore, just the kernel is not an operating system.

Your original post states that the Linux code base has never been forked; you imply that OpenBSD has. I don't think OpenBSD has been forked after its creation. Who really cares what the code's history was before it became OpenBSD? This article is about OpenBSD release engineering. If it were about BSD release engineering, you would have a point.

A kernel is pretty much useless without a "userland." OpenBSD, FreeBSD, NetBSD are all operating systems. Linux, sorry to say, is not.

Stop trying to redefine the term "Operating System". The rest of what you said might have merit, but once you tried to force your (wrong) interpretation of "Operating System" onto others I lost interest. Please explain to me and others how Linux is not an operating system.

So, before you go on a rant about how many times BSD has been forked, please get your facts straight.

A kernel (in the past also called a "nucleus" or "core") is the central part of the operating system that manages resources and allows other programs to use those resources. In some operating systems, say OS for the IBM mainframe or VMS for the VAX, not only is that core included but also utility programs for systems administration tasks.
So Linux by itself is just a nucleus or kernel or core, while FreeBSD, DragonFly, NetBSD, and Mac OSX include not only a core but utilities to form a complete OS.
easy fo

They are all GNU/Linux. You can't compare BSD to the Linux kernel, but you can compare it to the Linux kernel plus the GNU userland.

The fact there are different distros that share slightly off-sync versions of a common base continuously forking and merging back makes for a more interesting history than the, as the GP aptly described, BSD fork bloodbath.

BSD is for those who want to write free software, while GPL is for those who write free software and want it to be free forever. They may be called ideologue

Your post is off-topic from the video and the Slashdot article. This isn't a comparison about how Linux compares versus OpenBSD. The video, if you watch it, is about how the OpenBSD team manages their releases, meets their agreed upon release dates, and makes sure that each release is a quality product.

The points he discusses in his video revolve around conducting adequate testing of the product and having the developers use the to-be-released system rather than throwing something out as a release and moving on. His points about managing the release process are just as valid if they were applied to manufacturing and releasing cars, paper products, or skateboards as they are to operating systems.

The video, if you watch it, is about how the OpenBSD team manages their releases, meets their agreed upon release dates, and makes sure that each release is a quality product.

Yes, and I'm noting that various cultural and political influences that come from the core developers have a substantial impact on all of the above, and then comparing those influences in similar projects (ie, Linux).

His points about managing the release process are just as valid if they were applied to manufacturing and releasing cars, paper products, or skateboards as they are to operating systems.

And I don't think anyone's going to dispute that there's a different corporate culture at Ford than at Toyota, and that it translates directly to the products those respective brands produce.

This is somewhat of an apples/oranges comparison. Linux proper is principally the kernel, while the development teams for most *BSD variants manage both the BSD kernel and the userland. While it may be the case (and I don't know for sure honestly) that there are no viable forks of the Linux kernel, that really doesn't provide a fair basis for comparison.

I would suggest that a BSD variant (OpenBSD, FreeBSD, etc) is much more analogous to a Linux distribution than just the Linux kernel. When you frame it that way, I think it is safe to say that there is much more fragmentation in the Linux world than the BSD world.

Most of the developers of the *BSDs are variously referred to as "difficult, abrasive, etc.," although Theo, to his credit, has had a major change in reputation over the past several years.

I've never heard that said about anyone in the BSDs but Theo himself. When was the last time you heard complaints about NetBSD or the FreeBSD core team?

They also tend to fragment, as noted by the number of variants, which further weakens their position. Linux, on the other hand [...]

...is even more fragmented. How many Debian derivatives are there? RedHat? What about Gentoo, LFS, etc.? There's probably more similarity (and shared code) between FreeBSD and OpenBSD than between Ubuntu and Slackware.

Cut the BSDs some love. They deserve it, and there's plenty to go around.

The original BSD code base was maintained by UC Berkeley and was a bare-bones system that was used as the basis for many industrial operating systems (e.g. SunOS). It was never meant to be a full-fledged operating system for all usages, so different groups forked in order to target special niches. Similarly, System V would be considered forked (e.g. Solaris). Generally one considers both a base design, as neither was mature enough or managed in a way to solve all of the purposes that were spawned.

386BSD was a port of 4.3BSD to x86, and when development ceased, NetBSD and FreeBSD were created simultaneously to continue development.

It was only the NetBSD/OpenBSD clash that was a political/cultural difference. All others were natural progressions given the maturity of the industry, communication technology, and specializations required. The primary reasons that Linux became successful were (a) the BSD lawsuit and (b) IBM. The SVLUG was one of the earliest user groups, and its archives cite members stating that they switched communities due to concerns at the time. Still, both were equally popular until IBM became involved in the late 90s, promoting it with their illegal spray-painting all over San Francisco. As IBM was a hardware company, the GPL was more attractive than the BSD license due to restricting competitors (Sun) from leveraging IBM's contributions. Before IBM's commitment and promotion of Linux, which was followed by other big vendors like SGI for similar reasons, FreeBSD was arguably more popular (e.g. it was adopted by EBay, Yahoo!, and other startups).

Marketshare? What does market share have to do with this? OpenBSD is for security, secure out of the box. Joe Six-pack has no need for security, so OpenBSD is not for them. FreeBSD has stability and standardization and has been consistent since almost its inception. If you like BSD and need it to run on obscure hardware, then NetBSD is for you. If you want a stable desktop, Linux or one of its flavors. Linux is also a good server. Mac OS X is great if you want user-friendly, and can be customized if need be. Win

I disagree. The "forks" from original BSD weren't really forks. They were Berkeley giving up on it and letting others take over.

Most of the various BSDs are "forks" because they have different purposes. OpenBSD is security-oriented, NetBSD is intended to run on virtually everything that has a CPU, and FreeBSD was intended for more mainstream use.

The only real "schism" I can think of is when Matt Dillon broke off and formed DragonFly BSD. Everything else was pretty much some guys saying "I'm gonna go off and do this instead".

There may not be any real Linux "forks", but that's because Linus has tried very hard to make Linux "one size fits all", and that has resulted in its own set of problems (see the various scheduler wars, for instance.. they were bloody). There are also any number of "branches" in which different patches are applied to the mainline kernel for different purposes.

I disagree. The "forks" from original BSD weren't really forks. They were Berkeley giving up on it and letting others take over.

Berkeley "gave up" exactly once, in 1995. And it wasn't because they made room for others, but because of USL v. BSDi, a lawsuit that probably created the conditions for Linux to rise to power in the first place. Linus himself once said that had there been no legal ambiguity regarding the BSD code base, he probably wouldn't have started a completely new project from scratch.

Second, since you may be unaware of what a "fork" means: it's simply a point where developers take the existing code and then begin independent development on it. With the exception of Minix and Linux, every UNIX-like operating system has its code base derived from the original Unics in some fashion. Every UNIX variant EXCEPT Minix and Linux has forks that trace back to that.

I disagree. The "forks" from original BSD weren't really forks. They were Berkeley giving up on it and letting others take over.

Most of the various BSDs are "forks" because they have different purposes. OpenBSD is security-oriented, NetBSD is intended to run on virtually everything that has a CPU, and FreeBSD was intended for more mainstream use.

First of all they weren't forks directly from Berkeley, they all forked from a dead OS called 386BSD that had a lot of development problems.

Second, everything I've read on the topic indicates this was very much personality-driven and related to 386BSD politics. The "reasoning" behind each BSD was something that was developed later.

In an ideal world, I suppose, 386BSD would have been managed better and there would be no forks.

In an ideal world, I suppose, 386BSD would have been managed better and there would be no forks.

In your "ideal world" I suspect we would all be rather less well off.

I've never understood the appeal of one-size-fits all. Why is it the premise of so many off-the-cuff comments in every venue of discussion?

So far as I can see, it accomplishes two things: makes it easy to criticize others for not getting along, and relieves the commentator of having to learn or understand systems theory, which is subtle and difficult. If only the whales had not split off from the carnivorous ungulates, evolution, in the ideal world, would have accomplished so much more. Put into a real context, the idea barely parses.

Within the prokaryote kingdom, there is a great deal of horizontal gene transfer. Within the BSD clade, there is a great deal of horizontal transfer (of ideas and code) whenever the need arises.

The most profound fork is probably the GPL from the long-standing conventions of public domain, which the BSD license more nearly mimics.

I don't see much difference between the scope of source code and the scope of human interpersonal relationships. In an ideal world, we would all be better off if either A) all information was private, or B) all information was public. Turns out, some people have information they don't wish to share (for a list of reasons which includes every human motivation) so the GPL lacks universal appeal. Turns out, some people have information which they don't wish other people not to share, so neither does the BSD license have universal appeal.

Having the two license camps puts a crimp on horizontal transfer, but it hasn't caused the world to stop turning. Is it fundamentally a bad thing to implement an idea twice, beginning from two different sets of premises? Only if your goal is world domination. For maximizing insight, diversity rocks.

I could continue, but I'm sure the choir has already figured this out, and the sinners are set in their ways.

At the end of the day, fork has become a term of social derision founded upon a monolithic Garden of Eden which never existed, and wouldn't have been a paradise even if it had.

If the only reason to fork is that two parties can't get along (X, libc are possibilities, but I don't know enough of the story) then forking is a mite unseemly, much like a failed marriage. Do open source communities fork more often than any other walk of life? I suspect not. And no, I'm not counting whiner attrition, where one or two guys copy a code base into their own tree, make a dozen patches, and are never heard from again. Does IBM fork every time a deadbeat is fired or quits?

Many of these projects have accomplished things through volunteer collaboration that twenty years ago few would have believed possible, yet they are mostly criticized in retrospect for the occasional loud public spat prior to a parting of ways, by people who are deeply in touch with their inner primate.

Those of us in the results oriented camp are less inclined to praise the false nirvana of pretending to agree when you really don't.

For an interesting comparison, consider the disputes over the years within NASA over the "smaller, faster, cheaper" engineering meme.

A couple of points. First, not everyone regards the binary blobs as a truly horrible situation - we are talking about Linux here and not Hurd. Linux is not a GNU project, and most of those frustrated "on all sides" were on the outside looking in without contributing a single line, and were even at times working at cross purposes (e.g. RMS demanding that gcc stop working on Linux-only optimisations because that wouldn't help Hurd). In the second case I think you are trying to compare success vs exceptional s

The reasons, mechanics and social workings of our process have never been detailed outside the project, but now will be, hopefully providing some insight to others who face delays and quality issues with their own product lines.

He's clearly talking about Microsoft here, but why would he want to help them?

The poster is making the assertion that it works; a lot of people would say their release cycle is a terrible burden on the project.

1. Code freeze happens every six months, meaning you don't get to finish off features and fixes which might have been of huge benefit. It would make much more sense to base your release cycle around features and improvements than some arbitrary number of days.

2. OpenBSD EOLs its releases so quickly that only in the very rare instance that a business is willing to pay through the nose for in-house support will you be able to see your system patched.

3. Six months is way WAY too short a time for a whole new release. Twelve months (if you have to go with the retarded time-based release) would be much less of a drain on resources, as there is a certain amount of work that must go into a release whether it's got useful upgrades or not.

I've used OpenBSD in a production environment, and it doesn't cut it in hardware support or speed. Its firewall was nice, but I've got that in FreeBSD now, which is a far better OS.

I call bullshit on all of that, and I do have a couple OpenBSD systems installed in a commercial setting.

1.) If you wait for the coders to finish up the "cool", uh, sorry, "desperately needed" features, you could just as well put the release date on Independence Day, 2025. Having a fixed date forces the coders to concentrate on the essential instead of the "cool" stuff.

2.) yes, you need to upgrade rapidly. However, your point is misleading. Upgrading OpenBSD has, in all the many upgrades I have made, been no more problematic than, say, running "apt-get update && apt-get upgrade" on Debian.

3.) it's not a "whole new release". It's minor version numbers every six months. And six months can be a damn short time in the security world.

I've used OpenBSD in a production environment, and it doesn't cut it in hardware support or speed.

So you're lamenting why, exactly? If the release cycle isn't even your main problem?

And the canard that de Raadt is an asshole is plain wrong. Anyone who follows OBSD for anything other than a short period of time will know what his, and the team's, refrain is: We make this OS for us, not for you! Your benefit is an unintended consequence. We don't want to be the most popular; we make this OS for us, not for you! You want Linux. We don't talk, we code. We don't suggest bs fea

Secure systems, for a start, should have the ability to control and restrict information at a fine-grained level. Unfortunately, Theo is stubborn that things like MAC and RBAC should not be included, as they are not necessary, which is remarkably short-sighted. DAC has many problems, and any truly secure system should have an alternative. As much as I like OpenBSD for what it is, and as much as I respect the development team, a focus on quality is not the same as a focus on security. Secure by default is a good approach, but is somewhat meaningless, as you are limited in what you can do with it. A true metric would be to look at the vulnerabilities of software in the ports tree, of which there are still plenty.

At the moment, SELinux or RSBAC are far more secure systems, despite those platforms having more vulnerabilities. If you gain a root shell through Apache for example, you will not be able to do a damn thing. On OpenBSD, as there is no defence in depth, the system is yours. Even NetBSD and FreeBSD seem to have more of a focus on actual security, with efforts like SEBSD, executable signatures, PAX/NX support etc.

OpenBSD is quality, top-notch software. It is not, however, a secure system.

There's a variety of standard, well-documented features that anyone can use. Each element on its own has its own vulnerabilities; used in concert correctly, they are very effective and predictable. Unlike SELinux, for instance, which will randomly just not let something happen, emitting a cryptic error message. And while we're at it: why do you trust SELinux? Audited its code?

Given the overall security track record of the Linux distros (Debian SSH RNG anyone?) -- I trust OpenBSD a tiny bit more.

I could be wrong, but I think the primary reason they release so fast is because the OpenBSD team does not attempt to bundle all of existing open source software with their OS like say Debian is trying to do. In *BSD distros, there is the core OS that includes essentially only the operating system and some utilities, and then there is the ports collection. I believe a serious bug in some port package will not halt the release process of a BSD distribution, at least for non-essential ports.

As someone who had used Linux quite extensively for the past 11 years, I recently started rolling out OpenBSD servers at my job. Two OpenBSD firewalls power our production network (using CARP/pfsync) and they do it flawlessly.

In our office, an OpenBSD firewall connected to two DSL modems is able to load balance traffic out, and do proper asymmetric routing. All this thanks to the developers who make a lot of great, innovative code for pf, CARP, pfsync, etc..

I couldn't do any of this properly with Linux, especially not the asymmetric routing.
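For the curious, outbound load balancing across two uplinks in pf is done with route-to and a round-robin address pool, along the lines of the example in the OpenBSD PF FAQ. Here's a minimal pf.conf sketch; the interface names, gateways, and LAN network are placeholders, not the poster's actual configuration:

```
# Placeholder interfaces and addresses -- adapt to your own setup.
ext_if1 = "fxp0"          # first DSL uplink
ext_if2 = "fxp1"          # second DSL uplink
ext_gw1 = "192.0.2.1"     # gateway on the first uplink
ext_gw2 = "198.51.100.1"  # gateway on the second uplink
int_if  = "fxp2"          # LAN-facing interface
lan_net = "10.0.0.0/24"

# Balance outgoing LAN connections across both uplinks, round-robin.
pass in on $int_if route-to \
    { ($ext_if1 $ext_gw1), ($ext_if2 $ext_gw2) } round-robin \
    from $lan_net to any keep state

# Steer traffic that ended up on the wrong external interface
# back out through the gateway it belongs to.
pass out on $ext_if1 route-to ($ext_if2 $ext_gw2) from $ext_if2 to any
pass out on $ext_if2 route-to ($ext_if1 $ext_gw1) from $ext_if1 to any
```

Note that this only balances per-connection, not per-packet, and protocols that dislike a changing source address may need to be pinned to one uplink.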

I've worked on OpenBSD ports to make them better. I've found the developers friendly and helpful. The code is quite solid.

Not that that would be a bad thing. The majority of people in the world are average, by definition. When a truly extraordinary person tells them what to do, and they shut up and do it, the collective ability of the group is far greater than the mere sum of its parts. If the extraordinary person happens to be a bit of a knob, that's irrelevant if they are all focused on the desired result and not their own silly little egos.

"Oh noes, he told me my code was stupid and wants me to do it again! Cry cry cry!"

If Theo tells you your code is stupid, then it is. End of story. Do it again. Yes, there are better ways to deal with people, but seriously, Theo gets knocked for his personality not because it's really that big a deal, but more because ordinary people are jealous of his enormous capacity.

Get over it people. Theo's good at what he does, OpenBSD could and would not exist without him, and the world is a better place for it.

The problem with your statement is that you assume that Theo is perfect. If he's not (and he's definitely not, just like all of us), then a "shut up and do what I say, and I don't need you trying to explain to me why I'm wrong, cause I'm never wrong!" mentality will lead to a disaster when he gets something wrong. You could say that it's alright so long as, on average, he's right more often than he's wrong; however, the real problem with mistakes combined with arrogance is that the mistakes often tend to become grave in such circumstances. It's the "fuhrer problem": it's very tempting to put a brilliant guy in complete control with no backup, but it only works for a limited time in practice.

Theo's good at what he does, OpenBSD could and would not exist without him, and the world is a better place for it.

No, but assuming you can operationalize 'intelligence', it's a testable hypothesis. And it's always better to explicitly and not implicitly assume. Otherwise people think you're hiding something, 644bd346996 -- if that is even your real name.

Excuse the pedantry, but you're making a big assumption in taking it that the majority are near the average. For all you know, half of the people could be extremely dull and the other half extremely bright, leaving no one anywhere near the average.

Ummm... they've measured IQs, plotted them on a bell curve, and defined "average" as the peak of the curve plus some on each side.

Since that range holds the majority of the population, it's entirely correct to say that the majority of the people in the world are average.

In that case, you'd better specify to WHICH average you are referring and your exact definition of "most", because, mathematically, there are three averages which can be taken from the data given:

MEAN: sum the data and divide by the count: 30 / 8 = 3.75
MEDIAN: sort the data and take the middle item (or the mean of the middle two if the count is even): (3 + 4) / 2 = 3.5
MODE: the item that appears most often (highest frequency count): 1
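For what it's worth, the figures above are consistent with a data set like [1, 1, 2, 3, 4, 5, 6, 8] (an assumed example; the original comment's actual data isn't shown here), and Python's statistics module reproduces all three:

```python
import statistics

# Assumed data set, chosen only to match the figures quoted above
# (eight items summing to 30); the original data isn't shown here.
data = [1, 1, 2, 3, 4, 5, 6, 8]

print(statistics.mean(data))    # → 3.75 (30 / 8)
print(statistics.median(data))  # → 3.5  (mean of the middle pair, 3 and 4)
print(statistics.mode(data))    # → 1    (the only value that appears twice)
```

Which is exactly the point: "most people are average" means something different under each of the three definitions.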

What I got out of it was that the core developers, not some other group, do the testing. Rather than hand the task of quality control/testing to some other group just prior to release, all developers are held to a high level of participation in this regard. Theo and other developers use nightly builds in their day-to-day work and the entire system compiles most every night.
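As a rough sketch of that idea (hypothetical script, not OpenBSD's actual tooling, which drives make(1) in /usr/src): an automated nightly build whose pass/fail result is visible to every developer the next morning.

```python
import datetime
import subprocess

def nightly_build(cmd=("make", "build"), cwd="/usr/src"):
    """Run the tree's build command once and report pass/fail.

    The command and path are placeholders for illustration; OpenBSD's
    real process builds the whole source tree with make(1).
    """
    stamp = datetime.date.today().isoformat()
    try:
        subprocess.run(cmd, cwd=cwd, check=True)
        return f"{stamp}: build OK"
    except (subprocess.CalledProcessError, OSError):
        return f"{stamp}: build FAILED"
```

The point isn't the script, it's the policy: the same tree everyone works in gets built every night, so breakage surfaces within a day instead of during a release crunch.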

That ability to eat your own dogfood for real sounds pretty crucial to the strategy. Unfortunately, if you're developing software that you don't actually need to use extensively and continuously to get through each day, relying on developers to test it "by using it" is likely to be less reliable and/or predictable.

For example, developers of software for set-top cable DVRs (Motorola developers who write the crap Comcast downloads to my DVR - you know who you are!) may not even subscribe to cable -- and, presumably, never have to live with the software they ship.

1) They do not create a separate branch for a release; the release stays in trunk until it ships. This has the advantage that ALL developers are working toward the release, and the introduction of new features slows as the release approaches. He does not address the disadvantage of this system: many developers sit around idle when their work is completed early in this phase.

2) Everyone tests. There is no test team. All developers test things before a release. He does not talk about agile and how everyone should be testing their own stuff anyways.

Point 1) was interesting. It works for them because they are volunteer-based: they are not paying the salaries of the developers who sit idle during the release phase. It would not work in a corporate environment, because those people are too valuable to be underutilized.

If everyone tests, the developers who are "sitting idle" are spending that idle time testing, no?

It would be pointless to test before all submitted components are integrated. Between the time the first component is submitted and the time the last one is, those developers can test, but it isn't meaningful if the goal is to evaluate the integrated product as a whole.

> It would be pointless to test prior to integration of all submitted components.

It might not be as useful, but it is not pointless. You can find new bugs even when you write just a single unit test for a single function, especially when your system is built up from small independent applications.
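For example (hypothetical function, not code from the thread), even one unit test for one small function can catch a real parsing bug before integration:

```python
# parse_size is a made-up helper; the point is that a single
# unit test for a single function can still surface real bugs.

def parse_size(text):
    """Parse '512', '4K', or '2M' into a byte count."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if text and text[-1].upper() in units:
        return int(text[:-1]) * units[text[-1].upper()]
    return int(text)

def test_parse_size():
    assert parse_size("512") == 512
    assert parse_size("4K") == 4096
    assert parse_size("2m") == 2 * 1024 ** 2  # case-insensitive suffix

test_parse_size()
```

None of that needed the rest of the system to exist yet.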

That assumes all developers are roughly equivalent. But if, say, the filesystem is basically in feature lock while active development is going on in the networking code, the fs developers are likely to be sitting on their hands. Sure, they can test networking features, but that's not their expertise, and their time might be much better spent working on the next-generation fs, which is not going to be in the next release but might be a couple of releases away. A branch/trunk split would let them do exactly that.

Isn't it likely that an fs developer who found themselves "sitting on their hands" would decide to go off and start working on the "big file system feature" so they can check it in a few days into the next release cycle (which is when checking in such "big" features seems to be encouraged)? I'd hope so. Although I have no first-hand knowledge of OpenBSD's development, I suspect a lot of short-lived "branching" really does occur - it's just hidden out of sight of the CM system.

Perhaps developers who feel "idle" (if they exist at all) should be writing automated tests, or at the very least thinking about how to automate stuff like testing for real-time kernel concurrency problems or device-driver weirdness.

You do both. Automated testing is only as good as the people who write the tests. If you make assumptions in the code, and also in the tests, you won't find bugs until users actually use it.
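A small illustration of that trap (made-up code, not from any real project): if the function and its test share the same wrong assumption, the automated test passes and the bug survives until a real user hits it.

```python
def truncate(s, n):
    """Return the first n characters of s.

    BUG: this actually truncates by *bytes*, which only looks
    correct under the (shared) assumption that input is ASCII.
    """
    return s.encode("utf-8")[:n].decode("utf-8", errors="ignore")

def test_truncate():
    # The test makes the same ASCII-only assumption, so it passes.
    assert truncate("hello", 3) == "hel"

test_truncate()

# A human typing accented text exposes the bug:
# truncate("héllo", 3) returns "hé" (2 characters), not "hél".
```

The automated test is green the whole time; only someone using the thing with real-world input notices.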

I know all about how you can rearrange responsibilities so that someone other than the developer writes the tests, blah blah, whatever; it never works 100%. After I finish all of my testing, I have a human try it, and about a third of the time that user will try something that wasn't in the requirements and didn't get tested because it wasn't expected. Maybe a bug comes out of that, maybe it's just documentation that needs updating, but you have to have people using it in real-life situations for it to count as a true test. I've seen automated testing pass things and then users, because they operate more slowly, expose timing or deadlock problems. Real people are needed.

As a developer, I think I'd work faster/better if I knew a quality product would let me work on side projects in the end. If I knew I'd never have time to experiment and play, I'd just trudge along and get depressed. It would be a tremendous morale boost. Development has downtime, unless you work in a slave trade.

I find that in a corporate lifecycle, having two projects to work on helps: as one project approaches release, another is just coming out of a release and into a rapid bugfix push, alternating the primary focus of your development time. This works pretty well, actually.

2) Everyone tests. There is no test team. All developers test things before a release. He does not talk about agile and how everyone should be testing their own stuff anyways.

They do test their own stuff. They also test how their own stuff works with everyone else's changes, rather than in a little sandbox on the side with no interaction with all the other parts. That interaction is where you run into problems: most developers can write small chunks of code that work fine when used exactly as expected, which is rarely what happens once everything is integrated.

To translate into the "agile" buzzwords of the day: they use a two-week sprint cycle, and at the end of each sprint the features for that sprint are complete and working and the product is stable. They ensure this by doing daily builds and testing those builds. Everyone runs the current build (he implies they run the daily build, but I expect upgrading every day is too much hassle, so in fact everyone runs the last sprint build, which is less than two weeks old and has had a brief stabilization period).

It's not rocket science; the notion of small "sprints" with a releasable product ready at the end of each one is fairly well known. All it requires is more discipline than 99% of development teams have. :-) Kudos to them for having the discipline to make it work.

While there are uses for which video is king, video as a way of conveying certain types of information DOES suck. I think most people on here can read MUCH faster, and process information more comprehensively, in written form than from some talking head on a video. This vid has slides, so it's better, but I'd still prefer to read the slides and attached notes than basically be lectured to at someone else's pace. It does more for my comprehension and retention.

That is the secret of its security! OpenBSD is carefully crafted to ensure it either won't run at all, or at the very least won't run long enough for someone to exploit the server. It's really rather clever when you think about it.

hahaha, what a farce. Solaris through version 9 could never be hooked straight to the Internet in a default install without being pwn3d. Who runs a Solaris router or firewall? No one, that's who. Not even Sun marketing droids are dumb enough to spout the shit you just did.
VMS is slower than OpenBSD on comparable hardware running the same code because of its more complicated file system, and DCL scripts are slower than bash scripts.