Sex, software, politics, and firearms. Life's simple pleasures…

Evaluating the harm from closed source

Some people are obsessive about never using closed-source software under any circumstances. Some other people think that because I’m the person who wrote the foundational theory of open source I ought to be one of those obsessives myself, and become puzzled and hostile when I demur that I’m not a fanatic. Sometimes such people will continue by trying to trap me in nutty false dichotomies (like this guy) and become confused when I refuse to play.

A common failure mode in human reasoning is to become too attached to theory, to the point where we begin ignoring the reality it was intended to describe. The way this manifests in ethical and moral reasoning is that we tend to forget why we make rules – to avoid harmful consequences. Instead, we tend to become fixated on the rules and the language of the rules, and end up fulfilling Santayana’s definition of a fanatic: one who redoubles his efforts after he has forgotten his aim.

When asking the question “When is it wrong (or right) to use closed-source software?”, we should treat it the same way we treat every other ethical question. First, by being very clear about what harmful consequences we wish to avoid; second, by reasoning from the avoidance of harm to a rule that is minimal and restricts people’s choices as little as possible.

In the remainder of this essay I will develop a theory of the harm from closed source, then consider what ethical rules that theory implies.

Ethical rules about a problem area don’t arise in a vacuum. When trying to understand and improve them it is useful to start by examining widely shared intuitions about the problem. Let’s begin by examining common intuitions about this one.

No matter how doctrinaire or relaxed about this topic they are, most people agree that closed-source firmware for a microwave oven or an elevator is less troubling than a closed-source desktop operating system. Closed-source games are less troubling than closed-source word processors. Any closed-source software used for communications among people raises particular worries that the authors might exploit their privileged relationship to it to snoop or censor.

There are actually some fairly obvious generative patterns behind these intuitions, but in order to discuss them with clarity we need to first consider the categories of harm from closed-source software.

The most fundamental harm we have learned to expect from closed source is that it will be poor engineering – less reliable than open source. I have made the argument that bugs thrive on secrecy at length elsewhere and won’t rehash it here. This harm varies in importance according to the complexity of the software – more complex software is more bug-prone, so the advantage of open source is greater and the harm from closed source more severe. It also varies according to how serious the expected consequences of bugs are; the worse they get, the more valuable open source is. I’ll call this “reliability harm”.

Another harm is that you lose options you would have if you were able to modify the software to suit your own needs, or have someone do that for you. This harm varies in importance according to the expected value of customization; greater in relatively general-purpose software with a large range of potential use cases for modified versions, less in extremely specialized software tightly coupled to a single task and a single deployment. I’ll call this “unhackability harm”.

Yet another harm is that closed-source software puts you in an asymmetrical power relationship with the people who are privileged to see inside it and modify it. They can use this asymmetry to restrict your choices, control your data, and extract rent from you. I’ll call this “agency harm”.

Closed source increases your transition costs to get out of using the software in various ways, making escape from the other harms more difficult. Closed-source word processors using proprietary formats that no other program can fully handle are the classic example of this, but there are many others. I’ll call this “lock-in harm”.

[Update, two days later] A commenter points out another kind of harm from closed source: secrets can be lost, taking capabilities with them. There are magnetic media from the early days of computing – some famous cases include data of great historical interest recorded by the U.S. space program in the 1960s – that are intact but cannot be read because they used secret, proprietary data formats embodied only in hardware and specifications that no longer exist. This typifies an ever-present risk of closed-source software that becomes more severe as software-mediated communication gets more important. I’ll call this “amnesia harm”.

Finally, a particular software product is said to have “positive network externalities” when its value to any individual rises with the number of other people using it. Positive network externalities have consequences like those of lock-in harm; they raise the cost of transitioning out.
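
These categories lend themselves to a toy scoring model. The sketch below is purely illustrative – the harm categories come from this essay, but the 0–3 scale, the unweighted sum, and the per-case scores are arbitrary placeholder guesses, not measurements:

```python
from dataclasses import dataclass

@dataclass
class HarmProfile:
    """Scores, 0 (negligible) to 3 (severe), for each category of harm.

    Every number assigned below is an illustrative guess, not a measurement.
    """
    reliability: int    # harm from bugs thriving on secrecy
    unhackability: int  # lost options to modify the software
    agency: int         # asymmetrical power relationship with the vendor
    lock_in: int        # raised transition costs (proprietary formats etc.)
    amnesia: int        # capabilities lost when secrets are lost
    externalities: int  # positive network externalities act like lock-in

    def total(self) -> int:
        # Unweighted sum; a serious analysis would weight by context.
        return (self.reliability + self.unhackability + self.agency
                + self.lock_in + self.amnesia + self.externalities)

# Rough scores for three of the cases discussed in this essay.
elevator_firmware = HarmProfile(0, 0, 0, 0, 1, 0)
game = HarmProfile(0, 1, 0, 0, 1, 1)
desktop_os = HarmProfile(2, 3, 3, 3, 3, 3)

# The ordering, not the absolute numbers, is the point.
assert elevator_firmware.total() < game.total() < desktop_os.total()
```

A model like this only formalizes the comparisons made informally below: highly specialized firmware scores near zero on every axis, while a desktop operating system scores high on nearly all of them.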

With these concepts in hand, let’s look at some real-world cases.

First, firmware for things like elevators and microwave ovens. Low reliability harm, because (a) it’s relatively easy to get right, and (b) the consequences of bugs are not severe – the most likely consequence is that the device just stops dead, rather than (say) hyper-irradiating you or throwing you through the building’s roof. Low unhackability harm – not clear what you’d do with this firmware if you could modify it. Low agency harm; it is highly unlikely that a toaster or an elevator will be used against you, and if it were it would be as part of a sufficiently larger assembly of surveillance and control technologies that simply being able to hack one firmware component wouldn’t help much. No lock-in harm, and no positive externalities. [There is some potential for amnesia harm if the firmware embodies good algorithms or tuning constants that can’t be recovered by reverse-engineering.]

Because it scores relatively low on all these scales of harm, highly specialized device firmware is the least difficult case for tolerating closed source. But as firmware develops more complexity, flexibility, and generality, the harms associated with it increase. So, for example, closed-source firmware in your basement router can mean serious pain – there have been actual cases of it hijacking DNS, injecting ads into your web browsing, and so on.

At the other end of the scale, desktop operating systems score moderate to high on reliability harm (depending on your application mix and the opportunity cost of OS failures). They score high on unhackability harm even if you’re not a programmer, because closed source means you get fixes and updates and new features not when you can invest in them but only when the vendor thinks it’s time. They score very high on agency harm (consider how much crapware comes bundled with a typical Windows machine) and very high on lock-in [and amnesia] harm (closed proprietary file formats, proprietary video streaming, and other such shackles). They have strong positive externalities, too.

Now let’s talk about phones. Closed-source smartphone operating systems like iOS have the same bundle of harms attached to them that desktop operating systems do, and for all the same reasons. The interesting thing to notice is that dumbphones – even when they have general-purpose processors inside them – are a different case. Dumbphone firmware is more like other kinds of specialized firmware – there’s less value in being able to modify it, and less exposure to agency harm. Dumbphone firmware differs from elevator firmware mainly in that (a) there’s some lock-in [and amnesia] harm (dumbphones jail your contacts list) and (b) in being so much more complex that the reliability harm is actually something of an issue.

Games make another interesting intermediate case. Very low reliability harm – OK, it might be annoying if your client program craps out during a World of Warcraft battle, but it’s not like having your financial records scrambled or your novel manuscript trashed. Moderate unhackability harm; if you bought a game, it’s probably because you wanted to play that game rather than some hypothetical variant of it, but modifying it is at least imaginable and sometimes fun (thus, for example, secondary markets in map levels and skins). No agency harm unless they’re embedding ads. No lock-in harm, [low odds of amnesia harm,] some positive externalities.

Word processors (and all the other kinds of productivity software they’ll stand in for here) raise the stakes nearly to the level of entire operating systems. Moderate to high reliability harm, again depending on your actual use case. High unhackability harm for the same reasons as OSes. Lower agency harm than an OS, if only because your word processor doesn’t normally have an excuse to report your activity or stream ads at you. Very high lock-in [and amnesia] harm. If the overall harm from closed source is less here than for an OS, it’s mainly because productivity programs are a bit less disruptive to replace than an entire OS.

So far I haven’t made any normative claims. Here’s the only one I really need: we should oppose closed-source software, and refuse to use it, in direct proportion to the harms it inflicts.

That sounds simple and obvious, doesn’t it? And yet, there are people who I won’t name but whose initials are R and M and S, who persist in claiming that this position isn’t an ethical stance, that it is somehow fatally unprincipled. Which is what it looks like when you’ve redoubled your efforts after forgetting your aim.

Really, this squishy “unprincipled” norm describes the actual behavior even of people who talk like fanatics about closed source being evil. Who, even among the hardest core of the “free software” zealots, actually spends any effort trying to abolish closed-source elevator firmware? That doesn’t happen; desktop and smartphone OSes make better targets because they’re more important – and with that pragmatism, we’re right back to comparative evaluation of consequential harm, even if the zealot won’t acknowledge that to himself.

Now that we have this analysis, it leads to conclusions few people will find surprising. That’s a feature, actually; if there were major surprises it would suggest that we had wandered too far away from the intuitions or folk theory we’re trying to clarify. Conclusions: we need to be most opposed to closed-source desktop and smartphone operating systems, because those have the most severe harms and the highest positive-externality stickiness. We can relax about what’s running in elevators and microwave ovens. We need to push for open source in basement routers harder as they become more capable. And the occasional game of Angry Birds or Civilization or World of Warcraft is not in fact a terrible act of hypocrisy.

One interesting question remains. What is the proper ethical response to situations in which there is no open-source alternative?

Let’s take this right to an instructive extreme – heart pacemakers. Suppose you have cardiac arrhythmia; should you refuse a pacemaker because you can’t get one with open-source firmware?

That would be an insane decision. But it’s the exact kind of insanity that moralists become prone to when they treat normative rules as worship objects or laudable fixations, forgetting that these rules are really just devices for the avoidance of harm and pain.

The sane thing to do would be to notice that there are kinds of harm in the world more severe than the harm from closed source, remember that the goal of all your ethical rules is the reduction of harm, and act accordingly.

What if the Therac-25 had been open source? If you were pulled over for a DUI, would you want the breathalyzer to be open source?

On BIOS and rms: apparently if the chips are ROM and soldered, rms has no problem. I can’t normally get schematics and layouts (much less editable CAD files) for anything in my computer. Does an FPGA run ‘software’, or is it a logic chip? Are the ‘firmware’ blobs in Linux’s /lib/firmware code or data (instructions or data)? Even if they are code, if there is no rational reason or ability to modify them, how does it violate the principle?

I think there is a line – if you can certify, learn, and/or improve by modification or addition – remix – it should be open.

Seems to me there may be another category of harm – which I’ll propose to refer to as “lock-out harm”. Something like this … the existence of a closed-source product in a particular space that has a long feature list (i.e. complexity and code size) and a deep-pockets funding source tends to discourage the creation of open-source alternatives because of the very high barrier to entry. Only a similarly well-heeled closed-source competitor stands a chance. Whereas if that product were open source, the creation of alternatives is just a fork away. (I take it as a given that the existence of multiple alternatives/competitors is understood to be beneficial.)

I probably haven’t explained that very well, but the example that makes me think of this as a separate category of harm is Intuit’s QuickBooks and the fact that *still* no credible OSS alternative exists. I’ve had quite a few clients that would gladly have replaced their Windows desktops with Linux but the lack of something like QuickBooks made it basically impossible.

If RMS had cardiac arrhythmia, he would accept the pacemaker with closed-source firmware. But he would give his doctor a Very Stern Lecture about the dangers of non-free software, and probably do the same in a letter to the pacemaker manufacturing company.

As soon as a pacemaker with a free firmware stack became available, he’d go under the knife again for it. Even if it means having an assistant on hand with a defib/pacemaker rebooting device in case his pacemaker smashes the stack and he drops in the middle of a lecture or something.

I’m being facetious, but extrapolating based on his behavior w.r.t. for example, computer BIOSes. He was perfectly willing to boot a PC with a proprietary BIOS, as long as that was all that was available. Then he switched to an OLPC with a USB hard disk uncomfortably lashed to it because of its mostly-free BIOS, now he’s driving a Lemote knockoff-MIPS machine from China because of its 100% free BIOS.

There’s another sort of harm that is sort of ‘lock-in lock-out’. An example is when Microsoft Word changed their default format from .doc to .docx. They didn’t have to do it; it was a blatant ploy to get their customers to upgrade. I would list this as a closed-source harm, except how many Linux programs end up requiring you to upgrade your kernel if they’re new, and you want to run them?

I would expand on Michael Hipp’s comment, though. Lock-out harm can actually occur because of open-source software as well. Look at how long LLVM took before it attained enough critical mass to become even somewhat competitive with GCC. And that didn’t happen “organically” — there was some significant funding involved.

My intuition about lock-out harm is that it will occur more frequently when the dominant software package is GPL-licensed, for the simple reason that people who are happy to code on permissively licensed software (a) are not as inflexible in general; and (b) realize that they have expanded the possibility of a fork significantly, by significantly enabling even closed-source competitors. This means that they have more incentive to provide what their users want and need, in order to avoid a fork and to keep critical mass on the project. (This also explains why they are deeply unhappy when somebody decides to take their work and slap the GPL license on top of it — they understand that this fragmentation is driven by politics rather than technical merit or the opportunity to make a buck, and are unhappy at the ideologically-driven potential harm, and at the hypocritical chutzpah of people who believe that if you use their code, you have to give back to them, but if they use your code, they don’t have to give back to you in any sort of form you can reuse in your preferred fashion.)

Although the GPL provides more incentive for lock-out, the incentives aren’t always abused. For example, (even though there are always politics in any human endeavor) Linux tries very hard to accept contributions based solely on technical merit, whereas GCC’s acceptance of contributions was always tempered by the desire to ensure that nobody could “abuse” the software.

So while there is significant lock-out with Linux, it’s not generally a problem, like it was with GCC. Interestingly, LLVM is now good enough that it has caused GCC to step up its game somewhat.

I’d like to see your analysis of social networking sites (such as Facebook and LinkedIn) using this scale, and how that compares to operating systems and word processors. What do you see as the open-source alternative – “personal website with comments and an RSS feed”?

>I’d like to see your analysis of social networking sites (such as facebook and linkedin) using this scale

I’m not really qualified. I’ve never used Facebook or LinkedIn, because I didn’t want my data languishing in a vendor jail, so I don’t have the experience to do a detailed evaluation. I do use Google+ because I have reasonable confidence that I can get my data back out, so maybe that tells its own story.

Just on elevators – while I as an elevator passenger don’t have a strong interest in whether the firmware driving the elevator is open- or closed-source, I would be willing to bet that access to the code is a serious issue for the owner of the building, who may wish to make future modifications to the elevator system.

>I noticed that you did not offer any advantages of closed source software over open source software. Is that because you don’t believe there are any?

As a general rule, no, not in 2012.

In the past there was one systematic advantage of closed source; the business models could put together concentrations of capital that open-source projects couldn’t match. I don’t think that’s generally true any more. But there are remaining exceptions, mostly produced by regulatory barriers and threat of liability lawsuits, where the up-front costs to enter the market are huge. I don’t doubt that pacemakers are one such.

Eric,
Your closing ‘pacemaker’ example is interesting because it’s actually real – have you seen any of Karen Sandler’s talks about her experience getting a pacemaker? I guess the pacemakers on the market right now are low on the ‘lock-in harm’ and ‘unhackability harm’ scales, but they do seem to score pretty high on both the ‘reliability harm’ and ‘agency harm’ scales.

Here, the asymmetry between the game developers / administrators and the players directly impedes the availability of the game to people that want to play the same game but either single player or on their own private server. I’d say that that counts for a lot in terms of agency harm.

And with lock-in harm, well, I admit that it’s different, but the problem with talking about media and arts is the tendency of a good work to be a class in itself. Lock-in is meaningful as an abstraction only if there is some general activity of which the individual can meaningfully treat the specific work as just one instance.

Which is a long-winded way of saying that I doubt you could tell a baseball fanatic to replace his “lock-in” with basketball, or football, saying that they’re “essentially the same kind of thing anyways”. The attachment to games/media/art is a different kind of thing from, say, the mere stubbornness to replace MSOffice with LibreOffice, they cannot be evaluated in the same way.

>I disagree on your assessment with the agency harm and lock-in harm with regards to games.

I think what you’re actually arguing is that for purposes of harm analysis there are two distinct kinds of games. One kind comprises standalone games, plus non-standalone games where either a functional game server is purchasable as a product or instances can peer-to-peer network among themselves. The other kind, which I agree is more problematic, is games which require a vendor server to act in a privileged role.

That said, it’s still not ethically a problem to play Angry Birds on your smartphone. That game isn’t tied, and playing it doesn’t create externalities that tie other players to the game.

Probably the single most destructive bug to ship with consumer software in the last 10 years was an EVE Online update whose installer wiped boot drives on Windows.

Compare with game DRM, which has damaged hardware at several points (StarForce most recently), or Windows Update, which has just recently been fingered as the vector through which some highly complex malware was propagated… I think “single most destructive” is debatable.

I believe your ethical stance is called “Pragmatic Ethics” (it could also be utilitarianism): http://en.wikipedia.org/wiki/Pragmatic_ethics
(No motivation to actually look it all up.) Those who attack you adhere to normative ethics. But that is all like the vi vs. Emacs flame wars, inherently uninteresting to me.

I think your pragmatic stance made you miss one of the most important harms of closed source: Secrecy.

Closed source is basically a legal way to limit access to knowledge. It is my firm conviction that all secret knowledge will ultimately be lost. At the least, it has to be rediscovered by painstakingly reverse engineering it or reconstructed from barely comprehensible sources. In most cases, reconstructing that knowledge takes as much effort as creating it in the first place (which is also a problem with classified research: it is essentially lost to the future).

Little of the knowledge embedded in Unices has been lost as it was open in practice. All we know of MS DOS or Windows 95/98 has been reverse engineered with an effort that rivals the initial effort needed to create them.

I take a normative ethical stance that destroying knowledge is as evil as destroying other fruits of human industry. Although it is legal to destroy useful products you own, I consider it unethical to do so.
(before anyone starts to rant, if you know how you can cure cancer, and want to take that knowledge with you in the grave, that might be legal, but is not ethical in my book)

On this point I will happily play the part of a Luddite. Anyone who is stupid enough to trust a machine to vote for them deserves the despotism they’ll end up with.

A machine that will help you fill out your ballot slip (i.e. print it out for you to then visually verify before submitting)… sure no problem. I could even go so far as having computerised tallying (always with the proviso that a challenge means a hand count, no questions asked).

My point was that washing machines and TVs will slowly become more complex software-wise, so they will evolve from having firmwares to having OSes *just* like iOS. Yes, even with apps and all. There is a likely scenario (a very likely scenario) in which those OSes will be closed source and have all of iOS’s restrictions (app installation only from the app store, proprietary APIs, and a locked bootloader).

Now, assume that more time passes, and manufacturers completely stop making non-smart washing machines and TVs. The only ones made are those running that iOS-like operating system.

Until an open source alternative is developed (and some manufacturer decides to put it in his appliances), what are you going to do? Not buy new washing machines or TVs even if you have to?

There is also maliciousness harm. When you install something on a device, it often gets access to a lot of other things on the same device. In the worst case an app demands root access to install, completely owning your system. Games are also a high concern here.

You may argue that privilege separation is the right answer, and Android does some level of app sandboxing. I’m also much less rigorous about using only free software on my Android phone than on my desktop. For desktops, currently, the maliciousness harm factor is big.

@kurkosdr
“My point was that washing machines and TVs will slowly become more complex software-wise, so they will evolve from having firmwares to having OSes *just* like iOS.”

Not slowly. My TV runs Linux. And I bought that years ago. Washing machines are becoming more “intelligent” too. And why not? A washing machine should be able to figure out what to do with my laundry better than I do. That is its purpose in “life”.

> Little of the knowledge embedded in Unices has been lost as it was open in practice. All we know of MS DOS or Windows 95/98 has been reverse engineered with an effort that rivals the initial effort needed to create them.

I noticed that esr was commenting more on the harm and benefits from the users’ side, not the developers’. Clearly, he would not *make* closed-source software, but if it were the only tool already made for the job, it’s a different calculation about using it.

>The loss of a secret technology harms its users. Think of all the documents and data that have become inaccessible because they were in a closed-source format.

Ah, now that is a harm I am willing to talk about. I have updated the blog post.

The way you put it before, harm to “society”, made me shy away. In my experience, ethical assertions that use “society” as a term invariably end up as tools for power-seeking monsters, that is if they didn’t already start that way.

I think that you are absolutely right about everything, but that you should not be so quick about deciding what is more or less important. That you value an open-source phone OS higher than open-source elevator firmware is something you do based on your history and interests. I happen to have a friend who works in the elevator repair business. He has absolutely no interest in what OS his phone runs, but he would actually be very interested in open-source elevator firmware.

Sure, but it can’t install apps. But soon TVs and washing machines will be able to install apps – i.e., they’ll evolve from having a firmware to having an OS – and there is a high chance they will run an iOS-like OS, with the restrictions I mentioned above (locked app installation, proprietary APIs, locked bootloader).

@esr
Such an appliance would score pretty badly on your harm analysis, right? If no appliances not running that iOS-like OS are made anymore, what are you going to do till an open source alternative is developed and used by manufacturers (if it’s ever used by manufacturers)? Not buy TVs and washing machines anymore?

Open source is the wrong solution to get around DRM, locked app installation, locked bootloaders, closed APIs and other malfeatures. An entire alternative must be implemented just to not have the specific malfeature. The correct approach would be to have strong pro-user laws (“pro-consumer” in political talk) that regulates software, so that we don’t get malfeatures in closed source software.

@esr
“The way you put it before, harm to “society”, made me shy away.”

That is a semantic problem that has plagued our discussions before.

To me, a society is just a structured group of people who interact (live together, which is a literal translation of the Dutch word for society). You seem to use it as a political entity. I have no political intentions and use it purely descriptive as in “Archaic Greek society”, or “native Papua society”.

Not really. Ethical claims made on behalf of “society” or a “population” are always dangerous to the life and liberty of individuals, no matter what term is used and whether they are politically framed or not. The problem is that such claims set up a greatest-good-for-the-greatest-number imperative that is vague and manipulable – and it almost invariably does get manipulated, in a process that ends in blood.

When you say that proprietary software often harms users by stranding data in proprietary formats, you avoid the vagueness trap by identifying a specific problem with a specific remedy that doesn’t require coercion.

Oh, come on, people. How many of you would demand, and then actually perform, an examination of every medical device in the hospital that would be used on you before going in? Especially on an emergency basis? I’ll bet a nice steak dinner that the number is zero.

As I said over on Google+: “Always mount a scratch monkey.” Except this time, you are the scratch monkey.

@TomM – especially as elevator firmware seems to be getting rapidly more complex. Half the buildings I’m in on a regular basis have the new-style “pressing the call button doesn’t make a car that’s just leaving reopen its doors” variety installed now. And then there are the new “smart” elevators where you pick the floor in the lobby and the system assigns you a car…

I can see another category of harm I’ll call “dumb-down harm”. The example that brought it to mind is the difference between the intended-to-be-closed firmware of the Linksys WRT54 router and what you get when you replace it with something like dd-wrt or Tomato. The closed Linksys product is a dumbed-down consumer-grade product and nearly featureless (minimum value for the money). The open source replacement has a feature set that is staggeringly more complete and useful (maximum value and more being added all the time).

I’d like to add something to this list, and while it doesn’t apply strictly to games, that’s where it affects me. There is a sort of peripheral lock-in to proprietary games that are not cross-platform. I’ve been an avid gamer since I was around five, and unless I want to give up a large component of my hobby, that means I have to run Windows on my primary computer. The closed source nature of such games prevents porting them. So closed source on a large set of products creates lock-in for a different product. Dependency lock-in, one might call it.

(my solution is to run linux in a VM for everything that isn’t games, but it’s not ideal. WINE does not do the job sufficiently, either.)

I’m curious what you think of the occasional practice by developers of releasing source code to older games, while keeping newer releases (i.e. those still making money) closed. It’s not widespread yet but seems to be getting more common, especially among indies.

I’m also curious what you think of this data point: the Humble Bundle project releases sets of indie games every few months where the buyer can choose how much they want to pay. They’re all cross-platform, and HB tracks payments by OS. Linux users consistently pay about 30% more than others. Do you think this represents generosity among linux users related to gift-economy ethics, a market force related to the relative scarcity of games for that platform, or something else? (My suspicion: Both.)

>I’m curious what you think of the occasional practice by developers of releasing source code to older games, while keeping newer releases (i.e. those still making money) closed.

All open-source releases are good things. These, too. What, did you expect me to object?

I don’t think the fact that the games are not initially released as open source is very important, if that’s what you’re asking. Such time-delay licenses are well accepted in the community and have been used for some serious products, such as Ghostscript. Having the source go open after a fixed delay has most of the benefits of initial open-sourcing for everyone except the vendor, who doesn’t get to collect the benefits of third-party peer review in the early part of the product lifecycle when debugging is likely to be most challenging. How vendors trade that off against increased early revenue capture from proprietary licensing is up to them, a straight business decision of the sort product managers are supposed to make.

There are other cases in which early open-sourcing is much more important. Anything security- or life-critical ought never to be closed-source at any point, because the additional reliability risk associated with lack of third-party review is too high to be tolerated in that context. But games specifically aren’t like that.

>(My suspicion: Both.)

I agree. That 30% difference in willingness to pay is interesting information; thanks for bringing it up.

@Winter: I believe your ethical stance is called “Pragmatic Ethics” … Those who attack you adhere to normative ethics.
I do not entirely disagree with your conclusions, but my lens of understanding is the five-pillars analysis of moral formation proposed by UVa researchers. Questions of accuracy aside, it provides an atypically fine-grained framework for discussing the formation of moral thought. From that approach, the disagreement between Eric and [some of] his more fanatical commenters takes on an entirely different character.

Eric here argues how one could measure the moral “Harm” of software, and I would expect readers here to find the moral “Reciprocity” of open vs. closed software to be intuitive. While he has not spoken on the topic, the nature of his arguments and past essays imply that he discards or ranks low the “Authority”, “Ingroup”, and “Purity” moral formations. [Aside: I do so, because these 3 latter modes assume one can begin with inviolate, perfect moral understanding, and weakly forbid moral refinement from new data. This may constitute bias in my interpreting Eric’s position.]

Fanatics, however, do score these three latter modes [NOT particularly “Purity”] quite highly, often above that of “Harm” or “Reciprocity”. I suspect the complaint of misunderstanding comes from these people viewing him as an “Authority” voice, and finding his pragmatic behavior in disagreement with their models of “Purity” or “Ingroup”. The resulting cognitive dissonance could create the “fanboi rage” that will occur here from time to time.

>While he has not spoken on the topic, the nature of his arguments and past essays imply that he discards or ranks low the “Authority”, “Ingroup”, and “Purity” moral formations.

That is correct. And it is one among several reasons I have never identified with conservatives, for whom these formations are emotionally important.

>I suspect the complaint of misunderstanding comes from these people viewing him as an “Authority” voice, and finding his pragmatic behavior in disagreement with their models of “Purity” or “Ingroup”. The resulting cognitive dissonance could create the “fanboi rage” that will occur here from time to time.

I think that’s very astute and probably true.

As a separate point, I’ve encountered the five-pillars model before and consider it deficient in one very important respect. More than any of those other things, I value individual liberty. Where is that in their taxonomy? Is this not a primary moral sentiment on a level with (say) reciprocity?

I think the obvious advantage of closed source software is the profit motive. It is certainly easier to make money from closed source shrink wrap software than other ways. I have read your books and I am well aware that there are secondary channels to make money in an open source world, but it is certainly easier in closed source.

Of course that is an advantage to the producer, not to the consumer, except that it is. The advantage to the consumer is the simple law of supply and demand — if the producer can make a profit he is more likely to make the software product you need.

That isn’t to say that there is no supply and demand law for open source software, after all the large majority of software today isn’t sold at all. All I am saying is that it is a major and significant advantage of closed source software.

Others have complained about secrecy as a disadvantage, but it is also an advantage. A perfect example would be the Coverity software you used on GPSD. Their competitive advantage is that they have a whole bunch of complex rules encoded in software that make their product better than others. Open source that and anyone can copy it. That might sound great until you realize that it means there is no money to make the next even better revision. The advantage is evident — after all you use it yourself and are presumably pleased with the result.

I’m not arguing that closed source is better, I’m just saying it does have a number of significant advantages. Evidently so, since there are a heck of a lot of very smart people who go that way.
I think people who call closed source software evil have totally lost the plot. No one is making you use any piece of software no matter how much they jail your data, or snoop on your comms.

I might add that I think there are some types of software that absolutely should be open source. The most obvious example is voting machine software. Diebold got their ass kicked on that, and I see absolutely no reason why they couldn’t open source the software, get millions of dollars of free consulting from everyone in the world prodding their code, and still make a bucket load of money with their locked-in government contracts.

You don’t mention open standards, which mitigate many of the harms that some closed source programs may have. Though you do mention incompatible file formats. If MS Office can read and write (and does by default!) ODF files (without propitiatory extensions), then it matters a lot less than if MS Office can only read and write MS Office format files, or writes MS Office format files by default. In the case that MS Office uses ODF by default, it hardly matters to me if someone else uses MS Office. But because MS Office uses .docx (etc.), that matters to me (using LibreOffice/OOo), to people using older MS Office (pre 2003), and to users of other office suites. That’s an external harm (which you mention, sort of). Ranking four closed office suites which are otherwise equivalent: the one that uses by default the open standard is better than the one that can open and save (but doesn’t by default) the open standard. Depending on your attitude, the final two vie for last place: the one that can open (but can’t save) the open standard, and the one that can’t open or save the open standard.

Anyway, overall I have to say that your categorisation is very interesting, and I tend to agree with your ranking. That is, my OS and productivity tools need to be Free/OS, while games are less of an issue (though I prefer Free games because I can trust that the community will make them better over time). Lower down on the scale comes my mobile phone OS: here, if I can get at the data within it (and I can), that’s more important than being able to hack it. But, if I were running a ‘smartphone’, I would want it to be open. Because the more power a thing has, the more I want to do with it, and not having control over my own tools…

Oh, and actually, when it comes to all sorts of appliances (from elevators, to TVs, fridges, and washing machines, to routers, portable sound/music players and phones), having access is very important. Whether or not the OS/firmware on my music player is free or closed is irrelevant. If I can replace it with a better (free) system, then that’s great! Fuck TIVO and similar, who prevent the replacement of the OS. While I don’t care too much about the elevator OS, as pointed out, someone else might. If the built-in ‘home’ algorithm is fundamentally flawed (“always return to the top when empty”, which might be changed to “always return to the bottom when empty”, which might work fine for tall buildings but fails for elevators used underground), or if there are only a limited number of preset options (none of which work for a particular building), then the building manager probably would love to replace that system.
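The elevator example is exactly the kind of one-line policy a building manager might want to swap out if the firmware were replaceable. A toy sketch (the policy names, function, and signature are all invented for illustration):

```python
# Idle-return policies an elevator controller might expose if its
# firmware were open or replaceable. Everything here is hypothetical.
POLICIES = {
    "top":    lambda floors: max(floors),
    "bottom": lambda floors: min(floors),
    "middle": lambda floors: sorted(floors)[len(floors) // 2],
}

def idle_floor(policy, floors):
    """Where the empty car should wait, under the chosen policy."""
    return POLICIES[policy](floors)

floors = [-2, -1, 0, 1, 2, 3]        # an underground-heavy building
print(idle_floor("bottom", floors))  # -2
```

For the underground building in the example, switching from the flawed “top” preset to “bottom” is a one-word change, but only for whoever controls the code.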

So, in summary: I want control (or at least, the option of control with minimal effort) and I want open standards. And I want safety, security and Freedom. I get all my desires from Free/Open Source Software.

This equally applies to social networks. I want control over my data, and I want to be able to remove it, and share it with other users using open standards. I also want it to be safe and secure. I also want Freedom. So, Facebook is right out, no control, no open standards, no safety or security, and not free. Google+ is also out, because I wouldn’t trust Google as far as I could stick a knife into them (just to see if they bleed: I suspect a vampire).

(Oh, and to permissive-license fans who complain about people taking their code and using it in GPL programs, and complaining about the code not being given back: what’s the problem again? You are still free to use the code, after all you developed it, it’s still free. Would you complain if someone took the code and used it in a propitiatory system? What’s the difference actually? The GPL users aren’t being unethical by your standards, and they aren’t being unethical by their own standards (the code is still free by their standards). If you don’t like that happening, then use a different license, perhaps one that requires developers to release modifications to the code under the same license…)

Do you think this represents generosity among linux users related to gift-economy ethics, a market force related to the relative scarcity of games for that platform, or something else? (My suspicion: Both.)

There’s a third factor which is a signalling factor. Proprietary software vendors have avoided Linux like the plague; the few dabbling experiments they did try (such as WordPerfect Office for Linux) put the idea in their heads that Linux users are a bunch of fosstards who stubbornly refuse to pay for software. The greater prices that Linux users pay for Humble Bundle games is partly an attempt to signal to developers that they are out there and are willing to show their appreciation through cold hard cash.

I don’t know how well it will work. By developing against one platform (Windows), PC game developers have already reached 95% of their potential consumer base. A Linux port simply means far too much effort for diminishing returns. As of right now in 2012, AAA game development on Linux is dead, dead, dead. Not even id is countenancing Linux binaries anymore, the way it has in the past. There are rumors that Valve has Linux plans, including a Steam client and maybe some ports of their signature games. I wish the very best of luck to them.

By the way, the “too much effort for diminishing returns” thing is also true of iOS and Android. It takes many times the effort to develop a game and ensure it works across Android devices than it does to develop for the iPhone, which represents a single development target — two if you want a separate iPad version, three if you want it to work on the 3GS. And Android users are far less likely to want to pay for an app than iPhone users. Vastly more effort for less return. The economics of games on Android don’t add up, and when pressed about Android versions more and more mobile game developers are having the courage to say “after hell freezes over”.

> While I don’t care too much about the elevator OS, as pointed out, someone else might.

This is missing the point. As a user of an elevator, i.e. a passenger, is there a local harm if the elevator is running closed control code? Is there enough harm to _refuse to ride in that elevator_? Anyone who does this is a nutcase.

Re-framing it by bringing in the building owner is twisting things too much. Obviously for the person managing the device and likely extracting the value from it there is a larger harm factor if the control code is closed, and it would be reasonable for that person to refuse to install such a system in the first place. But that is unrelated to the personal choice you make when presented with an elevator to ride in.

> By developing against one platform (Windows), PC game developers have already
> reached 95% of their potential consumer base. A Linux port simply means far too
> much effort for diminishing returns.

This has been true, but I see the winds shifting, especially as mobile and Mac uptake increases, since OS X leverages OpenGL. Unless I’m a big, established developer with a huge amount of mindshare sunk (foolishly) into DirectX over the years, I’m going to look hard at the choice of graphics library: the one that ports to just about everything, or the one that only gives access to two things (Windows and XBox)? If the humble-bundle stats are anything to go by, choosing to go Windows-only would be missing out on 25% of your potential profits, and maybe more!

jsk: The DirectX vs. OpenGL decision isn’t quite that cut and dried, since DirectX seems to perform better, and using OpenGL is fraught with problems from different vendors’ implementations at both the firmware and driver levels. Even so, using it does give you a leg up on portability. The Second Life clients use OpenGL for just this reason.

Most of what I have to say has already been mentioned by previous commenters, except for one…

I’m still confused as to the position you take with people instrumental in developing the idea of free software.
Did you just completely antagonize RMS with this post?
(In other words, is he now the enemy, the whole enemy, and nothing but the enemy to you?)

I always say that open protocols/data formats are just as important as open source.

You can write a proprietary word processor, and as long as it saves the files in an open format, there’s no lock-in harm. You can create a proprietary FTP client, but as long as it uses the standard protocol, no lock-in harm.

The ability to plug in an alternative, whether open- or closed-source, mitigates many of the harms you describe. And that’s why it’s so damned important that APIs, protocols, and file formats be free of IP encumbrances.

Ya, DX has recently (only recently, mind) been overtaking OpenGL in performance. How much of that is DX and how much is hardware vendors building specifically for DX and neglecting OpenGL, I dunno. OpenGL in Windows is certainly a pain in the neck; you have to jump through extra hoops to get the video card’s OpenGL driver instead of MS’s crap-tastic software implementation.

There’s an additional concern if you’re making the decision to use the non-standard, vendor-specific OpenGL extensions. I think it’s a bad move, but it’s allowed and would add additional development overhead with multiple code-paths to debug (one each for nVidia, ATI, Intel, etc).

I think actively choosing DX is foolish and self-defeating, but I don’t deny there are plenty of reasons to do so, especially if you only ever intend to target MS systems.

@esr: I’ve encountered the five-pillars model before and consider it deficient in one very important respect. More than any of those other things, I value individual liberty. Where is that in their taxonomy? Is this not a primary moral sentiment on a level with (say) reciprocity?

I did specifically note that this is not an accurate tool for analysis. Your complaint is one of the primary reasons I consider this a “work in progress” at best: liberty is just one of many historical modes of moral formation (alongside honor and birth/caste) that the model does not fully account for. [Shoehorning them under one or more of the given categories does not sit well with me.]

Even more important, IMO, is that there is no implication that the modes described can be classified as “good” or “bad”. Perhaps my view is colored by the politics which surround the DSM-IV, but this lack of presumptive “correctness” allows for a politer and wider discussion than most other moral analyses. [Personally, communication is among the things I grant moral value which cannot be fit to the given categories.]

There’s so much I want to say about this, but I’ll try to limit it to the most important points.

First of all, as others (Jessica) have noted, you fail to take into account the advantages conferred by closed-source. You point out some putative disadvantages of closed-source and then conclude that we should avoid any particular piece of CS software to the degree that it is subject to these disadvantages. But this mode of analysis completely ignores the benefits of such software. Thus you have only really done half the analysis necessary to reach any solid conclusions.

Second, you have failed to establish your claim that OS software is more reliable than CS software. I know there are one or two ancient fuzzing studies, but there is no well-established body of evidence supporting your claim. In reality there exist rock-solid pieces of CS software, and hopelessly unreliable pieces of OS software, and vice versa. The closed/open nature of the source code is an unreliable proxy measure for the reliability of the software.

Third, you conflate closed file formats with closed-source software. I agree that closed formats cause lock-in harm, but this has nothing to do with the software being closed-source.

Fourth, ‘hackability harm’ is only relevant to hackers. You gloss over this by saying that non-programmers can ‘have someone do that for you’. But in reality this is not an option for most people. Nobody is really going to hire a programmer to add some feature to an email client. They’ll just get along without it, or switch to a competing product. And even programmers do not have the time to code up a solution to every missing feature they encounter. Usually programmers want the software they use to be maintained by somebody else. They’ve got other things to do.

@The Monster: Yes, that’s pretty much what I tried to say, but I think you did a better job of it.

@jsk: refusing to use the elevator as a passenger because of the closed source nature of the software is maybe a bit over the top. However, who buys elevators? People who may want to modify the software.

Refusing to use a closed source word processor is not over the top. Can I really trust that it isn’t leaking data? I think it does come down to trust. I can trust the elevator not to drop suddenly, because the manufacturer will be likely shut down if there is a problem. I can trust my microwave won’t be a problem, and if it is, maybe I get overcooked food. I don’t need to trust my TV, because it can’t communicate (and frankly I rarely watch the thing anyway). Meh, I think I had a point when I started writing this, but I lost it.

The permissive-license fans are annoyed by forking a program and putting the GPL on the fork because it is, from the original maintainer’s point of view, the same as forking a program and putting a proprietary license on it. In both cases the forked code can’t be merged back into the original program without changing the original program’s license. The details of the new license aren’t the problem–the problem is the requirement to choose between changing the license on one’s own code or rejecting useful contributions. If the original maintainer did not object strongly to changing their license, they’d presumably have switched to GPL or proprietary long ago, just to avoid having to paddle all the time to hold their position at the top of a waterfall.

There’s an additional concern if you’re making the decision to use the non-standard, vendor-specific OpenGL extensions. I think it’s a bad move, but it’s allowed and would add additional development overhead with multiple code-paths to debug (one each for nVidia, ATI, Intel, etc).

Certain video card features require “non-standard, vendor-specific OpenGL extensions”. This is because OGL tends to lag behind the latest graphics cards and lack support for their advanced features, features which were probably developed with Microsoft’s help and are supported in the latest Direct3D.

If you want to have the most cutting-edge graphics, the choice is clear: Direct3D.

Direct3D is also easier to code against, easier to debug, and has vastly better tool support, all integrated with Visual Studio. The driver support for Direct3D is also less buggy.

The only thing OpenGL has going for it is cross-platform compatibility.

The only thing OpenGL has going for it is cross-platform compatibility.

Actually, not even that: different Android devices tend to support different OpenGL ES feature profiles. Some ship with parts that claim to support OpenGL ES 2.0, but may be missing support for this or that shader type. So when you write an Android game against OpenGL ES, you have to test it on every single device or you will find one that makes your game look wrong or produces glitchy graphics, leading to one-star reviews on Play and shit sales. This is another reason why many game devs are going iOS only.
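The usual defensive move against that fragmentation is to probe the extension string a device reports at startup and fall back to a simpler render path when something is missing. A minimal sketch of the idea (the “fancy”/“fallback” path names and the specific required-extension set are invented for illustration; the extension names themselves are real OpenGL ES extensions):

```python
# Choose a shader path from the space-separated extension string a
# device reports at runtime (what glGetString(GL_EXTENSIONS) returns).
def pick_render_path(extensions_string):
    exts = set(extensions_string.split())
    # Hypothetical requirements for the full-effects shader path.
    needed = {"GL_OES_standard_derivatives", "GL_OES_depth_texture"}
    if needed <= exts:
        return "fancy"     # all required features present
    return "fallback"      # plain shading every ES 2.0 part can handle

# Two hypothetical devices reporting different feature sets:
phone_a = "GL_OES_standard_derivatives GL_OES_depth_texture GL_EXT_texture_filter_anisotropic"
phone_b = "GL_OES_compressed_ETC1_RGB8_texture"
print(pick_render_path(phone_a))  # fancy
print(pick_render_path(phone_b))  # fallback
```

Of course, a fallback path is a second code path to write, test, and debug on real hardware, which is precisely the extra cost being complained about above.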

> Fourth, ‘hackability harm’ is only relevant to hackers. You gloss over this by saying that non-programmers can ‘have someone do that for you’. But in reality this is not an option for most people.

Quick example of this, FWIW. I was working with a company that wanted to import their mailing list into a piece of software for generating UPS shipping labels. The UPS software (supplied for free by UPS) was closed source, and had a bug in it where the import clipped the street address line at something like 40 characters, even though you could type in and save much longer address lines in the software directly.

No doubt it would have been a trivial bug fix had I had the source, but I didn’t, and had to do a massive manual review of the addresses to clip them manually. There was, of course, absolutely no need for UPS to keep their software closed, except arguably some small security module to control the output of the barcode shipping label.
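For what it’s worth, even without the source the cleanup could at least have been scripted rather than done entirely by hand. A hedged sketch of that workaround (the 40-character limit comes from the story above; the field name and record shape are assumed):

```python
LIMIT = 40  # the clipping length the closed importer exhibited

def clip_addresses(rows, field="street"):
    """Truncate address lines the closed importer would clip anyway,
    keeping a copy of each over-long original for human review."""
    flagged = []
    for row in rows:
        if len(row[field]) > LIMIT:
            flagged.append(dict(row))        # preserve original for review
            row[field] = row[field][:LIMIT]  # mirror the importer's behavior
    return rows, flagged

rows = [{"street": "123 Main St"},
        {"street": "Suite 400, 1200 Really Long Corporate Parkway West"}]
clipped, flagged = clip_addresses(rows)
print(len(flagged))  # 1
```

That turns a massive manual review into reviewing only the flagged records, but it is still a workaround for a one-constant bug that open source would have let someone fix at the root.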

That was a while ago, I think it is all web these days, but you get the idea.

>Unless you run your own business that somehow manages to have only plain-text correspondence with the rest of the world, using anything but Word is very over the top.

I would agree that there is an *ethical* reason to avoid using Word, because its proprietary file format leads to lock-in (note that this has nothing to do with being closed-source, however).

However, as a practical matter, many people have little choice but to use Word. It’s all very well for people like Eric to advocate avoiding MS Word, because he only has to communicate with fellow plain-texters. But it is unreasonable to think that lawyers, administrators, managers, or simply people wanting to send their CV to a potential employer, have any choice in the matter.

Any closed-source software used for communications among people raises particular worries that the authors might exploit their privileged relationship to it to snoop or censor.

reminds me of another quote:

Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.

Modern games are becoming communication and surveillance platforms. They have in-game chats and messaging features, pull ads from servers, and push stats back to developers. The newer ones also have access to sensor devices that can capture audio and video, sense motion and position without visible light, perform facial and gesture recognition with local compute resources, and send neatly cooked data over WAN interfaces to third parties. There’s a huge potential for abuse there that isn’t present on traditional desktop and laptop systems where all the sensor ports aren’t there or can be reliably turned off.

Cars are all over the map. In production cars today there is firmware that implements security (powered door locks), safety (ECU and ABS systems), information (display cluster systems), entertainment (the traditional car radio now does video too, and can cryptographically authenticate itself to your satellite radio content provider and your iPod), navigation (with GPS) and remote systems monitoring and even intervention (with a satellite uplink that can email you when it’s time to change your oil, or open your car doors without a physical key present). Production cars aren’t driving themselves yet, but people are making good progress on that problem.

It’s conceivable that traffic police might one day simply tally up speeding and parking fines and send drivers a bill at the end of the month, based on GPS log data reported by the car. The hardware is already installed; all someone has to do is write the software.
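The software half of that scenario really is a small matter of programming. A toy sketch of the billing side, assuming the car reports (timestamp, road, speed) samples and the authority keeps a speed-limit table (the fine rate, default limit, and data shapes are all invented):

```python
# Tally a month of speeding fines from a car's reported GPS log.
# Log entries: (timestamp, road, speed_mph). Limits: road -> limit_mph.
FINE_PER_MPH_OVER = 5  # invented rate: $5 per mph over, per sample

def monthly_fine(log, limits):
    total = 0
    for _ts, road, speed in log:
        over = speed - limits.get(road, 55)  # assume 55 where no limit is known
        if over > 0:
            total += over * FINE_PER_MPH_OVER
    return total

log = [
    ("2012-06-01T08:02", "I-95", 71),
    ("2012-06-01T08:03", "I-95", 64),
    ("2012-06-02T17:40", "Elm St", 24),
]
limits = {"I-95": 65, "Elm St": 25}
print(monthly_fine(log, limits))  # 30: one sample 6 mph over, at $5/mph
```

The disquieting part is not the ten lines of arithmetic but who controls the firmware doing the reporting, which is the point of the surveillance worry above.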

@Zygo if they get annoyed by people forking and slapping a propitiatory license on top, why are they using a license that permits that very act? I just don’t get it. I’ve got a bunch of stuff that I’m willing for you to go and take and use, so long as you keep my name on it. That’s fine. If you make a million dollars from my work, that’s fine. (Though I wouldn’t object to you kicking some of that back to me.) So, if I have a program, and I release it under a BSD-like license, I know that someone could take it and use it as they want.

What’s the difference between rejecting useful contributions because they are GPL and rejecting useful contributions because they are propitiatory? From the point of view of keeping a program permissively licensed, none at all. I think that the problem is that when the code is used in a GPL program, people can see the changes. “Ooh, I want that one, and that one. Oh, it’s GPL.” But when it’s propitiatory, it’s all “ooh, look this big multinational corporation is using my work without paying me a cent, or letting me look at how they changed it, aren’t I special!”. I know not everyone is like that (I’ve got code I’ve released under permissive licenses, and I’m not like that). But, well.

Patrick Maupin initially mentioned politics, as if it were a bad thing, rather than making a buck (as if it’s a good thing – and I would say it isn’t when it’s done by a big multinational corporation, but I won’t go into that here). As if the reason isn’t often just technical. Oh, I want to use that code, GPL, and that code, BSD. I’ll release my code as GPL, because I want to use GPLed code. That’s not politics, that’s just sensible.

@Jeff Read: Bullshit. I’ve used OOo/LibreOffice since, ooh, at least 2002. I have never had a problem with exchanging files with people using MS Word (and considering I’ve done one and a half degrees in that time, with a lot of group work…). Unless you are doing absurd things like using overly complicated formatting (maybe you should use a graphics program instead? or a layout program? or save as PDF?) or something equally stupid, then you should have no problem.

Tom
> Fourth, ‘hackability harm’ is only relevant to hackers. You gloss over this by saying that non-programmers can ‘have someone do that for you’. But in reality this is not an option for most people.

As someone who has actually made part of a living from occasionally having end users request modifications on open source software, I must say you are _factually_ incorrect.

>No doubt it would have been a trivial bug fix had I had the source, but I didn’t and had to do a massive manual review of the addresses to clip them manually.

@madumlo

>As someone who has actually made part of a living from occasionally having end users request modifications on open source software, I must say you are _factually_ incorrect.

I’m not saying that some people don’t hire programmers sometimes for this sort of thing. Nor am I saying that hackability isn’t a big benefit for some people under some circumstances.

What I am saying is that it isn’t much of a benefit at all for quite a lot of people.

This leads me to a more general criticism of ESR’s analysis, which is that an advantage isn’t an advantage for everybody, nor is a disadvantage a disadvantage for everybody. Hackability is a huge benefit for Eric because he wants to be able to fix any problem himself. It would have been a big advantage for Jessica in her UPS scenario. But it’s no benefit at all for Aunt Tilly.

Another example is that *for me* OS X is massively more usable and productive than Windows or Linux. I just get more done and feel like I have fewer problems. That benefit vastly outweighs whatever slight harm might be done by the power imbalance between me and Apple. But that benefit doesn’t exist for Eric because he finds Linux to be a better user experience.

You can’t just say ‘this is a disadvantage’, because different people have different needs and preferences.

Real-world assessment of software has to take into account both benefits and harms, the individual needs of the user, and the specific qualities of the software under consideration. You can’t make sweeping statements like ‘closed source is harmful’, or even ‘closed source operating systems are harmful’.

Michael: 1) The word is “proprietary”, not “propitiatory”. It took me reading three of your posts to figure out what you meant.

2) GPL advocates scream loud about people respecting their choice of license. Yet they see nothing wrong with not respecting a permissive license developer’s choice and slapping their own license on the code instead. Hypocrisy, anyone?

> “I don’t think the fact that the games are not initially released as open source is very important, if that’s what you’re asking. Such time-delay licenses are well accepted in the community and have been used for some serious products, such as Ghostscript.”

That is what I was asking, yes; I was unaware such models were already well accepted. My exposure to OSS politics is pretty much limited to this site.

> “That 30% difference in willingness to pay is interesting information; thanks for bringing it up.”

I think it may have dropped over time. I’ll look it up this evening, if they provide data for old release sets. I would not be surprised to see both a larger average payment and a larger linux-differential back when the model was still a novelty.

Unless you are doing absurd things like using overly complicated formatting (maybe you should use a graphics program instead? or a layout program? or save as PDF?) or something equally stupid, then you should have no problem.

Now you’re the one talking bullshit. I don’t think headers and footers count as “overly complicated formatting”, yet LibreOffice still manages to fuck those up.

LibreOffice is an amazing suite of programs that gets you 90% of the way towards full compatibility with MS Office.

But 90% is not 100%.

Until it does reach 100%, the path of least resistance — the path that doesn’t have you making unacceptable excuses to your boss about why there’s a formatting glitch you can’t fix in a report or design document that’s based on a template that’s been used to produce hundreds of glitch-free documents in the past — is simply to run Windows and install the Microsoft applications.

> By developing against one platform (Windows), PC game developers have already reached 95% of their potential consumer base. A Linux port simply means far too much effort for diminishing returns. As of right now in 2012, AAA game development on Linux is dead, dead, dead.

You’re not mistaken, but I’m not sure it holds for anything *but* AAA titles. Libraries for cross-platform game development exist, and while they’re somewhat more painful as I understand it, I’m not sure the difference is enough to justify not using them and being able to sell to a wider audience. Sure, only a slightly wider audience — but one that has far fewer alternative games to spend on, too.

If OGL doesn’t have DX’s performance, that would justify it for the AAA titles, I suppose. If being visually spectacular is part of your sales pitch then you need every cycle you can get. But anything outside that category probably doesn’t need it. Making the choice based on graphics library performance isn’t just premature optimisation, it’s unnecessary optimisation. Targeting OGL, etc. from the beginning and accepting lowest-common-denominator performance gets you X% more customers for Y% more time spent debugging. Is value(X) > value(Y)? I think so, but have no proof. Y is undoubtedly much lower if you start with a cross-platform base, rather than porting after the fact, though. Y is almost certainly close to zero if your game is open source and popular enough to attract outside debuggers for the alternative platforms. (but X is lower too, as it will get pirated more; your customers are paying for game resources rather than code, and that requires no hoop-jumping to copy.)

If nobody on linux wants to buy games, or they all do what I do and just have a Wintendo for the purpose, then I’m wrong. But the HB stats seem to indicate otherwise.

(Disclaimer: Not a working game developer, though I dabble on my own time. Above analysis is made from a position of ignorance, but it feels right to me.)

> You’re not mistaken, but I’m not sure it holds for anything *but* AAA titles. Libraries for cross-platform game development exist, and while they’re somewhat more painful as I understand it, I’m not sure the extra pain is enough to justify skipping them and forgoing the wider audience they’d let you sell to. Sure, only a slightly wider audience — but one that has far fewer alternative games to spend money on, too.

And I believe the correct perspective to take, in the case of the HB games, is to consider that this audience supposedly makes up 1% of the population but almost 25% of the profit.

What’s the difference between rejecting useful contributions because they are GPL and rejecting useful contributions because they are propitiatory[sic]?

No difference, except that nobody makes “propritiatory[sic] contributions.” So you are comparing horses with unicorns.

From the point of view of keeping a program permissively licensed, none at all. I think that the problem is that when the code is used in GPL program, people can see the changes. “Ooh, I want that one, and that one. Oh, it’s GPL.”

No, you’re missing the point completely.

But when it’s proprietary, it’s all “ooh, look, this big multinational corporation is using my work without paying me a cent, or letting me look at how they changed it, aren’t I special!”.

Patrick Maupin initially mentioned politics, as if it were a bad thing, rather than making a buck (as if it’s a good thing – and I would say it isn’t when it’s done by a big multinational corporation, but I won’t go into that here).

If you take permissively licensed code and make a buck with it, you’re obviously in keeping with the spirit of the license. If you add a bit to it (or sometimes even not — in some of the cases that really got people riled up a few years ago, some people essentially slapped the GPL on top of preexisting packages) and relicense it under GPL, then you’re not in keeping with the spirit of the license. More on that anon.

As if the reason isn’t often just technical. Oh, I want to use that code, GPL, and that code, BSD. I’ll release my code as GPL, because I want to use GPLed code. That’s not politics, that’s just sensible.

And I have no problem with that, if that’s what’s actually going on. But that’s not always what’s going on. You can tell if that’s what’s going on by seeing whether bugfixes, etc. are contributed back upstream to the original project. Even if someone mixes GPLed code and permissive code, they don’t have to go and update the headers on all the permissive code to add the GPL license and ensure that nobody can use their bugfixes back in the permissive project. That’s been done and it’s unspeakably rude. Even people who use permissively licensed code in proprietary systems usually contribute bugfixes back because of enlightened self-interest. Someone who forks a permissively licensed package to put a GPL license on top of it is usually trying to starve the permissive ecosystem by attracting developers to the GPL fork.

It’s highly hypocritical for someone to say “here’s source code that I want you to use, but don’t you dare use my stuff if you don’t respect my license” when 95% of that source code consists of stuff that others created and licensed permissively.

The primary counterargument against this is “if you released it under a permissive license, then you don’t have a leg to stand on when people do whatever they want with it.” This is legally true, but if you slap the GPL label on something that’s 95% code that I wrote and licensed permissively, and 5% your own precioussss IP, then morally, you’re much worse than a corporation that does the same thing but releases under a proprietary license.

Why? Because (a) you’re obviously angling to try to steal developers, the lifeblood of the project, from the original project, and (b) you’re obviously not going to be contributing changes back. In general, a company making proprietary releases is not stealing developers (because they probably don’t even release source code), and (assuming the permissively licensed project is actively developed), is probably even contributing bug-fixes and enhancements back, because they don’t want to get too far out of sync.

ESR: As a separate point, I’ve encountered the five-pillars model before and consider it deficient in one very important respect. More than any of those other things, I value individual liberty. Where is that in their taxonomy? Is this not a primary moral sentiment on a level with (say) reciprocity?

I think that’s on the Authority/Respect pillar. If I respect you, I have the liberty to disagree with you, challenge you, “disobey” you (not that you’ve given orders, but someone who wanted to could treat your ideas as orders), etc. If I treat you as an Authority, I don’t have the liberty to disagree with you, challenge you, or disobey your ideas/orders.

GPL advocates scream loud about people respecting their choice of license. Yet they see nothing wrong with not respecting a permissive license developer’s choice and slapping their own license on the code instead. Hypocrisy, anyone?

Those GPL advocates, screaming or not, still comply with the license that the permissive license developer chose, don’t they? So, if those permissive devs don’t like the effects of the license they chose, why did they choose it?

Tom
> Hackability is a huge benefit for Eric because he wants to be able to fix any problem himself. It would have been a big advantage for Jessica in her UPS scenario. But it’s no benefit at all for Aunt Tilly.

But the “Aunt Tillys” who ask for scripting changes tend to work for companies, or offer services, that a lot of “genuine” Aunt Tillys rely on, so I wouldn’t be hasty about declaring hackability practically useless for the regular person.

It’s the same old “Linux user” argument. “If Linux magically disappeared tomorrow,” you ask a class, “who’d notice?” Nobody raises a hand. “Okay, what if Facebook and Google were down?” and nearly every hand goes up. “By the way…”

Much of the reliability of the Internet comes from nameless sysads being able to understand what’s going on when they try to script something.

Those GPL advocates, screaming or not, still comply with the license that the permissive license developer chose, don’t they?

Yes. Legally. See my post above.

So, if those permissive devs don’t like the effects of the license they chose, why did they choose it?

Do you mean, if those permissive devs don’t like the fact that a small number of GPL aficionados are so dogmatic that they will attempt to co-opt all open source, then those permissive devs should either just roll over and accept the GPL as their salvation, or should go the other way and close up their source?

Or do you mean that only GPL/FSF people are allowed to complain about people doing what the license says? TiVo, anyone?

> And I believe the correct perspective to take, in the case of the HB games, is to consider that this audience supposedly makes up 1% of the population but almost 25% of the profit.

Among open-source indie titles marketed to a geek audience with a high proportion of neophiles. There’s probably another aspect of signalling to it as well; a linux user who buys the HB isn’t just signalling “I’m willing to buy games” but also “I want to support developers getting out from under publishers’ thumbs,” and “I want more quirky games.” I doubt these are independent variables; there are undoubtedly a higher proportion of linux users who are predisposed to support indies and quirkiness, and that will distort the totals. If 2nd-Tier Franchise Game IV were released on linux, I doubt you’d see +25% sales. But you’d see +*something*.

I think the more interesting and more widely applicable stat is the per-user payment, not per-OS total; it’s reasonable to think that the average linux user is more likely to go for the HB, but that people already interested in the HB who happen to run linux will pay more is not so immediately obvious.

Of course total sales is what the development shop is interested in. So perhaps the lesson to take home is “if you’re an indie, go cross platform and open source, and you’ll make a lot more money; here’s the evidence. If you’re mainstream, you can try it but the jury is still out.”

I didn’t mean anything; I was genuinely wondering.
To an innocent bystander (i.e. me), it sounds like: I give you permission to do something, but if you do it, I’ll get angry.
Weird.

In those cases that this relicensing is indeed, as you explain, a deliberate attempt to steal devs and control more code, I can understand the ‘angry’ somewhat better.

>Or do you mean that legal == moral ?
I wasn’t aware that open source people attach morals to their licensing or their software – with the exception of GNU/FSF, who make their moral position very clear.
So, most of the time, I look at licenses solely from a legal perspective: what are the terms and conditions for using this software, and do they allow my intended use?

I think that the risk of reliance on closed source is similar to the risk of driving without a seat belt: no problem as long as nothing goes wrong. And just like with seat belts, there was a time when we didn’t have a choice — there were no seat belts or open source products to choose.

But now that we do have a choice, deciding to protect ourselves and our organizations and clients from the very real risks of dependence on closed source software should become as much of a habit as the decision to “buckle up”.

kn: The GPL relicensers comply with the letter of the license, but certainly not with its spirit – especially in the same way they demand that others comply with the spirit of the GPL. If you’re going to be dogmatic about your own license, don’t complain when others are dogmatic about theirs.

@Tom
> What I am saying is that it isn’t much of a benefit at all for quite a lot of people.

Microsoft Office has a small amount of hackability via .NET languages and an API that provides an object model of the application and content. There is a not inconsiderable market in add-ons to Office that bring various enhancements to the existing functionality. Aunt Tilly might very well use such an add-on to help her make birthday cards for her nieces and nephews. Uncle Fred uses an add-on to help auto-process email from his Amway business.

I think Aunt Tilly can and does hire programmers to hack her office installation, she just buys them in a shrink wrap package.

@winter
>I was thinking. This argument can also be made about prostitution over love. Or raising children in orphanages. Even about speaking itself.

Hmmh, I really do pity your poor wife if you think sex == love. Nonetheless, if your goal is sex then the profit motive no doubt is a better guarantee of achieving that goal than love. Hopefully both will get you there; hookers are always horny.

In regards to children and orphanages, the main problem with orphanages as a place to raise children is a basic attachment disorder. Humans are designed to be raised in a closely held family, and absent that they don’t flourish. That is pretty close to impossible to get outside of a family, no matter how much money the orphanage is willing to spend.

I was not saying that the profit motive guarantees better software, only that it is one thing that closed source software has as an advantage (by some measures anyway.)

Any closed-source software used for communications among people raises particular worries that the authors might exploit their privileged relationship to it to snoop or censor.

On the other hand, when was the last time any of us checked the source of any of our open-source communication systems?

(For that matter, a Whole Lot Of Us use un-encrypted chat software, that uses a centralized server for brokering (and could presumably transparently M-I-M us into using its own evil spy proxy even if it’s notionally P2P after the brokering, though I don’t know protocol details)…

I’m not real worried about that one either. Sorry. Just don’t care.

More or less because I don’t think Google et al. give a damn, and if the Government wants to they have guys who can break into my house.)

Sorry if I came off sharp; occasionally people come by trolling with similar questions :-)

In those cases that this relicensing is indeed, as you explain, a deliberate attempt to steal devs and control more code, I can understand the ‘angry’ somewhat better.

Well, it’s that plus the whole “do as I say and not as I do” thing that goes on with that sort of relicensing.

I wasn’t aware that open source people attach morals to their licensing or their software – with the exception of GNU/FSF, who make their moral position very clear.

For a long time, GPL was the “default” open source license, because of its early success with things like GCC and Linux and because of the viral/selfish gene/whatever nature. So I would argue that it is quite easy to use the GPL without much introspection about the morality of the nature of it, and that in fact, in a lot of cases, using a permissive license (especially in the face of people who are pushing you to GPL code) requires more thoughtful moral reasoning than using the GPL. (This is not true in all cases; obviously some entities such as universities have a default position that allows for commercial re-use, in which case the “no-thinking-required” position for someone who works at such a university is to go with their standard license.) The average person who uses a permissive license on a brand-new codebase has probably done as much, if not more, deep thinking about the morality of it than the average person who uses the GPL on a brand-new codebase.

So, most of the time, I look at licenses solely from a legal perspective: what are the terms and conditions for using this software, and do they allow my intended use?

Which is all that is required, legally or morally, if your intended use is actually about matching the software to a technical problem that you have. But if the intended use is mainly furtherance of a political agenda that is about not taking code unless you give back, then it’s quite hypocritical to take code and wrap a license around it that is expressly designed to prevent downstream users from being able to contribute back to the original source.

Agent harm… OK, but still, that adds a big harm to any closed-source program on a general system without good app sandboxing.

This BSD->GPL discussion is interesting, if maybe a bit OT; apologies for contributing a bit…

I thought the spirit of permissive licensing was to not impose restrictions on the primary users, rather than to defend the freedom of later users in the chain, and thus not to guarantee the free use of contributions either.

But I guess most users of permissive licences use them for pragmatic reasons rather than on principle. They want an open-source lib they can use for writing commercial programs, and programs for ‘evil’ systems like iOS.

But that is precisely why I don’t want to significantly contribute to it; I want protection for my free work so it doesn’t end up in stuff like a system with mandatory DRM. Sure, bug fixes are OK, trivial improvements too. But if I made a significant improvement I would licence it under the GPL. I would still make a best effort to make all bug fixes available separately under the permissive licence, but hardly any large chunk of my own work. I would not attempt a deliberate ‘big’ fork… just my own for my needs – but I would release it under the GPL, and after that it may of course live its own life.

I can see how a fork can feel unfair – as the sharing of code goes one way. But it still must be better than not releasing the code at all (which is what a permissive license is all about) – you still have it, you can analyze it, play with it, basically get a prototype for free, so if you want to implement the same functionality yourself it will be easier and probably give a better result.

What to do? Live with it, or release the code as GPL, request transfer of (non-exclusive) rights for contributions, and sell commercial licenses for use on iOS or in closed-source projects.

I would much more readily contribute to that – my work may still end up in a program on iOS and the like, but at least it’s paid for, and the money has gone to free software development even if not to me (if I were about to undertake a big addition I might ask whether you want to pay, if the project is prosperous). That gives some balance to the equation.

I wouldn’t demand to see the source before I allowed hospital equipment to be used on me.

But it would be nice if the code were available for people to at least read so someone could notice things along the lines of, “Hey, you know, you’ve got a potential arithmetic overflow here that causes the code to bypass the safety checks.” That being the actual coding error in the Therac-25 that gave at least six people hundred-fold radiation overdoses.
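The Therac-25 bug class is easy to reproduce in miniature: one failure mode involved a one-byte counter that was incremented instead of set, so it wrapped to zero periodically, and zero was read as “checks passed.” A minimal sketch of that pattern (names and structure are mine, not the original PDP-11 code):

```python
# Minimal sketch of a Therac-25-style overflow: a one-byte flag is
# incremented rather than set, so it wraps to 0 every 256th pass, and
# zero is (wrongly) interpreted as "safety check passed".
# Illustrative only; not the actual Therac-25 source.

def run_pass(class3: int) -> tuple[int, bool]:
    class3 = (class3 + 1) & 0xFF      # increment a one-byte counter (wraps at 256)
    checks_bypassed = (class3 == 0)   # zero means "all clear" to the caller
    return class3, checks_bypassed

flag = 0
bypasses = 0
for _ in range(512):                  # 512 trips through the setup loop
    flag, bypassed = run_pass(flag)
    if bypassed:
        bypasses += 1
print(bypasses)  # 2 -- the safety check is silently skipped twice in 512 passes
```

The point of the comment stands: this is exactly the kind of thing an outside reader scanning published source could flag, even without being able to run the machine.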

Sorry for the spelling, and I must add that I don’t know if I can re-license BSD code… I have so far gotten away with not making any nontrivial changes to permissively licensed code, and my main intention is to keep it that way.

But I guess most users of permissive licences use them for pragmatic reasons rather than on principle.

So, if I put my source out there for anybody to use under a permissive license, I’m just being pragmatic? Sorry, that’s not it at all.

They want an open-source lib they can use for writing commercial programs, and programs for ‘evil’ systems like iOS.

Or maybe they just want a system where they don’t have to worry too hard about fulfilling the requirements of the license. Maybe they want their customers to be able to share software freely without having to worry about violating some license they don’t know anything about. Maybe they want to learn and use things that they can reuse in several different contexts without having to worry about politics.

But that is precisely why I don’t want to significantly contribute to it; I want protection for my free work so it doesn’t end up in stuff like a system with mandatory DRM.

And that’s absolutely fine. If what it takes for you to contribute to open source is some kind of legal assurance that the only people who aren’t allowed to take your software and profit from it are fellow programmers, I scratch my head at your reasoning, but am happy you’ve found a license that encourages you to contribute.

Sure, bug fixes are OK, trivial improvements too. But if I made a significant improvement I would licence it under the GPL.

One argument that GPL people make is that they hate to see their work used as a starting point for others who don’t give back. But you’re saying you will happily use permissively licensed code as a starting point for code that you don’t give back. (Sure, you make it available, but perhaps not in a form that can be used by the original author. Maybe he works for a company that won’t let him use the GPL, and he worked really hard to get them to let him release some code. Now you’ve gone and proved his boss’s point — there are a lot of FSF zealots out there who are happy to take and not give back in a usable form.)

I would still make a best effort to make all bug fixes available separately under the permissive licence, but hardly any large chunk of my own work.

That’s something I suppose. And if your work is much larger than the original, it’s morally defensible. But the GPL argument has always been that if your work is enabled by GPLed software, you should give back. So if your work is enabled by permissive software, why not give back in that case, too?

I would not attempt a deliberate ‘big’ fork… just my own for my needs – but I would release it under the GPL, and after that it may of course live its own life.

There are two kinds of code. Small throwaway code, where this is a viable strategy. But for a larger project, I think this is the wrong approach. If the original project was worthwhile enough to be a good starting point, then contribute whatever bug fixes, enhancements, and interfaces back to it that make it usable as a library for your project. That way, you’re working with the original developer, not at odds with him. (Of course, if the original developer is not interested in working with you, a fork might be in order, and then if it becomes too much hassle to maintain separate licenses, putting it under GPL might be reasonable.)

I can see how a fork can feel unfair – as the sharing of code goes one way.

Not just unfair. In a lot of cases, it’s stupid.

But it still must be better than not releasing the code at all (which is what a permissive license is all about)

You miss the point of permissive licensing, and haven’t read anything I’ve written very carefully. Most large permissively licensed libraries have significant code contributions from developers who use the library in commercial setups, but contribute back significant enhancements and bug fixes. How do you think Apache works?

But if you cause a fork for purely political reasons, you divide the developer community. On one side, you have a lot of people who have to keep contributing to the old software, and on the other you have whoever you manage to convince to go with you. All because you are happy to take what others have done and offer to you in a form you can use, but are completely unwilling to reciprocate.

– you still have it, you can analyze it, play with it, basically get a prototype for free, so if you want to implement the same functionality yourself it will be easier and probably give a better result.

And if someone takes my permissively licensed software, guess what? I (and others) still have it and can prototype and play with it. You’re not making sense here.

What to do? Live with it, or release the code as GPL, request transfer of (non-exclusive) rights for contributions, and sell commercial licenses for use on iOS or in closed-source projects.

At the end of the day, what would happen comes down to a project-by-project basis, but people who are interested in selling commercial licenses generally use the GPL, because with a permissive license, they don’t have much to sell.

I would much more readily contribute to that – my work may still end up in a program on iOS and the like, but at least it’s paid for, and the money has gone to free software development even if not to me (if I were about to undertake a big addition I might ask whether you want to pay, if the project is prosperous). That gives some balance to the equation.

Let me make sure I understand. Are you saying that you would be more likely to give a GPL developer a code assignment that allows them to use your code contributions commercially than you would be to simply give back code under a permissive license that was given to you under a permissive license?

“Let me make sure I understand. Are you saying that you would be more likely to give a GPL developer a code assignment that allows them to use your code contributions commercially than you would be to simply give back code under a permissive license that was given to you under a permissive license?”

Sans nitpicking that GPL code can be used commercially without code assignment: YES!

It makes more sense; at least some money goes to free software development if the code is used for closed-source development, or, even worse, for targeting an ecosystem that is harmful to freedom.

It depends on trusting that the money is collected and used for free software development. But a well-maintained GPL code base with active development from the ‘owning’ party is enough, even if some formal organisation would help further.

Apropos of nothing but the OP, despite the comment threads having some interesting insights:

It’s amazing to re-read that post from 2008 about the Unix Hater’s Handbook, and run across the matter of Unix kernel-terminal interaction:

“Yes, it would have been really nice if Unix kernels had presented a uniform screen-painting API rather than leaving the job to a userspace library like curses(3)…. The fundamental problem was that Unix (unlike the earlier systems these guys were romantically pining for) needed to talk to lots of VDTs that didn’t identify themselves to the system (so you couldn’t autoconfigure them) and the different VDT types had complicatedly different command sets…”

–> “VDTs that didn’t identify themselves to the system” <–

…in short, it sounds like you're going through this all over again with GPSD. Unless Dave or Russell spent a great deal of time on curses(3) or ncurses, it sounds like your perspective on the growth of a typical open-source standard from infancy to adulthood is literally unique. I'm sure various hackers out there have participated in more than one of the many open standards since, say, 1970, but you're the only one I know who wrote long-form essays about it. (But maybe there are plenty of others.)

It makes me wonder what we can expect around GPSD in the future. Or other open standards. With concrete predictability comes concrete risk mitigation; this could have important, visible business implications. (And I'm not talking about "visible to a few 133t cognoscenti on a blog".)

I’m not sure the reliability harm is as trivially negligible in the case of microwaves and elevators as Eric describes. Indeed, reliability harm is probably why these two specific things would be brought up as closed-source champions. Namely: you’re trusting an elevator car not to be dropped, and you’re trusting a magnetron not to radiate lethal levels of microwave energy beyond the cooking space. In other words, there do exist bugs whose consequences *are* severe.

At the same time, it is arguably quite true that these specific severe bugs are mitigated, millions of times, given the number of working devices in operation today. However, I’m not as sure about “relatively easily”. Note that when I say that, I mean I’m literally not sure; I could argue either way. One could (I claim) rig their own elevator or microwave, without being reasonably cognizant of the engineering behind things like emergency brakes and magnetron shielding. We would expect such a jerry-rigged device to malfunction before long, and quite possibly lethally. (According to Wikipedia, the first screw-drive elevator was installed and working nearly 60 years before Otis invented a safety device to keep the cab from falling too far if the cable broke.)

That such devices do not frequently malfunction in lethal fashion is apparently because of well-understood safety mechanisms which are economical to manufacture (you can still make a $50 microwave despite having to shield it, and building companies can apparently afford elevators). But that’s not all. Economical is not free; a company could still save money in theory by not building in these features. By the time a device broke, the company would have received its money. There is a clear case for playing the game this way. A proprietarist could now trumpet the success of the safety certification system that ensures these devices are in place on every copy sold. Because of this certification system, we average humans can trust a cab not to fall, or a magnetron not to irradiate a kitchen… (or a drug not to kill you…)

…But consider what would happen if that certification mechanism simply weren’t there. Not just removed; but never occurred to anyone to be of use in the first place. Suppose you’re an average person faced with using these devices, and you’re given no signals about safety other than the natural ones. How risky is that elevator? Well, I can tell it’s putting my body in a box and hoisting it higher than ten feet; that’s as risky as the cable breaking. How risky is that microwave? Well… I put food in the box and it gets warm. What’s special about the box? Fire feels hot the closer I stand, and can cook food if it’s close enough, but being in that box seems to make the difference between cooked food and room temperature. Risk is a bit harder to estimate without understanding a fair bit of physics. The proprietarist is chuckling at me now.

…And yet, people *still* use microwaves routinely, without fear, but more importantly *without even verifying that the microwave has some sticker on it claiming it underwent some certification*. Seriously. Who does this? And of those who do, who believes a sticker on a microwave will rise up and shield you if the magic box decides to get unruly?

I can think of two likely reasons microwaves are routinely used by millions of average people without fear. One is fairly irrational (“the sticker gods will protect me”). The other is a bit less irrational – it worked fine yesterday, and the day before that, and I know millions of these are used, and so if one of them broke and irradiated a kitchen, I would have heard about it by now. In other words, people are playing the odds. I hear this all the time on airplanes; millions of people fly successfully; if it failed, I would know. The more people know, the fewer times that seller of elevators, microwaves, drugs, or airplanes would be able to sell successfully; what’s more, we know that sellers know that, and fear not only killing off their own customers but also anyone hearing about those customers.

Consider how useful that second mechanism would be without a pervasive reporting industry with a natural incentive to report accidents. Now consider how hard it would be if reporting accidents were only done by reporting to a proprietary reporter, who passed this news up a chain of proprietary editors, and then to a group of proprietary couriers, any of whom may elect not to pass that news further.

In short, I don’t think elevators and microwaves exemplify a reliability harm to consumers by sellers; I rather think they exemplify either a reliability harm (or a sixth form of harm, depending on how I understand reliability harm) to these sellers by the reporting mechanism they use – which is open source in a way Westerners tend to take for granted. (Even though it’s not entirely open either.)

It seems to me that long-term technological evolution will ultimately determine the fate of the proprietary versus open source software modalities (and related analogs). I don’t think that ethical decisionmaking will play as much a role in this outcome as simple utility, economy, and self-interest.

It sounds like your view of how software should be developed is a cross between communism and the ancient Catholic church.

Giving your code to anybody to use for any purpose (but if they are capable of making it do more than you can, they have to give back) is very much like “from each according to his ability, to each according to his need,” and letting people sin as long as you can charge them for it and use the proceeds for the greater good is a lot like selling indulgences.

I think it’s penny-wise, pound-foolish to worry about free riding from people who might make a profit on reusing your code. If RMS wasn’t worried so much about people reusing GCC in ways that make sense, GCC would be a lot better now, and there would be no LLVM to speak of.

Big businesses like IBM and Oracle work really hard to get their contributions into projects like Linux and Apache. Why? Not because they’re feeling charitable. No, it’s a highly pragmatic decision to invest a tiny bit and recoup that many times over from all the others who are investing a tiny bit. Philosophically, someone might make a private fork of Apache, but how well would that really fare in the world? Worrying about free-riding from people who *might* make a profit on your code is, IMO, silly, unless it’s a niche product you’re selling as closed source because that’s how you make your money.

>>“That 30% difference in willingness to pay is interesting information; thanks for bringing it up.”

>I think it may have dropped over time. I’ll look it up this evening, if they provide data for
> old release sets. I would not be surprised to see both a larger average payment and
> a larger linux-differential back when the model was still a novelty.

I was wrong about average payment (down, then up), but right about OS differential; the linux premium has fallen slowly but steadily since the first few entries, at least according to the moving average. (highest 80% over average, lowest 20%, currently 30%, MA about 35% and falling) How much that can be trusted with so few samples, I don’t know. Mac users exhibit the same higher payments that linux users do, in about the same amount once the chart has a chance to settle. That surprised me but probably shouldn’t, Apple prices being what they are.
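For anyone curious how such a moving-average differential might be computed, here is a minimal sketch of a trailing moving average in Python. The premium figures are made up for illustration (roughly matching the 80%-down-to-30% shape described above), not taken from any actual bundle data:

```python
def moving_average(values, window=3):
    """Simple trailing moving average over a list of numbers."""
    out = []
    for i in range(len(values)):
        span = values[max(0, i - window + 1):i + 1]
        out.append(sum(span) / len(span))
    return out

# Hypothetical per-bundle Linux premium over the overall average, in percent.
linux_premium = [80, 60, 45, 40, 35, 30]
print(moving_average(linux_premium))
```

With so few samples, a short window like this is about all you can do; a longer window would just smear the early outliers across the whole series.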

Windows users are consistently cheapskates. Or at least more so. Sadly, everybody pays far less than the games are worth, by about an order of magnitude. Doesn’t say much good about fans.

(addendum: Seems I oopsed; not all the HIB games are open source, only a subset. This is called poor fact checking on my part, though I’m too tired right now to work out if there’s a correlation between the OSS releases and “Linux Bonus.”)

@Patrick:
>One argument that GPL people make is that they hate to see their work used as a starting point for others who don’t give back.

For me (although I’ve not yet done any coding for anything other than my own use) whether others give back isn’t really an issue. The reason I’d use the GPL for anything I released publicly is because I don’t want my work being used as a starting point for others who abuse their customers. I don’t believe closed-source is wrong per se, but it does tend to be comorbid with various things (like DRM) that I believe are wrong. The GPL either forbids those things, or makes them pointless to implement / trivial for the end user to work around. Permissive licenses like the BSD license don’t.

I don’t care if anybody contributes back: I do care that my code doesn’t make it easier and cheaper to develop software that abuses its users. Requiring that people give back (in the form of publishing the sources to any modifications they make) is a way of ensuring that.

If RMS were in a 200-story building, would he refuse to use the elevators because of the closed firmware they’re running? This attitude is zealotry, which looks a lot like religious fanaticism – and look what that has done to us. Just my 2 cents.

I’d be interested in what RMS’s positions on these various examples actually are. I spoke to him about open source hardware and firmware many years ago (before coreboot and the maker movement took off), and his opinion seemed to be that neither of these was significant because of some quality that I could not completely identify; primarily, I think, an innate difficulty in sharing such software in a re-usable way. I don’t know whether he’s changed his opinion since then.

I also think that RMS’s ethical philosophy revolves more about your moral obligations about *writing* software, not using it. Writing closed source software is the unethical action; using it is just bad because you’re encouraging other people to write more closed source software.

@Patrick
“That wouldn’t be surprising — Stallman’s a lot of things, but stupid isn’t one of them. But do you have any sort of cite for this?”

I heard it several times in talks and podcasts with Bradley Kuhn. There is a quote on the FSF website. It is much more strategic than my initial paraphrase. But it is there. Bradley’s stance is that the FSF is promoting free software, and permissively licensed software is free. So, the FSF is happy with permissively licensed software.

When you contribute to an existing project, you should usually release your modified versions under the same license as the original work. It’s good to cooperate with the project’s maintainers, and using a different license for your modifications often makes that cooperation very difficult. You should only do that when there is a strong reason to justify it.

One case where using a different license can be justified is when you make major changes to a work under a non-copyleft license. If the version you’ve created is considerably more useful than the original, then it’s worth copylefting your work, for all the same reasons we normally recommend copyleft. If you are in this situation, please follow the recommendations below for licensing a new project.

If you choose to release your contributions under a different license for whatever reason, you must make sure that the original license allows use of the material under your chosen license. To minimize the impact on others, show explicitly which parts of the work are under which license.

In fact, I don’t think I’d even merge a patch where the submitter tried
to limit dual-license code to a single license (it might happen with
some non-maintained stuff where the original source of the dual license
is gone, but if somebody tried to send me an ACPI patch that said “this
is GPL only”, then I just wouldn’t take it).

I suspect the same “refuse to accept license limiting patches” would be
true of most kernel maintainers. At least to me a choice of license by
the _original_ author is a hell of a lot more important than the
technical legality of then limiting it to just one license.

@Danny
“and his opinion seemed to be that neither of these were significant because of some quality that I could not completely identify”

During the drafting of GPLv3, his position seemed to be that if the producer of the gadget cannot change the firmware, the user does not need the right to do so. At least, the FSF will not fight for it.

Thanks a lot for this article and sorry for my miserable English level.

I usually agree with you, and I’m not a fanatic.

In my view, software must be “free” in any case, because software is knowledge, and because it’s stupid to reinvent the wheel every time you need to develop anything.

We could have many more advances in software development if all software source code were available.

There are only two harms. The first is to oblige someone to free his work, because what he wants to do with it must be his choice.

The second harm falls on the developer, because he will lose the criticism and improvements that can come from the community.

Those who sell televisions, smart phones, elevators and so forth should focus more on their business than on software. The other way around: those who sell software should think more about how to take advantage of Free Software.

Patrick Maupin writes: “Not sure what you mean by *still* — this is, historically, a rather new development.”

Microwaves have been in homes since, what, 1970? That’s just under two generations.

In context, however, my full point was that people use microwaves despite their relative inability to estimate the threat of using one (unless everyone’s a radiation physics expert while I wasn’t looking – no, wait, people are still afraid of nuclear power). I doubt you believe that people are naturally comfortable with new devices at first, and then learn to fear them after a generation or two. Maybe it’s like cellphones: cheap access to an amazing capability – heating food up within minutes in this case – overcame any fears of the unknown. So we can add another reason why people are comfortable around microwaves.

But “if one breaks, I would have heard about it” could still be a factor in their comfort level at the same time. And I claim it’s still relevant in the sense of understanding why a microwave isn’t troubling to use. Again, it’s because defects would be widely publicized, thanks to a functionally open sourced reporting mechanism (our press, when it comes to reporting accidents).

In fact, we can see how and why this applies poorly in the case of software, and hence why closed source software is more troubling. The worst accidents in your closed source word processor result in losing the paper you were working on, which relatively few people care about. More common accidents involve things like your paper having formatting problems with footnotes that you can’t fix due to a bug, which is roughly analogous to someone complaining that their microwave won’t heat evenly. It’s an understandable problem, but it won’t make the news.

On the other side, software that could commonly malfunction in a lethal way would be very likely to be reported. Examples include software that runs health monitoring equipment and, eventually, self-driving automobiles. If either of these fucks up and kills someone, the author of the software stands to lose so much reputation that they will devote a great deal of time and effort to ensuring it can’t happen in normal usage.

The upshot of this is that closed source software in health monitoring equipment and self-driving cars should be about as troubling as elevators and microwaves. Some here may find this distressing; I kinda do, myself. But nevertheless, I claim it should be true by Eric’s reasoning – and because a certain open source mechanism is kicking in at a different point in the process.

@Paul Brinkley
“In context, however, my full point was that people use microwaves despite their relative inability to estimate the threat of using one (unless everyone’s a radiation physics expert while I wasn’t looking – no, wait, people are still afraid of nuclear power).”

Oddly, this sounds like you doubt the safety of microwave ovens (which injure like an open oven if they fail: direct and painful) but trust the safety of nuclear radiation sources, clouded in secrecy (which injure over many years if they fail).

>> Not sure what you mean by *still* — this is, historically, a rather new development.

> Microwaves have been in homes since, what, 1970? That’s just under two generations.

Sure, but I think it’s a fairly recent development that microwaves are used “without fear.” There was plenty of caution and hand-wringing when they first came out. The first Amanas were built like tanks and had three (count ’em, three) different interlock switches on the door, and a fuse would be blown if it ever didn’t interlock properly (switches actuated in the correct sequence!!! door might have been bent, you know…) and you’d have to take it to the shop, where they would check the door alignment and replace the fuse. I have no idea what microwaves are like now, but I actually worked in an appliance repair shop when I was in high school, and you’d have people come in who were sure their microwave damaged them because their elbow was hurting. You might say something like “I don’t see any burn” and then you’d hear something like “No, because microwaves cook from the inside out!”
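The interlock scheme described above is purely electromechanical, but its logic is easy to state in code. Here is a toy model in Python; the switch names and their required actuation order are invented for illustration, not taken from any real Amana schematic:

```python
class DoorInterlock:
    """Toy model of a multi-switch microwave door interlock.

    Three hypothetical switches must actuate in a fixed order when the
    door closes; any other order suggests a bent or misaligned door.
    """

    EXPECTED = ("primary", "secondary", "monitor")

    def __init__(self):
        self.fuse_intact = True

    def door_closed(self, observed_order):
        # Any deviation from the expected sequence "blows the fuse",
        # latching the oven off until it is serviced at the shop.
        if tuple(observed_order) != self.EXPECTED:
            self.fuse_intact = False
        return self.fuse_intact

    def magnetron_enabled(self):
        # The magnetron may only ever run while the fuse is intact.
        return self.fuse_intact
```

A correct closing sequence leaves the oven usable; a single bad sequence latches it off permanently, which is the fail-safe property the fuse-and-interlock design was buying.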

> In context, however, my full point was that people use microwaves despite their relative inability to estimate the threat of using one

And I’d say they have an excellent ability to estimate the short-term threat of using one.

> But “if one breaks, I would have heard about it” could still be a factor in their comfort level at the same time.

Sure, that’s why their estimation is valid for the short term. But humans are incredibly lousy at estimating long-term threats from low-level chronic stuff. Lead goblets, anyone? Then some of us realize that we’re lousy at this, and decide to err on the safe side on other substances. It’s a wildly swinging pendulum.

> And I claim it’s still relevant in the sense of understanding why a microwave isn’t troubling to use. Again, it’s because defects would be widely publicized, thanks to a functionally open sourced reporting mechanism (our press, when it comes to reporting accidents).

And, of course, to the extent that you’re right about it being “two generations” ago, the fact that parents routinely use stuff makes it normal for the kids. Same thing with sitting too close to the television.

We’re mostly in violent agreement. I was merely quibbling with the “still … without fear” when, to me, the “without fear” is rather a new development. I remember when it wasn’t so except for those few intrepid Joneses that everybody then decided to keep up with (after they didn’t die).

I buy the ‘communism’ part if it stays with folks like Malatesta, Bakunin and Kropotkin and don’t involve Marx or his followers :-)

Ancient Catholic church? Sure, it is a hacker sin not to share your hack, but I don’t really want to punish sin. It’s pragmatic: I want there to be an incentive to share freely and to discriminate in favor of the free ecosystem – because that makes me and everyone more free in the long run. I think it’s most effective to do it gently: allow closed source use but charge reasonably for it, both discriminating in favor of free and funding free development in the same go. And if customers actually contribute code back, you can always give a discount… but maybe that is the business model of the Catholic church (sans the freedom part)? Sustainable, then :-)

If faulty software is even capable of dropping an elevator car, it’s a very very bad design. Software should only be involved in deciding what floor to go to next. The elevator itself should employ physical protection against being dropped. There are two different devices of which I am aware that are typically installed in elevators.

Every microwave of which I’ve seen a schematic has multiple door interlock switches that disengage the radiation source and thus prevent irradiating anything outside of the microwave itself.

There is simply no reason to trust software to do the right thing in these cases. There aren’t enough eyeballs in the world to make me trust the code better than the physical safety features.

Some software safety issues in elevators and microwaves are well understood and designed for. ROMs fade, RAM fails, important mechanical parts break, lightning strikes hit power lines–all of these scenarios are (relatively) common, and all trump software bugs. It doesn’t matter how awesome your code is if you store it in ROM with bits that flip randomly every few years, and people who design elevators and microwaves know that.

That said, software-induced harm is still possible. Bugs that prevent the display of warning indicators or add subtle noise or harmonics into electrical signals can cause failure modes that are unknown to the device’s designers and therefore outside the scope of what the mechanical safety features were designed for. Battery charge controller firmware is a good example of this, especially when the battery in question is made of flammable materials. Hacking the software can make the battery perform better, but over the long term a change in software behavior can also make the battery more likely to catch fire.
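The battery example is a natural place for defense in depth: even modified, “performance-tuned” firmware should have its requests pass through a hard clamp before they reach the charging hardware. A minimal sketch, with made-up limits for a hypothetical single lithium cell:

```python
# Conservative hard limits for a hypothetical single lithium cell.
# Real limits come from the cell datasheet; these are illustrative.
MAX_CHARGE_VOLTAGE = 4.2   # volts
MAX_CHARGE_CURRENT = 2.0   # amps

def clamp_charge_request(volts, amps):
    """Clamp a requested charge setpoint to the cell's safe envelope.

    Tuning code may ask for more, but the request never reaches the
    charging hardware unclamped.
    """
    return (min(volts, MAX_CHARGE_VOLTAGE),
            min(amps, MAX_CHARGE_CURRENT))

print(clamp_charge_request(4.35, 3.0))  # → (4.2, 2.0): hot-rodded request gets clamped
```

The point of keeping the clamp separate from the tuning logic is precisely the one made above: a long-term change in software behavior shouldn’t be able to push the cell outside the envelope its mechanical and chemical safety margins were designed for.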

If you took a microwave or an elevator apart, you might be able to build something interesting from the pieces that is unconstrained by the safety features of the original mechanical design. That’s a whole different scenario though, with a different harm analysis.

People apparently do build credit card skimmers using processor components from MP3 players. I imagine they do not find proprietary MP3 player firmware to be at the top of the list of things they find harmful to them (surely the willingness of vendors to turn over names and addresses from sales records to detectives is much more harmful). It’s difficult to be sympathetic with this group if they are harmed by proprietary credit card skimmer firmware.

The robot that parks cars at the Garden Street Garage in Hoboken, New Jersey, trapped hundreds of its wards last week for several days. But it wasn’t the technology car owners had to curse, it was the terms of a software license.

@Jeff Read; As usual, you are so full of crap, it’s coming out your ears regarding Word. I’ve used nothing but StarOffice/OpenOffice/LibreOffice since 1998. The only problems I’ve ever had have been related to fonts. And those can either be downloaded for free from Microsoft or you can use the Liberation fonts from Red Hat.

Uhhh, my wife and I did the neighborhood newsletter for a few years with OpenOffice (early 2000s), and although she’s somewhat creative, I’d say we mainly used basic functionality.

The thing was buggy. What can I say? It worked, but sometimes it would do really strange things — changing font characteristics when you type one more character, etc. And don’t get me started on anchors for pictures, etc.

It’s been a while since I’ve used it for anything but basic text (where it mostly works great), but even the other day I had trouble getting it to use the printer driver properly to suck in card stock from the aux tray. And once I had the printer driver stuff figured out, when I opened a new document and tried to set it up before using it, the settings all changed as soon as I actually entered text into the document.

For serious documents at work, I pretty much stick to rst2pdf. If I were doing more math, it’d probably be LaTeX, but raw reStructuredText is much more readable than that. Either one is plain ASCII, easy to diff and revision control.

Assessing evil only in terms of harm to oneself misses part of the moral landscape, though it is better than the left wing view that evil is only harm to other people – the view that harming random strangers far away, or even harming your enemies, is as bad as harming your friends and kin.

Closed source software has externalities that benefit those privy to what is inside it. Open source software has externalities that benefit other users of the software; releasing it is thus loyalty to one’s community – one is primarily benefiting people like oneself, who are apt to return the favor.

For some reason, it is impermissible for Jewish settlers to be loyal to other Jewish settlers, indeed horrifyingly wicked and racist of them, but it is OK, and indeed entirely admirable, for open source user/developers to be loyal to other open source users/developers.

@Patrick:
“I could have worded that a bit better, but I meant in the way of software. But I suspect you knew that :-)”

I knew exactly what you meant, but I still couldn’t let it pass. The idea that code is a “thing” that must be charged for is way past its sell-by date, and I hate seeing anything posted by otherwise-well-informed posters that could propagate this idea.

@Michael Hipp:
“True, but it appears there are a relative few who ever make much at this. Whereas there are lots of companies and programmers earning a handsome living on closed source. Pity.”

I worked as a professional software engineer for 10+ years, and worked in a variety of languages on a good range of hardware and operating systems. In that time, I worked on exactly one project that was sold as a shrink-wrapped “thing”. All the other code was back-office internal stuff — quite valuable to the organization, but useful for its ability to be productive rather than to be sold outside. I’m confident that a lot of people have made a comfortable living writing code that is never sold.

I suspect that when you say “handsome living” above, you really mean “monopoly rents”.

@Cathy
It has been estimated that OTS software is (much) less than 10% of total production. On the other hand, only around 5% of the money spent on OTS software is spent on actually writing it. The rest is marketing and retail.

@Cathy
“I suspect that when you say ‘handsome living’ above, you really mean ‘monopoly rents’.”

Not really. It is “monopoly rents” only in a relatively few cases (e.g. Apple, MS, Oracle, Intuit) and then only on the corporate side. The coders get a wage, not a windfall. But the point was that there are a lot more making a living on closed source software than the relative few that make a living on open source.

If you spend much time in the Windows ecosystem, you’ll observe that there are probably tens of thousands of companies and small programming shops making a living selling closed source “stuff”. The number that even bother to try to sell open source products is essentially zero.

In short, I’m happy for Red Hat, but they’re the exception, not the rule. The likelihood of making a living selling “support” is very low. Open source is not a money maker except in a few rare cases. I’d be truly grateful if someone would show me I’m wrong.

The hard part in IT projects is not the software, but the matching of software and its configuration to business processes, work flows and document flows in a company. Or the redesigning of said business processes, work flows, … to better make use of new software or new features.

So the money is to be made on the hard part – the business model is to sell knowledge, expertise, solutions; the actual software is simply a tool you can throw in for free as a bonus (giving you a competitive advantage).
It’s a business model that suits open source very well, better than proprietary software. So I’d expect that, over time, as more software becomes commodity, the “selling bits” business model will become obsolete, although there might always be an individual coder left or right who makes a few bucks selling his software. I know buskers who make a handsome living too. But the one guy becoming a millionaire by selling the one program he wrote in his basement is as much a thing of the past as the two guys becoming a million-dollar company by selling personal computers they soldered together in their garage.

Of course the open source developers don’t directly benefit financially from this – unless they or their project gets funded by the companies that depend on it. That is happening already.

“Open source is not a money maker” is only true if you equate ‘making money’ with ‘selling a piece of software’, but selling software is a 20th century business model that’s losing relevance.

> I’m confident that a lot of people have made a comfortable living writing code that is never sold.

Sure, but…

Most of them make that comfortable living writing said code for an entity that considers it to be a part of their special secret sauce.

> I suspect that when you say “handsome living” above, you really mean “monopoly rents”.

Unlike Michael Hipp, I don’t spend any time (any more) in the Windows ecosystem. I do other things in addition to writing software. Yet I consider that my primary worth is that I’m a developer, and I make what I, personally, consider a “handsome living” although I’m sure it’s a pittance to some.

> Coders are kept down by the monopoly control of some over the “knowledge”, i.e. proprietary code.

That’s a rather simplistic view. In point of fact, an argument could be made that the availability of good open source software increases the number of people who can be productive in the field, thus driving down the cost of labor. For a concrete example of this phenomenon, one need look no further than the proliferation of script kiddies among the cracker contingent. Are they good? No. Are they easily caught? Usually. Do they still manage to do a lot of damage? Quite often.

It has always been the case that a really good programmer might be 10x as productive (counting time not lost due to defects and non-extensibility) as a mediocre programmer, but in most companies the pay delta might, if the good programmer is lucky, only approach 1.5x.

Even in the past, I was exposed to environments where the worse programmer could actually make much more money than the better programmer, because they did half-assed RAD prototyping, and managed to palm off really making stuff work to mere maintenance programmers. The cynic in me only sees this escalating; the Dunning-Kruger effect is strong in many programmers, and many managers still confuse confidence with competence. This is exacerbated by face-time. When people fuck something up badly, and then manage to fix it, they get known by upper management as the guy who saved the account. Also, upper management knows they’ve had face-time with the customer, so they need to be compensated so they don’t leave for greener pastures. The guy who just sits quietly in the back and does his job may never meet the customers.

So, to the extent that open source puts more great tools in the hands of the unthinking, they can bamboozle management even more quickly.

Long ago, I gravitated towards working for chip companies. Things are much better there. A bad tapeout, even for a small, non-bleeding edge company like the one where I work, can cost upwards of half a million dollars and 3 months of opportunity costs. Those things scale dramatically for larger companies. If you’re developing an SOC for a consumer market, you might be at a node where the out-of-pocket NRE for an additional mask spin is well above $2 million, and the opportunity cost is all or nothing, because if you don’t have chips to your customer by April for testing, you won’t be in this Christmas’s devices, so it’s time to start work on all the enhancements for next year.

In this sort of environment, people are interested in real defect tracking and real continuous improvement. I won’t say “best practices” are always used — those things can actually cost more than they’re worth — but certainly “better practices” are used, because the results of poor practices are far too costly in a fashion that’s transparently apparent to everybody up and down the management chain.

Now if only we could wave a magic wand and make marketing as accountable as engineering.

Less than you think. Adam Smith already explained how the party that controls the “property” can skim off all the profit.

For instance, MS are extremely crafty in how they are able to set the margin of all down-stream players. Many companies who opened up new markets on top of windows lost all their profits to MS, were acquired, or forced to take up MS people in their management (Citrix comes to mind). MS get an 80% margin on Windows for a reason.

Compare to doctors. If one drug company or one big HMO would be able to get patent control over health care, they could easily extract all profit. Leaving little for the medical practitioners. However, the medical practitioners have always been able to block any such control over the tools of the profession.

But all the proprietary code prevents coders from practicing at high margins. When they try, the owner of the code from which they work, MS, IBM, Oracle, Apple, can simply put on the screws and skim off the profits.

@Patrick
“It has always been the case that a really good programmer might be 10x as productive (counting time not lost due to defects and non-extensibility) as a mediocre programmer,”

Less than you think. Adam Smith already explained how the party that controls the “property” can skim off all the profit.

And, that’s still a rather simplistic generalization, especially when applied to software. Otherwise, there would be no software companies except Microsoft. But we have already seen where they have succumbed to a certain amount of pricing pressure, e.g. on Windows for netbooks.

But all the proprietary code prevents coders from practicing at high margins. When they try, the owner of the code from which they work, MS, IBM, Oracle, Apple, can simply put on the screws and skim off the profits.

But it’s arguable that most of the money being skimmed is from other areas of the economy, rather than out of programmers’ pockets. (A minute’s rational reflection will show that programmers are often much better paid than people in other trades with similar skills and education.) The arms race for better computerization is analogous to carriers selling iPhones — a business can die if it doesn’t use computers properly, but only the big boys like Amazon and WalMart can actually make money using computers. For everybody else, it’s an expense that’s paid in order to simply not fall too far behind.

I have seen numbers ten times as high. [e.g. 100 to 1 productivity ratio in bad/good programmers]

Sure, but a minute’s reflection will show that it’s really hard to monetize this sort of discrepancy in most fields. For example, a plumber who is 100 times better than another plumber is not going to make 100 times as much. Things like physics of simply getting from house to house, and finding the customers in the first place, will preclude this.

In most fields, to make serious money, you have to become a manager or owner (at least to the extent of taking ownership of things like finding and managing customers). You haven’t written anything that would convince me that programming is any different, and my thesis is that wide availability of good open source software might even reduce, not only the average programming wage, but the average programming wage for what you might consider to be good programmers.

@kn “The hard part in IT projects is not the software, but the matching of software and its configuration to business processes…”

Seems to me that experience from the field indicates both are plenty “hard”.

“…So I’d expect that, over time, as more software becomes commodity, the “selling bits” business model will become obsolete…”

Yes, that theory has been around a long time. It was a pretty strong theme in CATB as I recall. I just don’t see that grumpy old Mr. Reality cares much for the theory. Other than a few like Red Hat, IBM and Oracle, where are all these people/companies that should be earning a decent wage doing support of OSS?

I would think by now (1.5 decades in?) we’d be seeing lots of examples of all shapes and sizes, but where are they?

“.. but selling software is a 20th century business model that’s loosing relevance.”

Regarding the EVE Online bug: it did not “wipe boot drives”; it deleted one critical file, named boot.ini, from the root directory of the drive the EVE client was installed on. If that drive happened to be the boot drive for Windows, the machine would not boot on the next attempt. All other data on the affected drive was intact, but it involved pain and gnashing of teeth to get to.
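A bug of that shape (a file reached by a mis-joined or absolute path ending up at the drive root) is exactly what a containment guard prevents. A minimal sketch, assuming the installer knows its own install directory; the function name and layout are hypothetical, not CCP’s actual code:

```python
import os

def safe_remove(install_dir, relative_name):
    """Delete a file only if it really resolves to inside install_dir.

    Guards against path-join bugs that would otherwise reach files
    like boot.ini in the root directory of the drive.
    """
    target = os.path.realpath(os.path.join(install_dir, relative_name))
    root = os.path.realpath(install_dir)
    # realpath has resolved any ".." components, so a simple prefix
    # check on the common path is enough to detect escape attempts.
    if os.path.commonpath([target, root]) != root:
        raise ValueError("refusing to delete outside install dir: %s" % target)
    if os.path.exists(target):
        os.remove(target)
```

The design choice worth noting is that the check happens on the fully resolved path, after symlinks and `..` are expanded, so the guard can’t be bypassed by a cleverly constructed relative name.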

WordPress, Basho, Opscode, Puppet Labs, MontyDB, etc., etc. Writing open source software and selling support is a model that works very well for server software. If your software needs a good UI, for some reason your chances of success with that model drop.

@ Michael Hipp
>Seems to me that experience from the field indicates both are plenty “hard”.

Personally, I find setting up a server and installing some software on it pretty easy. Getting people to think critically about their work flows and business processes, and to see how they could be made more efficient by means of software, is a lot harder. Actually changing processes and work flows is really hard – it’s where most IT projects fail, and people write books about it: http://en.wikipedia.org/wiki/Business/IT_alignment

>Yes, that theory has been around a long time.[ …] Other than a few like Red Hat, IBM and Oracle, where are all these people/companies that should be earning a decent wage doing support of OSS.

You’re focusing too much on that “support” word. More generally, it’s about added value. Open source software is a commodity, and you make your money on the value you add to it. What value you add is a business question, and I’m not a business man.
Support is the obvious answer, and not a very good one, unless it goes way beyond helpdesking and troubleshooting.

Here’s an example. Imagine a sector that still runs its administration in a very traditional way – everything on paper, with computers used only as machines to produce text to be printed. Management wants to reduce costs and be more efficient, so they’re going to work more ‘digitally’.

Consider 2 solutions :
(a) sell them document management software.
(b) sell them your expertise in office automation and document management, some project management and process redesign consultancy, and your expertise in integrating a new document management system with their existing software. Throw in an open source document management system for free, but make sure you can set it up and run it in a way that matches the work flows and the processes you designed.

Which one do you think has the best chance of success? I think (b).
(a) is the approach from the ’80s and ’90s, and it’s been known to fail often. That’s why I’m saying selling software is an obsolete business model.

I know several companies in Belgium that I do business with and that have ‘some sort of added value on top of open source’ as a business model. I think the shift from selling support on software to selling solutions in which software happens to be a component is a relatively new trend, and I expect it to grow.
But I haven’t really researched this, so I can’t give you any hard facts.

@kn “Personally, I find setting up a server and installing some software on it pretty easy.”

So do I. But the topic was about *writing* software, not about doing a few apt-gets and editing some config files.

“You’re focusing too much on that “support” word.”

That was Cathy’s word, not mine. And it was her assertion that this is the answer to how to make money with OSS.

“(a) sell them … software.
(b) sell them your expertise”

No doubt we’re in agreement here as to (b) being the better gig. But distinguishing yourself from the typical Take-My-Money consultant requires considerable care. Business people, rightfully, think of IT consultants in the same frame as used car salesmen and TV preachers. Note also, that most of these “value add” organizations just want to sell you MS SharePoint, Exchange, and Dynamics GP on top of a lucrative “customization and training” contract. Someone doing this with a fully-featured OSS stack would be a breath of fresh air.

I thought the topic was “making money from software”. That’s what I replied to, anyway.

Is writing (application) software really that hard? I’d guess the really hard part is figuring out what the user/customer wants and needs; after that it’s just a matter of coding it.
Apparently getting to know users’/customers’ wants and needs is so hard that a whole new style of software development was invented to deal with that problem.

> […]. Someone doing this with a fully-featured OSS stack would be a breath of fresh air.

I agree about the risk of being taken for used car salesmen and TV preachers etc., but from your last sentence I take it you’d consider what I described a viable model for making money off open source software. That was my point.

A good lawyer can easily earn a hundred times the money that a bad lawyer makes.

Usually for that to be true, the good lawyer either needs to be somewhat morally bankrupt, or fighting another lawyer who is morally bankrupt. There’s an old saying about small towns: “One lawyer will starve, but two can prosper.”

Of course, you could be looking at the other end of the spectrum, as well, where the factor increases from a hundred-fold to infinite. There are lawyers who are so piteously incompetent, they must have passed the bar by random chance. Due to the way language works, we call these people lawyers. But a programmer who is that bad is not called a programmer, by anyone but himself.

@kn “Is writing (application) software really that hard ? I’d guess the really hard part is figuring out what the user/customer wants and needs; after that it’s just a matter of coding it.”

Yes, it really is “that hard”. Every part of it. Even the “just a matter of coding it” part.

“I take it you’d consider what i described a viable model for making money of open source software. That was my point .”

Er, well, not really. It would be great except for 2 problems:
a) The OSS stack needed to fully automate most complex business processes simply doesn’t exist. I’m thinking mostly of something like ERP, but I’ve also looked into things like Document Management and found mostly bits & bytes, not actual implementable solutions.
b) Nobody (or hardly anybody) is doing it – which leads me to believe it isn’t the pat answer any more than Cathy’s “support” answer was.

The point is, there’s still way more money it seems in closed source than open. That was the original debate methinks.

The OSS stack to fully automate most complex business processes simply doesn’t exist.

That’s the hard part. You need a catalyst. This is why the first areas where open source has proliferated are either the sorts of areas where people do the work as a hobby, or are areas that are actively being researched by universities. Business processes are old hat and boring.

Most management has certain biases, but is not stupid. It’s hard to convince your manager to open-source something that has been developed completely in-house, because the management realizes that it’s probably not suitable for the general market, so all you might be doing is enabling your competitors, or even just giving them information on how you do things. Even if those things aren’t true, management could see a lot of effort being wasted to generalize the code, and then either nobody else uses it, or others use it but just generate feature requests, rather than co-development.

On the other hand, it’s often easy to get management to let you contribute to a preexisting open source project, especially an active one. The argument that you’re getting more value than you’re giving is a good start. But for active projects, the argument that bug-fixes and enhancements are much easier to roll in to your internal processes if you give back your own tweaks is one that no sane manager can rebut.

So for the sorts of software you are discussing, there seems to be a hump that is difficult to get over. Venture capitalists want a huge return and aren’t that interested in the service model. They got all excited about open source a few years ago, but seem to have ratcheted it down a notch. Normal corporations who are the putative customers have seen lots of software projects go south, so even the kickstarter model might be a difficult sell.

I think we’re seeing some growth based on programmers and small shops who want to be able to reuse and share, and end-customers who are starting to see the light. I think this is accelerating slowly; it was accelerating much more quickly when the BSA was suing the likes of Ernie Ball, but they’ve apparently gotten a bit more savvy about picking their targets, and/or the size of the discount for not discussing settlements publicly. But this is slow, mostly organic growth.

@Patrick Maupin “That’s the hard part. You need a catalyst. … Business processes are old hat and boring.”

This is a good point. But it’s actually worse than that. There is one thing in common about almost every successful OSS project: it’s a horizontal solution. Kernels, network software, development tools, even user stuff like office suites all work horizontally across a varied application space. Business processes are almost always vertical – in 2 ways. They’re vertical by industry and type of business. But they’re also vertical even within an industry and type of business in that each one is highly unique. e.g. Two mid-sized retailers probably run identical office suites, but their inventory systems might bear almost no resemblance to one another.

It’s fun to solve interesting problems, even in grunge space like business process. It’s not fun to have to re-solve the same problem every time it is encountered. You never get good at it and you never reduce it to a “formula”.

But enough whining!!! I’ve got a w2k3 server to re-load from scratch by 6:00am Monday morning and it won’t get done this way even sans church tomorrow. Night all.

It is well-documented that RMS used proprietary OS’s and tools to build GNU. I believe he has himself made the argument that free software on a proprietary platform is better than closed software on a closed platform.

I am, according to your comments, a “zealot” because I want to modify my VCR. I normally wouldn’t want to, but it had an obvious bug: it adjusted for Daylight Saving Time _backwards_. This is either a bad branch or a bad compare. You could almost fix it with a hex editor; but there’s no way to install the fixed version. I get irritated finding small, easily corrected bugs that I’m blocked from fixing. I find them in my car, in my toaster; they’re everywhere.
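
A hypothetical sketch of the kind of one-character bug being described: flip one sign (or invert one branch) and the DST adjustment runs backwards. The function names and minute arithmetic here are my own illustration, not the VCR’s actual firmware.

```python
def clock_adjust(minutes_since_midnight, entering_dst):
    """Correct version: spring forward one hour when DST begins."""
    return (minutes_since_midnight + (60 if entering_dst else -60)) % (24 * 60)

def clock_adjust_buggy(minutes_since_midnight, entering_dst):
    """One flipped sign, and the clock falls back instead of springing forward."""
    return (minutes_since_midnight + (-60 if entering_dst else 60)) % (24 * 60)

# At 2:00 AM (minute 120), entering DST:
print(clock_adjust(120, True))        # 180 -> clock reads 3:00 AM (right)
print(clock_adjust_buggy(120, True))  # 60  -> clock reads 1:00 AM (backwards)
```

A one-character patch fixes it – which is exactly the frustration: the fix is trivial to write and impossible to install.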

Lastly, you clearly need to finish this work. I figure it’ll take 3 more blog posts to fully cover this idea. You now need 2) positive benefits of proprietary software, 3) harm from open source software, and 4) benefits from open source software. Once you have all four quadrants of this Cartesian plane covered, you can then start to draw conclusions. And, perhaps, we can figure out what the tic-marks should be for each axis. This would permit a quantitative analysis of specific software in specific use models.

I have been trying to convince my employer (a very large defense company) that their choice of MS for desktops has lowered the financials for this quarter. This is because when the economy is bad, a vendor like MS can simply force an upgrade cycle (in our case, by stopping security updates for XP). This means that while their bottom line improves, ours deteriorates. Since we can’t control the timing, we have to invest large sums of money in an upgrade we don’t need yet, in an economy that reduces our income.

It’s part of the “total cost of ownership” that no one seems to factor in. I suspect because B-school grads have this hallucination that there are no other choices.

On using the elevator at my work, I often wish it wouldn’t pick me up on the ground floor and then go to the cellar to pick up the cleaner who wants to go up 1 floor with her trolley that fills the entire elevator. It should either pick her up first or drop me off first. Not that open source firmware would help – can’t imagine my employer being too thrilled with a request to install some changes. Anyway, great article. I instinctively do treat software in different real world situations much as you describe but having it clearly worded makes it easier to consider more complex cases.

You mention the cost of unhackability and stability. And yet Microsoft Office is arguably far more stable than *any* open source counterpart. It’s also arguably a far better piece of software.

If the very best open source application is inferior to the closed source application then you have to evaluate things not in “harm” but in *COST*. Yes the *cost* of converting your documents is potentially high–but only if there is a superior alternative. Open source has consistently failed to produce superior solutions to commercial closed source applications. I use an application called 3ds max on a day-to-day basis. It’s a closed source application and its closest open source rival would be Blender. The two are almost incomparable though. 3ds Max is without question the superior product. If my clients pay me $100-$300 an hour and the software costs $3,000 then it’s a cost question not a moral question. Most of your “harms” are actually costs. What’s the cost of transitioning off the application? Well, it would be higher than the cost of the application for sure–but the cost of not using the application would be even higher.

What’s the lock-in cost for me using Windows? Probably $1,000. But if Windows saves me an hour or two a month, that means in 6 months the “harm” of using Windows is negated.

You’re falling for the exact same thing that you state at the beginning of the article: “one who redoubles his efforts after he has forgotten his aim.” You yourself are obsessed with theory at the cost of reality. The theory that Open source is more stable, less susceptible to snooping, receives more frequent feature updates and has no lock-in is simply theory. And it’s not even very solid theory.

Let’s take more stable. I know plenty of horrendously unstable open source applications, and plenty of stable ones. Windows is very stable for me and by stable I include odd hardware conflicts etc. When I run linux setups there are always numerous and unpredictable conflicts between the specific linux distribution and the software in question. If I only ran one application I could run their supported Distro but that’s impossible when two different programs want 2 different distros.

You say that closed source vendors can insert censorship or spy software on your system. But that’s only true if you know the exact state of your system at all times. No system is 100% secure. So an open source linux system might be secure but it only takes one infected executable to potentially infect your computer and spy on you. And it doesn’t matter if your computer has 1,000 vulnerabilities or 1. It only takes 1. So both open source and closed source are impossible to secure against espionage.

You say that closed source vendors lock you into file formats. But the upside of being locked into a file format is that it’s widely supported. Sure, some open source application might expose its file format in the code, but the *cost* of finding someone to create a new application to read that is probably substantially higher than for a popular closed source application like Word, which has numerous parties supporting it. It’s also far less likely to happen. Word, by the nature of its monopoly and lock-in, is also far less transient than my open source applications, which you start using and then have the developer lose interest in.

You mention hack-ability. That’s true to some degree in a very abstract theoretical nature but in practical terms closed-source Unreal Engine is more hackable than many open source game engines since Epic Games has put so much effort into making the engine accessible. Sure you could theoretically add any feature you want to ___Insert__Open_Source_Engine___ but if it cost you $100,000 to do so and Unreal Engine does it easily for $100 then the open source alternative is far more *costly* than the closed source alternative.

Open source can update faster. This is true. But *will it*? Being able to theoretically update but having no active development isn’t advantageous. Sure Linux *could* add a lot of features–but only if someone decides they want to add them. In that respect 99.99% of users are in the same boat with open or closed source. Many open source applications have not achieved feature parity with their closed source equivalents. If your application gets daily updates but doesn’t have feature parity isn’t that sort of pointless? I say yes.

When is closed source harmful? When there is a better cheaper open source alternative (including future costs such as data translation etc.)

With respect, you have *enumerated*, not *evaluated*…
So what’s the value of having an open source vs. a closed source system?

I would model this using “Real Options”: basically, what price do you put on being able to choose, rather than taking what you get?
Real options are often taught in terms of real estate. When you choose to build offices, shops, factories or homes, you are giving up the ability to (say) look at the market 3 months from now and decide that a different choice will make you more money.
Of course, if you never develop the land you pay interest but don’t get any revenue, hoping that the general price of real estate will go up – an assumption that we have all seen hurt a lot of people recently.
But it applies to anything where “using up” a choice now reduces your flexibility, and that reduces the utility of whatever you get for making the choice.
Basic option theory tells us that the more volatile the market, the more an option is worth, indeed bankers who value financial derivatives like options focus on volatility almost to the exclusion of all other factors.
The intuition here is that when things are drifting predictably along, being able to choose is worth less than when things are varying a lot.
In IT there are some clear examples:
I type this text on software that works, a predictable situation, so the value of the option to change it is effectively zero.
Occasionally some data of mine gets corrupted and often the code basically says
if (header == good) load(); else message("Bad data file, go away");
In that situation, being able to change that code is worth money to me, money being defined here as the cost of my time to rebuild the data. Note that changing the code also has a cost, and if rebuilding the data is less expensive than rewriting the code to retrieve it, then that’s what I should do.
Again this is like a financial option: the right to buy Microsoft shares tomorrow at $50, when they trade all day at $40-45, turns out to be useless. But to buy this option I usually have to pay someone, so I’ve made a loss.
But if MS shares bounce around from 40-60, that $50 option might be worth a lot.
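
This intuition can be sketched with a crude Monte Carlo valuation. The $50 strike and the $40-45 vs. $40-60 trading ranges are the hypothetical numbers from the comment above; the uniform price distribution is my own simplifying assumption, not proper option-pricing theory.

```python
import random

def option_value(strike, low, high, trials=100_000, seed=42):
    """Expected payoff of a call option, assuming (crudely) that the
    final share price is uniformly distributed on [low, high]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        price = rng.uniform(low, high)
        total += max(price - strike, 0.0)  # option pays only above the strike
    return total / trials

# Shares drifting in $40-45: the $50 option never pays off.
print(option_value(50, 40, 45))  # 0.0
# Shares bouncing in $40-60: the same option is worth about $2.50.
print(option_value(50, 40, 60))
```

Same strike, same stock; only the volatility differs, and that difference alone moves the option from worthless to worth money – which is the point about open source as an option on future change.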

So to evaluate the value of open source vs. closed, you need to know how “random” your life is going to be with it. Also, your cost function is personal: I can code C++, but am mediocre at Java and have barely used PHP, and each of us has a different mix.
Since many people reading this are employees, we have a different function entirely….
If I can re-cut a bit of open source to do a job, I’ve created extra value for my employer that is more due to me than to the writer of the s/w. If that s/w is closed source, that option is not available to me; and given a function that maps the value you add to your pay (a mapping whose slope may be low but is usually positive), closed source has less value to you.

we met eek a decade ago i believe, in Denmark. my thoughts on this are simple: people have a different threshold tolerance to “pain”. we’re all different. i believe that people who have a particular goal in mind and stick to it by living it *out of principle* should be admired – but not necessarily followed! again: everyone is different, and has to make up their own mind. but in any area of expertise or way of life, i’d far rather listen to someone who actually lives by the principles they preach rather than just talks about them.

so: this is an absolutely superb and incredibly useful assessment system, similar to that of security threat assessment, which applies right across the board, and i agree with others that it should be made more of a formal metric. you *are* however going to have to accept that others have different tolerance levels; Software (Libre) sets a much lower “pain” barrier. also, these metrics are as equally useful to Software (Libre) Developers and even and especially to the FSF, because they will help the FSF to formally and officially explain the prioritisation of its activities.

Dr Stallman is *not* a fuh-nah-tyk with “no goal or aims” – his aims are the same: he’s merely set a much lower threshold tolerance than you have. that’s all.

>Dr Stallman is *not* a fuh-nah-tyk with “no goal or aims” – his aims are the same: he’s merely set a much lower threshold tolerance than you have. that’s all.

Huh? I don’t think anyone here has claimed RMS has “no goal or aims”; that would be nuts.

The FSF’s language becomes fanatical when it crosses into describing closed source as an unconditional moral evil, demanding that others regard closed source as an unconditional moral evil, and insisting that anyone who fails to do so has no principles.

This is not the same as a lower threshold tolerance; it’s a completely different mind-set and a different way of framing the problem, one which deemphasizes pragmatic avoidance of harm and invokes emotional responses related to purity, corruption, obligation, prohibition, virtue, and sin.

I think it’s a little short sighted to diminish the significance of things like microwaves and elevators. Any student of history who is familiar with the effects of guilds and trade secrets should be able to see parallels. Patents were originally put into effect because it was widely seen that manufacturing secrets were a huge threat to a healthy society.

And, the world isn’t standing still. Controlling software and music and movies is the old wave.

I’ve got a homebrew makerbot upstairs that I’ve been raising my kid to think of as a toy. There are torrent sites being set up in the wild for the sharing of 3D models of physical objects so that people with these tools can be empowered, and the tools are spreading through society and getting better fast.

Personally, I would like to build a future where my kid is liberated from the coerced participation that the industrial revolution brought. I think your evaluation of the situation is backwards, really.

The purely physical layer, things like bottles and tables and sculptures, those are not very threatening from a control perspective. You can see them, scan them and manufacture them for yourself.

The pure data layer is also not very threatening from a control perspective. Open source software is rampant, digital distribution is trivial. And the more highly abstracted the software is from the hardware, the more trivial it is.

But the electronics and low level code that requires intimate understanding of the electronics are hard to do for yourself. You can’t just look at a multilevel circuit board with a simple scanner or the naked eye and model it up for manufacturing, and you can’t develop effective low level code to exploit black box hardware.

The areas you trivialise are the new fronts in this battle, and you don’t even see it. Maybe it’s just that you’re getting old…

>The areas you trivialise are the new fronts in this battle, and you don’t even see it. Maybe it’s just that you’re getting old…

You seem seriously confused in at least two different ways.

Of course the place where the world of bits meets the world of atoms is vitally important, and bound to become more so. But that doesn’t mean every kind of software that lives at that interface inherits that importance. Behavioral range and flexibility matter a lot to our calculations of relative harm and what battles we choose to fight in what order. Closed-source elevator controllers fail to be very interesting, relatively speaking, because they have little of either range or flexibility; their hackability payoff is low.

You’ve also confused the future with the present. Just because the more flexible and general-purpose descendants of today’s elevator firmware are going to be important “new fronts in this battle” doesn’t mean we should over-allocate attention to the dumb firmware that exists in the now. There are more important battles to fight in the now – like ensuring that closed source doesn’t dominate smartphones.

I just wanted to add, the thing I like about RMS is that his stance doesn’t just protect the people that live in the present. It also protects the people who are building the future. That’s the difference between, and I’ll use your words… “idealism and pragmatism”… the idealist protects people who he will never be able to relate to, while the pragmatist protects his fellows because his ability to relate is core to what he considers important.

This is really the same as the issue I raised, the profit motive. I think though that Eric is right, capital formation (which again is basically the same thing in this case) is much less of an issue with OSS. The plain fact is that a huge amount of labor is given away for free in the OSS world that would have to be paid for in close source, which is not insignificant.

BTW, I think it is worth distinguishing two dimensions here that tend to get conflated: open source versus closed source, and paid versus unpaid. Most of those four quadrants have some occupants, and the arguments along the two different dimensions are slightly different.

> You mention the cost of unhackability and stability. And yet Microsoft Office is arguably far more stable than *any* open source counterpart.

But here is the crazy thing: Microsoft in 2007 totally changed the GUI for Office, to the point that I find it almost unusable. No backward compatibility of the GUI. I think that sucks, and is the very essence of some of the problems with closed source software.

And just to be clear, I am a person who makes a living selling closed source software. But I try to recognize the limits of that world. I think RMS’s zealotry, his moral crusade, is really not much different from Bill Gates calling OSS communism.

The idea that closed source software is inherently more stable is plainly not true. The internet does not run on Windows and IIS. I use Windows every day. I’m a programmer, so I push it to its limits. I usually have to reboot once, sometimes twice a day; I don’t think Linux people have that problem. IIS is a lot better than it was (especially with separated App Domains in ASP.NET), but I don’t doubt for a second that Apache is both quicker and a much more parsimonious user of resources.

But you are right: the problems Eric enumerates are seen in the corporate world not as problems but as costs. However, they are indeed rarely factored into TCO calculations.

Although a lot of option theory is about “profit”, the right term is “utility” – i.e., closed source leaves fewer choices that I can make. Yes, some of my programming is for money, but not all, and even when I’m being paid I get utility from “doing a good job” even if I get the same cash. That’s an important motivation for nearly all of us, nearly all of the time.

As it happens, even though I have used Excel since 1987 (and helped debug the bloody thing at one point), I like the new interface – and your dislike is another option that is gone: open source would allow you to keep the old one whilst gaining access to new functionality.

I’m not doctrinaire about OSS vs. CS, but I regard the MS calculations as just bogus (I have some significant finance education as well as experience running an IT dept; I’m quite good at that).

My calculation has always been that (except for Oracle trying to screw me) the licence cost of nearly all the S/W I’ve ever bought is negligible compared to the cost of the people who use and adapt it to my needs, or indeed my own time.
Ironically, that makes me a dedicated Outlook user since I can script it more easily than anything else I’ve found.
The general case, when making decisions for the groups I’ve run, is basically:
“I have to pay some guys to make X happen” – which toolset and which guys will get it done faster and better?
I’ve not ever seen any numbers on either side that makes me believe that OSS is better or worse than CS. In particular cases I’ve chosen OSS and other CS, but even then I have to be honest that 80% of the time I had gravely imperfect numbers to make the decision on and after I’d made it I could not know what would have happened in the alternate universe where I’d picked the other.

@Dominic Connor
> My calculation has always been that (except for Oracle trying to screw me), the licence cost of nearly all the S/W I’ve ever bought is negligible compared to the cost of the people who use and adapt it to my needs, or indeed my own time.

That might be true for you, and in fact might be true in orgs that have big, super-bloated IT departments where the costs of people overwhelm other costs. But that is mostly because IT departments are usually seriously wasteful of people resources (often because their support department spends hours and hours on stupid, pointless tickets related to the general crappiness of Windows, and because of a lack of investment in self-management tools).

Fact is that IT is usually a small part of the cost of most businesses, and the execs who run the company are not IT-savvy enough to know that the estimates they get from their IT people are bloated bullshit. The plain fact of the matter is that at least 50% of the people who work in the IT departments of large companies are of close to zero value, some of less than zero value, but corporate structures and politics prevent that from being squeezed out. Which is to say, the cost of software is indeed small compared to the already horrendous staff-cost bloat.

However, if you are running a low-margin business where you actually have to get efficiency out of people, the cost of software becomes a major issue. For example, if you are a startup running an advertising-driven business, it makes a big difference whether your back end is PHP or ASP.NET (and the database in particular makes a big difference).

Another major hassle with CSS, and you should add this to your list Eric, is just the raw cost and inconvenience of managing the licensing. When you produce Windows OTS software, as I have done, you need to test it on hundreds of different machine configs. If you are using OSS operating systems, databases, etc., you can simply create all the VMs you want. Managing the licensing on CSS is a major pain in the ass when creating these big libraries of VMs – both the cost, and the plain inconvenience of getting and setting the licenses, machine prep, and so forth.

Again, I say this as a person who works in and advocates for proprietary software. It is pure zealotry to not be able to see the other side of the coin though.

@Jessica >But here is the crazy thing: Microsoft in 2007 totally changed the GUI for Office, to the point that I find it almost unusable. No backward compatibility of the GUI. I think that sucks, and is the very essence of some of the problems with closed source software.

Except that just proves my point exactly. I think the Ribbon GUI change was a radical but groundbreaking upgrade. That’s precisely the sort of creative and well-researched shift, involving hundreds of thousands of hours of user testing and research, that simply doesn’t come very often from Open Source. Open source generally doesn’t innovate in these areas. Even if you look at something like Apache, it’s not really innovating so much as a 1980s-style web server that’s getting more polished; polish happens to be what matters in that area. In many other areas there is actual room for improvement and Open Source *isn’t* improving. If anything, Open Source is obsessed with nostalgia. Most Linux distributions still look exactly like Windows 95, because even though many Linux advocates hate Windows, they also seem to essentially want to clone it. OpenOffice, while professing to hate Microsoft Office, has spent most of its existence trying to poorly clone Microsoft Office.

I would be perfectly happy to buy and invest in Open Source, but outside of a few server packages it’s so inferior to the closed source options that the costs of adopting out-of-date and stale software are enormous. Take Photoshop. Yes, I could use Gimp, and I could theoretically pay someone $200 an hour for 3 months to add a feature I want. Or I could spend $100 a year and get gaggles of new features. Again, if Gimp is an hour slower a day and I get paid $200 an hour, then I’m losing $200 a day in opportunity costs to Gimp.

The cost of software development being on the order of tens of thousands of dollars for a new feature means that investing in Open Source is an extremely capital-intensive proposition. Yes, as a society everyone would benefit from that new feature – but spending $10k when *I* could just pay $200 is foolhardy. And since most people can’t or won’t invest $10k for their pet new feature, I end up subsidizing all the free-loaders. It’s estimated that only something like 3% of NPR listeners contribute. I think that’s a safe indication of general human psychology: yes, if we all paid something in, we would all get something better; but more likely a few of us pay a lot in and most people pay nothing.

And even then, a lot of the improvements I need in software are pretty substantial and core to its foundational code. It’s going to be incredibly expensive to hire a programmer for 2 years to rewrite some core operation (and convince everyone else that they need to update their calls to said code). Like the Ribbon, that’s the sort of investment that takes an authoritarian leader who simply dictates that the core is going to be rewritten, and it takes costs being shared among the stakeholders. It isn’t glamorous, but it’s essential to an effective software package. I just don’t see these sorts of challenges being taken on for most software by any volunteers, or users being organized to manage such an important shift.

If open source tries to develop a free/OS alternative to a piece of proprietary software, they’re said to be not innovative, poorly cloning, chasing tail lights, just not there yet, etc.

If they don’t try to mimic proprietary programs closely, users complain that it’s too different, and bloggers lecture that devs should pay more attention to what people expect, and that if their program doesn’t behave exactly the way a Windows/MS Office user expects, they’ve failed.

There’s a difference between innovating and just doing things differently for the sake of being different.

Let’s take GIMP, which is the perfect example of this. Having different hotkeys from Photoshop is just irritating. Even commercial applications like Softimage and Maya offer hotkey setups which emulate competing packages to make transitions smoother, while their feature sets remain unique and innovative. That’s just one of the concessions that products (open source or closed source) should make to ease adoption when they aren’t the leader. You can then innovate around that.

The problem is that many of the open source applications I’ve tried to use innovate new and previously undiscovered ways of sucking. At which point I just want to throw up my hands and say “look, the closed source alternative sucks, but this sucks even more. Better to suck a little bit than a lot–just copy the existing solution until you have a better one.”

Also users are inherently resistant to change. Even change for the better. If you always listen to your users you’ll never innovate. You have to be willing to lose customers to make new ones. Microsoft is going to lose a number of customers to Windows 8’s changes. The calculation they’re making though is that they’ll save or make more customers than they’ll lose by making such radical changes.

[OT] esr, I’m re-reading TAOUP (as I do from time to time) and I noticed that the “Operating System Comparisons” section seems very US-centric, with no mention of (say) the Amiga or Acorn’s RISC OS. Was that just because you have little or no experience of them, or don’t you see them as having ever been serious contenders in the OS space? I’d quite like to see your analyses of them in the same manner as those in TAOUP.

I get your point but I don’t think GIMP is a very good example, or at least the comparison with Photoshop isn’t.

Photoshop is part of a toolset for professional graphics designers, mostly designing for print.
GIMP was originally a tool for web images and to create graphic elements in GUIs and the likes.
GIMP was never meant as a Photoshop clone or even as a substitute for Photoshop.
That only came to be when Windows users got used to having pirated copies of Photoshop available as a substitute for MS Paint, and got told (by misguided zealots) that GIMP is the free Photoshop.
So maybe you should compare GIMP to MS Paint.

On the other hand, it’s true that OSS seems to have trouble with end-user applications, especially professional tools in specific trades. It probably has to do with what Patrick Maupin mentioned: if it’s not someone’s hobby, or something academia is researching, it doesn’t get developed, except as a “we should at least have something vaguely resembling ” effort.

True; and it’s also a cost on the receiving end.
I’m a sysadmin. The time I’ve wasted on comparing all the different licensing models for one and the same product, activation procedures that fail, having to set up licensing servers which turn out to be more complex than actually setting up the software itself, hassle with hardware dongles, having to keep track of number of licenses versus installed instances for per-user or per-machine licenses, procedures to migrate software from one server to another, software that stops working because you’ve added some extra RAM or an additional disk in a server or VM …

While with open source: you install, and you’re done.

That, and the amnesia problem Winter brought up, are some of the most important pragmatic reasons I often look for OSS first.

Gavin Greenwalt on Sunday, June 10 2012 at 2:19 pm said:
> Except that just proves my point exactly. I think the Ribbon GUI change was a radical but ground breaking upgrade.

That is a pretty charitable interpretation of what happened. My interpretation would be more along these lines: Office had reached the point where there really weren’t any new features that could offer a reason to upgrade, and so people were beginning to skip upgrades. So they did this big innovative thing that sank the billions of person-hours spent learning the old interface, in exchange for a silly reason to upgrade–and lots of people making money on new training.

I have a friend who trains people on Office; after Office 2007 came out he had some of his best years, because the supposedly easy-to-use Ribbon required everyone to totally retrain.

That is the downside of the profit motive: it causes people to do non-useful things for profit as well as useful ones.

And C++ was just an excuse to get people to upgrade from C and colleges to make a lot of money training CS grads in a new language?

The old Office interface was nearly unusable. Ease of use/learning does not mean you automatically know how to use it. Biting the bullet and learning new things/developing new solutions is exactly what needs to happen in mature products in order to evolve.

If you never abandon legacy solutions you’ll be hog-tied and unable to innovate. Some legacy solutions are the best solution, but sometimes you have to just cut your losses and impose training to learn something else.

People who switched from horses to automobiles had a huge learning curve in the transition. But the benefits are substantial. Imagine if we were still trying to use reins to steer our cars. People are lazy, though. They’ll keep using bad solutions for years instead of spending a little time up front to save lots of time down the road. It’s human nature.

@Gavin Greenwalt on Sunday, June 10 2012 at 5:08 pm said:
> And C++ was just an excuse to get people to upgrade from C

That is a ridiculous comparison. C++ is massively more powerful than C, and automobiles have much higher utility than horses. Office 2007 offered almost no useful additional functionality, and I totally disagree that the old interface was unusable. Here is the plain fact: I can barely use Office 2007 when I try. I can never find a freaking thing in those ribbons. Of course I have a lot of experience using the old tools, but for something billed as grandma-friendly, that Ribbon is sure hard to use. How come everyone needs all that training if it is such a step forward?

BTW, you are only looking at one side of the coin regarding legacy holdovers. A basic principle of good GUI design is predictability — the software behaves and responds to you the way you expect. Radical changes like the Office 2007 ribbon put that core principle in the meat grinder.

“Predictability” doesn’t apply to software updates once every 20 years. Predictability is about not having to continually relearn user interfaces. Also, predictability is just one dimension of a good UI. If predictability were king, then nobody should ever use new software, since it’ll change the user interface–and you definitely shouldn’t use OpenOffice, which is completely different as well. The iPhone’s user interface was completely different from Windows Mobile’s, but people found it far better. Predictability also ignores speed. An extremely predictable user interface will leave every UI element in exactly the same place regardless of context. An efficient user interface will adapt to the current context of usage and intelligently predict the needs of the user. Done badly, yes, you suffer from unpredictable UIs. Done well, it dramatically boosts performance.

A massive multi-level hierarchy of drop-down menus is not efficient beyond a dozen or so items. Word’s menus had grown to several times that, across multiple levels of sub-menus, which they managed to condense down into about 7-8 places for things. And instead of everything being hidden in a sub-menu, you can now visually scan all of the sub-menus in a given tab.

Most of the programs I use are an order of magnitude more complicated than Word, and none of them use menus for their user interface. There’s a reason for that–it’s simply an unsustainable user interface beyond the simplest of applications. Tabbed command panels (à la the Ribbon) are one of many alternatives that at least offer incremental improvements.

But ultimately neither your anecdotes nor my anecdotes are what matter. What should matter is the data. And from the data published from Microsoft’s user testing, it was significantly faster. With millions of Word users and years upon years of usage metrics, it’s pretty easy to come up with a UX testing routine that accurately measures its success or failure. Users are surprisingly bad at evaluating their own performance. Our psychology is more likely to fixate on the one thing that got slower, even if that slow-down speeds up two other things in exchange.

And maybe it is slower for you. Maybe you have an unusual usage pattern. But that’s exactly why open source user interfaces seem to suck so badly: some developer has an esoteric usage style that runs counter to how 99% of users work, but since the open source motto is “if you don’t like it, learn to code and change it,” the developer always wins that argument. It’s incredibly rare for someone actually skilled in user experience design to have any sway in an open source project. The open source philosophy is generally “we can design an application and then slap an interface on it.” The better approach is to start from the user: imagine the ‘perfect interface’ and then build out the functionality to enable that usage.

“On using the elevator at my work, I often wish it wouldn’t pick me up on the ground floor and then go to the cellar to pick up the cleaner who wants to go up 1 floor with her trolley that fills the entire elevator. It should either pick her up first or drop me off first.”

Be careful what you wish for. Where I used to work, you would push the ‘down’ button at the end of the day, and wait. By and by, ‘ding!’, you’d look, and darn it, the car is going up. You’d wait some more, and ‘ding!’…it’s the same car, and it’s going up again! Nice! Our building does not obey the Law of Conservation of Elevator Cars! I often passed the waiting time imagining all those crews at the top floor with torches, frantically disassembling the cars that arrived and running the pieces down to the basement where they just as frantically reassembled them for their next ascension.

Getting into the upward-bound car would not work. If you did, and pushed the ground floor button after the thing had visited all the higher floors, it would light up the ‘G’ button, start going down, then cancel your choice and go where it wanted to, leaving you pushing ‘G’ over and over again while wailing, “Noooooooooo!!!!!!!……”

Gavin Greenwalt on Sunday, June 10 2012 at 6:52 pm said:
> “Predictability” doesn’t apply to software updates once every 20 years.

Curious; I wonder if you are deliberately being obtuse. The point is not that GUIs shouldn’t change at all. The point is that you maintain the core patterns that people expect. Evolution is the key with products like Word, not revolution. And like I say, I have tried using Word 2007, and I find it impossible to find anything on those ribbons–and, to be honest, I am pretty savvy about computer stuff. So how exactly can that be an improvement?

A new rev of Office is nothing like a paradigm shift to something like an iPhone, people don’t have the expectation that Ctrl-O will open a file on the iPhone for rather obvious reasons.

Like I say, I know the new Ribbon seems popular; to me it is just a case of the emperor’s new clothes. Hey, but maybe I’m wrong. I think Macs, reputedly grandma-friendly, are really hard to use; I can’t get anything done with them either. (And have you tried Xcode? Yikes!)

> And instead of everything being hidden in a sub-menu you can now visually scan all of the sub-menus in a given tab.

And that’s exactly what’s wrong with the Ribbon interface. When there are lots of commands, there’s a tradeoff between “right at the user’s fingertips” and “easy for the user to find.” More frequently used commands need the former, while less frequently used commands need the latter. But the Ribbon tries to make every command “right at the user’s fingertips” at the expense of “easy for the user to find.” It’s a general bias that Microsoft has had for a very long time, and the Ribbon takes it to a new extreme.

I will give Microsoft this, however: it generally does (or did) at least try very hard to be user-friendly, even if it occasionally fails horribly (Clippy, the Ribbon). Too much open-source software is Unix-centric, with the “users are lusers” attitude bred into its DNA.

Speaking as an outsider to the US political process, but an interested observer, I see that much of this argument could and should be made regarding the US constitution. The bulk of the arguments made above about the harm/benefit trade-offs of source-code availability revolve around the FUTURE uses/misuses/implications of not having the ability to modify its operation past the initial point of purchase/use.

The same line of reasoning, I believe, applies to a nation’s constitution–the circumstances surrounding the initial constitution are no longer the dominant environment of the world, and the amendment processes initially proposed have become increasingly sticky with the rise of well-coordinated pressure groups.

As a bellwether example, the issue of gun-control springs to mind. My cursory reading of the appropriate sections of the US constitution leaves me in little doubt that it was intended that essentially any citizen of (*deliberately fuzzy definition here*) good character should have the right to obtain and maintain a personal defense weapon and to be able to use it. This does not convince me that this should remain their right today–technology, social structures and the more heterogeneous nature of US cultures have changed radically since the founders’ time. As you point out, holding to a principle which causes more harm than good is a somewhat silly approach. I think that the gun-control issue exemplifies this very well.

I don’t think this is an isolated example–I think that there are many examples, not just in the US but across all western-style democracies. (As an aside, whether you agree or disagree with them, I feel the Occupy-* movements reflect this to SOME degree.) Other issues such as free-market forces, movement of capital across borders, individual vs collective rights, big vs small government arguments, direct election of public officials, all have their own ossified statements of principle from the past that should be examined in this light.

That said, one of the things that infuriate people around me is that when they ask me a question about my position on issues, I almost invariably start with “It’s complicated….” :-) I don’t think I can solve the world’s ills – but I don’t think a set of principles enunciated 200+ years ago (or 2000 years ago if you want to include religion) can be unquestioningly adopted any more. At the risk of sounding like a sociologist (which I *really* wouldn’t admit to), all these things need to be contextualized with a view to current cost/benefit trade-offs.

Just one other thought on this: I am not familiar with this part of the Office GUI. In nearly every case the menu commands are either directly on the top level menu, or at most on a second level menu in a fairly obvious grouping. There might be one or two places I am forgetting but that is what it is for 95% of the GUI. Most of the important commands are readily available on toolbars too.

I’m not saying there aren’t many Windows apps that have crappy GUIs, but I don’t know why you think that the Word GUI was especially bad.

And since I am ranting about Office 2007, the other thing that drives me nuts is that they totally changed the default styles, and the new ones have WAY too much whitespace, and the two default fonts are really ugly. What bonehead came up with that decision? Course you can fix that, but there are a lot of ugly documents with a lot of sickly blue Calibri headings lost in a sea of whitespace.

Namely: you’re trusting an elevator car not to be dropped, and you’re trusting a magnetron not to radiate lethal levels of microwave energy beyond the cooking space. In other words, there do exist bugs whose consequences *are* severe.

Two examples that don’t make the case you’re trying for. Microwaves have hardware interlocks to prevent the magnetron from energizing when the door is unlatched, and since Otis invented the safety elevator, even cutting the cable usually won’t result in serious injuries: the only known free-fall death on an elevator with safety brakes was the result of a B-25 hitting the Empire State Building.


What’s that? Games have low amnesia harm? PC games maybe. But console games are a whole ‘nother story. They’re produced for proprietary, locked down hardware which, if it wasn’t for people who are essentially treated by the law like pirates, and effectively ARE pirates, would disappear with the ages.

Hell, let’s go back to the Super Nintendo: truckloads of custom chips with on-die program ROMs for anti-piracy purposes, which were basically unemulatable until very recently. And lots of these cartridges are rare, and won’t last 20 years, much less the 90+ years copyright law demands. Not to mention the difficulty of writing the hyper-accurate software required to emulate all the idiosyncrasies and hardware bugs which conceal the software bugs in much of the rushed-out-the-door code written by game programmers.

Modern games aren’t any better just because they are on optical media rather than IC cartridges. First off, the same things about stupid anti-piracy obfuscations apply; console manufacturers will do what it takes to ensure their games die with them. But there are also online services now which sell DLC. What happens when the 360 or PS3 reach end-of-life and all that DLC goes with it?

We already know what happens – it’s the same thing that happened with the Super Famicom’s Satellaview service and the Sega Genesis’s Sega Channel service. The games disappeared. In the case of the Sega Channel, there was no internal memory on the cartridge, so we can’t even pull data off of old units like we can with the Satellaview. And even then, Satellaview games have to be hacked up to run on emulators.

Sega Channel and Satellaview were niche services which didn’t have much exclusive content. But downloadable content is a standard industry practice right now and most releases have it. Sure, yes, we might be able to get the most popular games’ DLC preserved. But what about niche games that aren’t that popular? Those might slip through the pirates’ fingers. Not to mention we didn’t have reliable ways of compromising the CPU security of the PS3 and 360 until about a year or two ago. DLC is going to evaporate faster than we can preserve it.

If anything, game preservation ranks up there with space shuttle tape preservation in terms of difficulty. At least 10 or 20 years ago, when most or all of the game was still on the disc, the preservation goal was easy. But when we have games being sold piecemeal to nickel-and-dime consumers, we have a looming preservation disaster on our hands.

Yes, because preserving them is not in the larger scheme of things very important. OK, I’m glad I can play the Infocom text adventures on emulators, but let’s keep some perspective here – it’s not like they’re (say) a small business’s financial records, or a person’s medical data, or primary data sets from some scientific experiment that is difficult or impossible to replicate. Neither life nor livelihood nor irreplaceable knowledge is on the line.

The manufacturers of games with heavy ties back to a privileged server and blobs of pay-as-you-go content have made the choice that they don’t want to produce art for the ages. They won’t show up for the future, so they don’t get to be part of it. Your choice to put emotional investment in these games has to be made knowing that they’re ephemera.

Mind you, this doesn’t mean I disapprove of efforts to save what we can. But, again, I was analyzing relative degrees of harm. Game amnesia doesn’t score very high on the pain meter.

My apologies if some (or all) of this has been covered above. I read quite a bit of it and skimmed through the rest–I just don’t have the time right now to read it all in detail. Anyway…

When I publish Open Source software, I use a slightly modified version of the MIT License. It says, in effect

if you have it, do whatever you want with it. It is “as is” – I accept no liability for anything.

I want people to use my software. I want folks to use it in or with proprietary and/or closed source software. My goal has always been to attract attention to me and, hopefully, have people offer me large amounts of money to do stuff for them.

Unfortunately, the money part didn’t work out, but, mostly because of some AI I invented and software I wrote, plus a few other Open Source projects, all listed on my web page, Google really likes my home page. For example: type “placer claim” without the quotes, and a page I wrote about not buying a placer gold mining claim on the Fraser River comes up second today (it goes up and down a bit). Doing this from the US may not produce the same results; I am not sure if Google takes the country of the user into account.

The only page ahead of it is a Wikipedia page. My page is before all the outfits selling placer claims, placer mining associations, and, more impressively, it is before all the BC provincial government pages – the Ministry responsible for mining has quite a large and sophisticated web site.

About the GPL

The FSF claims the GPL is all about freedom. The “GNU GENERAL PUBLIC LICENSE” contains 5733 words describing all the restrictions and responsibilities I have to follow to use this license. They also have, in addition to other stuff…

A Quick Guide
An FAQ
How to use GNU licenses for your own software

plus a good deal of other stuff so that I can understand how to be “FREE”.

To me, and probably many of the folks here, this isn’t freedom, it’s just plain nuts.

I realize that patents involve divulging trade secrets – that was their original purpose. And copyright does not protect algorithms (well, it can, but in general that is evil). I am thinking that I have a legitimate example of where trade secrets are appropriate.

I am not referring to anything for which I would want a mathematician.

Well, maybe the seismic example would, because it might be dealing with Fast Fourier Transforms, time-domain/frequency-domain data and such. This area has already had a great deal of attention, and this subject is not my strong suit.

But the analysis of satellite imagery to look for kimberlite pipes (a primary source of diamonds) would be different. It would be based on looking for circular features of the right size with certain indicators… um…..

I just realized that since my spine has made it pretty much impossible for me to be employed, it might be wise to not explain everything that I have been thinking about kimberlite pipes.

Yeah – I think that this is an example of where secrets are appropriate.

@ Winter: What did you mean by “if you want to keep it secret, encrypt it during application and do not publish.” ?

I could do the analysis and try to sell it, but this would probably not be attractive unless I could get the satellite imagery for nothing. I might want to focus on areas in which mining companies are interested in my work.

So, I ask again: an example of where closed source might be appropriate?

“That sounds simple and obvious, doesn’t it? And yet, there are people who I won’t name but whose initials are R and M and S, who persist in claiming that this position isn’t an ethical stance, is somehow fatally unprincipled. Which is what it looks like when you’ve redoubled your efforts after forgetting your aim.”

the implication is that dr stallman is the one that has forgotten his aim and it actually *states* that he has no ethical stance and no principles. when linked to the paragraph above which defines what a fanatic is…

i spoke to dr stallman about the assessment concepts presented here, because i think they’re fantastic and should be more formalised, and adopted more widely including by the FSF. unfortunately he was sufficiently upset by the personal references that he can’t possibly recommend that. what’s the best way to proceed?

>the implication is that dr stallman is the one that has forgotten his aim and it actually *states* that he has no ethical stance and no principles.

Only half right. I do think RMS fulfils Santayana’s definition of a fanatic, and he’s known that for 15 years. He chose the rhetoric of moral evil over pragmatic persuasion much longer ago than that, and the rhetoric took over.

But how you can read what I wrote as a claim that RMS has no principles is beyond me. What I’m objecting to is his and the FSF’s repeated insistence that people who refuse to shout “proprietary software is evil” are unprincipled and lack ethical grounds for their position.

>he was sufficiently upset by the personal references that he can’t possibly recommend that.

Then maybe you should ask him when his personal ego became more important than advancing his cause. I expect better of him than this. In the past, his fixation on moralizing rhetoric has been a problem but his ego hasn’t been; it’s not good news if this has changed.

There was more I was going to say about this, but I think it’s important enough to warrant a blog post.

More like Linear Predictive Coding, a 1960s/1970s mathematical procedure (or even older).

@Brian Marshall
What did you mean by “if you want to keep it secret, encrypt it during application and do not publish.” ?

Make sure the key algorithm is not visible in the binaries. Use those evil malware technologies for encrypting the binary, only to decrypt it into RAM when the algorithm is run. Use every obfuscating technique you can lay your hands on. Encrypt input and output using one-time keys, only divulging the keys after being sure things run safely.
If the algorithm must be run on networked computers, let the algorithm be decrypted using a key obtained by public key cryptography, DRM style.

In the end, it will not keep your stuff secret, but it might seriously delay discovery. Not that I think the delay will be worth it.

Or, simply keep everything secret, and do the analysis on your own computers. Say, run your software on some (secret) cloud host and let customers connect to it by way of Tor services.
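The decrypt-into-RAM-only idea above can be sketched in a few lines of Python. This is a toy, not a recipe: the SHA-256 XOR keystream stands in for a real cipher (it is NOT secure cryptography), and `secret_score` is an invented stand-in for the proprietary algorithm; the point is just that only the encrypted blob would ever ship, and the plaintext code exists only in process memory.

```python
import hashlib
import marshal
import types

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream derived from SHA-256 in counter mode -- illustration only.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# "Build time": compile the secret algorithm and ship only the encrypted blob.
def secret_score(x):
    return 3 * x + 7

blob = xor_crypt(marshal.dumps(secret_score.__code__), b"demo-key")

# "Run time": decrypt into RAM only; the plaintext never touches disk.
code = marshal.loads(xor_crypt(blob, b"demo-key"))
recovered = types.FunctionType(code, globals())
print(recovered(5))  # 22
```

And as the comment says, this only delays discovery: the key and the decrypted code both sit in the process’s memory for anyone with a debugger.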

Any mathematician can tell you that a secret algorithm is most likely an incorrect algorithm.

This depends very largely on what sort of algorithm you’re discussing. If it’s a security-related issue such as a cipher or hash function, then the rigid constraints of pseudorandomness do require external review for assurance that there isn’t a lurking non-obvious attack. However, the situation for analysis algorithms is quite different; these are not meant to be “correct” in some mathematical sense but rather useful in that they mine large quantities of data and produce some convenient information. In addition to the mentioned geological exploration application, consider PageRank.
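To make the “useful rather than provably correct” distinction concrete, here is a toy power-iteration PageRank. The three-node graph, the damping factor, and the iteration count are all invented for illustration; nothing here is Google’s actual implementation.

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy power-iteration PageRank over a {node: [outlinks]} dict."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for node, outs in links.items():
            targets = outs or nodes  # dangling nodes spread rank everywhere
            for m in targets:
                new[m] += damping * rank[node] / len(targets)
        rank = new
    return rank

ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]})
print(max(ranks, key=ranks.get))  # b -- it collects the most link mass
```

There is no externally “correct” answer to check this against; the algorithm earns its keep by ranking usefully, which is exactly the property that survives public review.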

@Christopher Smith
“However, the situation for analysis algorithms is quite different; these are not meant to be “correct” in some mathematical sense but rather useful in that they mine large quantities of data and produce some convenient information. In addition to the mentioned geological exploration application, consider PageRank.”

There is a difference between PageRank results and getting a team to drill in some inhospitable place.

However, my point was that a “secret algorithm” is a lot like “security by obscurity”. Plus, all software has bugs in inverse proportion to the number of reviewers. The same holds for algorithms in general.

It’s not just the “number” of bugs, is it?
Severity and how quickly they get fixed can often be more important.
Also, the types of bugs found by different testing and review processes will necessarily differ.

One only has to look at the ongoing tragedy of Firefox memory and resource management to see that open source has different bug sets. Maybe it’s worse or better than IE, but there is no arguing that the bug sets are different in character, not just size.

I did bug hunting on a truly vast IBM/Microsoft CS project and we could find bugs far faster than they could be fixed.
On the face of it, that sounds like CS is worse, but in effect it was like most open source efforts in that literally thousands of people could look at the source code, and there were people whose sole job was to read code to see if it was good.
One gang of programmers created a “random program generator” which generated random programs (surprise) to test the operating system API in ways that were “unusual”, i.e. they didn’t use the API the way the developers expected, and thus found bugs that wouldn’t otherwise have been spotted until the software got wider use.
No reason, of course, that can’t be applied to OSS; indeed, part of the futility of the argument is that there is no process that can be carried out by one set of developers that can’t be executed by a different set.
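That “random program generator” translates directly into what we now call fuzzing. A minimal sketch of the same idea, with an invented `fuzz` helper and Python’s built-in `int` standing in for the API under test:

```python
import random
import string

def random_value(rng):
    # Arguments the API's developers probably didn't plan for.
    return rng.choice([
        rng.randint(-2**31, 2**31),
        "",
        None,
        [],
        "".join(rng.choices(string.printable, k=rng.randint(0, 64))),
    ])

def fuzz(api, calls=1000, seed=42):
    """Call each function in `api` with random arguments; collect the crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(calls):
        name, fn = rng.choice(sorted(api.items()))
        args = [random_value(rng) for _ in range(rng.randint(0, 3))]
        try:
            fn(*args)
        except Exception as exc:  # an "unusual" use the developers missed
            crashes.append((name, args, type(exc).__name__))
    return crashes

found = fuzz({"int": int})
print(len(found) > 0)  # True: random inputs trip plenty of exceptions
```

The seed makes each run reproducible, which matters when you want to hand a developer the exact call sequence that triggered a failure.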

The example of Counter-Strike springing from Half-Life is probably the biggest flag saying that games rate a high hackability harm. Users created a more popular game from an engine that a gaming house made.

@Luke Leighton “the implication is that dr stallman is the one that has forgotten his aim and it actually *states* that he has no ethical stance and no principles. when linked to the paragraph above which defines what a fanatic is…”

Luke, it appears you have misread what ESR wrote. He says RMS is a fanatic. He then says that RMS considers non-FSF people to be unprincipled with no ethics. He does not say RMS has no principles or ethics. You’re reading that one sentence in the wrong direction.

Here is the relevant sentence: “there are people (RMS) … who persist in claiming that this position isn’t an ethical stance, is somehow fatally unprincipled.” RMS is claiming certain others to be unprincipled and without ethics. No-one is saying that about RMS.

>That’s a feature, actually; if there were major surprises it would suggest that we had wandered too far away from the intuitions or folk theory we’re trying to clarify.

You are not the type of guy who reads Eric Voegelin, and yet you use his methodology rather beautifully. Roughly this: in the human sciences, including philosophy, the subject of investigation–human society–differs from the subjects of the natural sciences, because it is a self-interpreting system; the theorist himself is more or less a member of it, and his work is actually part of its self-interpretation. Hence the symbols used by theory cannot be separated wholly from the symbols used by society, which are themselves under investigation. What we can do is take the symbols society uses for self-interpretation and try to make them less subjective and less arbitrary, through a process called critical clarification.

@Michael Hipp:
>You’re reading that one sentence in the wrong direction.

To be fair to Luke, the sentence that he misread contains a syntax error that confuses its meaning.

The correct wording would be: “And yet, there are people who I won’t name but whose initials are R and M and S, who persist in claiming that claiming that this position isn’t an ethical stance, is somehow fatally unprincipled.”

The missing duplication of “claiming that” in ESR’s original makes it sound like RMS is making the claim that “this position isn’t an ethical stance”.

@Jon Brase: “The missing duplication of ‘claiming that’ in ESR’s original makes it sound like RMS is making the claim that ‘this position isn’t an ethical stance’.”

Huh? Your “improved version” is wrong. It changes the meaning of what ESR wrote. RMS says others are unprincipled and without an ethical stance. ESR’s original version (below) reads just fine. Luke was apparently misreading it. But you’re putting words in Eric’s mouth.

ESR:
“… we should oppose closed-source software, and refuse to use it, in direct proportion to the harms it inflicts.

That sounds simple and obvious, doesn’t it? And yet, there are people who I won’t name but whose initials are R and M and S, who persist in claiming that this position isn’t an ethical stance, is somehow fatally unprincipled. “

Oh. I see. Yeah, disregard my correction. ESR’s original still doesn’t read well. I might suggest “this position isn’t an ethical stance and is somehow fatally unprincipled” or just “this position is somehow fatally unprincipled”.

Transitioning from “isn’t an ethical stance” to “is somehow fatally unprincipled” with a comma is confusing (though the equivalent spoken construction would probably be less so).

LS on Sunday, June 10 2012 at 9:59 pm said:
4. I often end up here, where I pull down my copy of “Word 2007 for Dummies” and look it up. Hey, I’m not proud – when I’m out with the SO in the car, I’ll even stop and ask for directions.

Yes, word processor UIs all suck because they do so many things, and the function tree quickly overwhelms whatever brilliant UI the designer comes up with. You need to keep the manual handy. Ditto for programming in C++. In life you have to deal with sucky things, though we all try to minimize our exposure to them.

It is easy to think that closed firmware and blobs inside things we don’t think of as ‘computers’ aren’t an issue. But sometimes they are; we just haven’t imagined what we are giving up.

A PC BIOS isn’t much of a problem, since in a modern machine its purpose is mostly just to load the actual operating system and then be ignored. Except, of course, for ACPI. It almost never actually works right, and everyone spends a lot of time writing around those bugs… when they can. You can’t repair that defective code without being able to replace the BIOS.
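The usual way kernels “write around” broken ACPI firmware is a quirk table: match the machine’s firmware identification strings against a list of known-broken models and apply per-machine fixups. A minimal sketch of the pattern in Python (the machine names and flags here are hypothetical illustrations; the Linux kernel implements the same idea in C with DMI match tables):

```python
# Sketch of the quirk-table pattern used to work around buggy ACPI/BIOS
# firmware. Vendor/product strings and workaround flags are made up for
# illustration; real kernels match on DMI identification strings.

# Table mapping (vendor, product) firmware ID strings to workaround flags.
ACPI_QUIRKS = {
    ("ExampleCorp", "Laptop-1000"): {"disable_c3_sleep": True},
    ("ExampleCorp", "Desktop-2000"): {"force_vendor_backlight": True},
}

def quirks_for(vendor, product):
    """Return the firmware workarounds to apply for this machine.

    Unlisted machines get an empty dict, i.e. no workarounds.
    """
    return ACPI_QUIRKS.get((vendor, product), {})

if __name__ == "__main__":
    # A machine whose firmware advertises a broken C3 sleep state:
    print(quirks_for("ExampleCorp", "Laptop-1000"))
    # An unlisted machine needs no special handling:
    print(quirks_for("OtherCorp", "Tablet-5"))
```

The point of the comment stands out clearly in this shape: with a closed BIOS, this table is the *only* place the bug can be fixed, because the defective code itself can’t be replaced.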

The firmware in your TV isn’t important, right? Only because we haven’t imagined what the alternative could be. But we are seeing signs of it. A lot of TVs now offer PC-like functionality, and even in models without network ports and such, a lot of possibilities are being lost. I have (like most people now) a TV that runs Linux. Having the kernel source doesn’t do a heck of a lot of good, because all the kernel does is boot and launch an “application” which is a totally closed blob, which in turn opens another big blob of binary resources. If most of that stuff were just documented somewhere, or even source-available (you can release source under copyright, after all; the spectrum includes more than no-source, GPL, and BSD), it wouldn’t take long before a wide community of rethemers, modders and such sprang up. Many specialized set-top boxes could probably be eliminated; think of the hotel industry: just standardize on a product line and flash the extra functions into the sets. That would require a relationship with the vendor to ensure availability, but it would be a win for the set maker too, so where is the downside?

Or even a WiFi firmware blob. Yes, the idea of opening any of that scares the FCC and thus the manufacturers. But these devices operate in an unlicensed band, and most products probably can’t exceed the legal power limit under any circumstances, so that fear is needless. Now imagine what sort of imaginative new uses could be found for inexpensive wide-bandwidth software-defined radios with attached DSPs if specs were available! But the manufacturers won’t even give up the specs to licensed Amateur Radio operators, who are legally permitted to experiment with RF devices; that is one of the officially stated reasons for the Amateur Radio Service to exist.

So while I can’t really think of a reason I’d want the source to my microwave, anything more complicated could probably be made better by somebody who would care enough to tinker with it. The question is why is industry so hesitant to even talk about how their products work? It isn’t just the source they hold as a most precious secret, they don’t want anyone to see a detailed spec sheet. What are they afraid of? Somebody might make their product better? Where is the downside there?

>The question is why is industry so hesitant to even talk about how their products work? It isn’t just the source they hold as a most precious secret, they don’t want anyone to see a detailed spec sheet. What are they afraid of? Somebody might make their product better? Where is the downside there?

Actually, believe it or not, a lot of companies are fearful of patent trolls using their data sheets against them.

Actually, it should be “freely readable”, but that also implies that you can verify that the source you’re reading and the code you’re wondering about are the same. It also implies understanding the entire build toolchain. Just because *my* compiler doesn’t produce bugs doesn’t mean the vendor used the same compiler.
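This is essentially the reproducible-builds problem: readable source only helps if you can show the shipped binary was actually produced from it. The basic check is to rebuild from the published source with the same toolchain and compare cryptographic digests. A minimal sketch in Python, with made-up byte strings standing in for real build artifacts:

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 hex digest of a build artifact's raw bytes."""
    return hashlib.sha256(artifact).hexdigest()

# Stand-ins for real artifacts: the vendor's shipped binary, and the binary
# you rebuilt yourself from the published source with the same toolchain.
vendor_binary = b"\x7fELF...firmware v1.2"
my_rebuild    = b"\x7fELF...firmware v1.2"

# If the build is deterministic, matching digests tie the shipped binary to
# the source you audited. A mismatch means a different toolchain, different
# source, or something worse.
print(digest(vendor_binary) == digest(my_rebuild))  # prints True
```

In practice the hard part is the “deterministic” assumption: timestamps, file ordering, and compiler differences all break the comparison, which is exactly the commenter’s point about needing to understand the whole toolchain.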

>It also implies understanding the entire build toolchain. Just because *my* compiler doesn’t produce bugs doesn’t mean the vendor used the same compiler.

Yes, but you also need to understand whether the sensor works right, etc. At some point, reasoning won’t get you there: you have to treat the breathalyzer like the physical device it is, and rigorously test the design (by abusing a few units), making sure that it generally works, that any failure modes are actually known and accounted for by the documented calibration procedures, that the cop actually followed those procedures, and so on.

Source code is useful in this endeavor, but not much more so than knowing the exact chemical composition of a steel alloy used in another physical device that harmed somebody when it corroded, or fatigued and broke under stress.

Interestingly, the primary utility of having the source code or toolchain is that programs can operate in a very non-linear and counter-intuitive fashion, so it is too easy to miss operating submodes. But all these things can eventually be teased out of a hardware unit; it just costs more.

@Jeff Read: Bullshit. I’ve used OOo/LibreOffice since, ooh, at least 2002. I have never had a problem with exchanging files with people using MS Word (and considering I’ve done one and a half degrees in that time, with a lot of group work…). Unless you are doing absurd things like using overly complicated formatting (maybe you should use a graphics program instead? or a layout program? or save as PDF?) or something equally stupid, then you should have no problem.

I just opened a .docx of some documentation I’m supposed to review. It didn’t have particularly complicated formatting, yet it showed up in LibreOffice as a single blank page.

In the workplace, you can expect to receive documents that you have to act on that were authored in some piece of Microsoft software. Unless you use the same Microsoft software to open them, you’re rolling the dice and, ultimately, costing the company you work for money.

I’d disagree with part of your opinion on games. Games have a different type of lock-in value at times, one that can be very high, requiring you to use Windows whenever you want to play most of the games available, especially MMORPGs.