

Be careful what you read when digging up these "debates" (flamewars) from the "old" days. Consider that, because Linux was very dear to Linus's heart, there was likely a lot of emotion surging during these discussions. Therefore you're likely to come across many statements made just to be cheeky or to piss someone off.
– Wayne Koorts, Feb 14 '11 at 6:06

10 Answers

As Linus writes in the debate, it's tongue in cheek (i.e. not to be taken too seriously).

Then he goes on to explain that while portability is a good thing, it's also a trade-off; unportable code can be much simpler. That is, instead of making the code perfectly portable, just make it simple and portable enough ("adhere to a portable API"), and then, if it needs to be ported, rewrite it as needed. Making code perfectly portable can also be seen as a form of premature optimization - often more harm than good.

Of course that's not possible if you can't write new programs and have to stick with the original one :)

I think it means that each program should be written specifically for the hardware and operating system it runs on.

I think what he's driving at is that general-purpose code that can run on several platforms is less efficient or more error-prone than code written specifically for, and tailored to, one platform. It does, however, mean that when you develop like this you have to maintain several different code lines.

Back when Linux was first written, it used features available only on the i386 CPU, which was fairly new and expensive at the time.

That is exactly what linux does: it just uses a bigger subset of the 386 features than other kernels seem to do. Of course this makes the kernel proper unportable, but it also makes for a /much/ simpler design. An acceptable trade-off, and one that made linux possible in the first place.

As we went into the 21st century, the features that made the i386 unique became totally mainstream, allowing Linux to become very portable.

"...became totally mainstream, allowing Linux to become very portable" - which proved that making Linux portable at that point in time would have been premature optimization.
– Roger Pate, Oct 29 '10 at 9:53


@Roger: I can't really agree. Those features have become mainstream -- but in the time since, CPUs have added more features, many of which Linux either ignores completely, uses only minimally, or has had to be massively (and painfully) rewritten to use even reasonably well. At the same time, Linus does have at least some point: something that works reasonably well now (even non-portably) beats something that gets talked about for years but never finished (e.g., GNU HURD).
– Jerry Coffin, Nov 4 '10 at 18:51

@Jerry - sounds like research projects at a place I used to work: "You should give up now. What I'm working on will render all you do obsolete". That was 20 years ago. Still haven't seen that whiz-bang new stuff leave the research lab.
– quickly_now, Feb 14 '11 at 6:40

@Roger, portability is not an optimization.
– user1249, Sep 24 '11 at 10:07

As someone who's done a lot of Java, and experienced the "write once, debug everywhere" phenomenon on a weekly basis for years, I can fully relate to this.

And Java is probably a mild example. I can't even begin to imagine what people go through who try to write a portable code base in a language/toolkit that wasn't even designed to be portable in and of itself.

Right now at work, we are investigating the idea of writing a lite version of one of our products for mobile devices. I've done some research on how to do a portable version of it for both J2ME and Android - that tries to share as much of the codebase as possible (obviously can't be fully "portable" per se, but it's a similar philosophy). It's a nightmare.

So yeah, sometimes it really is good to be able to think (and work) in terms of using the given tools for the given job, i.e. freely developing against one single, monolithic platform/environment, and just writing separate, clean versions for each.

Although some people view/treat portability, following standards, etc., as morally superior, or something on that order, what it really boils down to is economics.

Writing portable code has a cost in terms of effort to make the code portable, and (often) foregoing some features that aren't available on all targets.

Non-portable code has a cost in terms of effort to port the code when/if you care about a new architecture, and (often) foregoing some features that aren't (or weren't) available on the original target.

The big qualifier there is "when/if you care about a new architecture". Writing portable code requires effort up-front in the hope of an eventual payoff of being able to use that code on new/different architectures with little or no effort. Non-portable code lets you delay that investment in porting until you're (at least reasonably) sure you really need to port to some particular target.

If you're sure up-front that you're going to port to a lot of targets, it's usually worthwhile to invest up-front in minimizing long-term porting costs. If you're less certain about how much (or even whether) you'll need to port the code, writing non-portable code lets you minimize the up-front cost, delaying or possibly even completely avoiding the cost of making the code portable.

I think it's also worth noting that I've spoken of "portable" and "non-portable" as if there were a clear-cut division between the two. In reality, that's not true -- portability is a continuum, running from thoroughly non-portable (e.g., assembly code) to extremely portable (e.g., Info-ZIP), and everywhere in between.

Tanenbaum makes the point that much of Linux is written in a non-modular way to leverage the 386 CPU, state of the art at the time, instead of making the CPU interaction a component, and thus easily swappable. Tanenbaum essentially believes that the fact that the kernel is so monolithic and tied to the 386 CPU makes it very difficult to port.

If you want to write portable code, you have to write portable code.

What do I mean by that?

The design must reflect the purpose. If the language is C, for instance, design it so that the minimum number of lines of code need to change in order for it to work. This would often mean separating the display from the computation, which is a good design philosophy anyway (MVC). Most C code can be compiled anywhere, provided you have access to a good compiler. Leverage that and write as much as you can to be generic.

BTW, this answer will only apply for applications. OS and embedded are another animal entirely.

In another of Linus's quotes he said: "C++ is trying to solve all the wrong problems. The things C++ solves are trivial things, almost purely syntactic extensions to C, rather than fixing some true deep problem."

Also, in his biography "Just for Fun", Linus, writing about microkernels, said that for a problem of complexity n, if you divide it into n unique parts, the total complexity of developing such a system becomes n!. That by itself is reason enough not to attempt such a thing, and extracting efficiency from such a complex system would be very difficult.
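One hedged way to read that arithmetic (an illustration, not a derivation from the book): once a problem is split into n cooperating pieces, the interactions among the pieces, rather than the pieces themselves, dominate the complexity:

$$
\underbrace{\binom{n}{2} = \frac{n(n-1)}{2}}_{\text{pairwise interfaces}} \qquad
\underbrace{n!}_{\text{orderings of } n \text{ interacting parts}}
$$

For n = 5 parts, that is already 10 interfaces and 5! = 120 orderings to reason about, versus a single monolithic unit, which is the intuition behind Linus's claim that a heavily modularized microkernel is harder to build and to make fast.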

You have to take into account the fact that during those debates, Linux was very new and was largely a 386-only OS. I think if you asked Linus today, he would have a different opinion. Maybe not quite as extreme as Tanenbaum's, but he would likely give him a nod and say that he was right about some things.

Linus and the other kernel developers went through a lot of pain to make Linux portable, but then again, Linux may never have existed if Linus had had to make it portable to begin with.