Posted
by
ScuttleMonkey
on Sunday August 06, 2006 @02:29AM
from the time-for-a-face-to-face dept.

j00bar writes "After Linus Torvalds' impassioned critiques of the second draft of GPLv3 and the community process the FSF has organized, Newsforge's Bruce Byfield discovered in conversations with the members of the GPLv3 committees that the committee members disagree; they believe not only has the FSF been responsive to the committees' feedback but also that the second draft includes some modifications in response to Torvalds' earlier criticisms." NewsForge and Slashdot are both owned by OSTG.

The FSF intends to use the GPL as a means to prevent people from doing certain "bad" things with free software

There are two things worth noting here: first, it will also stop things that are not "bad" by the FSF's definition; second, the FSF didn't write the software, doesn't own the copyright on it, and isn't owed any obligation by the authors - attempted name changes or not.

It is up to the FSF to convince the authors that any new licences are a good idea - the implication that the authors have to do what the FSF says at all - especially when it is only a draft version of the new licence - is an odd way of looking at things. Effectively, to get Linux to use a different licence, the FSF has to convince Linus, nearly every other kernel developer, and all the groups that package distributions that it is a good idea, and since even a superficial look at the new draft licence turns up some problems, there is more work to be done. Going out of your way to hurt companies that already comply with the GPL and add to the development of free (as in look it up in the dictionary, not make up your own meanings) software but have signed binaries on their hardware may be seen as "collateral damage" by the thoughtless - but surely the FSF and other contributors can do better than that?

The problem I have with Linus re: GPL 3 is that he's just being ignorant. He has some beef about people having to give out their own personal private keys that has been shot down by any number of people who actually know what they are talking about legally (PJ, Eben, etc.). Just casually reading the license and Linus' comments, he just isn't making any sense.

My best bet is that Linus doesn't actually want to understand the GPL v3. Linus is eminently practical, and the practical thing to do to increase Linux usage, fix bugs, and add new features is to make Linux corporate friendly. A *lot* of contributions come from the likes of IBM, Red Hat, Sun, Novell, and other companies. I bet the prospect of these companies pulling their support is a major consideration (whether intentional or not).

But...I dunno. Until Linux came along, these things seemed a bit on the fringe to me, except for Emacs, which predates the FSF anyway. I installed GCC and GDB once or twice in the early 90s, but they never did as good a job as the compiler and debugger you got with your proprietary Unix, which came with your workstation. (The $1000 license fee being peanuts compared to the $40,000 hardware anyway.)

So at least in my experience -- and I admit I was a scientific programmer, a user, and not a systems programmer or applications developer -- the GNU tools were pretty much just curiosities until Linux made it possible to run Unix on your PC. Now that was a Great Thing. All the elegance, stability, security and network-savviness of the work computer now available at home. Very nice. And the GNU tools made that possible, yes. But the free kernel was the keystone to that arch, I think. Linux could have squeaked by with a few less GNU tools (albeit not without GCC), but I think all the GNU tools would have remained curiosities without the free kernel. As soon as a great free Unix existed, a lot of people jumped in to add what was still missing, like a fancy desktop instead of plain old X and fvwm, drivers, or package managers instead of a giant tarball and a 64kb README. But would people have ever jumped in to create the kernel, knowing the various GNU system applications already existed? Well, they didn't -- not until Linus. Maybe it had to wait until hardware prices came down, so if it hadn't been Linus it would've been someone else anyway. But maybe it's also harder for people to get excited when all they see is a bunch of pieces lying around, even if building the central piece would let them assemble everything into a coherent whole. Maybe it's easier to get excited when you can see a working model, even if it's crude and belches smoke everywhere, and could use some serious extra tinkering to work better. It's from that point of view that I think Linux has inspired and will inspire more people to do OSS work, or use it, than GNU. Maybe Linus is Shakespeare stealing Roger Bacon's plays -- but it's nevertheless Shakespeare who gets remembered in the history books.

Also, what I recall (vaguely) is that between '85 and '95 or so, the GNU kernel was always coming along Real Soon Now, but seemed stuck because they wanted to Get It Right. Let's just pass lightly over the gcc/egcs weirdness, which is maybe harder to understand than the Pope's nuanced position on masturbation among priests. I think substantial dithering got short-circuited by Linus, and by the people fired up about Linux.

Now, I'm not saying RMS or the FSF's work isn't highly valuable. The value of their work isn't what I'm talking about at all. What I'm saying is that I think the future belongs more to people like Linus -- that they will have more lasting influence -- because, as the OP said, they seem more focused on getting stuff out the door, and the FSF (and RMS in particular) seem more focused on making sure it's the right stuff, built with the right moral philosophy, isn't going to exploit the masses or give you karma, et cetera. In all my working experience, folks who spend substantial amounts of energy on the aesthetics of their product rather than on its bare ugly function get chewed up by the real world sooner or later. Jobs and NeXT, Betamax vs. VHS, Multics, DEC's Alpha chip -- tragedies like that come to mind. The perfect is often the enemy of the good, as they say.

Being unable to modify free software on a hardware device and run it on the same device violates the spirit of free software. The vendor could build upon the mountain of free code, saving a lot of money in the process (i.e. not reinventing the wheel), but does not grant any of these freedoms to their customers.

But the vendor must still publish the source of any changes he makes to the code. So the vendor is giving back. If you don't like that a vendor's device is locking you out, don't buy it. Is that so hard? Do you think you can avoid buying something you don't want on your own, or do you need the FSF to protect you from your own bad buying decisions?

Meanwhile, for whatever reason, I want to buy this hypothetical vendor's device. Maybe I have a certain application where I want a TPM set up. Because the FSF wants to protect me from myself, I no longer have the choice to buy a machine that will run OSS.

That's the sort of world we'd have if the GPLv3 became the dominant license. Yuck.

I'm afraid that soon DRM will be implemented everywhere at a low level, so you'll have to completely refrain from buying any devices that can run user code, because none of them will let you run what you choose.

This is the second time you've proposed that without the GPLv3, OSS developers will be locked out from developing for a whole generation of computers. Could you please explain this scenario to me? Is there going to be a conspiracy of hardware makers who lock out OSS development?

So you think a licence is the right way to fight it? That GPLv3 will push hardware manufacturers, the RIAA, the MPAA, and the rest of the parasite crowd to say, "Ohhh, our bad, let's move back to the good old way of earning money"?

Sorry, but in reality it requires a political fight, because our opponents have taken it to the political level - they don't want the market to decide that DRM is bad; they want laws that protect DRM. They don't want the market to decide how much value is in code or a product; no, for that they go to the politicians and buy shiny new patent law.

Stallman could have listened to Linus and clearly allowed several uses of DRM under the GPLv3 licence. I think everyone would be happy. Yep, maybe it would be a little bit more difficult to understand, but anyway...

I am not against Stallman, or for Linus. They both have some valid points and some false ones. Linus is a little bit of a flamer - but he always has been; for example, he trashed GNOME, which I use - but at least he has some right to do it. And at least he gives some serious points from the developer's side.

And yes, Stallman rocks as an ethical leader or politician, because he sees the bigger picture. What he needs is to be more constructive and to learn new ways to achieve his goals.

Here's why DRM will fail on its own: at this point in history, when a cartel of copyright holders is trying to wall off culture and charge admission, we have unprecedented new tools for the creation, marketing, and distribution of culture.

Here's why this is wrong: DRM is not about music and video any longer. It was, but technology companies have realised that DRM is "digital"... it doesn't just refer to music and video. DRM hardware (such as these trusted computing systems, like the Intel Apple Macs) allows them to specify the exact piece of code that is allowed to run, and, if it is allowed to run, whether it is allowed to access a given piece of data. It is control beyond their wildest dreams.

DRM is not about the MPAA/RIAA these days... they are just puppets. It is the Microsofts/Intels/IBMs/HPs/Apples of the world -- they will control the hardware/software and they will broker access to your machine, run their software secretly, and stop you from changing anything or doing anything of which they do not approve.

Controlling music/video (and selling it to you) is just the cream on the top of the pudding for them. It's about time geeks realised this.

I do use emacs, or rather xemacs, but the project is as good as dead, and certainly not the future. I use glibc, and I would like to use it even more, but the port to Solaris is dead. I use gdb, and I would like to use it even more, but the support for SPARC64 is very flaky. I use gcc, and I would like to use it even more, but it does not compile to common virtual machines (Perl, Parrot, Java or CIL). bash is nice, certainly compared to the Bourne shell, but it has been like this for more than 10 years.

So while a lot of GNU tools are useful, even the very best ones leave a lot to be desired. And don't tell me to just write a patch, because it is not that easy. The missing support for virtual machines, for example, is a political decision of RMS, and much the same can probably be said about the Solaris port of glibc. The missing support for anti-aliased fonts in emacs is just a symptom that the project is dead.

Man! The knives are out! Is there anything to which you won't stoop? Character assassination because a respected member of the community disagrees with you? The ends invariably justify the means with you types. Anyway, thank you for revealing the true colors of the FSF goon squad.

But so far it's been Linus who's done the most to actually change the world.

No, he has not. All Linus did was write a kernel. If he hadn't done so, there were half a dozen alternatives about to become available. Most likely, if Linus had been run over by a truck, we'd be running the BSD kernel now, or some Mach derivative.

Proving once again the superiority of actually getting working technology out the door, versus spending a decade or so fine-tuning your philosophy about how to begin working on the great technology that you will eventually design when you have the philosophy just perfect (if everyone hasn't succumbed to old age first).

That charge is totally unfair. GNU released plenty of software long before the Linux kernel was created. And the reason development on the microkernel went slowly was not because of any "fine tuning of philosophy", it was because porting and cleaning up a large, existing microkernel codebase and giving it POSIX APIs was a big project that needed to be completed in one big development effort and required the PC industry to start delivering hardware capable of running it. Linus instead delivered a flaky and incomplete kernel that became popular because it ran on PCs right away, but that required many years to beat into shape.

I've had enough troubles in my own career directly traceable to wanting to Get Things Right at the expense of Getting Things Done to appreciate this particular point with some sensitivity, not to say bitterness. Feh.

You're right: getting code out the door, even if it is of inferior quality, is clearly generally good for companies and developers. It's not good for users or the community. What happened with Linux vs. Mach was somewhat analogous to what happened with DOS vs. UNIX: the quick and dirty hack won, and users ended up paying the price. Fortunately, because Linux at least copied proven UNIX APIs, the Linux cleanup avoided most of the pain that has accompanied analogous evolutions at Microsoft and Apple.

He's written pretty extensively on this issue. He does not see the threat of totalitarian Trusted Computing happening. I happen to agree. This is a paranoid fantasy put into your head by RMS and others to further their own agenda.

You're calling me a troll? You're the one suggesting that Linus has sold out. I'm not even a big Linus fan, but I'm aware of his contributions and I have respect for the man. And it amazes me that you could even ask that question about him. It's really astonishing. How can you maintain it's an "honest question"? At best, it's a stupid and reckless question. At worst, it's a calculated smear. I just don't get how you people can impugn his integrity and at the same time, not question the integrity of RMS, well known for his demagoguery.

Let me ask you this: In your mind, are there no good uses for trusted computing? Are there never circumstances where trusted computing could be applied without evil effect? Or is all trusted computing inherently evil and must be stamped out?

DRM and TCPA are user-hostile technologies; support can and will be ripped out of Open Source code. There is no argument here: I will not run DRM- or TCPA-compliant code, period.

I've been seeing senseless rants like this increasingly on Slashdot. It's like some superstition. All Trusted Computing does is give ownership of a computer to whoever owns the master keys.

If you have the master keys (as you would on a box you built yourself), then Trusted Computing means you *really* own your computer. You can prevent rootkit installation and guarantee access to content no matter how DRM-encrusted it is. If someone else has the master key, why would you pay money for something you don't own?
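The mechanics the parent describes can be sketched in a few lines. This is a toy illustration only (the key name and component strings are made up, and real trusted platforms use asymmetric keys and PCR measurements rather than an HMAC), but it shows the core idea: whoever holds the master key decides which code the machine will accept.

```python
import hashlib
import hmac

# Hypothetical owner-held master key; on a box you built yourself,
# you generated this and nobody else has a copy.
MASTER_KEY = b"owner-held-master-key"

def sign_component(image: bytes) -> bytes:
    """Owner approves a boot component (kernel, bootloader) by tagging its hash."""
    digest = hashlib.sha256(image).digest()
    return hmac.new(MASTER_KEY, digest, hashlib.sha256).digest()

def verify_component(image: bytes, tag: bytes) -> bool:
    """Firmware refuses to run any image whose tag doesn't check out."""
    return hmac.compare_digest(sign_component(image), tag)

kernel = b"my freely modified kernel"
tag = sign_component(kernel)                # owner approves this exact build
assert verify_component(kernel, tag)                    # boots fine
assert not verify_component(kernel + b"rootkit", tag)   # tampered image refused
```

The same check cuts both ways: if a vendor holds `MASTER_KEY` instead of you, it is the vendor, not you, who decides which kernels boot.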

My basic objections - in soundbite form, as unfortunately encouraged by the interface - seem to have been submitted already; I'll still mail the text below to the comment system, in the hope that a human being might encounter it somewhere in there. ;)

That was a very interesting read indeed, and confirms that Linus was and still is wrong about the process. I still think he's right about the result so far, though. Here's what I'll submit to the e-Mail comment system in a few minutes, both "for the record" of this thread, and as a shameless attempt to exploit the attention of a committee member, if it does get dropped:

I feel deeply uncomfortable with the following section, specifically with the *marked* sentences:

Terms and Conditions, section 1, paragraph 4

The Corresponding Source also includes any encryption or authorization keys necessary to install and/or execute modified versions from source code in the recommended or principal context of use, such that they can implement all the same functionality in the same range of circumstances. (For instance, if the work is a DVD player and can play certain DVDs, it must be possible for modified versions to play those DVDs. *If the work communicates with an online service, it must be possible for modified versions to communicate with the same online service in the same way such that the service cannot distinguish.*) A key need not be included in cases where use of the work normally implies the user already has the key and can read and copy it, as in privacy applications where users generate their own keys. *However, the fact that a key is generated based on the object code of the work or is present in hardware that limits its use does not alter the requirement to include it in the Corresponding Source.*

This specifically allows modified versions to hide the fact that they are modified. Thereby it creates two technical problems:

it makes it impossible to verify and certify distributed systems consisting in whole or in part of (derived works of) Free Software.

it makes it impossible to verify the integrity of a distribution package consisting of Free Software and hardware and/or non-free software, e.g. to determine eligibility for support or warranties.

These seem to me to exclude Free Software from a substantial set of reasonable and legitimate applications in existence today or foreseeable for the future.
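The kind of verification the marked sentences would foreclose can be sketched as a simple challenge-response check. All names here are illustrative, and a real scheme would rest on hardware attestation rather than trusting the client to hash itself, but it shows what "the service cannot distinguish" rules out:

```python
import hashlib
import os

# The service knows the bytes of the one approved client build
# (hypothetical value for illustration).
APPROVED_BUILD = b"approved client binary v1.0"

def challenge() -> bytes:
    """Service issues a fresh random nonce so responses can't be replayed."""
    return os.urandom(16)

def client_response(binary: bytes, nonce: bytes) -> bytes:
    """An honest client hashes the nonce together with its own code."""
    return hashlib.sha256(nonce + binary).digest()

def service_verify(nonce: bytes, response: bytes) -> bool:
    """Service recomputes the expected response for the approved build."""
    return response == hashlib.sha256(nonce + APPROVED_BUILD).digest()

nonce = challenge()
assert service_verify(nonce, client_response(APPROVED_BUILD, nonce))
assert not service_verify(nonce, client_response(b"modified build", nonce))
```

Of course a dishonest client could hash a stored pristine copy instead of its running code; closing that hole is exactly what the hardware-held keys mentioned in the quoted paragraph are for, which is why requiring their disclosure makes such verification impossible.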

Additionally, it seems to be in conflict with the following provision:

Terms and Conditions, section 5, paragraph 2

a) The modified work must carry prominent notices stating that you changed the work and the date of any change.

In particular, the section quoted earlier seems to amend this with, "unless the notice would be machine-readable, in which case the modified work may choose not to carry such a notice". This serves as a strong deterrent to authors of original works to free those works if they customarily provide them e.g. as part of a hardware package on which they in turn provide a warranty.

Lastly, this whole issue seems to open up a veritable can of worms with regards to the definition of "derived work" vs. "aggregate". In the case of Tivo for example, the distribution package consists of a computer with some proprietary hardware on which some free software and some proprietary software is installed, all of which works properly only in connection with an online service. Now for some reasons, the proprietary software part is s

If you have the master keys (as you would on a box you built yourself), then Trusted Computing means you *really* own your computer. You can prevent rootkit installation and guarantee access to content no matter how DRM-encrusted it is. If someone else has the master key, why would you pay money for something you don't own?

If, as you suggest, we shall be in control of the "master" keys then I fail to see how it would help the content industry.

If, as I suspect, someone else shall be in control of the "master" keys then I can see perfectly clearly how it would help the content industry.

As the idea of TC and DRM are being pushed by the content industry I think it would seem logical to assume that my suspicions are in fact correct.