Posted
by
samzenpus
on Monday July 16, 2012 @02:52PM
from the too-big dept.

alphadogg writes "A host of small modifications and a large number of system-on-a-chip and PowerPC fixes inflated the size of release candidate No. 7 for Version 3.5 of the Linux kernel, according to curator Linus Torvalds' RC7 announcement, made on Saturday. Torvalds wasn't happy with the extensive changes, most of which he said he received Friday and Saturday, saying 'not cool, guys' in the announcement. However, the occasionally combustible kernel curator didn't appear to view this as a major setback. 'Now, admittedly, most of this is pretty small. The loadavg calculation fix patch is pretty big, but quite a lot of that is added comments,' he wrote, referring to the subroutine that measures system workload."

Slackware is great for what it is. I remember working with it in the early 90s. I think Arch is also excellent. But none of these fill the same niche as RHEL.

I too used Slackware in the early nineties - after SLS [wikipedia.org] on its fifty-mumble floppy disks. Then I used Red Hat, Mandrake, and even Caldera before I found Debian in about 1996. Once you've used Debian, and the Debian package manager, you're never going to want to use anything else.

No, Coward, actually I use two different RHEL6 clones and an RHEL5 clone on a total of 7 different machines every day. Try telling anyone who demands enterprise reliability you don't want to cope with RHEL, dipstick. Sayonara, loser.

Your lack of knowledge is showing. Up2date was not a package manager; it was an update utility. You don't use it to uninstall anything. Yum is the package manager. The best package manager.

As someone who has dealt with dependency hell, I'm going to go ahead and say that deb and its associated utilities (apt-get, aptitude) are the best I've ever dealt with.

Unless you're talking about Ubuntu, where anyone and their mother can start their own PPA with no qualifications. I like the strict adherence to packaging standards that it takes to get into the Debian repository.

I think the point is not so much the swelling but the fact that this is a huge bunch of stuff to be thrown in during an RC cycle, between rc6 and rc7. You're not really supposed to be doing anything major to a release candidate...

If it wasn't for his bitchiness, it would be Windows. Yes, I am not kidding. There'd be ENTERPRIIIISE CODING brilliance in there, AKA useless bloat for stuff nobody should EVER, IN THE HISTORY OF EVER, have access to, and countless other things. (up YOURS, Microsoft!) What's that, writing a driver, are you? If it isn't fully descriptive in code, you're fired! What's that? You saved a huge number of cycles by using a goto there? FIRED, we want more lines! (I'm not even kidding, Linus had to defend a goto in a driver-level file; that's how mad this anti-goto madness is these days.) So on and so forth.

Hey, at least he isn't a Ballmer. Nobody can beat ol' monkey boy. Developers developers developers deve... oh go away developers, we don't want you in Windows 8 anym... no sorry, we were just kidding!... honest! Linus is always solid. Without him, Linux would turn into PHP. Look what happened to that. PHP is plain awful now. It started off with a good idea, then all the amateurs took control and ruined it. You don't want that now, do you?

"Without [Linus], Linux would turn into PHP. Look what happened to that. PHP is plain awful now. It started off with a good idea, then all the amateurs took control and ruined it." There are a remarkable number of bodgers in the Linux kernel too. The general code quality is not that high, in particular from some SoC vendors. He has to trust that if they're selling that code to customers who need it to work, then it probably works. And if it doesn't, then they are the maintainers responsible for fixing it.

Unfortunately the customers are sometimes as incompetent as the chipset vendors, and don't know what they're being sold.

Or who to blame... If it doesn't work with Windows it is seen as the manufacturer's fault (as they provided the drivers) but if it doesn't work under Linux it is the kernel dev's fault (as the user doesn't know that the drivers there were written by the manufacturer too) and it is they who are expected to fix the problem. I do not envy them that position!

My last job at ${GADGET MANUFACTURER} was to receive code from ${SOC VENDOR}, and vet it for quality, before we pulled their changes into our source tree.

And you may interpret that painfully literally. Even if the quality was shit, we still had no choice but to pull the changes. All I provided was a head-start on the bug-filing. At least I did have someone to blame. I didn't envy my position, I was going to demand a rise or walk.

Without him, Linux would turn into PHP. Look what happened to that. PHP is plain awful now. It started off with a good idea, then all the amateurs took control and ruined it. You don't want that now, do you?

Really?!

I've not touched PHP for a few years, so I might be wrong about its current status, but from what I gather there have been significant improvements. Objects working properly, for one. References too - references to objects specifically. When I was working on some stuff in PHP4 those areas were a mess. I'm also told that chunks of the standard library now have more normalized variants; this was starting when I left the arena, with the data access libraries.

PHP will be around for decades more at least. Even though Perl had a fairly significant fall from grace (initially as people moved to PHP or Python web-side and Python for admin, with other environments like Rails taking a chunk of share too more recently), it is still going strong in some places. PHP will become less common as people move on to other stuff (Python, Rails, JavaScript through Node and others, and so forth), especially when one or more of the other options hits the tipping point where it becomes the default choice.

> If it wasn't for his bitchiness, it would be Windows. Yes, I am not kidding. There'd be
> ENTERPRIIIISE CODING brilliance in there, AKA useless bloat for stuff nobody should
> EVER, IN THE HISTORY OF EVER, have access to, and countless other things.
> (up YOURS Microsoft!) What's that, writing a driver are you? If it isn't fully descriptive
> in code, you're fired! What's that? You saved a huge number of cycles by using a Goto there?
> FIRED, we want more lines! (I'm not even kidding, Linus

Without him, Linux would turn into PHP. Look what happened to that. PHP is plain awful now. It started off with a good idea, then all the amateurs took control and ruined it. You don't want that now, do you?

I used PHP back when it was Rasmus Lerdorf's neat hack to maintain his Personal Home Page. It was a very neat hack. It was always a very neat hack, and it continues to be a very neat hack. It wasn't ever meant to be an elegant and well engineered language, although it did get a bit full of itself around PHP3. But the difference in software engineering terms between PHP and Linux is (it's car analogy time, folks) the difference between a child's home made soap box cart [1st-crowbo...uts.org.uk] and a Lotus Elise [wikipedia.org].

Minix still exists. Linus Torvalds' purpose was to learn assembly and, in particular, how Intel CPUs support multitasking and memory protection. Linus then criticized Minix's message-passing design, saying it was incompatible with optimal use of CPU performance. They are still fighting.

I doubt Linus is getting more bitchy than normal. His rants have just had more 'popular' exposure and attention than normal. It's easy to guess why: Google+ gives him a lot more exposure and spread. Prior to his posting the rant against the root password requirement on Google+, I don't think I'd seen any of his opinions outside of near-fluff interview pieces or, possibly, LKML emails.

Certainly, people didn't care as much until they saw him lambast OpenSuSE developers. That got their attention and interest, and so folks like Slashdot and Network World are more likely to cover it. Heck, this kind of story is even out of character for /.

In article wjb@cogsci.cog.jhu.edu (Bill Bogstad) writes:
>>I have a 8 Meg system and also am having problems compiling fork.c.
>I would have thought that would have been sufficient....

Ok, the problem isn't memory: it's gcc-1.40. For some strange reason the older gcc runs out of registers when optimizing some of the files in the linux source distribution, and dies. This one isn't the same bug as the "unknown insn" which was due to my hacks in the earlier 1.40 - this one seems to be a genuine gcc bug.

Linux 0.95a is compileable with the older gcc if you just add the flag "-fcombine-regs" to the command line. In fact, the only thing you need to do is to remove a "#" from the makefiles: the line

#GCC_OPT = -fcombine-regs

should be uncommented, and gcc-1.40 will have no problems compiling the source. This was documented in some of the release-notes for 0.95, but I guess I forgot it for 0.95a.

Why remove the flag in the first place, I hear you say? Simply because gcc-2 doesn't understand -fcombine-regs, as it seems to do the optimizations even without asking. There are other things I had to change in the source to get gcc-2 to compile it, but this is the only problem that made the old gcc choke.

With the advent of an official gcc-2.1 (this weekend?), people might want to change to that one: note however that gcc-2.1 is about twice as big as 1.40, so it's going to be slower on machines that swap... People with just 2M of mem might not want to upgrade (*). I like the changes to 2.1: the code quality seems to be a lot better (esp floating point).

On a slightly related note: the as-binary in newgcc has been reported by several people to have problems. Getting as from the original gcc-distribution by me (gccbin.tar.Z) might be a good idea if you have problems with the newgcc version.

Linus

(*) Even with only 2M of mem, using gcc-2 has its good points. The shared libraries should cut down on memory use as well as loading time and disk-space use. Shared libraries work even with 1.40 if you know how to build them, but 2.1 does it all automatically...

I wouldn't call it extremely high end; the 82359 memory controller supported 32MB, and it was available on workstation-class machines in 1992. I bought a 486 SX-25 around Christmas 1993 and we got it with 4MB; by the next Christmas it was upgraded to 16MB.

But you are talking about workstation-class machines, back when they were a few times the price of a regular desktop PC. Consumer desktop PCs as late as 1995 were still being sold with 4MB as standard.

Even today, a workstation-class machine, such as my desktop, costs quite a bit more than even your normal gaming desktop.

If I'm reading the article correctly, this isn't so much about file size as about the number of bugs fixed. Or rather, how many bugs still needed fixing in what was supposed to be the seventh release candidate of the kernel: something in which one would not expect to find so many bugs this late.

Because last week I thought that making an -rc7 was not necessarily really required, except perhaps mainly to check the late printk changes. But then today and yesterday, I got a ton of small pull requests, and now I find myself releasing an -rc7 that is actually bigger than rc6 was.

It seems like part of what he's trying to point out here is that there may be developers trying to cram in what are really new features into 3.5 by declaring them bugs and pushing them into RC's, rather than waiting until the next release. This behavior wouldn't surprise me in the least.

Yes, the idea is that every RC should be getting closer to release quality. In general, this means RC7 should have had fewer changes than RC6.

For all the IT guys who don't actually understand large-scale development: on Monday morning you have a list of printers to unjam, and you are supposed to give daily status reports. You would expect your lazy ass to make the list of crap left to fix slowly get smaller, but instead your list is growing, because you jam every printer when you try to print your daily status report.

Linus is mainly complaining because he wants bugfixes to come in during the merge window. The RCs are then used to iron out bugs introduced by features that were added during the merge window, or to fix existing bugs that were too invasive to fix in a normal 3.x.x update. The idea is that the change from 3.4 to 3.5-rc1 is massive, 3.5-rc1 to 3.5-rc2 is smaller, 3.5-rc2 to rc3 is smaller still, and it keeps getting smaller until the number of commits is very low and those commits are very small changes themselves.

This SHOULD have been the 3.5 release, but instead a ton of large commits were done after rc6, and that makes Linus uncomfortable about labeling 3.5 as stable until people have a chance to test out those new commits. The more commits people do past about rc2, the longer the delay until 3.5 is marked as stable and released. Honestly, unless I'm forgetting something, I haven't seen a seventh release candidate for any kernel since the change to 3.0; most of them have capped out around five. By a seventh RC there shouldn't really be anything going on unless an email comes in labeled "URGENT KERNEL PANIC FIX", and from the sounds of it, none of these were that; they could all have been saved for the 3.6 merge window. Instead we have the 3.5 kernel delayed by another week.
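You can see the shrinking-RC pattern for yourself: `git log --oneline v3.5-rc6..v3.5-rc7` in a kernel tree lists exactly the commits that landed between two candidates. Here's a self-contained sketch of the same rev-range technique, using a throwaway repo and illustrative tag names (not the real kernel history):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
# A base commit tagged as the earlier release candidate.
git -c user.email=t@example.com -c user.name=t commit -q --allow-empty -m "base"
git tag rc6
# Three "fixes" land afterwards, then the next candidate is tagged.
for i in 1 2 3; do
  git -c user.email=t@example.com -c user.name=t commit -q --allow-empty -m "fix $i"
done
git tag rc7
# rc6..rc7 selects commits reachable from rc7 but not rc6.
count=$(git log --oneline rc6..rc7 | wc -l)
echo "commits between rc6 and rc7: $count"
```

Run against successive real RC tags, a healthy cycle shows that count trending toward zero.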

The way to achieve what you say Linus wants is for him to reject/postpone changes that fall outside RC criteria. "Sorry, the train has left the station. There's another one due to leave at 3.6." When developers learn that the development phase criteria are enforced they will adjust their behavior to fall in line, but contrapositively they will not adjust their behavior if the criteria are not enforced.

My sympathy is minuscule -- if RC-appropriate changes are what he wants, then he should reject/postpone the changes in question as falling outside RC criteria instead of kvetching about them. It's a self-made and self-perpetuated problem; developers will abuse largesse only as long as they are allowed to.

The way to achieve what you say Linus wants is for him to reject/postpone changes that fall outside RC criteria. "Sorry, the train has left the station. There's another one due to leave at 3.6." When developers learn that the development phase criteria are enforced they will adjust their behavior to fall in line, but contrapositively they will not adjust their behavior if the criteria are not enforced.

He does. All the time. And people try bending the rules and stretching the definitions. All the time. You make it sound like Linus only had to tell them once and everybody'd go "well alright then" but it's more like a horny teenager with a girl on the back row of the cinema. No matter how many times those hands are pushed back they'll be back in a slightly different way or after another round of sweet talk. For those of you who have no idea what I'm talking about or what this "girl" thing is, you can imagine it's like the lobbyists in politics. No matter how many times a bill is defeated they'll keep pushing for new laws that amount to the same. In all three cases they just don't quit until they succeed.

I prefer a car analogy: it's like, no matter how many times you fill the gas tank, the damn car always empties it while driving, and will stop completely and refuse to continue if you don't fill it up again when it wants you to. Automobiles, eh? Can't live with 'em, can't live without 'em.

I actually read it as him being upset about getting such large patches all in such a small time-frame. Going through all that does take quite a toll, so I understood that he'd wanted the patches to be strewn over several days. As an aside, "not cool, guys" does not actually sound like bemoaning, let alone being angry.

Are we talking about source code size, or the actual binary footprint on any individual supported system? In other words, does an ARM SoC running Linux get bloated down by the unnecessary PowerPC (!) support code?

The "Not cool guys." comment sounded like he wasn't thrilled they all dumped lots of fixes all at once. You know, like they were sitting on lots of changes and then, in concert, released everything before a release milestone. I doubt it has anything to do with binary footprints.

Why would you compile for other platforms when you don't need to? Oh wait, you wouldn't, so those Alpha, MIPS, and even VAX binaries that you never see in an x86 distro wouldn't be on your embedded system either, unless that's the platform you want. Which raises two questions: are you lying, or did somebody feed you that shit which you are passing on without understanding it?

I don't think so. His "jokes" are all just insults at people who dare to be computer geeks, or misleading bullshit that is a million miles away from being funny. It's stuff along the lines of: if somebody compiles code for a program, they have to be pointed out as a freak to laugh at.

A host of small modifications and a large number of system-on-a-chip and PowerPC fixes inflated the size of release candidate No. 7 for Version 3.5 of the Linux [networkworld.com] kernel, according to curator Linus Torvalds' RC7 announcement, made on Saturday.

Torvalds wasn't happy with the extensive changes, most of which he said he received Friday and Saturday, saying "not cool, guys" in the announcement. However, the occasionally combustible kernel curator [networkworld.com] didn't appear to view this as a major setback.

"Now, admittedly, most of this is pretty small. The loadavg calculation fix patch is pretty big, but quite a lot of that is added comments," he wrote, referring to the subroutine that measures system workload.

However, he noted, there were also the assorted changes for SoCs, PowerPC compatibility, USB and audio to be folded in, forcing a comparatively large RC7.

"Ok, so it's still not *huge*, but it's bigger than -rc6 was. I had hoped for less," wrote Torvalds.

He also hopes that it won't be necessary to deploy an eighth release candidate before Version 3.5 of the kernel can be properly rolled out, and urged the community to "go forth and test."

Among the biggest new features expected in Linux 3.5 is enhanced compatibility with the ARM processor family, which is used in a wide array of low-cost computing devices. Several ARM-related fixes are part of 3.5-RC7, according to the official announcement email and changelog.

The H-Online reported earlier today [h-online.com] that the final version of Linux 3.5 should be deployed next weekend, if all goes well with RC7.

The h-online.com article that the Network World one is a rehashing of:

Over the weekend, Linus Torvalds reluctantly published a seventh release candidate [kernel.org] (RC7) for the 3.5 Linux kernel. In the LKML announcement email [lkml.org], the Linux creator says that he originally thought another RC would not necessarily be required; however, a large number of small pull requests submitted by developers late last week necessitated an additional RC for testing, leading Torvalds to tell the developers, "Not cool, guys. Not cool."

These changes include media fixes, random SOC fixes and PowerPC fixes, as well as patches [kernel.org] for the leap second bug [slashdot.org] that caused Linux systems to freeze because of permanent high CPU loads that resulted in increased power consumption and wasted electricity [slashdot.org]. "Ok, so it's still not *huge*, but it's bigger than -rc6 was," said Torvalds, adding, "I had hoped for less."

Linus has asked the kernel developers to test the rc7 release to "make sure it's all good", and is hoping that he "won't have to do an -rc8". Barring any major problems over the coming week, Linux 3.5 will likely be released next weekend. An overview of the changes made in the 3.5 kernel can be found in The H's Kernel Log mini-series "Coming in 3.5", which examines the various subsystem developments in the upcoming release.

Review each article and notice what is and what is not a link, and where the links lead.

Very disappointed that the geniuses at "Network World" did not include a link to the original article [lkml.org]. For articles like this it's much better to read the source material yourself and come to your own conclusions, without the sensationalism and ad-baiting.

Egads, there hasn't been a new PowerPC in ages, except for a few game consoles and people stuck with legacy IBM big iron. Any reason to continue bloating the kernel with that stuff? Time marches on. Why inconvenience everyone so that a few dozen PS3 users can run Linux? :)

The embedded space has used lots of PPC for years. Notice it said SoC?

Exactly right. We're designing a high-end router right now with 40 Gbps ports, and the management CPUs are PPC based - just like all the other equipment we've designed (and all the other vendors' too) for the past 15 years. In this case, one of the CPUs even runs Linux.

IBM still makes and supports PPC. Before dropping that, it would make sense to drop some of the dead RISC CPU support, such as PA-RISC and Alpha. Indeed, given that even Itanium support has been dropped by all distros except Debian, the only RISC architectures that deserve continued support from Linux are ARM, SPARC, MIPS and OpenRISC.

But honestly, when Linux is installed on something, does it have anything like NeXT's fat binaries? I thought that only the target platform's binaries were included. How is the support of other architectures a burden?

1) The PPC code never gets inserted into other machines' architectures, so in that sense it can't possibly "bloat" the kernel. Now, there could be design issues with PPC that end up being carried into future Linux kernels, but those are much harder to root out without breaking something.

2) How else can Linux keep its reputation for being able to operate obsolete stuff, long after the commercial vendor has abandoned it?

3) Anything that's in the kernel (like PPC support) has an "active" maintainer for it.
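Point 1) falls out of how kbuild works: every object file is guarded by a Kconfig symbol, so architecture-specific code is simply never compiled into other architectures' builds. A sketch in the kernel's Makefile style (the symbol and file names here are made up for illustration, not real kernel entries):

```makefile
# Built only when the PowerPC config selects CONFIG_PPC_EXAMPLE_WDT=y;
# an x86 or ARM .config never sets that symbol, so example_wdt.o is
# never compiled or linked into those kernels.
obj-$(CONFIG_PPC_EXAMPLE_WDT) += example_wdt.o
```

When the symbol is unset, `$(CONFIG_PPC_EXAMPLE_WDT)` expands to nothing and the object lands on an ignored list, so the only cost other platforms pay is disk space in the source tree.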

Because too many people contributed too many patches during a window in the development cycle when not many (or large) patches should be contributed?

Umm... I think you didn't understand what the problem is here. It's a violation of development process protocol that has nothing to do with the quality of the code. Someone trying to submit refactoring patches would have made it much worse, not better. Actually, it wouldn't have been worse, because Linus would just have rejected them at this point in time.

Like every large software project it deserves a rewrite from scratch because it's full of cruft, but nobody will ever find the time to do it. At least some refactoring and de-crufting is done from time to time if some dev gets pissed off enough. Not something that happens in commercial SW development unless the code is hopelessly broken.

Like every large software project it deserves a rewrite from scratch because it's full of cruft, but nobody will ever find the time to do it. At least some refactoring and de-crufting is done from time to time if some dev gets pissed off enough. Not something that happens in commercial SW development unless the code is hopelessly broken.

Every time someone says this, they should be forced to sit in the corner and copy this essay by Joel Spolsky on things you should never do [joelonsoftware.com] 5000 times, and give a copy to each of their friends together with an essay about what they have learned from this punishment.

- Code is just bad and impossible to understand.
- Code is slow, has become too bloated.
- Hard to debug, and hard-to-track-down problems happen from time to time.

You start with a small corner, and when that small part is done and working, then you might go for the next thing... But don't throw out everything, just the parts that are bad... And while you are doing things like this, you should try to do some type of unit-test implementation as well, to make sure nothing else breaks.

No they wouldn't. IMHO the reason HURD has moved so slowly is because they told just about everyone who was interested in helping to fuck off. Linus was a bit more diplomatic, even when people without much of a clue wanted to join in, so it went rapidly from a small group to what we see today. Some of those clueless newbies he was not rude to and didn't scare away 15+ years ago are now a very long way from being clueless newbies.

The real reason HURD has moved so slowly (besides managerial incompetence) is that HURD has ceased to be a product with a deadline. HURD is now an operating system research project, with the goal of tinkering with it long enough to publish a paper on their findings or some dilettante OS topic.

HURD was originally designed with the presumption that a microkernel architecture would be more desirable (operate more efficiently) than a monolithic kernel (the basis of traditional Unix systems).

In fact the core of the kernel has already been refactored many times and is of excellent quality. It represents only a very small percentage of the whole code. Most of the code is drivers. Many drivers are poorly written and may need to be rewritten, but developers are too busy coding the many missing drivers.

In fact there is no real problem in Linux code, just a recent increase in the number of developers.