Posted
by
samzenpus on Monday September 05, 2011 @09:00AM
from the moving-to-better-quarters-on-campus dept.

An anonymous reader writes "Linus Torvalds has announced that he will be distributing the Linux kernel via Github until kernel.org servers are fully operational following the recent server compromise. From the announcement: 'But hey, the whole point (well, *one* of the points) of distributed development is that no single place is really any different from any other, so since I did a github account for my divelog thing, why not see how well it holds up to me just putting my whole kernel repo there too?'"

I clicked the link and here's what I got: "Server Error 500 - An unexpected error seems to have occurred. Why not try refreshing your page? Or you can contact us if the problem persists." with a cute parallax scrolling animation of the GitHub logo falling down the Grand Canyon. I've never seen a 500 error on GitHub before.

Linus writes: "since I did a github account for my divelog thing, why not see how well it holds up to me just putting my whole kernel repo there too?"

Why not? Because you just broke GitHub! That's why!

And now let's all remain silent while the instant, distributed, cpu-intensive, encrypted https slashdotting of GitHub starts in 3... 2... 1...

I'm not sure if you meant this specifically, but as a nitpick, https itself is hardly CPU-intensive these days [imperialviolet.org]. GitHub might be doing CPU-intensive stuff to service requests, but if so, it's more likely to have something to do with their backend than with https.

Their response time to this problem is a great advertisement for their services.

Translation:
18 min ago - One of our frontend servers was automatically isolated because it did something suspicious.
7 min ago - Don't worry, no one would ever hack us, so we reconnected the server and all looks normal. (Hey wait, who made all these unsigned commits?)

Oh, come on, you are just being negative. He is at least trying to do something different; give him a chance. It's not like he has done anything before that was anywhere near a success. Let him play with the guttenhub and lunux kranel.

Part of this is because there seem to be far fewer Slashdot readers than in the past. The stupider ones have moved to Digg, Reddit and Hacker News, apparently.

While I will admit there have been many Slashdot readers who have moved to other websites, I think the issue here is more that, as a percentage of the web community, Slashdot is no longer the dominant community of discussion. This is more because there are proportionally fewer geeks running around on the web any more, as Facebook, Twitter, YouTube, and other "social media" sites have more ordinary non-geek people... any one of whom can also post a link that goes viral and dwarfs anything Slashdot would ever produce. Many of the larger websites routinely expect a large number of visitors for some things they post, and can more than compensate for what happens when they become the focus of a lot of people at once.

Slashdot will still bring a huge number of visitors to a site and for somebody doing a homebrew website it can be a big deal, but I'd agree that due to improvements in hardware and better software management there isn't nearly so much of a problem any more.

Not to mention that Slashdot's interface has progressively gotten worse, making it a real pain to use. I boycotted it until it got usable again; it did get better recently, and seems a lot faster too.

I think you're underestimating sites like Digg, Reddit, and Hacker News, which drive enormous amounts of traffic. Slashdot just isn't as relevant anymore. It is fast becoming a sounding board for fanboys and trolls who think and act a certain way, and the accepted news submissions reflect this. There used to be programming links on the front page and discussions of technical issues.

Bah, people have been saying the same thing about Slashdot since the JonKatz era. Say what you will, but Digg and Reddit cover a far, far larger variety of topics than Slashdot and thus garner more users (and links) by nature. Slashdot has tried to stay a crystallized, topical board and as such its traffic and influence have remained static while the Web has grown around it.

Slashdot's real draw is the discussion system[s]. With great ease, I can restrict an article's comments to a few high-ranked ones and

Part of this is because there seem to be far fewer Slashdot readers than in the past. The stupider ones have moved to Digg, Reddit and Hacker News, apparently.

Actually, most of the normal people who want actual discussion left for those sites, leaving hardcore fanboys here who either troll anonymously or post obvious karma-whoring posts that just repeat some obvious belief that the community has (Microsoft is evil, Google is great, piracy is awesome, etc.). Finding insightful posts has gotten more difficult.

I thought it was just the fact that modern hosts have better connections and higher quotas... And a lot are hosted on things like Blogspot, which has the massive power of Google behind it - no mere huge number of viewers will bring *that* down!

NOTE! One thing to look out for when you see a new random public hosting place used like that is to verify that yes, it's really the person you think it is. So is it?

You can take a few different approaches:

(a) Heck, it's open source, I don't care who I pull from, I just want a new kernel, and not having a new update from kernel.org in the last few days, I *really* need my new kernel fix. I'll take it, because I need to exercise my CPUs by building randconfig kernels.
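One practical answer to the "is it really him?" question: git's content addressing means any previously trusted clone can vouch for a new mirror, since identical commit IDs imply bit-identical history. A minimal sketch, with local repos standing in for kernel.org and GitHub (repo names and the commit message are invented for the example):

```shell
# Two local repos stand in for the old and the new hosting location.
git init -q trusted
git -C trusted -c user.name=x -c user.email=x@example.com \
    commit -q --allow-empty -m "v3.1-rc4"
git clone -q trusted mirror

# Same commit ID implies bit-for-bit identical history.
a=$(git -C trusted rev-parse HEAD)
b=$(git -C mirror rev-parse HEAD)
[ "$a" = "$b" ] && echo "histories match"
```

The stronger check is a GPG-signed tag verified with git tag -v, which assumes you already have the maintainer's public key.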

The Linux kernel is very mature at this point, but some basic functionality like a HAL (hardware abstraction layer) is not present and not even planned. Linus is perhaps happy with the current 3.x state of Linux, but lots of people demand more. I recently ventured to the ReactOS website and have seen lots of activity in the SVN. This is maybe thanks to the Google Summer of Code 2011 ReactOS involvement; lots of commits on a daily basis in the trunk now, the project seems to be getting in motion again.

I know that I can move a Linux installation image from one machine to another without a glitch, while Windows (which has a HAL) fails miserably if the source and destination machine vary in any non-trivial way.

My employers, not particularly tech-literate, have even seen this and learned it first-hand, and have had to get themselves out of the habit that "moving that server to new hardware means configuring a new one, effectively".

Move a Windows server - you can be in for a world of hurt unless you want to fresh-deploy it every time. Move a Windows-client, historically you'd be prepared for blue-screens because you have the "wrong" processor type (Intel vs AMD - requires disabling some randomly named service via the recovery console, for example), reinstalling the vast majority of the drivers (probably from a 640x480 safe mode) and even then can't be guaranteed to get anything back and working - not to mention activation, DRM, different boot hardware (e.g. IDE vs SATA), etc.

Move a Linux server - unless your OWN scripts do something incredibly precise and stupid with an exact piece of hardware, it will just move over. At worst, you'll have to reassign your eth ports to the names you expect using their MAC address (two seconds in Linux, up to 20 minutes in Windows and a couple of reboots).
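The "two seconds in Linux" reassignment is typically one rule per interface in a udev rules file; a sketch of the kind of entries involved (the MAC addresses here are made up, and the exact filename varies by distro):

```text
# /etc/udev/rules.d/70-persistent-net.rules (example MAC addresses)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:66", NAME="eth1"
```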

Hell, you can even change the kernel entirely, or the underlying filesystem type or any one of a million factors and it will carry on just as before, maybe with a complaint or two if you do anything too drastic but almost always with no ill-effects and a 2-second resolution.

The only piece of hardware on Linux that I have to "fiddle" with is a USB fax modem that has ZERO identification difference between two examples of itself. You literally have no way to assign them to fax0 and fax1 except guesswork - or relying on the particular USB port name, which wouldn't translate between computers. But the install has moved through four machines (from an ancient office workstation with IDE - sacrificial hardware to prove my point about its usefulness - to a state-of-the-art server-class machine with SAS RAID6 and redundant power supplies) without so much as a byte change - just me swapping the fax modems over rather than bothering to code the change.

And if the hardware breaks? No big deal - pull out the old machine and/or any random desktop machine (or even laptop) with enough ports, image it across byte-for-byte and carry on regardless.
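The "image it across byte-for-byte" step is essentially one dd command; sketched here with scratch files standing in for the real /dev/sdX devices:

```shell
# A scratch file stands in for the source disk /dev/sdX.
dd if=/dev/urandom of=old-disk.img bs=1M count=4 status=none

# Byte-for-byte clone onto the replacement "disk".
dd if=old-disk.img of=new-disk.img bs=1M conv=fsync status=none

# Verify the copy is identical.
cmp -s old-disk.img new-disk.img && echo "identical"
```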

People don't get that this is a BIG feature that they should be pushing - whereas with Windows I've heard (and seen) horror stories about RAID cards not working without the exact controller/firmware/driver combo that they were set up with, blue-screens and hangs and activation dialogs when you attempt something like that, not to mention HOURS of fiddling to get the image running exactly how it was on the original machine (if that's even possible). It goes along with the "plaintext" / "plain file" backup strategy (hell, my /etc/ is under automatic version control with two commands!), etc.
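The "two commands" for putting /etc under version control are roughly git init plus a commit (tools like etckeeper wrap this up more carefully); sketched here against a scratch directory rather than the real /etc:

```shell
# A scratch directory stands in for the real /etc.
mkdir -p etc-demo
echo "127.0.0.1 localhost" > etc-demo/hosts
cd etc-demo

# The "two commands": initialise the repo and snapshot everything.
git init -q
git add -A && git -c user.name=root -c user.email=root@localhost \
    commit -qm "baseline config"

git log --oneline
```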

The point of an OS is to make the software independent of the underlying hardware. Windows lost that independence a LONG while ago (Windows NT / 95). Linux still has it because of the underlying design of the whole thing.

Don't even get me started on restoring an "NT Backup" without having the exact correct hotfix/service pack setup that you were backing up from...

We had some Windows and Linux (CentOS) servers that were running on real hardware. We consolidated them to a VMware ESXi host. The Windows images moved over seamlessly and without issue. The core Linux box with svn, wiki, bug tracker, ... would not migrate properly, so we ended up reinstalling the OS and migrating the apps and data by hand. Overall, the Windows box took the time to copy the data plus 15 minutes, and the Linux box took the time to copy the data twice plus half a day to troubleshoot and reinstall.

Uh, you do realize VMware contains a huge amount of software to make that seamless MS Windows "physical to virtual" thing happen? Now, I myself have had to migrate Linux machines into VMware for certain clients; I've found it easy if the application configuration files are understood, Linux device naming and assignment priority are understood, fstab is understood, and the network plugging within VMware is done correctly.

I assume you used the VMWare Converter P2V tool to move your servers, which works very well for Windows and not as well for Linux. VMWare Converter fiddles with the underlying Windows configuration so the image will work well on VMWare.

If you had used a Linux cloning tool, such as Clonezilla, you probably would have had a different experience. Of course, some older distros such as RHEL4/CentOS4 also did stupid things, like building an initrd that contained only the SCSI driver needed to boot on specific hardware.

And don't forget that if you decide to upgrade from a single core processor to a multicore processor that there's an incredibly annoying procedure that involves doing a repair installation just to activate the other cores. Which I've had to do in the past and it's not fun, all because MS doesn't feel like providing a reasonable way of doing it.

I know what you're talking about as I have heard of doing it in NT4/2K, but I can say for certain that I did not have to do that in either XP or Vista when I upgraded from an Athlon 64 3200+ to an Athlon X2 3800+. Every computer I've worked on since then has been multicore, so I don't know if I just got lucky or what, but it just worked.

Also, at this point I don't think anyone cares anymore; it's unreasonable to expect such an update for old OSes, and no one has to worry about this on new builds since only

Generally it only happens if you trade up from a Sempron to one of AMD's pin compatible multicore processors or if you're using nLite OS and got some of the settings wrong. I don't think that Intel had offerings which would allow you to go from single to multicore without changing the motherboard, I could be wrong though. I'm sure it doesn't happen that much these days.

However, considering that XP was sort of the OS that this was most likely to occur with, they should have fixed it. I'm guessing the main re

My understanding is there are several different HALs: ACPI, ACPI-Uniprocessor, and ACPI-Multiprocessor. If your single core is using "Uniprocessor", it will automatically recognize new cores and convert to "Multiprocessor". If it's plain ACPI, it will still only recognize one core. What's more, in Win2K you can go into Device Manager and "update driver" to change the HAL. With WinXP you can only change DOWN levels, not up - that is, without an aftermarket hack, a program called "HALu" that's hard to find.

In my first IT job several years ago, my task was to create new backup systems, and by doing so I learned one of the most amazing things about Linux: the entire machine can be cloned with a single filesystem backup.

I tried to restore one of our webservers in an exercise. From a live-boot environment, I partitioned the disk, formatted it, mounted the filesystems, and rsynced over the root filesystem from backup. After that I installed the bootloader. I was just amazed that the new system booted.

Sure it did. I tried booting a Windows 7 32-bit installation on a different machine after the laptop died. Both were Fujitsu-Siemens laptops with Intel CPUs, bought about 2 years apart, but Windows did not boot even in safe mode. The installation CD has some 'boot repair' mode, but it did not manage to do anything useful.

I'll have to metoo on that. No luck moving installations, were it 2k/XP or Win7.
But I've moved the same Linux installation (originally installed as debian/potato(?), then repo-shifted to ubuntu/warty) from an HP Vectra (PPro 200) to a self-built AMD 1800MP, then to my current Intel Q6600. And every single time, even though all underlying devices changed, Linux just booted up. Sure, I did copying from HD to HD to move from older media, but the system itself didn't need major heart surgery.

It does, but you have to explicitly tell Windows before you shut it down "Look out, I'm going to be booting on different hardware next time around".

The purpose of this is to aid deploying to dissimilar hardware, and it works just fine. But in the scenario you describe, it wouldn't work at all, because you wouldn't get the opportunity to shut Windows down in this fashion.

Never mind changing motherboards... just try changing the mode of your SATA controller in bios settings (without doing registry changes before rebooting to change the bios setting). You'll be lucky if changing it BACK allows Windows to boot normally again without having broken itself. Windows Vista and Windows 7 are a regression in this respect, because they don't probe for storage controllers during boot anymore. (To shave a few precious seconds from the illusory fast startup times)

I had to add a single line to a text file and recompile the kernel (5 minutes once I knew what to do). Windows simply WOULDN'T. I downloaded the sata/ahci drivers but there is this REALLY cool catch-22 in xp. Obviously it won't boot in ahci until it has the drivers, but windows apparently thinks it's smarter than you and PREVENTS you from installing ahci drivers when running in ide mode. Yeah, THAT was helpful!
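The "single line in a text file" on the Linux side is typically building the AHCI driver into the kernel image itself (rather than as a module) so the root disk is usable at boot; in the kernel .config that would look roughly like:

```text
# Build AHCI support into the kernel image, not as a module, so the
# root disk is visible before any initramfs or modules are loaded.
CONFIG_SATA_AHCI=y
```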

The lack of a HAL and of a standardised DDK is a major turn-away factor from Linux for many; sometimes you can't go 'open source' if 3rd-party technologies and NDAs are involved. It would be more flexible to be able to just optionally plug in stuff without the hassle of sharing.

if only you could foresee that customers will want to use something other than Windows on a 1GHz Geode with 128MB RAM...

Seriously, Linux has huge market share in anything but desktops. If you make hardware you know that someone, somewhere will want to use it with Linux. Making the driver OSS from the start will save you tons of problems in the long run.

I believe CUPS is actually an example of a HAL. A single PPD file will let you drive that printer with any version of CUPS (Mac, Linux, FreeBSD, Windows, whatever) on any architecture (x86, x86_64, SPARC, Alpha, ARM, MIPS, MIPSel, PPC).
dbus provides some abstraction, libkb quite a bit, FUSE as well. There are some abstraction layers available for Linux systems; it's just that it's done through user space rather than the kernel.

PPDs are PostScript printer description files. They are nearly human-readable and only describe what the limits of the printer are. They are used with native PostScript printers.

Native PostScript printers have a craptonne (compared to non-PostScript ones) of processing power and memory, and do most of the work themselves; hell, I can plug a USB stick with PDFs on it into mine and get it to print without a PC at all. The catch, of course, is that you are generally looking at a few thousand for such printers.

Samsung ML-2850 and similar, for instance: costs around $130, has a network interface, is compatible with everything, and prints double-sided out of the box. The box even advertises it as Linux compatible. I'm not sure if it's possible to plug a stick into it, though.

Only downside to it I can see so far is chipped cartridges, but there seem to be workarounds for that.

The dirt cheap ones wind up seriously costing you in operating costs and tend not to live as long. A 5000-black-page toner cartridge for the one you listed was seen for $75 at the cheapest, $150 on average; mine is $40 for 6000.

Before the move:
1. Remove hidden Intel drivers.
2. Use something like Belarc to get your serial number, just in case.
3. Run sysprep -pnp -mini -reinstall -nosidgen -reseal -forceshutdown.
Then move the drive or clone it to the new machine.

Upon reboot, Windows should detect the new hardware. It may prompt you for the installation files if your hardware differs wildly, but that's all; it may also prompt you for your serial for a reactivation, but you noted that in step 2.

Except that as you noted, this prompts for activation. That's the purpose of sysprep -reseal, and I hope that it doesn't present any problems. What you are functionally doing with this is reinstalling the non-core OS components, which is... somewhat higher risk than otherwise....

Very well put. I was scratching my head over GP's post: "Why is HAL good again?" I was still trying to form up my thoughts as I read your post. Perfect. And you are exactly right. I've moved a hard drive from one machine to another and booted without ANY tinkering. The only tinkering that I've found necessary is when the video drivers are incompatible, i.e., an installed nVidia driver on a new machine that has a Radeon installed. And I believe that all *nix systems have an easy command line utility

No, it takes massive effort and expense, which is why Microsoft dropped those other three architectures, and the ARM port is still being worked on. Any Windows admin can tell you what a bare-metal restore does with the most minute of variations in hardware: it gets screwed up. We could give MS Windows the alternate name of "Failure of HAL".

Take a HDD from a Windows machine and put it in another PC, then try booting from it. I am convinced that in all but specific circumstances it will not boot.

On the other hand, my current home desktop is a pair of software-RAIDed disks that have been in 3 separate computers now (motherboard, RAM, video and sound output, etc.). I have not had a problem doing this. Sure, I now use "eth4" as my default network port, but nothing else of note is a problem.

Windows will work, it just has to be configured beforehand to do so, specifically running Sysprep to tell Windows to expect to boot on new hardware, at which point it will perform what is essentially a partial reinstall in order to support that new hardware. It is not as plug and play as Linux generally is, but if done correctly works quite well.

Sell it to me. What does ReactOS aim to provide that a modern Linux based distro doesn't already give me? Games? Bleeding edge graphics drivers for, uh, games?

Windows Apps.

I know you were trying to be snarky, but you failed.

Windows users can run just about anything Linux has to offer. It's either been ported to Windows natively or will most likely run with Cygwin or the like. Certainly anything with any sort of popularity has been ported to Windows.

On the other hand, the inverse is not true. Games, as you noted, are a big gaping hole on the Linux side, in most places where Linux does have some sort of comparable package it could hardly be considered a professio

The Linux kernel is very mature at this point, but some basic functionality like a HAL (hardware abstraction layer) is not present and not even planned.

Perhaps you should read this recent article on LWN about avoiding the OS abstraction trap. The core point to consider is that a HAL is a means to an end, not an end in itself. Linux's development doesn't need, and likely shouldn't have, a HAL like other closed OSes, precisely because it doesn't deal with binary drivers. Instead, code is frequently refactored, reorganized, etc., and the main issue is whether the user-space ABI stays intact. All pushing a HAL would do is further constrain the kernel to maintaining another set of user-space ABIs, which would likely end up being suboptimal since no HAL is perfect, and devote developer time to something that, instead of forming organically as hardware and code demand, would wall in expectations and the ability to provide functionality. Such might be great for a platform that's expected to be deployed, be infrequently changed, and for which driver development is a one-off affair, but that's pretty much the antithesis of the Linux kernel.

Linus is perhaps happy with the current 3.x state of Linux, but lots of people demand more.

I don't think Linus is "happy with the current 3.x state of Linux", but I wouldn't be surprised if he's happy with the development process in place, which he's a part of, that can change the 3.x line towards something better. The Linux kernel is constantly changing. There's unlikely to ever be a state, i.e. a single-point snapshot, where the Linux kernel makes most people happy, because there are too many people with too many diverse goals, and they all desire to change the Linux kernel from what it is to what it could be. That's the great thing about an open development model, where people can make that happen. And if nothing else, they can make their own fork of Linux if the Linus tree doesn't make them happy enough.

I recently ventured to the ReactOS website and have seen lots of activity in the SVN. This is maybe thanks to the Google Summer of Code 2011 ReactOS involvement; lots of commits on a daily basis in the trunk now, the project seems to be getting in motion again.

That's great news for ReactOS, and no offense to the ReactOS developers, but if I did Linux kernel development, I wouldn't be jumping on board ReactOS development. ReactOS is a noble project and I'm sure in the future I'll get a lot of use out of it, but I view ReactOS as a stopgap project. That is, it's something like Wine, which seems more than anything a way for those who are using Windows exclusively now to have a path to switching to Linux (or OpenBSD or whatever) while still being able to run the occasional Windows program.

I say this primarily because Windows is a massive beast of an OS, produced through decades of development. Trying to re-implement it with incomplete documentation, reverse engineering, etc. is a task likely to take many times as long, and as such I can, even optimistically, only see ReactOS as an open Windows 2000 or Windows XP clone for the 2020s or 2030s. Having more developers might speed up the process a bit, but assuming there's already a critical mass of developers to move development forward, I think the mythical man-month and the law of diminishing returns kick in pretty quickly, especially when it's hard to delegate a lot of the work when the things themselves are mostly a mass of "stuff we don't have documentation for but need to implement anyway".

Now, if one has a personal interest in having a complete open Windows clone, then please join ReactOS development. I'm certain they'd appreciate the help, even if it doesn't speed up the completion time very much. I certainly commend anyone who works to better an open project that will give advantage to oneself and others. But, I wouldn't seriously consider

1. It's still in alpha stage and it's aiming at a moving target. The idea is it will eventually be broadly equivalent to Windows XP/2003 - I confidently predict that by the time it becomes even remotely stable, we will look upon XP/2003 in much the same way as we look upon NT 3.51 today.

2. Patents. We've seen what happens when a disruptive Linux-based product comes on the market with Android - everybody and his dog is suing Google. The fact that Linux doesn't try to ape Windows - combined with support from the likes of IBM - has kept Linux on the server relatively free from lawsuits (with the obvious exception of SCO) - ReactOS doesn't have anywhere near the level of support from large commercial organisations; I can't imagine many smaller companies wanting to publicly support something that is essentially painting a big target on its back and shouting "Hey, Microsoft! Aim here!".

I recently ventured to the ReactOS website and have seen lots of activity in the SVN [...] lots of commits on a daily basis in the trunk now,

"Lots"? Really? Compared to what? How many do you think is "lots"? The Linux kernel was averaging ~70 commits per day from 2.6.13 - 2.6.27 (source [schoenitzer.de] - that's every day, for more than 3 years) and I'm pretty sure the pace has picked up a fair bit from that in the ~3 years since then, as hinted at by the right hand side of that graph.
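Commit rates like those cited can be checked with git rev-list; a toy repo is used here, but against a kernel clone you would count a tag range (tag names as in the parent post) and divide by the number of days:

```shell
# Toy repo; on a real kernel tree you'd run something like:
#   git rev-list --count v2.6.13..v2.6.27
git init -q demo && cd demo
for i in 1 2 3; do
    git -c user.name=a -c user.email=a@example.com \
        commit -q --allow-empty -m "commit $i"
done
git rev-list --count HEAD   # prints 3
```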

The kernel IS the HAL. That a few commercial OSes have additional ABIs defined inside their kernel is due to their closed-source nature needing an open public interface. The entire Linux kernel is open, and the entire thing is the HAL.

And sorry to sound snide but... ReactOS? Seriously? It's a cool concept, but ReactOS by design will always be too out of date to matter. They are reverse engineering an actively developed OS, and they have a fraction of a percent of the development resources devoted to it as the

ReactOS will catch up; the work seems to be almost done anyway. They only need to reach XP compatibility, and that will be enough for 90% of uses. In fact Linux is a moving target: you write a driver today, and tomorrow it will not work, because Linus in his infinite bazaarish wisdom decided to redesign and rename some parts of the code that you were interfacing with. Now ReactOS, if you have a driver for the WinXP scheme, then it will work today and, more importantly, it will work forever, thanks to well defined and

Actually, that message is ambiguous -- it doesn't specify whether it's master.kernel.org or github that will be "just a mirror".

Is there a difference? I suppose if Linus runs "git push kernel.org master" before doing "git push github master" then people grabbing from kernel.org might get the latest version a few seconds sooner.

Or maybe Linus will get tired of having to do both and add a "multiple remote alias" feature so that he can push to both simultaneously.
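In fact git already supports this without a new feature: a single remote can carry several push URLs, so one git push updates every mirror. A local sketch, with bare repos standing in for kernel.org and GitHub (names and the commit message are invented):

```shell
# Two bare repos stand in for the two hosting sites.
git init -q --bare mirror1.git
git init -q --bare mirror2.git

git init -q work && cd work
git -c user.name=l -c user.email=l@example.com \
    commit -q --allow-empty -m "Linux 3.1-rc4"

# One remote, two push URLs: a single push updates both.
git remote add origin ../mirror1.git
git remote set-url --add --push origin ../mirror1.git
git remote set-url --add --push origin ../mirror2.git

git push -q origin HEAD
git --git-dir=../mirror1.git log --oneline
git --git-dir=../mirror2.git log --oneline
```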

I wouldn't find this surprising at all. I don't see this as temporary by any means, but more as a 'losing-faith' factor; I'd do the same with my life's prized work as well. I bet that from now on, GitHub is the main pickup point for the latest/stable/greatest kernel releases. Personally I hope it isn't, and that it instead becomes just another avenue to get the kernel source.

With a traditional VCS you have all clients acting directly on one repo with a linear history. Clones/backups may be taken, but in order to prevent a total mess everyone must agree on which repo is the master. If the master goes down, everyone must agree on a new master or a horrible mess will ensue.

With a DVCS every checkout is a repo and changesets are pushed or pulled between the repos and history is designed to be nonlinear. However there is

First, GitHub has been around for quite some time now and is just hosting for Git - hardly "vague" (is that even the word you were looking for?) - and by your argument, shouldn't SourceForge also cost money now? You know that a massive load also comes with massive numbers of visitors and publicity, and that bandwidth is cheap now, right? They are getting free direct advertising to programmers all over the net. How is that bad for them, exactly?

Here's my prediction: right now this site is "free for OSS". kernel.org will bring a massive load (so will Slashdot). How long before policies change and people will need to cough up in order to reach kernel source code?

And how long before every OSS project just moves to a different host as soon as those policies change? Somehow I don't think the policy is going to change.

Exactly which capable version control system are you referring to? CVS, or its stepchild SVN? Haha, they're all brittle garbage that doesn't scale up. SourceForge gives you a CVS or Subversion account (or you can link to your own system - oops, that's back to square one).

oh, those commercial unix implementations or freebsd scale from a handheld device to a supercomputer the size of a city block? FreeBSD is still trying to figure out how to run on 8-way or more SMP without seizing up under high load (check the warning on their web site). Whatever cool things from the past it has, Solaris is going down the tubes under Oracle, to be a one trick pony to run Oracle on their (well, Fujitsu's actually) hardware only. Wail and weep, commercial unix boy, your world is collapsing, and Big Blue and a Penguin are stomping it.

So why isn't BSD used on the stock exchanges? It simply can't pass messages as quickly.
In terms of stability, security, and backwards compatibility, the Unix'es may still be better, but in terms of raw performance and the pace of development Linux wins, and has been winning for a long time.

So why isn't BSD used on the stock exchanges? It simply can't pass messages as quickly.

This is 100% false.

In terms of stability, security, and backwards compatibility, the Unix'es may still be better, but in terms of raw performance and the pace of development Linux wins, and has been winning for a long time.

"Linux is unstable, insecure, and breaks compatibility all the time, but it releases new kernel versions all the time!"

Alright, there are millions or even billions to be made by a system that can pass messages even a small fraction better than the competitors'. This is why I'm sure it's 100% true.

It's stable and secure enough. If you absolutely need stability and security, go look at the microkernels (which have their own set of issues). Linux adopts more features and does it more quickly than anyone else; this of course comes because they are willing to break things if necessary, and if something is good enough to let it

It doesn't work that way with niche products that are so closely related. Pretty much anyone that uses Linux and a revision control system knew what Github was a year ago. If they were going to be customers, they would be.