
ruphus13 writes "As mentioned earlier, there was a kernel bug in the alpha/beta version of the Linux kernel (up to 2.6.27 rc7), which was corrupting (and rendering useless) the EEPROM/NVM of adapters. Thankfully, a patch is now out that prevents writing to the EEPROM once the driver is loaded, and this follows a patch released by Intel earlier in the week. From the article: 'The Intel team is currently working on narrowing down the details of how and why these chipsets were affected. They also plan on releasing patches shortly to restore the EEPROM on any adapters that have been affected, via saved images using ethtool -e or from identical systems.' This is good news as we move towards a production release!"

I know this is News For Nerds and all that, but isn't this a tad specific?

An alpha/beta of the most recent linux kernel patch had a bug fixed, and it hits the front page?

Don't get me wrong, I'm glad they found it, but this is kinda the point of debug cycles. If we start reporting every bug squashed in all the major open source projects out there, this is going to go downhill fast.. (of course, it's possible some may think the Idle section is only a step above..)

I mean, who in their right mind would call a PC without an operating system bricked? Is a system bricked just because you have to put in a floppy to install an MBR and a command environment (a la the three DOS install disks of yesteryear)?
Compare that to running an operating system like DOS on an old Athlon that didn't have a big enough heatsink/fan or any ACPI or 'hlt' support built in, with the processor overheating to the point of literally burning itself up.

Try erasing the BIOS on the main board and your comparison will be more accurate.

This bug actually flashed the firmware of the network controller and hosed access to it in some as-yet-unexplained way. That is noteworthy because of its rarity. If it were simply hosing something readily diagnosable and more common, like a boot sector, it would be a different story. It isn't often that software is associated with hardware damage, either purposefully or accidentally.

BTW, I know there are recovery methods for a hosed BIOS. That isn't the point. Simply installing an operating system shouldn't hose it, nor should it hose hardware. Imagine all the people who just thought their card was broken and went for a refund under warranty, or the bad name Intel or Linux received for a "faulty shipment of devices" or for the ability to break a device. A card that worked in Windows would, after loading Linux in a dual-boot setup, stop working in both Windows and Linux without any errors or any indication that the card could even be seen by the mainboard.

It was even more fun. Once the card was hosed, not only would it not work, but it required a bit of hacking to get it recognized enough to attempt a re-flash (assuming you had an image of the correct contents to flash in).

The exact cause was mysterious as well since it didn't happen to everyone, nor was it predictable if or when it would happen.

Kinda depends on the definition. Some of the technorati won't consider something bricked until it is literally physically broken and beyond repair. Some include situations that require hardware intervention (e.g., desoldering and swapping an SMT-mounted ROM) or specialized tools with limited availability (e.g., special reflash equipment only available to the manufacturer of the device) to fix it.

From what I can gather from a quick skim of lkml, it is a bit uncertain as to how bricked these cards are - that

Even for alpha, that's stupid. Something I've come to expect from Linux and its "I've got to be the neatest" mentality.

Even for Anonymous Coward, that was a stupid thing to say. A bug existing in alpha or beta versions does not constitute shoddy software overall. That is, after all, what alpha and beta releases are for. I don't need to catalogue the bugs in Windows that are never even acknowledged, let alone fixed, but production releases of Linux are generally as solid as anyone could wish, and bug repo

No, that is what is to be expected from an alpha; anything else means you're just taking unnecessary risks. Alpha means the code has been developed and tested internally, NOT with your programs, NOT with your hardware. Now, if you run Linus's or Morton's machine then you will probably not come across this kind of bug, but anybody else is running essentially untested code. While BSD claim they review their code, the fact that this bug wasn't caused by somebody commenting out #do not break drivers foo means that a

An alpha/beta of the most recent linux kernel patch had a bug fixed, and it hits the front page?

They have not fixed the bug that caused the e1000e ethernet cards to get bricked. This is at least a two part bug. The EEPROM should not have been writable and Something Is Happening to cause bad writes to happen. What that "Something" is, no one knows yet, though it appears they are getting close.
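The mitigation described above (make the NVM read-only once the driver is up, even before the root cause is found) can be sketched as a simple write lock. This is a toy model: `Nvm`, `lock`, and the use of `PermissionError` are invented for illustration and are not the actual e1000e driver code.

```python
class Nvm:
    """Toy model of a NIC's non-volatile memory with a post-init write lock."""

    def __init__(self, size=64):
        self.data = bytearray(size)
        self.locked = False          # writable only during early driver init

    def lock(self):
        """Called at the end of driver load; no writes are accepted afterwards."""
        self.locked = True

    def write(self, offset, value):
        if self.locked:
            raise PermissionError("NVM is write-protected after driver load")
        self.data[offset] = value

nvm = Nvm()
nvm.write(0, 0xAB)   # allowed during init
nvm.lock()           # driver finished loading
try:
    nvm.write(1, 0xCD)
except PermissionError:
    pass             # a stray write can no longer corrupt the EEPROM
```

Note this only contains the damage; it deliberately does nothing to explain *why* the bad writes happen, which matches the summary's point that the root cause is still being narrowed down.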

Linus is absolutely, totally anal-retentive about fixing bugs by understanding and fixing the root cause[1], not just papering over them. This patch papers over it for the moment, because the bug hasn't been isolated yet, but it lets more people participate, since the side effects were really nasty: this was a true bricking of the ethernet card.

This stage isn't newsworthy for Slashdot.[2] It must be a slow news day.

[1] This is a Good Thing.

[2] Nor will the real bug fix be newsworthy when it comes. A bug is found, a bug is fixed. Life goes on.

I know this is News For Nerds and all that, but isn't this a tad specific?

That's what sections are for. See the little Tux Icon over there? We all care about Linux. Besides, it's a VERY IMPORTANT BUG. A showstopper, so to speak. And keep in mind that a lot of people in here are kernel freaks. They want to test-drive the latest versions of the kernel. And one of the reasons why people keep coming here (and not to digg) is precisely for this kind of news.

What I find newsworthy is that I can now expect the latest Windows worm/trojan/virus to brick a whole bunch of network cards (at work; don't throw stones), since it's now more clearly documented that it can be done.
I think it was mentioned in a previous article that the real bug is that bricking through software is possible at all.

If Windows worm writers want to do hardware-bricking evil, there isn't exactly a lack of potential targets already out there. It is not impossible to write a program to trash the firmware on many video cards, HDs, DVD drives and the like. But in those cases you do tend to have to try to be evil, not just get an address wrong.

The difference in this particular situation is that the e1000e fw got trashed by accident as opposed to by a program specifically written to do so.

He's got good reason. It should be impossible for the system to write to the EEPROM without special measures being taken, possibly a jumper that has to be removed to allow it. And, if possible, the card won't work right (in some way that doesn't prevent boot) until the jumper's put back to normal. That way, if you really have to re-flash it, you can, but it's not going to happen by accident.

I remember having a motherboard with a jumper that had to be specially set to update the BIOS. The smart way was to power down, open the case, and pull the jumper so that you could flash the EEPROM. Then, of course, once that was done, reverse the procedure for safety. I always regarded anybody who left the jumper off for the rare convenience as a fool who deserved anything that might happen.

Well, I haven't needed to do a BIOS upgrade in this millennium, I think, and I only had one motherboard that needed a jumper change. As far as your comedy of errors goes, anybody who didn't plan ahead and make sure the update was already on the hard disk before starting deserves all the problems you described. And, of course, flashing the EEPROM on a NIC should be a rare event. Nice strawman, though.

Well, I haven't needed to do a BIOS upgrade in this millennium, I think

Good for you...

I know my keyboard has had its firmware upgraded at least once. I haven't had to do a BIOS update for awhile...

I do remember a series of incremental improvements to the whole process:

The very first time I flashed a BIOS, it was relatively easy -- just run the BIOS update program (in Windows), which formats a floppy for me, which I then boot off of. After booting the floppy, I still have to dump the BIOS, then load the new one -- from a DOS commandline.

And, of course, flashing the EEPROM on a NIC should be a rare event. Nice strawman, though.

Doing any kind of firmware upgrade should be a rare event. At minimum it should involve first shutting down the driver accessing that piece of hardware. If the peripheral is designed sensibly an "upgrade firmware" command would require some kind of "handshake" and only be accepted as the first command after a reset.

Jumpers are not really used a lot these days. They cost extra and are clumsy to handle (you need to open the case). You are right that it would be really good if some precautions were taken so no accidental writes happen (for instance, requiring some special command sequence that is hard to trigger accidentally), but often those EEPROM chips just have a simple serial interface, and reading and writing work almost exactly the same. A couple of years ago you could easily overwrite the EEPROM of Hauppauge TV cards (though there

Given the cost of EEPROM space, I think the better answer is to double the size. One half is readable, one writable, at any point in time. To update, you write, turn off, flip the jumper across to the other side (or, heck, just use a physical switch) and you're done. Bricking isn't absolutely impossible (you could write a damaged image to one half which wipes the other when it boots), but essentially infeasible.
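The dual-image idea above can be sketched roughly like this. It's a toy model: all the names are invented, and the physical jumper or switch flip is modeled as a method call.

```python
class DualBankEeprom:
    """Toy model of an A/B-banked EEPROM: writes only touch the inactive
    bank, and a separate switch step selects which bank is booted from."""

    def __init__(self, size=32):
        self.banks = [bytearray(size), bytearray(size)]
        self.active = 0              # bank the device currently boots from

    def write(self, offset, value):
        # Only the inactive bank is ever writable; the running image is safe.
        self.banks[1 - self.active][offset] = value

    def switch(self):
        # Stands in for the physical jumper/switch flip described above.
        self.active = 1 - self.active

d = DualBankEeprom()
d.write(0, 0x99)                     # lands in the inactive bank
assert d.banks[d.active][0] == 0     # running image untouched
d.switch()                           # operator flips the jumper
assert d.banks[d.active][0] == 0x99  # new image now active
```

As the comment above notes, this doesn't make bricking strictly impossible (a bad image could itself misbehave after the switch), but no runaway write can ever hit the image currently in use.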

It is not uncommon to require a set of magic numbers to be written before writing to protected memory. The magic numbers and/or access pattern is designed so that no simple or likely hardware failure will allow unprotected access. Small discrete or integrated EEPROMs often have this functionality built in.
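A minimal sketch of such a magic-number unlock sequence follows. The key values and class names are invented for illustration; real parts (e.g. MCU flash controllers) use vendor-defined constants, but the shape is the same: the exact sequence is unlikely to be produced by a runaway pointer or a glitching bus.

```python
UNLOCK_KEY1 = 0x45670123   # invented example keys, not from any datasheet
UNLOCK_KEY2 = 0xCDEF89AB

class ProtectedEeprom:
    """Toy EEPROM that only accepts writes after a two-key unlock sequence."""

    def __init__(self):
        self.data = bytearray(16)
        self._stage = 0              # how far through the key sequence we are

    def write_key(self, value):
        if self._stage == 0 and value == UNLOCK_KEY1:
            self._stage = 1
        elif self._stage == 1 and value == UNLOCK_KEY2:
            self._stage = 2          # unlocked
        else:
            self._stage = 0          # any wrong value re-locks the part

    def write(self, offset, value):
        if self._stage != 2:
            raise PermissionError("EEPROM locked: key sequence not written")
        self.data[offset] = value
        self._stage = 0              # one write per unlock, then re-lock

e = ProtectedEeprom()
e.write_key(UNLOCK_KEY1)
e.write_key(UNLOCK_KEY2)
e.write(0, 0x5A)                    # succeeds only after the exact sequence
```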

I remember that. Doing it by hand, or with the open-source tool (I think it was called Xconfigurator), was scary and full of warnings like "This may damage your hardware." So scary that there was a commercial, non-free program which did nothing but configure X "more safely." I remember one friend of mine being excited when it came out, because there was finally warez for Linux. :P

Linus has a very good analogy here -- in fact, I love the fact that on the rare occasions I have to set modelines myself, I can pretty much put whatever I want, knowing that if it doesn't work, I can just ctrl+alt+backspace and try again.

But the conclusion does bother me: We're basically saying that all software is buggy, or that we're incapable of preventing this kind of thing from happening (in software). This is true of most modern OS designs -- monolithic kernels do make it possible for pretty much any driver to accidentally ruin any other driver's day.

The proposed workaround, then, is to prevent that memory from being written -- and to prevent this in hardware, for no other reason than to avoid having to write it into every kernel that might potentially allow buggy code to run in Ring 0.

I don't like either solution. Hardware shouldn't be brickable from software, or at least, not so easily. But software shouldn't need hardware to coddle it, either -- why is the SSD in this laptop emulating a hard disk?

ATA's wire protocol uses a hardware abstraction over block storage devices, as does USB Mass Storage Class. The hard disk is emulating an ideal block device, and the SSD is also emulating an ideal block device.
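The point can be sketched as a common interface that any medium can sit behind. This is a toy model with invented names, not any real driver API: the host code only ever sees numbered fixed-size blocks, regardless of whether platters or flash pages live underneath.

```python
class BlockDevice:
    """Minimal ideal-block-device interface: numbered 512-byte blocks."""
    BLOCK_SIZE = 512

    def read_block(self, lba):
        raise NotImplementedError

    def write_block(self, lba, data):
        raise NotImplementedError

class RamBackedDisk(BlockDevice):
    """Stands in for either a hard disk or an SSD. The host can't tell the
    difference: geometry, remapping, and wear leveling are all hidden."""

    def __init__(self, nblocks):
        self.nblocks = nblocks       # capacity; not enforced in this sketch
        self.blocks = {}

    def read_block(self, lba):
        # Unwritten blocks read back as zeros, like an ideal device.
        return self.blocks.get(lba, bytes(self.BLOCK_SIZE))

    def write_block(self, lba, data):
        assert len(data) == self.BLOCK_SIZE
        self.blocks[lba] = bytes(data)

disk = RamBackedDisk(16)
payload = b"x" * BlockDevice.BLOCK_SIZE
disk.write_block(3, payload)
```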

This has been the case for a long time. Even with parallel IDE the drive geometry reported by the controller was typically a complete fiction. Another common feature is the ability for the drive controller to transparently remap failed blocks. Which means that by the time the host actually starts se

At least for consumer hardware we have come to expect that it cannot be damaged by buggy software, but in general it is not true that hardware should always protect itself against bad software. Just consider much of embedded software, e.g. the flight software for aeroplanes. Wrong software will result in "hardware damage"; the same goes for most robots, etc.

I am quite sure that even a microprocessor-driven washing machine nowadays could damage itself if its (embedded) software were buggy.

At least for consumer hardware we have come to expect that it cannot be damaged by buggy software, but in general it is not true that hardware should always protect itself against bad software. Just consider much of embedded software, e.g. the flight software for aeroplanes.

Hence you'd never upgrade the firmware on all the redundant computers on an airliner at the same time; typically there is a minimum interval (both by the calendar and in hours flown) between such upgrades.

About a year ago we built up some new machines to run Linux and found that multiple e1000 cards would cause the Ethernet connectivity to drop and become useless. We ended up replacing them with much cheaper Realtek cards and all the problems disappeared. I haven't trusted Intel since. It's as if there were some buggy interrupt interaction with the on-board Intel Ethernet in the 915 chipset.

I've never had a problem with their cards. They're about the only NIC that I've never needed to mess with to get Linux to see. NICs built into the motherboard NB/SB are usually the biggest problem. The PCI-X cards work in PCI slots, and in the tests I've done they're usually able to push 30-40% more data through the network than other NICs.

3Com used to be that way too. I'm not exactly sure what it was, but the 3c905s rocked and would move data quite a bit faster than any other card at the time. I know they had a full-blown data processor on the card, but I assume the others did too. I used to go to computer shows just to pick them up for $10-$20 used, because they had the same effect on data performance that going from an S3 Trident video adapter to a GeForce card has on rendering. I became seriously convinced at a LAN party: on an AMD Athlon 800 system running Windows 98SE with 256 MB of memory, we had to pull a 100 MB file from a file server to sync the updates for a game. I started pulling the file last because I was helping others find it, and I was on the tail end of the third tier of uplinked switches, yet I had the file installed while others were still transferring it. The funny part is that people with their brand-new Windows XP systems at 1.4 and 1.8 GHz plus were still slower, and the only thing I can attribute it to is the NIC.

Intel caught up with 3com in this aspect and despite my older fascinations with 3com, I'm actually an Intel fan in this one respect now.

I know cheap Ethernet interfaces are slower than the fastest cards out there, but your experience, from many years back when an 800 MHz CPU was fast, is a bit dated. A 100 MB file shouldn't take long to download from a file server even with a cheap NIC unless there is a performance issue with the file server in question; 100 megabytes shouldn't take more than a few seconds to transfer across a LAN.

Of course, the 800 MHz system was when I first noticed there was a difference, back in 2000/2001, and things have come along since then.

But to reach the maximum speeds, you have to make sure you have newer equipment capable of hitting them, and that the lines are in near-perfect order. You also have TCP overhead that inflates the transmission size of the 100 MB file, and other factors to consider, like multiple users accessing the same interfaces, the amount of

I had the same thing pop up on a supermicro (ICH-7, IIRC... dual Xeon 5xxx's) at work. Recompiling the modules and reinstalling them seemed to fix the problem. Like most hardware problems, it seems to be just the wrong combination of drivers, hardware, software and luck.

I think a yum update is what triggered it, but I'm not sure; it just popped up out of nowhere and acted in such a way that I couldn't ever corner the thing. Recompiling the modules was one of those things that I did while I was thinking

It's funny you say that. A few years ago, I asked on a mailing list for the most Linux-friendly gigabit ethernet card, and almost everyone said e1000. I've been happy with mine ever since. My distro was a bit too old for the card, but I was able to download the drivers from intel.com and install them without any problems.

Yes, they released a patch so that the NVM can't be overwritten after the e1000e driver is loaded. But from what I can tell, they still don't know what is/was responsible for the overwriting.

FWIW, I'm almost positive that modern CPUs have debug traps for this exact sort of thing...you can trap arbitrary I/O writes via SMM or something...obviously I'm not in the debug loop, but I don't see why this has been so hard to figure out...

It makes me wonder if they have the tools available to do their job. When I did this type of work we had analyzers and ICE machines which makes it easy if you know how to use them. Are the kernel designers getting enough support to buy the needed hardware? Sometimes these things go beyond the software and can happen because of a physical condition that is untrappable in SMM, like a DMA over the top of refresh cycle fault.

From what I've read, the bug causing the overwrite is somewhere other than the network card's driver. Something is overwriting random memory, and it happens to hit the memory region mapped for writing the card's firmware.

Very easy, if the card is designed to have field-updateable firmware. You just need to send it the right (or in this case wrong) command.

Ideally the manufacturer would make it so that you have to go through all sorts of hoops before you've done anything permanent, but this isn't the first time [theregister.co.uk] something like this has happened.

What I am complaining about is the lack of proper testing in Linux. If there were proper tests for the module that does the overwriting, the problem would never have occurred at all.

Are you trolling or do you honestly not understand the implications of it being an alpha release?

In other words: "This release is for testing purposes; by all means report a bug if it breaks, but don't be too surprised if the breakage is catastrophic. If you use this on something important, you are nuts and should seek help." In traditional, closed-source development, alpha releases are produced too, and they may or may not break things. Now, for software living entirely in userland you probably won't cause hardwa

So the thing is, there is more than just a simple "eeprom write interface" on these chips.

Most of the time the EEPROM attached to the NIC is a cheap, small serial EEPROM part, usually just a few KB.. maybe 32 or 64 KB. It contains mostly things like a bit of bootstrapping, a few "permanent" settings like the MAC address, and the PXE ROM.

And that's where the problems come in. This serial interface is usually an afterthought, and if there is noise on that bus, bits can flip. Or if something bad happens in the NIC code, you could accidentally write when you meant to read.
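One way to see how small the margin is: on common 25-series SPI EEPROM/flash parts, the READ (0x03) and WRITE (0x02) opcodes differ by a single bit. Whether the part on these particular NICs uses exactly these opcodes is an assumption; the sketch just illustrates the "one flipped bit turns a read into a write" failure mode.

```python
# Standard instruction opcodes for 25-series SPI EEPROM/flash parts.
READ_OP  = 0x03   # 0000 0011
WRITE_OP = 0x02   # 0000 0010

def hamming_distance(a, b):
    """Number of bit positions in which two words differ."""
    return bin(a ^ b).count("1")

# The two commands differ by a single bit, so one glitched bit on a noisy
# serial bus is enough to issue a write where a read was intended.
print(hamming_distance(READ_OP, WRITE_OP))  # -> 1
```

(Real parts do also require a separate write-enable command before a write sticks, which is exactly the kind of precaution discussed elsewhere in this thread, but the opcode proximity shows why the interface leaves little room for error.)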

Usually this is recoverable, but I haven't looked into this specific corruption situation. I've had to deal with this kind of thing before. It's not fun.

Flashing NIC eeproms isn't something a normal end-user does all the time. 99% of the time it's written at the factory, stuffed on the board, and forgotten about.

From what I can tell, the bug is only being seen on bleeding edge combinations of software in bleeding edge distros. They're thinking it's a combination of the driver and a new release of X (one allows for the conditions, the other glitches after that), but there's very little 'tried-and-true' stuff in a bleeding edge distro.

From RTFA, the cause of the problem has not been identified yet; however, the problem is prevented from presenting itself going forward by blocking writes to (and erases of) the non-volatile memory. Since the problem was caught at the alpha/beta stage, the stable releases were unaffected.
BTW, my boss tried to RTFA over my shoulder and shot cheese out of his ears (he is the non-techie type). It's threads like these that absolutely cement /.'s place as the world's dominant UBER NERD site.

If there's a whoosh, I don't get it either, other than that it has to be...

I don't think Intel makes solid state drives. Nor does Intel make the EEE PC. Nor does any EEE PC ship with an experimental kernel. Nor does an ethernet card have anything to do with a hard drive.

Some quick Googling shows that the 901 may have gigabit, maybe not -- and if it did, and if they were this particular Intel card, you might be affected. Which would still have nothing to do with the SSD.