
2U*U2 writes to mention an eWeek article about an entry in the Month of Kernel Bugs. John Ellch has discovered a critical vulnerability in the Broadcom wireless driver: a driver used in machines from HP, Dell, Gateway, and eMachines. From the article: "[The bug] is a stack-based buffer overflow in the Broadcom BCMWL5.SYS wireless device driver that could be exploited by attackers to take complete control of a Wi-Fi-enabled laptop. The vulnerability is caused by improper handling of 802.11 probe responses containing a long SSID field and can lead to arbitrary kernel-mode code execution. The volunteer ZERT (Zero Day Emergency Response Team) warns that the flaw could be exploited wirelessly if a vulnerable machine is within range of the attacker."

Broadcom users on Linux should really be using the bcm43xx kernel module by now.

Anyway, the flaw wouldn't affect Linux systems. Why? Different kernel.

NDISWrapper executes the Windows Kernel Mode NDIS driver in the Linux kernel's address space. So it might still result in code injection. It might even extend to FreeBSD when running bcmwl5.sys under its equivalent as well.

> Broadcom users on Linux should really be using the bcm43xx kernel module by now.

Out of the table of ten global "chip family IDs" listed here [berlios.de], only 3 are currently listed as supported; the others are at best "unstable".

And personally, I didn't manage to get a BCM4318 "Air Force One"-based card (no, I didn't buy it, it was "inherited") working with the native module (Ubuntu Dapper). Sigh. Guess it's time to fish out the long cables until the Windows drivers get patched.

The BCM4318 in native mode (i.e. using the Linux driver) will only work at reduced speed and transmit power; currently I think it's officially listed as unsupported in Ubuntu (11 Mb/s and 18 dBm). Using ndiswrapper, the driver forces the card from mode0 to mode2, and the card works reliably at 54 Mb/s and transmits at 25 dBm. What's mode0? What's mode2? You could ask Broadcom, but they don't answer. Personally I would boycott Broadcom products and go for a more Linux-friendly company's chipset such as Ralink; unfortunately, with…

Broadcom users on Linux should really be using the bcm43xx kernel module by now.

They want to, but bcm43xx is still unstable in long-term use for some chips. It will work happily for a few hours, or even days, and then something bad happens (ranging from dropped connections to panics). A lot of people have blacklisted this driver and gone back to ndiswrapper [google.co.uk] (e.g. new installs of Mandriva 2007, Ubuntu 6.06).

I personally had the bcm43xx drivers cause system instability with two very different machines an…

There is such a driver in the most recent Linux kernels, but it still uses firmware extracted from Broadcom's Windows drivers. So if the bug is in the firmware, it could even affect Broadcom's native Linux drivers.

Don't forget about people using NDISWrapper, which is the only way to get such cards working on Linux at all unless someone has written a driver recently.

There is a native driver [berlios.de], but neither it nor ndiswrapper work worth a damn with my AMD64 Gentoo install. For the time being, I've given up on getting Linux WiFi working and just hang a Linksys WRT54GS [linksys.com] off the network jack when I need to connect to someone's wireless network.

The bcm43xx driver included in the kernel cannot function without the firmware contained within bcmwl5.sys. So there isn't any way to determine (from this particular article) whether the bug affects Linux or not.

If the bug's in the firmware, it would be very difficult to exploit it to run code in the kernel. Not impossible, but very difficult. The description of the bug makes it sound extremely like the problem is in the driver, not the firmware.

Yeah. In third-party drivers for a third-party wireless adapter. He still hasn't disclosed any information on a bug in Apple-supplied wireless drivers for Apple-supported wireless devices, even though he was offered stuff for actually proving what he'd said (John Gruber, for example, offered to give him two brand-new fresh-out-of-the-box MacBooks if he managed to hack them).

Gruber challenged them to hack a MacBook (not two) with many stipulations. The challenge was to be videotaped and the conditions were not under the control of the hackers. If the challenge was not met, the hackers would have to pay for the machine. The results of the videotaping were the property of John Gruber.

There are plenty of reasons for not accepting the challenge. They may have felt that there was too much risk that they didn't want to accept, they may not have given a shit about John Gruber (likely), they may not have wanted to contribute to his pro-Apple site, or they may have had no interest in the lame reward offered. A MacBook may be exciting to you and John Gruber, but probably not to them.

Just because additional details were not provided on demand to Apple loyalists does not mean that vulnerabilities didn't exist. IMO the test configuration was chosen because it was the easiest one to demonstrate the flaw. That doesn't mean it's the only one that contains the flaw though Apple apologists have always insisted otherwise.

Their problem is that they made ambiguous claims, and were given many chances to clarify their claims. Specifically, do they have a working exploit that can take over a clean, up-to-date MacBook with no user intervention other than that the AirPort card be enabled? They've never directly answered that question. NEVER. And they've been given multiple opportunities. Gruber's challenge was meant to fina…

"The primary reason being that they couldn't do it."Only they know the answer to that, not you. I figure they don't consider themselves circus animals and don't have any interest in jumping through hoops on orders from a clown like Gruber. It's also conceivable that they may have had some sort of agreement in place that restricted what they could publicly disclose.

"Their problem is that they made ambiguous claims, and were given many chances to clarify their claims."

"In every conceivable way, the hackers in question have failed to give me any reason to believe they actually had an exploit against Apple's AirPort drivers."In every conceivable way? The fact that they demonstrated the flaw and said it existed in many configurations including Apple's isn't conceivable to you? You live in a fantasy land where Apple can do no wrong.

"But given they've not given me a reason to believe them, why on earth should I?"

"Yes, you do. You're doing it right now."I have not. All I've done is consistently counter the claims by you and others that the researchers were frauds who failed to support the claims of flaws on the mac platform. I've never defended them other than to support that that could have been right and I've offered reasons to explain non-response to public criticism. It is you that is making indefensible claims here, not me. I have no position one way or another on their claims other than I take them at face

Who says they didn't? Full *public* disclosure isn't necessary for that. The fact that Apple did, in fact, update their products shortly afterward suggests that there was sufficient disclosure for that to happen.

John Gruber is a grandstander who attempted to turn the spotlight onto himself. His offer couldn't possibly match what the researchers hoped to gain (and most certainly did gain) through consulting on the issue. There were reasons why the researchers presented the flaw, and chief among them certainly wasn't to increase attention for an Apple cheerleader's blog. Odds are that they wanted to get paid (and I don't mean with a single MacBook).

"So you acknowledge that it was a stunt and a lie. Good."I acknowledge that what you just said was a stunt and a lie.

"Uh yeah? His conditions seemed pretty standard stuff it you look at it from a scientific standpoint."

Sure, videotaping and having the camera controlled by John Gruber is part of the scientific standpoint. I'm sure that offered plenty of encouragement. Offering a measly reward worth, at best, a small fraction of what they could have gotten from Apple, was part of the scientific standpoint.

...A heap buffer overflow exists in the AirPort wireless driver's handling of scan cache updates. An attacker in local proximity may be able to trigger the overflow by injecting a maliciously crafted frame into the wireless network. This could lead to a system crash, privilege elevation, or arbitrary code execution with system privileges.

"If their hypothetic flaw is now patched, why don't they demonstrate their attack on an unpatched macbook unless "because they didn't find a flaw in the first place"?"There you go proclaiming the reasons for things you don't understand. Who says they haven't demonstrated it? They just haven't demonstrated it to you.

"You know, scientific method 101 says it's not ours to disprove that they found a flaw, it's theirs to prove they found one in the first place, which they've never done."

I mean, it's bad enough that people always talk about "Computer viruses" instead of "Windows viruses" and so on, but come on, can we please include *some* information in the post itself?

Admittedly, the article to which this newspost links also doesn't mention this until the third or fourth paragraph or so.

At first I thought the article was about the Linux kernel; in that case I would have wanted a (global) list of the affected OSes/versions as well, because my laptop might have been vulnerable!

Quote:
"Microsoft's Windows operating system is exploitable without the existence of an access point or any interaction from the user. The card's background scan of available wireless networks triggers the flaw," the group said.

The bug was first discovered by wireless security guru Jon "Johnny Cache" Ellch, the researcher who was embroiled in a controversy with Apple over similar bugs in the Wi-Fi driver that ships with Mac OS X.

I was tempted by wireless, but given I don't have a laptop, I grabbed a couple of these HomePlug devices (twenty quid each) which plug into a mains socket and send data around the house's mains circuit. It may not be as 'go anywhere' as wireless, but in the light of this I guess it's more secure.

> I was tempted by wireless, but given I don't have a laptop, I grabbed a couple of these HomePlug devices (twenty quid each) which plug into a mains socket and send data around the house's mains circuit. It may not be as 'go anywhere' as wireless, but in the light of this I guess it's more secure.

In fact, I do. But relying on HomePlug to "protect you" much better than WiFi is a little bit of folly. No, you can't have the article's attack made on you. But as the parent poster has pointed out, you're not as protected as you think: someone can snoop in on your traffic if they've got their own HomePlug adapter and can tap into either phase of the two-phase 220V circuit coming into your house. With clever enough hardware they wouldn't even need to do that; it emits enough RF-like signal... Fixed-wiring Ethernet is pr…

I saw some powerline adapters with 56-bit DES encryption. That's not terribly secure. Your security is mostly based on the fact that the bad guys cannot plug into your mains. Which is probably good enough for home use.

How do these things work on different ring mains? Given that the first floor of my house is on a different ring main (not unusual obviously) to the ground floor, how would I be able to communicate between them? The fusebox is the only place they come together. Can the RF make the jump between them?

Fusebox is one place. Power meter's the other. I suspect that most PLC implementations for homes have relied on the meter to handle the cross-over between phases, etc. If you've got two meters, I suspect you'd need a bridge of some type, like the X10 booster bridge for homes, to bridge them all without mixing the power from each feed.

They either 1) don't run static analysers, or 2) run them but punted on the bug. Which is it, Broadcom? Either way it is negligence. I'm tired of developers spouting hot air about being Accountable, Responsible and Reliable etc., blah blah, especially about practicing good engineering, and hearing about design patterns (yawn). I hear it every day. I worked as a dev and left it as it's the same old shit every day, day in day out; same for test.

Does my "reverse engineered" linux driver have this bug. I guess not. Why is it that a bunch of people who don't get paid come up with bug-free solutions? I guess, either they love their job very very much, or its just the development philosophy or both:)

Well, they are doing it for free and in their spare time, so I think they would actually *care* about it. Plus they are generally better than the Broadcom devs because they don't have the chip manual in front of them while they are writing the driver.

Why is it that a bunch of people who don't get paid come up with bug-free solutions?

It gets fixed because it's free and therefore it can be. Non-free software writers put up with NDAs and code they can't share even if they wanted to. Their code is owned, and so their effort and good will is likewise owned. Free software writers are free to share their tools as well as their improvements, so it…

There's a discussion about having user-space device drivers in the Linux kernel for USB wireless sticks and some other devices as well. I hope this kind of attack vector encourages kernel developers to go in this direction. Keeping as much as possible in user space would again make Linux secure by design. Currently a couple of tools (like wpa_supplicant) run in user space, and I wonder about their situation in the Windows kernel. If they are not in user space (which I guess they are not, because Microsoft is known to put huge amounts of code at kernel level) then that's a huge problem from a security perspective.

It seems that the user who controls the wireless card will have access to the wireless card, and thus in this case you could potentially have a wireless virus. In some cases the user would have access to all network cards, which would mean that from a virus/spam-sending/worm point of view the computer will be useful to the hacker, even if it is otherwise secure.

Maybe keyloggers will be prevented, and writing to the disc, i.e. malware surviving the next reboot. But in general it seems to me…

I believe there is a strong possibility it is related. I've used a third-party wireless card with a Broadcom chipset in G3/G4 Macs before, and it was recognized as an Airport Extreme (b/g) card.

I've heard that Broadcom has been less than cooperative in providing specs for others to write drivers. Perhaps if they were more open they'd have a better-scrutinized, more secure product. (I can't provide specific links, but I…

Why do compilers put buffers on the subroutine-and-return stack? Why not have the compiler use a separate stack for composite data such as buffers, arrays, variable-size data, and all similar stuff, whenever the programmer puts such data on the stack? Wouldn't that stop stack-based code injection?

The added cost in processing time should be quite negligible, as long as simple, fixed-size data, such as integers, are still on the main stack.

Where's the complication? If you define simple, clear rules about what goes where, this has the same complexity as keeping track of the sizes of integers, floats, etc., something compilers do all the time. Depending on various considerations you might define, for example, that integers, floats, chars, booleans and pointers go on the main stack, along with all data types that have been defined to be implemented as these types. Everything else goes on the second stack. Thus, for example, all arrays go on the second stack.

The second stack could be a special part of the heap space. What I mean here is the space where you get memory when you issue memory-allocation calls. There would be a special part of heap space that would grow and shrink as a stack. This special area could be very similar to the ordinary heap, with the difference that allocation and deallocation are very much faster, since it grows and shrinks as a stack.

With this arrangement, dynamically linked modules don't necessarily need to be aware of the second stack.

In reality, C's behavior (and that of a lot of other languages, really) is governed by the CPU hardware they were originally intended for. In the case of C, the machines in question only had one hardware stack, so they intermingled the subroutine return state with the parameters, etc., for speed's sake. Implementing a second stack in software would have been problematic because it would have added extra performance issues and eaten into the register store (you want to probably reserve a register for th…

Do modern processors have such a small number of registers? I haven't looked at processors for quite a while; the last processor I knew well was the Motorola 68000. That one had eight address registers and eight data registers, so you could easily have spared one address register for a second data stack. Do modern x86 processors have so many fewer suitable registers?

(I wish the PC had been based on the MC68000 architecture, it was so much better in so many ways!)

If memory serves me right, C first came about on a PDP-11. You had a program counter, PC or R7, and the main stack pointer, SP or R6. You also had five other general purpose registers (though you could do arithmetic on R6 and R7 as well), and the instruction set was nicely orthogonal: although the return address always went on the stack, any register could be used as the argument list pointer. Note that towards the end of life, the later 11s even implemented separate I and D spaces, which made it impossible to stick code…

Dovecot, the POP/IMAP server developed by one of the OpenBSD guys, uses this kind of system. At the library level, it has a separate data stack. Allocations on the data stack are cheap (because you just increment a pointer, rather than doing messy things with the heap), and the data stack can be pushed and popped independently of the control stack. This has some nice features like a sprintf analogue that allocates its own space on the data stack, but then doesn't pop it on return, so you don't need to wo…

George Ou at ZDNet has published a procedure [zdnet.com] on how to use the Linksys drivers with devices from other vendors such as Dell and HP. Of course this is not an ideal solution but if it works it's better than nothing.

You can also replace the bcmwl5.sys file, usually located at C:\WINDOWS\SYSTEM32\DRIVERS, with the one provided by Linksys: just download the Linksys drivers from here [linksys.com], extract them, disable your network adapter, copy the new bcmwl5.sys over (make a backup of your own bcmwl5.sys just in case...), and activate the card again. It is a temporary solution, but it's better than nothing and you don't change the name of your network card. Tested on a Dell MiniPCI 1300 WLAN and it works.

I guess "Johnny Cache" got tired of trolling for media coverage about his non-existent MacOS wireless exploit, and decided to publish the less sensational information about the OS and systems that it actually affects. So, instead of being a big bad boy who rocks the world by pwning Macs, it's just one more of thousands of boring Windows exploits.

By the way, what is this guy's name? I've seen it published as "Erlich" and "Elich" before, and now Slashdot says it's Ellch. One thing's for certain: anybody who c…

It's not that simple. C is used in high performance code specifically because it's fast and compact. You get these improvements by avoiding needless length checking. Obviously there are cases where you _do_ need to length check buffers (and exploits are the result of not doing this), but you don't have to length check everything. If you ditch C in favour of a language that does the length checking for you then you will sacrifice speed and compactness since it will be checking _everything_.

What language would you suggest is more suitable for writing high performance kernel code?

C is, essentially, portable assembly language. I love it -- it's one of the languages I know best, and I continue to work in it. However, I'd love to see the use of Cyclone or special compile-time-checked languages for the essentials. I think most device drivers could easily be rewritten to be bullet-proof against stack overflows this way, and such languages are easier to do state-machine analysis on (since most device drivers are simple pieces of software that control the state of the hardware). Provably correct operating system design is not a theory, but no one seems to be interested.

"Provably correct operating system design is not a theory, but no one seems to be interested."

Possibly it has something to do with formal proofs only being realistic on toy systems. Anyone can formally prove a 1-line Hello World program will work to spec, but try to formally prove the 20-odd million lines of code in a modern OS and your descendants will still be doing it 3 generations from now.

I do not see how bounds checking can really affect the performance of a Wi-Fi driver. It is not as though we are talking about something where a 0.1% speedup is crucial. But since you asked, there are two solutions:

1) use Cyclone. It outputs C code anyway, and the language provides enough mechanisms to identify where it does not need to do bounds checking.

2) use Ada. It also outputs C code, and its range types are a good solution to the problem.

You don't have to be stupid to screw up in C, that's the problem. The only way to be safe is to write your own string handling functions and ban all others, in which case you've changed the language: you've made it so fascist that it's not-C.

As the number of these driver-flaw attacks mounts, I think it is fair to say the OpenBSD stance on proprietary driver 'blobs' has been fully vindicated. When they took this stance, a fair number of Slashdot posters were publicly knocking them as unrealistic paranoid idealists. Well, here you have it -- deep-fried crow... yum.

Where are my mod points when I need them? With the exception of the Atheros HAL (which, while semi-closed, has a decent number of people outside of Atheros looking at it -- most of the Atheros HAL ports are done by non-Atheros volunteers who have signed NDAs for the source), pretty much all other binary "blobs" only execute on the card itself. Whether the blob has a bug or not, as long as the code interfacing with the card that runs as part of the OS kernel itself is open-source and secure, a firmware blob t…

No idea. OpenBSD has been immune to most of this kind of vulnerability for ages (they can cause crashes, but not compromises). They use a variety of techniques, including a variable-size gap between stack frames (making it very difficult for an attacker to put the jump in the right place) and a canary value next to the return address (so that a modified canary shows that the return address has been overwritten).

I believe some of these changes have made their way into the main branch of GCC, although I don't think a…

Their day one bug was an exploit of the old Apple Airport - Broadcom - wireless drivers. This day eleven exploit is of Broadcom's Windows wireless drivers. I realize the OS has changed, but is this more or less the same exploit? Or is this leveraging some issue that's actually in the chipset?

Man that's what I get for posting before I've had my coffee. The old Apple wireless was Orinoco - the newer chipset is Broadcom. Should've been blindingly obvious this is related to the Johnny Cache dog and pony show, I guess.

...when programmers FINALLY learn to stop producing buffer overflow conditions? How many years now have they known this is a no-no?

When the hell are programmers going to be adequately trained in proper coding procedures?

When the hell are humans going to stop taking pointless shortcuts contradictory to their end goals? Or start using computers to CHECK FOR their stupid mistakes instead of using them to MAKE their stupid mistakes?

I've just switched back to Opera 9 because Firefox 2.0 is so riddled with stupid…