
It's been a while since I've used DD-WRT. Last I checked, it was still using the 2.4 kernel with the closed-source driver on many Broadcom devices. Linux 2.6 has been out for nine years, and the open-source Broadcom drivers have stabilized considerably since then.

I highly recommend OpenWRT with its LuCI configuration interface. You'll find it a worthwhile replacement for DD-WRT, including native IPv6 support (provided you go with the Broadcom 2.6 kernel). You don't need to know much about the command line to get things working (and even if you do go that route, there are plenty of people who can help).

I've been using OpenWRT Kamikaze without issue on my WRTSL54GS (very similar in hardware to the WRT54GL), and all the computers in my house have native IPv6 (with radvd autoconfiguration) over a 6to4 tunnel on Comcast. If you need details on how I set it up, just let me know or start a post on the forum. The OpenWRT community is very friendly, with a lot of knowledgeable folks. I've loved OpenWRT and have had no reason to look back.

I'm an app developer, and I've had to deal with countless network problems (usually NATs dropping connections without sending RST) that ended up being resolved by crude strategies like "f it, lower the keepalive interval to 5 minutes," or killing a connection if it wasn't acked within X seconds (you can be more aggressive about killing TCP connections by adding protocol-level acks on both client and server).

Despite this, I've managed to reduce bandwidth greatly by making my protocol independent of any single TCP connection -- in other words, I connect, tell the server who I am, and carry on, slowly making forward progress even if the underlying connection is killed every few seconds. At that point, TCP on port 443 becomes essentially a heavyweight datagram protocol (with an SSL handshake on top), because you can't rely on anything else.
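The "kill it if it isn't acked" strategy can be sketched roughly like this (a toy illustration, not code from any real app; the class name and the 30-second deadline are my own inventions):

```python
import time

ACK_TIMEOUT = 30.0  # seconds; illustrative value, tune per network

class PendingAcks:
    """Tracks protocol-level acks so dead NAT'd connections get noticed."""

    def __init__(self):
        self._sent = {}  # message seq -> timestamp when it was sent

    def on_send(self, seq, now=None):
        self._sent[seq] = time.monotonic() if now is None else now

    def on_ack(self, seq):
        # Peer confirmed receipt at the application layer; stop tracking.
        self._sent.pop(seq, None)

    def connection_dead(self, now=None):
        # If any message has gone unacked past the deadline, assume the
        # NAT silently dropped the connection (no RST ever arrives).
        now = time.monotonic() if now is None else now
        return any(now - t > ACK_TIMEOUT for t in self._sent.values())
```

When connection_dead() returns True, you close the socket yourself and reconnect, resuming the session server-side instead of starting over.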

I'd rather use push notifications, but they have two glaring holes: 1) You can't rely on messages arriving on time, which makes them useless for a VoIP app where users expect the phone to ring within a few seconds. 2) Google C2DM requires that Android Market be installed, which means your app won't work on half the phones around the world.

It's funny that you say that, because based on (admittedly half-year-old) data that an app developer collected about reconnect rates, Japan was the worst country by an order of magnitude in the number of reconnects that app had to perform (DoCoMo was the second-worst carrier in the world).

Reconnects happen because the cell carrier closes a connection or times it out -- a good carrier won't change your IP address or RST your connections when you switch towers, but a bad one might assign a new IP address each time. In some apps, each reconnection can consume up to 1 MB of bandwidth as the app attempts to resync data (yes, good apps shouldn't do this, but I have seen it happen).

The problem is not Android -- the problem is the shitty QoS that most mobile carriers put on their networks, combined with the fact that they often kill connections at the NAT layer without notification, time out connections over unwanted ports and block protocols that they don't like.

The end result is that everything on a cell network has to happen over port 80 or port 443, with the SSL negotiation overhead that entails, combined with sending keepalives every 4 minutes. Yes, Android is unoptimized. DoCoMo might be doing everything right, but they pay the price for all of the terrible cell carriers that go out of their way to block data (AT&T, T-Mobile, I'm looking at you). Android 4.0 has a data usage monitor that helps a ton in debugging misbehaving apps, but data is a fact of life.

That said, Apple may have made a good decision by forcing app developers to use push notifications when the app is in the background. Android messed up push notifications by tying them to Google Talk and Android Market -- apps that require push will not run on a large fraction of Android devices around the world (including the Kindle Fire). The result is that apps skip push and implement their own (often buggy and wasteful) push systems.

Finally, if DoCoMo doesn't want users to send and receive data, then it should limit their bandwidth, for crying out loud. Don't whine when you provide fast service and people use it. What is complaining to the OS manufacturer going to accomplish? They provide a platform, not the apps or the service they run on.

Why did this article make slashdot? Who cares that a distro with all the default packages enabled won't fit on a CD? Does Windows Vista fit on a CD if you include all the default packages and a word processor? Does OS X?

As long as they continue to support PXE boot, USB boot and other minimal bootstrap images with network support, I'm fine. Heck, you can put your hard drive in another system and debootstrap Ubuntu onto it if you're in a bind with a bad net connection and no DVD drive.

Yes, and the original standard allowed any site to frame any other site and access any data from it... This isn't 1999, and you shouldn't be quoting a 12-year-old spec to talk about security issues that weren't even known at the time. Read the HTML5 spec and maybe you'll start to see just how many nuances there are in keeping things working while layering security on top. Not even the HTML5 spec explains all the complicated shit that browsers have to do... Mozilla's documentation is the best resource for this stuff, because it describes what a real browser does. Here you go, first Google result: https://developer.mozilla.org/en/The_X-FRAME-OPTIONS_response_header

X-Frame-Options is a standard header (despite the "X-" prefix, it is a standard security feature built into *all* modern web browsers, including IE), and it is up to a site owner to choose to use it. It is the only guaranteed way to stop clickjacking attacks; other methods require JavaScript to be enabled, plus some nasty hacks. See this page if you don't believe me: http://stackoverflow.com/questions/958997/frame-buster-buster-buster-code-needed
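For concreteness, here's what sending that header can look like in a bare-bones WSGI app (a hypothetical example, not any particular site's code; DENY and SAMEORIGIN are the two standard values):

```python
# Minimal WSGI app that refuses to be framed. "DENY" blocks all framing;
# "SAMEORIGIN" would allow frames only from pages on the same origin.
def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html"),
        ("X-Frame-Options", "DENY"),
    ]
    start_response("200 OK", headers)
    return [b"<p>This page cannot be clickjacked via framing.</p>"]
```

The browser, not the server, enforces the policy, which is why it works regardless of whether the visitor has JavaScript enabled.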

That said, it's like using a hammer to drive in a staple -- way overkill. The problem is that there is otherwise no way to guarantee your page is not being clickjacked -- there are so many ways to mount a clickjacking attack (plugins, opacity tricks, ...) that browsers simply can't guard against all of them.

Yes, users shouldn't be stupid enough to input confidential information when the address bar has an untrusted URL... but the clickjacking attack works by showing users confidential information that only a trusted site could possibly know and giving them a familiar login form... It's very difficult for all but the most trained user to distinguish this type of site from the real thing.

Not all sites use this, but Google decided it was worth adding the header to protect themselves. That's their decision to make. For my own web page, I'm considering the JavaScript-based solution because it allows a clearer message and lets users override the check if necessary, but that may compromise security in one or two cases, so it's a tradeoff.

DD-WRT is open source in the same sense that the original Linksys firmware was open source. Clearly, the GPL parts are open source, including all kernel modules and command-line tools based on BSD/Linux. And yes, it must be possible to compile a bootable image with minimal shell support (otherwise they wouldn't be complying with the GPL). However (this was true two years ago -- I haven't checked since), DD-WRT has several binary blobs and closed-source components that handle higher-level tasks; for example, when I looked into this, it was not possible to extend the web server.

Additionally, DD-WRT was still on the age-old nvram model of configuration, rather than using a read-write overlay filesystem that lets you edit any configuration file. This means some things were a pain in the ass to change once you had flashed the router, and building a custom image required compiling a 10GB svn checkout. I'm sure you got it to compile, but compiling isn't as easy as it should be. I (like many other angry Slashdotters) wasted several hours trying to compile DD-WRT. This is why the words "open source" in the description drew such a backlash.

Anyway, I didn't bother figuring out the compilation process, and I just went over to OpenWRT for my Linksys WRTSL54GS (kernel 2.6 Broadcom with b43), Airlink AR-430W, and D-Link DIR-615. They all work really well.

That said, DD-WRT is a fine firmware if you want something that works and does more than the default images -- I have friends who love it. It does client bridging, which is the one feature I sorely miss in OpenWRT. So in my opinion it's a good choice if you're the sort of person who wants things to just work and doesn't plan to write scripts or tweak things from source. And because fewer things are configurable and BrainSlayer tests it on a ton of routers, you can be sure an image will work on your hardware without tweaking anything (if it's on the Supported Devices list).

This is why an encryption key is never "temporary" -- leaking a key shows no discretion on the journalists' part. This is not a password that can be revoked -- it's a key. If you still have a key to your previous house, you don't hand it out while telling people the address -- the lock has probably not been changed.

Honestly, I don't know why he didn't use SCP or SFTP and give the journalist the fingerprint and password over a second channel... It's easy to revoke a password, and hard to MITM the leap-of-faith connection while presenting the correct fingerprint. But hindsight is 20/20... I wouldn't have thought of this issue either.

I know most people are complaining about the irony of a leak at WikiLeaks, but has nobody considered the fact that the gpg-encrypted file was publicly available on a "temporary server," probably for at least a few hours? (It must have taken Leigh some time to drive home and start the download.)

At the time, WikiLeaks may not have been as popular, but it's not a stretch to imagine somebody randomly browsing the IP address of that "temporary server" and noticing the encrypted file. WikiLeaks is not your ordinary file host with uninteresting data on it -- every file there can be considered politically sensitive, and it may have been downloaded by several governments the instant Assange started the HTTP daemon.

So it's not a stretch to imagine somebody downloaded the file and left it on his hard drive, waiting for the password to come out. Heck, I may have done this once or twice with the "insurance" file -- and the only thing more obvious than "insurance" is a file named "cables.gpg".

NO, that is not a solution! You're doing exactly what the RI/MPAA want. Their secret is that they want people like you to pirate their movies, because some of your friends will then pay money to watch them (since they have convinced the public that piracy = murder).

If you really want to stop the movie industry from bribing our public officials and criminalizing us, you should boycott the movie industry. This means, *do not consume* music/movies covered by RI/MPAA--whether you paid for something legally or not is a moot point (unless you get caught).

If you squint at the MPAA acronym, you'll notice that the last letter stands for *America*. America does not make the only movies and music in the world, and there is nothing these corporations would fear more than American media becoming irrelevant. A boycott of American media (or election reform to prevent bribery of public officials through "donations") is the only way we can stop corporations from controlling our laws and controlling us.

There are other countries that produce music and movies, and some of it is as good as or better than our mass-produced Hollywood media. If everyone watched half as much American media and picked up some movies from other countries instead, that would mean 50% less money going to the RI/MPAA, and 50% less in bribes to our representatives.

1. Yep, just run tune2fs and enable the ext4-specific features (google for 'upgrade ext3 to ext4'). Then make sure to edit your /etc/fstab. -O extents is the magic that makes the filesystem incompatible with ext3, but you don't need to enable extents to get the benefits of ext4.

2. The rename issue was about bad assumptions made by some GNOME/KDE programs about when to call fsync(), and those have long been fixed. As I recall, ext3 used a short sync delay, so it was almost impossible to run into this, while ext4 set the *default* sync delay much higher -- it's easy to change in /etc/fstab, so google it if you want.
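The pattern that fsync() debate was about looks like this -- a minimal sketch (the helper name is mine) of the write-to-temp, fsync, then rename dance that survives a crash on a delayed-allocation filesystem like ext4:

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace `path` with `data` so that a crash leaves either the old
    or the new contents, never a zero-length file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force data to disk *before* the rename
    os.rename(tmp, path)      # atomic replacement on POSIX
```

The apps that hit the bug skipped the fsync() and relied on ext3's short commit interval to bail them out.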

I've been using ext4 since before most distributions supported it (soon after it was marked stable in Linux), and I've had no such issues (or I've never noticed any). My /home partition has survived crashes from the faulty Seagate drives of 2.5 years ago (on RAID 1) and an abnormally high number of daily motherboard/PSU-related crashes with no loss of data that had been synced to disk.

Saying Slackware doesn't support GRUB is like saying Dell doesn't support Linux. It's a bootloader, and aside from installing it, it's completely unrelated to the OS. They probably kept LILO as the default since it works easily out of the box.

Just grab a copy of GRUB 2, run make and make install, install it to the boot sector, and set up a menu.lst entry to load your OS. Unlike LILO, you can actually type commands at the boot prompt and tab-complete to get a list of OSes, so it's hard to mess things up if you have the documentation handy.

That said, there's not much point in changing something that works--unless you're intent on booting on new hardware that uses EFI or something.

Make sure you're using ext4 for your filesystem... It's really simple to upgrade: you can basically just change /etc/fstab, and optionally run tune2fs to enable extents if you're happy making the change permanent.

Just changing fstab to say "ext4" instead of "ext3" cuts fsck time by about a factor of 10 (but make sure your version of GRUB supports ext4 before turning on extents). My 900GB ext4 RAID partition fscks in roughly the same time as my 20GB ext3 root partition.
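A hedged sketch of the full upgrade, run against a scratch image file so it's safe to experiment with (substitute your real, unmounted partition for the image when doing it for real, then flip "ext3" to "ext4" in /etc/fstab):

```shell
# Build a throwaway ext3 filesystem in a plain file to practice on.
FS=$(mktemp)
dd if=/dev/zero of="$FS" bs=1M count=16 2>/dev/null
mkfs.ext3 -F -q "$FS"

# Enable the ext4 on-disk features; extents is the one-way door that
# makes the filesystem unreadable as ext3.
tune2fs -O extents,uninit_bg,dir_index "$FS"

# An fsck is mandatory after changing features (exit code 1 just means
# it repaired the group checksums, which is expected here).
e2fsck -fp "$FS" || [ $? -le 1 ]
```

Only files written after the upgrade use extents; existing files keep the old indirect-block layout, which is why the speedup grows over time.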