Posted by Soulskill on Friday April 16, 2010 @06:30PM
from the calling-an-estranged-cousin dept.

MonsterTrimble writes "At the Linux Collaboration Summit, Google and Linux Kernel Developers are meeting to discuss the issues surrounding the Android fork and how it can be re-admitted to the mainline kernel. From the article: 'James Bottomley, Linux SCSI subsystem maintainer and Novell distinguished engineer, said during the kernel panel that forks are prevalent in embedded systems where companies use the fork once, then "throw it away. Google is not the first to have done something like this by far, just the one that's made the most publicity. Hopefully the function of this collaboration summit is that there is some collaboration over the next two days and we might actually solve it."'"

Really? I come to Slashdot to read about how Google is taking yet another piece of technology we have taken for granted for many years and turning it into an online, ad-based Cloud 2.0 service, tunneling it through HTTP with JSON and SOAP to their servers for a nice, intense data-mining session for better-targeted ads and predicting future crimes one might commit.

Unbridled capitalism and apathetic, ignorant citizens are to blame for that. Your personal data can be aggregated and monetized, and for the foreseeable future there's very little legislation to prevent this and very little awareness of how pervasive such technology is. My whole generation is living with software and hardware that governments and corporations have riddled with back doors, and that freely share data with each other. Those living in urban areas (the majority of the population) are rarely out of contact with some device or another wired into the global network, tracking their movements, purchases, communications, relationships, and every other aspect of their lives. Remotely enabled webcams, cell phones that can be turned on silently to broadcast everything they hear and see, laptops and routers that can be readily converted into eavesdropping devices: those are just a few of the many things out there right now. And the only reason it's not all interconnected more seamlessly is that the technology is still rapidly evolving and hasn't reached a stable plateau where convergence is possible, although the internet has made a giant leap toward enabling that future. The NSA spends billions each year trying to keep up with infrastructure changes and is only able to harness a fraction of that potential.

But I mean, come on -- what do you expect from a world where we find it okay to set up metal fences with razor-tipped wire and cameras everywhere as "official protest zones", where we have passports, credit cards (and soon ID cards) that can be remotely scanned to identify you? Put it all together. Where do you think this all ends?

I carry my phone, wallet, and even keys in a tinfoil handbag. It looks silly, but it keeps the government from spying on my precious bodily fluids. I'm thinking of switching to a new shampoo with tinfoil instead of aluminium as a major ingredient.

I think your definition of "real nerds" is way off. My dictionary says:

iPad users -- hipsters who are easily influenced by viral marketing, and usually play with shiny, colorful, clickable UIs on locked-down appliances.

real nerds -- people who use text-based UIs and solder their own hardware, who have great logic skills at the expense of social skills, and who are actually really using computers (i.e., automating things).

Google must now balance any desire to respect the Linux community's wishes for compatibility against the more diverse, competing (and not always logical) interests of those now adopting Android, and against its own plans.

I did a double take on this statement.

What I've seen on the kernel mailing list is more a conflict between commercial developers' desire for compatibility (across kernel versions) and the core kernel developers' more diverse (and not always logical) desire to push pet projects and make frequent cosmetic changes, which creates a hellish torrent of code churn. The lack of well-defined kernel driver interfaces means a lot of time spent chasing the latest changes instead of adding features or fixing bugs.

The only people I've seen clamoring for a static, unchanging driver interface are those writing proprietary drivers. Last I checked, whoever changes an interface takes on the onus of fixing all the calls to it in the kernel, which is why getting your driver into the tree is considered better than keeping it closed.

If hardware makers can't include third-party code or processes that they aren't permitted to sublicense as free software, then perhaps they won't write a driver at all. Instead of proprietary drivers, you'll have completely unsupported hardware.

Yes. If there's enough of a market for them, they will make sure that they get support from upstream; if enough companies ask for Linux support for subassembly Y, then maybe it will change. If you really feel you need to keep it closed, do like NVIDIA, or handle it yourself.

The reason the market share is so low is that companies didn't support it in the first place, because they did not look at the long-term profits, or were just too greedy. It's got nothing to do with market share; that is just a straw man.

But hey, I have yet to see hardware that I couldn't use under Linux. So the whole driver problem isn't even there anymore.

What's your point, that we should encourage closed drivers by setting the APIs in stone for years on end? Allow the non-open to dictate the actions of the open?

That's not -my- problem. It's theirs. They choose to stay closed, so when the APIs change, no one else can fix it but them. They have no room to bitch about unstable APIs in an open kernel that is constantly changing when they won't commit to being open themselves. Others do, and as a result don't have nearly the same problems. It's a cost they must accept.

So I, an end user, am inside a Best Buy store, and I don't have a cell phone with a data plan to check what is in stock against the distro's HCL. How do I find peripherals that are definitely compatible with a free OS?

Prefer to buy the product that says it implements a device-class standard. AHCI is a good example. Classes rule so much that in a lot of cases all the non-class-compatible products just went away; HID and ATAPI are good examples of that. In a few cases products don't advertise their class compliance, but there's a well-known sign you can learn before you start looking. For example, if a webcam carries the symbol that means it's designed to work with Vista, it'll work with Linux too, since Vista's logo program pushed webcams toward the USB Video Class standard.
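To make the "class compliant" advice concrete, here's a minimal C sketch of a few standard USB interface class codes that generic Linux drivers bind to (the class code values are from the USB spec; the lookup function itself is invented purely for illustration):

```c
/* A few standard USB interface class codes and the generic Linux
 * drivers that typically bind to them. A class-compliant device
 * advertises one of these, so no vendor-specific driver is needed. */
const char *usb_class_name(int class_code)
{
    switch (class_code) {
    case 0x03: return "HID";          /* keyboards, mice -> usbhid */
    case 0x08: return "Mass Storage"; /* flash drives -> usb-storage */
    case 0x0e: return "Video";        /* UVC webcams -> uvcvideo */
    default:   return "vendor-specific (needs a custom driver)";
    }
}
```

That's the whole point of the advice above: a class-compliant product keys into a driver that already ships with the kernel, instead of needing a one-off driver per model.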

Step one would be: don't shop at Best Buy, as you're probably paying too much.

Step two would be: shop at home, online, where you can compare both prices and compatibility with your OS.

I think these steps are valid whether or not you're a clueless end-user. Clueless end-users are more than capable of comparison shopping online (and if the end-user really wants to buy from Best Buy, they can look at Best Buy's website without leaving home).

How much do return shipping and restocking fees cost if the product A. turns out to be an incompatible revision after they switched from, say, Atheros to Broadcom within the same model number, or B. is a laptop computer that turns out to be incompatible with my hands and/or eyes?

Stores like Best Buy often charge restocking fees on opened electronics. What's your point?

At any rate, it's not hard to google the model number and see if people have had trouble getting it to work with your distro, see whether the manufacturer has changed chipsets under the hood, and so on and so forth. Isn't that part of what I said earlier, in step two?

You can't complain that an alternative solution doesn't work if you ignore part of the instructions.

If you buy from a reputable vendor, you have exactly the same recourse you'd have buying from Best Buy - return it.

You're going to complain about the cost of shipping or something, I'm sure. That's true. But you pay less for the item in the first place, and most of the time you'll get the right thing (after all, you're talking about an edge case here), so you'll come out ahead even if once in a while you have to return something.

To respond to your signature: Valve had this idea, and people spurned it at the time. Of course, that was before they actually had a bunch of games in their lineup. (At least, they did a survey more than three years ago.) The idea was to pay $10-15 and get access to all the games. That idea wasn't bad considering the prices that are paid for games... And you get the kind of support that Steam can offer, such as cloud-based services (configuration, saved games, etc.).

I have to agree with the other responders: don't buy at Best Buy! You're just getting ripped off. Go home and shop on Newegg.com (or zipzoomfly.com, or many others). The prices are much lower, there are customer reviews so you can see what other people say about the product and whether there are common problems, and you probably won't have to pay sales tax, which should make up for any shipping charges.

"Give away software and sell support." But how do you sell support contracts for a computer game?

You don't. That model obviously doesn't work with games; it works for software used by businesses. For games, you just have to sell them outright. Or you could give the game away and sell access to a central server for multiplayer (of course, you run the risk of someone reverse-engineering your protocol and making their own compatible multiplayer server).

Yes, as much as possible. I'm not saying APIs should never be fixed or improved, but at some point Linux ought to be 'done' enough that driver APIs don't need to change, and that backward compatibility isn't such a liability that it outweighs the advantages.

C'mon, folks. Linux is way past the experimental phase. It's the basis for many of the devices we use and love. If the APIs aren't solid enough to freeze (or maintain backward compatibility) at this point, then it ought to be a priority to make them so.

What's your point, that we should encourage closed drivers by setting the APIs in stone for years on end?

I think the point is that the driver ABI doesn't need to change every cycle just to discourage closed drivers.

...and here's an idea: an operating system can support more than one driver model and ABI. Pick one, call it BIN_DRV_1. Declare it to be supported for at least N > 5 years, and then continue to fuck around with the SRC_DRV one. After five or more years, when there seems to be a significant advantage to BIN_DRV_1 having the same features as SRC_DRV, you define BIN_DRV_2 and then support that one for the next N years.
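A minimal sketch of what that versioned-ABI idea could look like. All the names here (BIN_DRV_1_VERSION, bin_drv_1_register, etc.) are hypothetical, following the post's invented BIN_DRV_1 naming, not real kernel APIs: drivers are built against a frozen ops layout plus a generation number, and registration rejects anything built against a different generation.

```c
#include <stddef.h>

/* Hypothetical frozen binary-driver ABI, generation 1 (names invented). */
#define BIN_DRV_1_VERSION 1

struct bin_drv_1_ops {
    int version;                 /* must equal BIN_DRV_1_VERSION */
    int (*probe)(void *dev);     /* layout frozen for N years */
    void (*remove)(void *dev);
};

/* The kernel side refuses drivers built against a different generation,
 * instead of silently breaking them when internals change. */
int bin_drv_1_register(const struct bin_drv_1_ops *ops)
{
    if (ops == NULL || ops->version != BIN_DRV_1_VERSION)
        return -1;  /* wrong generation: driver needs a rebuild/port */
    return 0;       /* accepted: the frozen layout is guaranteed */
}
```

Meanwhile the in-tree SRC_DRV interface can keep churning every release; only this thin BIN_DRV shim has to stay frozen until the next declared generation.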

If hardware makers can't include third-party code or processes that they aren't permitted to sublicense as free software, then perhaps they won't write a driver at all. Instead of proprietary drivers, you'll have completely unsupported hardware.

Releasing unsupported hardware because you don't like the alternative seems like a case of cutting off your nose to spite your face.

Given the situation you describe, in which hardware makers sub-license proprietary code because it costs them less, it would seem to me that they should be promoting FOSS for all they're worth: no more upstream lock-in for the manufacturers, lower overheads, and almost certainly increased profits per unit sold because of reduced demand for royalties.

Releasing unsupported hardware because you don't like the alternative seems like a case of cutting off your nose to spite your face.

If the (supported) Mac OS X market is an order of magnitude bigger than the (unsupported) GNU/Linux market, and the (supported) Windows market is yet another order of magnitude bigger than that, then cutting off your nose to hide your lies becomes profitable.

Those proprietary drivers still have to be maintained against the rest of the kernel, and that costs time, and consequently money.

Furthermore, many of these devices are protected by patents, and I'm sure you don't want code for a special model of capacitive multi-touch screen that only one phone uses to be added to the general Linux kernel. There's no point in it.

So that's the problem. All these phones have highly specialized devices that may be protected by patents, patents that carry no weight in Europe but do in the US.

The last thing Linux needs is a set-in-stone kernel interface: 'backwards compatibility' is what has ensured that Windows remains a steaming pile of kludges and security holes as no old components can be thrown away.

I can only presume that you are actually Bill Gates and want to destroy Linux by forcing it to repeat Windows' mistakes.

I can agree with this, but then again I don't see anyone asking for that.

How about something in between, say a well-defined interface that is stable for a reasonable period of time, with clear points of deprecation and then replacement with improved interfaces? Windows' driver interface is not set in stone with never-ending backwards compatibility; you can't use Win 9x drivers on XP. Yet a binary driver that works on Windows 2000 has a reasonable chance of running on Vista.

There needs to be a balance between improvements/changes and stability/maintainability.

Not set in stone, but less volatile than "every other release needs some minor fixup." That's all.

For example, we're currently on 2.6.33.2. Why not standardize on an ABI for the minor version number? 2.6 versus 2.8 for example. (Or since they switched development pattern, will 2.7 be a legit release? I don't know.)

The problem is that the volatility is so high that kernel drivers need 24/7 maintenance, or else they're dropped, and then it becomes even harder to re-integrate them. Ask Microsoft about their paravirtualization drivers.

Why not standardize on an ABI for the minor version number? 2.6 versus 2.8 for example.

There is no 2.8 on the horizon, the next number over to the right has become the de facto minor version number, and the module ABI is stable within each of those releases. Clearly, you are not involved in actual kernel development, but thanks for playing.

There is no 2.8 on the horizon, the next number over to the right has become the de facto minor version number, and the module ABI is stable within each of those releases.

I don't know where you get that. I've seen, and continue to see, plenty of changes to kernel functions called by drivers between 2.6.x and 2.6.x+1. Maybe you mean the next number to the right of that, the so-called stable branches maintained by Greg Kroah-Hartman?

Those are a step in the right direction, but the x changes every few months anyway.

The problem is that the volatility is so high that kernel drivers need 24/7 maintenance, or else they're dropped, and then it becomes even harder to re-integrate them. Ask Microsoft about their paravirtualization drivers. They've submitted two or three versions to the kernel, and each time you had to use the specific version of the kernel that they compiled them against, or it didn't work. That's the problem. Linux. Isn't. Free. Microsoft, however, is eventually going to have to come to a sad realization: it may co

No, but they're wrong for being unwilling to meet them halfway (even something as simple as a clear schedule for ABI changes and deprecation). There's nothing wrong with adding a little method to the madness.

There's a rather important difference between "catering to only some users" and "catering to zero users".

At least meeting the closed-source driver devs halfway (e.g. by making a schedule for ABI changes) means those closed-source drivers will be usable with the kernel for a time, meaning the target device will actually be able to run Linux, meaning Linux will get a bigger market share.

Refusing to meet them halfway, though, directly results in a smaller market share, because those closed-source drivers never get written at all.

I believe they already met closed driver devs half way by ignoring the GPL licensing for 3rd party linked modules in the Linux kernel which are usually shipped in end user distros.

Which is better?

I don't believe that really changes anything, since third-party driver developers (NVIDIA, ATI, etc.) tend to target distributions rather than upstream, and most distributions have stable ABIs for a given release.

First, I'd like to say, what moron modded this "troll"? I personally don't agree with it, but that doesn't make it a troll.

Furthermore, many of these devices are protected by patents, and I'm sure you don't want code for a special model of capacitive multi-touch screen that only one phone uses to be added to the general Linux kernel. There's no point in it.

Wrong, absolutely wrong. Greg K-H himself has explicitly said that he WANTS people with drivers for even highly obscure devices to merge them into the mainline kernel. It doesn't matter if your capacitive multi-touch screen is only used in one phone; the code is useful to have publicly available in the kernel as a reference. Furthermore, as more drivers for similar devices are merged into the kernel, commonalities between them can be found, and more generic drivers can be created.

Based on what I've seen over the years (as a developer on a project that never made it back into the mainstream kernel), the problems with this approach are threefold:

Nobody maintains most of them. Most of the 5% of drivers that everybody uses are already in the kernel tree. Of the remaining 95%, half of the drivers don't build at all, and most of the other half don't work. If they're barely maintained now, you can bet money that they won't be maintained at all when some kernel tree maintainer gets a hair up his/her backside and decides that a particular fix isn't elegant enough and won't take the changes....

The tree is already too large. If every driver out there were in the tree, checking out an update to the tree would be horribly painful, the source packages that distributions include would become huge, etc. The bigger it gets, the fewer people are going to be willing to maintain their drivers inside that tree, so in the long run, encouraging people to put their drivers in the tree is just going to cause other drivers to move back out of the tree, eliminating any real benefit.

Many such drivers are outside the tree because they require substantial changes to some subsystem in order to build them. Now one could argue that these changes should be made to those subsystems to make them more general, or one could argue that those drivers are so specialized that nothing else will use them, so there's no reason to bother. That's often not an easy question to answer, and tends to result in highly political shouting matches, with the end result being that the driver never goes in, which is usually why those drivers got published outside the kernel tree to begin with.

There are ways to solve these problems, of course; IMHO, they basically amount to:

Design a kernel build infrastructure that can easily bring in driver sources from third-party sites (like a ports collection, but for kernel drivers). With proper categorization, this can provide all the same benefits as having the drivers in the main tree, but also allows for a richer tagging scheme instead of a simple filesystem hierarchy, which should actually make it significantly easier to spot patterns (for example, seeing that there are now eighty-seven different drivers for capacitive touchscreens, or whatever), all without bloating the tree that everybody has to download.

Subject all kernel API changes to a formal API review process in which no API change can go in unless the owners of all drivers in that area agree that the design is acceptable and will meet with their needs. Set up a reasonable set of rules of engagement (e.g. A. don't shoot down the idea just because you don't need it, B. don't shoot down an idea without proposing an alternative). And so on.

Redesign the kernel interfaces in an object-oriented language. Such designs make it more likely that drivers can extend the interfaces without requiring major changes to the core code. The Linux kernel sort of halfway adopts this approach insofar as code reuse is concerned, but does so in ways that aren't particularly clean and neat.

For example, if I were writing an ATA driver and needed to do almost everything the same way but change the behavior of one function in some other layer... say down at the block device layer, I'd either have to patch the block device layer with some special-case detection code, or I'd have to copy entire swaths of code at the ATA device layer and change it there.
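A sketch of the ops-table pattern this is asking for (the names blk_ops, make_ata_ops, and so on are invented for illustration, not real kernel symbols): the driver inherits a generic table of function pointers and overrides the one slot it needs, instead of copying swaths of code or special-casing the lower layer.

```c
/* Hypothetical block-layer operations table (all names invented). */
struct blk_ops {
    int (*submit)(int sector);   /* queue a request */
    int (*flush)(void);          /* drain the cache */
};

static int generic_submit(int sector) { return sector; } /* placeholder body */
static int generic_flush(void)        { return 0; }

static const struct blk_ops generic_blk_ops = { generic_submit, generic_flush };

/* The ATA driver only needs a different flush... */
static int ata_flush(void) { return 1; }

/* ...so it copies the generic table and overrides that single slot,
 * the "extend without touching the core" style the post argues for. */
struct blk_ops make_ata_ops(void)
{
    struct blk_ops ops = generic_blk_ops; /* inherit every default */
    ops.flush = ata_flush;                /* override one behavior */
    return ops;
}
```

The Linux kernel does use ops structs like this in places; the complaint above is that the pattern isn't applied cleanly or consistently enough to make this kind of one-slot override easy everywhere.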

The tree is already too large. If every driver out there were in the tree, checking out an update to the tree would be horribly painful, the source packages that distributions include would become huge, etc. The bigger it gets, the fewer people are going to be willing to maintain their drivers inside that tree, so in the long run, encouraging people to put their drivers in the tree is just going to cause other drivers to move back out of the tree, eliminating any real benefit.

If you're doing actual development work, what difference does another 100MB make?

The average Linux 2.2 kernel patch was somewhere around 5 kB compressed. The average 2.6 patch is somewhere around 150 kB compressed. If you were doing development back in 2.2, when you pulled an update you got a handful of files and a couple of kB in lots of little high-latency pieces; half a minute later, you were patched. With 2.6, do the math.

Now imagine a couple of years from now, when the drivers have bloated that up to 2 GB.

Odd. I'm looking at http://www.kernel.org/pub/linux/kernel/v2.6/ and the latest kernel I see is 2.6.33, which comes in at a whopping 81 megabytes for the compressed tarball. Extracted, it takes almost 434 megabytes. That's over twelve minutes of DVD-quality video. That's two-thirds of a CD-R. That's ten times the size of the Mac OS X kernel. That's two months of bandwidth at the lowest tier of cell phone service.... You get the idea. It's freaking huge.

The thing is though, I've seen that argument since the mid 2.0.x kernels. The ABI hasn't happened and the Linux kernel hasn't shriveled up and died.

It's not like the interface for a driver changes every single release, either. There are a number of out-of-tree drivers that compile and work fine for most of the 2.6.x series (perhaps all; I haven't tried them all). So it's not exactly a lot of work to keep current. There's no real call in an embedded device (or a server, for that matter) to slavishly track every release.

I should have been clearer. I'm talking about drivers in the main kernel source. I know the Linux kernel mantra: binary-only drivers are evil (I agree); out-of-tree open source drivers are slightly less evil. I think out-of-tree open source drivers can be useful when inclusion in the main kernel is denied because some critical functionality is deemed unnecessary by the gatekeepers, who require it to be removed before consideration. But I'm not even talking about that.

Last I checked, whoever changes an interface takes on the onus of fixing all the calls to it in the kernel...

That's the theory. Here is how it works in practice: a pet project or cosmetic change that touches a lot of code is implemented, and then dependencies are grepped. The dependencies are fixed up in a cut-and-paste way. Sometimes the more important drivers get some review to make sure nothing breaks; everything else just gets shipped if it compiles. Then, when that kernel is used in a distribution, sometimes years later, many drivers are suddenly broken and you have to backtrack to see which change took them out. If someone has a lot of time and desire to support a "lesser" driver, then they can spend all of their time playing catch-up, but that wears out volunteers quickly and annoys commercial vendors.

It is if you accept the status quo. If you took all drivers out of the main tree and created a new tree specifically for driver code, not only would the kernel suddenly get smaller and easier to work with (as you at least wouldn't have to download all that useless-to-you driver code), but the distinction between them would help to keep drivers as separate, truly distinct modules.

Of course, this only happens with a stable ABI. Break that every version and all that driver code breaks with it.

It is if you accept the status quo. If you took all drivers out of the main tree and created a new tree specifically for driver code, not only would the kernel suddenly get smaller and easier to work with (as you at least wouldn't have to download all that useless-to-you driver code), but the distinction between them would help to keep drivers as separate, truly distinct modules.

Linux is a monolithic kernel. Just because you can load or unload parts of it at runtime doesn't make those parts of it any less monolithic.

OK, I was trying to advocate a situation where driver writers get to do the least amount of work necessary to produce and maintain their drivers; then they might put the minimal effort into keeping them current (or someone else might, if it becomes easier).

A stable ABI is (I think) the closest we'll get to this. The alternative is either a lot of effort for driver writers to do their thing, fewer drivers, or ndiswrapper.

I have had several years of driver hell, when two or three machines have constantly died because FOSS drivers have stopped working in every kernel security update (which occurred about monthly for 8.04).

Now the eee.ko driver which gives 900MHz on Eee701 does not even compile (on 10.04beta). How do you explain it? It is FOSS, I did not "bring it myself", but...

You're entirely right. That's why they fund several thousand students worldwide to join open-source projects and contribute code to those projects every summer, even if the projects in question don't directly benefit Google.

I don't see why in-kernel drivers should be an issue. The module ABI/API changes as needed, but this has already been hashed out: open-source your driver and get it into the kernel, and it will work across versions. If you want yours to live outside the kernel (NVIDIA), you maintain it.

What I've seen on the kernel mailing list is more a conflict between commercial developers' desire for compatibility (across kernel versions) and the core kernel developers' more diverse (and not always logical) desire to push pet projects and make frequent cosmetic changes, which creates a hellish torrent of code churn. The lack of well-defined kernel driver interfaces means a lot of time spent chasing the latest changes instead of adding features or fixing bugs.

If you've ever used Gentoo, you'll know that this is true for all of Linux, and that it's the main thing to really, really hate about it. (I love Linux in general, but this is destroying most of that love.) Your distribution maintainers usually just shield you from it.

Stallman had good intentions, but it seems he was never at a bazaar himself, since otherwise he would have known that every bazaar is a totally chaotic mess. ^^ Interfaces, just like standards, are a good thing. Maybe we should do it like the Germans

It's a real problem -- Android is easily the most hackable phone out there. And that's exactly the kind of thing cell phone manufacturers in this country don't want. It's bundled services that they make their fortunes on -- selling overpriced phones, contract cancellation fees, locking in devices, and more. Android threatens to separate the market into service providers and device providers and up until now, the service provider dictated what the device providers could do.

Imagine if you could just eject the SIM card from your phone, plug it into your computer, and browse the net, take phone calls, etc., then eject it like a memory card, slap it back into your phone, and go off to school, work, wherever. Or use Bluetooth so that as soon as you get home, it automagically resyncs all your e-mails, text messages, and more. There's so much the technology can do, and the only reason it's not happening is that service providers want to charge for everything, rather than simply flat-rating everything on a per-minute, per-day, or per-megabyte basis.

My Sidekick recently lost the ability to send files to my computer over bluetooth. Why? Because of an OTA update that disabled that. So now I can't just sit my phone near my laptop and transfer my pictures out of it, I have to open the back up, eject the little card, plug it into my system, copy the files, and then do the reverse. Very cumbersome when before it was 'click icon, drag files'.

It's complete and utter bullshit that cell phones as powerful now as desktops were ten years ago sit in the palm of my hand, and yet have less than a third of the capability. And not a one of them is really interoperable with any other except on the most primitive level. Hell, the dialup days of computing offered more functionality and standardization than the cell phone market does. Why should a 14.4k modem and an antiquated Pentium 133 have had more communication functionality than today's devices? Hell... it even cost less.

I'm still waiting for some kind of phone that isn't crippled in that way, so I just don't have a mobile phone now. The operators are creaming way too much off the top and giving so little back.

I have a 10-meg line for a tenner a month to my house and can do pretty much whatever I want with it. The fact that operators charge 5p or whatever to send a 160-byte SMS message, or that if I pay £25 a month for 24 months I can send 500 or 1000, just sucks. It needs to be £5 for a month, including internet access.

I'm still waiting for some kind of phone that isn't crippled in that way.

Then buy one from the manufacturer instead of from a carrier.

I have a 10 meg line for a tenner a month to my house and can do pretty much whatever I want with it.

Spatial multiplexing of RF signals over a wired connection is easy: just pull another insulated cable through existing conduits. Doing so over the air is harder because there's no copper or fiber waveguide to keep your signals from mixing with other subscribers' signals.

With cable, DSL, or FTTH, the ISP just has to put another "refrigerator" on the corner to handle more signals. But with USB cellular modems, it costs a lot for AT&T to build more towers to handle more subscribers.

Yeah, but who's heard of the Nokia N900, or even knows what that means, outside geek circles? On the other hand, billboards and TVs everywhere are blasting out "Droid does". For bringing a hackable system to the masses, Android has it beat.

Yeah, but who's heard of the Nokia N900, or even knows what that means, outside geek circles? On the other hand, billboards and TVs everywhere are blasting out "Droid does". For bringing a hackable system to the masses, Android has it beat.

But "the masses" aren't interested in hacking it, thus making said hackability essentially irrelevant to anyone who isn't in "geek circles" anyway.

But "the masses" aren't interested in hacking it, thus making said hackability essentially irrelevant to anyone who isn't in "geek circles" anyway.

They said the same thing about the internet, twenty years ago. And yet look what the hackers of the world built out of the refuse of wires and chips that the corporations of then said was useless and had no commercial value. Now they're fighting to tax it, control it, and some countries have declared it an inalienable human right to have it.

Maybe it has no value to them, but that's because they don't know the value of it yet. It's our job to find it and tell them. You just haven't been around long enough to realize the purpose of your own learning yet. Your individuality, your knowledge and talents, are not for your own gratification. The purpose of the democratic process, which the internet comes closest to in form and function, is not to create a great country, or great works, but to create great people.

Hacking is therefore the highest form of the democratic process: not because of what we do, but because of what we share.


You seem to have a much different recollection of the internet 20 years ago than I do.

Maybe it has no value to them, but that's because they don't know the value of it yet.

Maybe they're not interested in "hackability" because they've never had the opportunity.

No, they're not interested in "hackability" for the same reason they're not interested in tuning their own engines, lighting their fireplaces by rubbing two sticks together, or washing their clothes by beating them next to the local river.

Are you kidding? Maybe in the stone-age US. But here in Germany, I have yet to see a Droid in any shop or in any person's hands. Motorola, Apple, and Google are niche companies in our phone market. You barely ever see someone owning such a phone. Nokia and Samsung rule the market.

Also what do you mean “outside geek circles”? We were talking about hackable phones. “Outside of geek circles” is off-topic.

The N900 is the only phone I’d call hackable at all. Android phones are

I'm confused by your second paragraph, because that's pretty much exactly how it does work. My SIM enables the device that contains it to make calls or use the data service. I can drop it into a phone or a computer, although mostly, if I want to be online from a computer while mobile, I use the Bluetooth DUN profile on my phone, because it's less effort than removing the SIM and doesn't prevent me from receiving calls. I've never come across a firmware update for any phone I've owned removing functionality.

Or maybe he has a phone that doesn't allow OTA updates, like Windows Mobile? I've never had a problem with OTA updates in the 8 years I've had WinMo smartphones... And I can load pretty much any program I'd like, independent of what the carrier desires.

"My Sidekick recently lost the ability to send files to my computer over bluetooth. Why?"

You bought a phone controlled by the operator.

"It's complete and utter bullshit that cell phones are as powerful now as desktops were ten years ago"

Actually, I dare say the phone is more powerful than the PC of 10 years ago. My phone can drive 720p straight to my TV; no way could a PC I could afford do that 10 years ago. My phone also communicates at very good broadband speed over three techs: Bluetooth, 802.11g,

Google has proven to be benevolent, but I am not sure I want their hooks in my Linux kernel. Google exists to make money and do things in its own self-interest. The problem is that if their fork gets merged, they will become the maintainers for it. I believe that as long as it remains in their self-interest they will maintain the code, but as soon as it is no longer in their self-interest it will be abandoned, and where will that leave us if we've all started using that functionality?

I think they should put the parts that are different out there, let us all examine them, and then let us decide if we want their frankencode or not.

Android is a fairly large paradigm shift, and it remains to be seen just how it will fall out. It is already being morphed by the carriers, tied up and tied down. There is no longer one consistent version of Android, and they are now attempting to bring it back into one stream, sort of like the kernel itself, but I doubt this will be successful, given that all the carriers are using it in different ways and have wildly different attitudes when it comes to giving back.