This is exactly how some games work on Mac OS X, for instance Source-based games like Portal and Half-Life 2. They don't muck with the actual screen resolution, but just render into an offscreen buffer at whatever resolution and blit it, stretched, to the full screen. Switching from the game back to other apps doesn't disturb the desktop in any way.
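The render-small, blit-stretched idea boils down to sampling a low-resolution buffer up to the display size. A toy sketch in Python (real games hand this to the GPU; the function name here is made up for illustration):

```python
# Scale a row-major pixel buffer from (src_w, src_h) up to (dst_w, dst_h)
# with nearest-neighbour sampling -- the simplest form of a stretched blit.

def stretch_blit(buffer, src_w, src_h, dst_w, dst_h):
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h          # nearest source row
        row = []
        for x in range(dst_w):
            sx = x * src_w // dst_w      # nearest source column
            row.append(buffer[sy][sx])
        out.append(row)
    return out

# A 2x2 "frame" stretched to a 4x4 "screen": each source pixel
# covers a 2x2 block of the output.
frame = [[1, 2],
         [3, 4]]
screen = stretch_blit(frame, 2, 2, 4, 4)
```

The desktop resolution never changes; only the game's internal buffer is low-res, which is why alt-tabbing out doesn't disturb anything.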
Would definitely love to see more Linux games using this technique.

Ah, who gives a rat's ass then? As a user I don't want my search keywords going to third-party sites to begin with, and I don't click on paid links in search results. (It wasn't clear from the original article that this referred to *ad links on the Google page* and not *links to sites with google ads*.)

Far more worrying is that the redirect always goes through HTTP, giving MitM attackers a chance to sniff or alter your traffic -- for instance redirecting you from what you thought was a nice safe HTTPS link to a phishing site.

I don't have any Google ads on my site, so I guess this would be in the "Ordinary Site (http: = non-SSL)" category, which TFA claims gets no referer -- but I do get one: the intermediary redirect is on http, so the browser happily sends it along as referer info.

Following the same link from https://encrypted.google.com/ shows no referer, indicating that it either went through no intermediate redirect, or an https one (you can see by testing that there is one, also on https://encrypted.google.com/) that didn't pass on referer info from the browser.

SSL pages on my own site don't seem to be in the index, but the intermediate redirects I see on other things that are in there, like mailing list archives, look the same -- http: redirects from https://google.com/ and https: redirects from https://encrypted.google.com/

I think it's just sending everything through an http redirect so everyone sees referer data, unless you search from encrypted.google.com.
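The behavior above follows from a standard browser rule (RFC 2616 §15.1.3): don't send a Referer header when navigating from a secure page to an insecure one. A minimal sketch of that rule, with schemes as plain strings for illustration:

```python
# Per RFC 2616 sec. 15.1.3, clients should not include Referer when going
# from an https page to an http target; every other combination sends it.

def sends_referer(referring_scheme, target_scheme):
    """Return True if a browser will include a Referer header."""
    return not (referring_scheme == "https" and target_scheme == "http")

# Google's plain-http interstitial redirect leaks referer to any target:
leaks_everywhere = sends_referer("http", "http") and sends_referer("http", "https")

# An https interstitial (as on encrypted.google.com) withholds it from
# plain-http destinations:
protected = not sends_referer("https", "http")
```

So routing everyone through an http redirect page means every destination, SSL or not, sees referer data -- exactly what the tests above show.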

Every desktop Linux distro I know of releases source along with beta packages throughout the entire development process. I still think it's a shame that Android instead develops in secret, sharing development sources only with exclusive partners.

On the flipside, look at what happened with pre-Honeycomb Android appearing on tablets and giving people a bad impression of the OS in that form factor... you can hardly blame Google for holding back for the moment.

Or did getting that hardware to market and into people's hands help to provide pressure on Google's Android developers to actually come through with a tablet-oriented version?

And could that tablet-oriented version have been made even better if the vendors and other developers who had been pushing for netbook and tablet-friendlier Android earlier on could have participated in the actual development of Honeycomb instead of duplicating partial effort by half-assing a few tweaks on top of Gingerbread?

That's pretty much how the Android platform works -- if you want the branding and the Google apps, you have to license it and work within additional restrictions beyond just the open-source base. That hasn't stopped LG, Samsung, and Motorola -- official paying licensees all -- from making UI customizations that a lot of people complain about, so I suspect they need to adjust their partner agreements rather than restrict the offbrand open-source redistributors.

And of course we can expect the result of this decision to not actually be "small manufacturers don't try to stick broken Honeycomb on their off-brand handsets", but rather "small manufacturers who already don't license the Google-branded bits anyway keep putting Froyo or Gingerbread on their off-brand tablets, keeping them at least as bad as the previous generation of on-brand tablets".

A rushed update can still be released without destroying the overall brand image. Google's own Chrome browser (under its 'Chromium' alternate brand), Mozilla's Firefox, and the Linux kernel itself are all developed much more openly, with warts and all exposed during the whole development and clean-up process. Chromium and Firefox even provide regular installable binary snapshots, so you can test in-development versions without compiling, and you always have the source for both unreleased and ALL released versions.
This gives them several important advantages:

Debugging: App developers encountering problems with the system actually have the opportunity to dive in and see what's going on in the core libraries. This can save hours of mystery if it turns out you misunderstood a function's requirements, for instance, or if there's actually an internal error -- which you now have a chance to report or even fix directly. If we have to wait for source until months/years AFTER the binary releases, we might as well be developing on iOS.

Testing: you get more pre-release testing, with both app developers and system integrators able to give feedback and provide patches.

Better bug reports: while not every bug report is going to be magical, the fact is that there ARE power users and devs who can make use of the source to aid in debugging, either to narrow down a problem or to actually provide a patch. (Even if an initial patch isn't the right solution, it can help in identifying the right solution, and as importantly it can serve as a workaround for particular folks.)

Unexpected innovation: Sure, not every non-traditional customization is going to be good. But some of them will, and those are things that can't happen until the source is out there.

This can actually help you clean up your messy code base... Google's Android group is giving those advantages up here, and that's a shame.

I tend to simply keep servers' local timezone set to UTC, especially when the admins are themselves distributed across multiple time zones; this avoids any confusion about what the times listed in logs mean.
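The payoff is that a timestamp means the same thing to everyone reading the log. A minimal sketch with Python's standard library (the `log_line` helper is made up for illustration):

```python
# Logging with an explicit UTC offset makes timestamps unambiguous no
# matter which timezone the admin reading them sits in.
from datetime import datetime, timezone

def log_line(message, ts=None):
    """Format a log line with an explicit UTC (+00:00) timestamp."""
    ts = ts or datetime.now(timezone.utc)
    return f"{ts.isoformat()} {message}"

# A fixed instant renders identically for everyone:
line = log_line("service started", datetime.fromtimestamp(0, timezone.utc))
# -> "1970-01-01T00:00:00+00:00 service started"
```

Compare that with a bare local time like "03:14:07", which is meaningless without knowing the box's timezone and its DST state on that date.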

Users won't need to recompile or reconfigure anything -- they'll get the updated system installed for them by the distro packagers in upcoming versions. You only need to do anything if you want to enable this *right now by yourself*, and there are indeed a few different ways to do it.

The differences between the kernel change and the shell script basically come down to two things. First, they apparently use slightly different algorithms for choosing how to group the processes. That's not due to one being in-kernel and the other not -- they're just written slightly differently, and either algorithm could be implemented either way. Second, where they live: both work through the same actual implementation mechanism, but one operates from userspace through the interfaces and the other is built into the kernel.

Auto-tuning behavior that's built in will probably be the most reliable, easiest, and best-performing way to do this, rather than requiring every Linux distribution to ensure that they're running the same extra scripts and keeping the userspace stuff in sync. Do it once and leave it built-in to the kernel.
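Stripped of the cgroup plumbing, both variants do the same conceptual thing: partition processes into scheduler groups keyed by something like the controlling tty, so one build job can't starve your terminal. A pure-logic sketch of that grouping step (the real userspace scripts then write each group's pids into cgroup control files, which I've left out here):

```python
# Group processes into scheduling buckets by controlling tty -- the core
# idea behind both the in-kernel autogroup patch and the userspace script.
# A real script would create a cgroup per tty and move the pids into it.

def group_by_tty(processes):
    """Map each controlling tty to the list of pids that share it."""
    groups = {}
    for pid, tty in processes:
        groups.setdefault(tty, []).append(pid)
    return groups

# A compile job on tty1 ends up isolated from the shell on tty2:
procs = [(101, "tty1"), (102, "tty1"), (201, "tty2")]
groups = group_by_tty(procs)
```

The scheduler then shares CPU time fairly between groups rather than between individual processes, so a make -j64 in one group counts as one contender, not 64.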

b) There really isn't a clean way to talk between applications. You can send files, but it's really a drop box: I can COPY (not link!) something into another app's area, but after that the file is no longer mine. So if I want to send something to another app to process and then get it back for further processing by my application, I have to hope the app tells me about the changes -- and considering the app may not even know I exist (nor should it, that's the beauty of decoupling), that's a lot to ask.

Indeed, there's not a great way to share data between apps on iOS; the 'file sharing' in iOS 3.2/4 seems pretty dreadful and awkward to use. You can push some data around via URLs, but I've not been able to find a system for discovering URL handlers, or a way to declare support for particular types of data instead of manually listing some application-specific URL schemes.

Android's system for "Intents" is a bit nicer; you can combine some typed or structured data (say text/plain) and an action ('send') and just shove that off to whatever apps will take it. That's how the 'share' buttons in Gallery, Twitter, etc are implemented, and how you launch email dialogs, etc. Much more flexible, though it still tends to be UI-driven rather than behind-the-scenes.
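The conceptual model is: apps register (action, data type) filters, and the system resolves a dispatched intent against every matching filter. A toy Python model of that resolution step -- this is an illustration of the pattern only, not Android's actual API:

```python
# Toy intent resolution: apps declare which (action, mime type) pairs
# they handle; dispatching returns every matching app, which is roughly
# what populates Android's "share with..." chooser.

registry = {}   # (action, mime_type) -> list of app names

def register(app, action, mime_type):
    registry.setdefault((action, mime_type), []).append(app)

def resolve(action, mime_type):
    """Return the apps that could appear in a chooser for this intent."""
    return list(registry.get((action, mime_type), []))

register("Gmail",   "SEND", "text/plain")
register("Twitter", "SEND", "text/plain")
register("Gallery", "VIEW", "image/png")

candidates = resolve("SEND", "text/plain")   # ["Gmail", "Twitter"]
```

The key property is that the sender names only the action and the data type, never a specific receiver -- that's what makes the decoupling work.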

I can *sort* of understand 1 from a performance standpoint: if you allow user-created dynamic libraries, then every time the application is swapped out of memory you have to find which dynamic libraries it uses, make sure nobody else is using them, then unload them. However, as memory increases, the rationale for constantly loading/unloading them starts to disappear...

Dynamic libraries don't really work that way; when your program is loaded, the linker pops over to your libraries and pokes a few bits in memory that make the function & data references work correctly. The untouched parts of the library can be shared between processes because the executable code is memory-mapped from the file into address space directly; the kernel's memory manager deals with knowing what's using it, so at the system level there's no special need to go looking for what process is using what library.

There can be performance issues with dynamic libraries because the dynamic linker has to, well, link more things when your program is loaded. :) But the biggest issue here is probably simply that of filesystem management. The preferred application model on Apple's systems (both Mac OS X and iOS) is for most individual apps to be self-contained: any libraries that aren't bundled with the system should be bundled into your application, so they don't have to be separately installed or uninstalled.

On iOS you're even more restricted because user-installable apps are kinda funkily sandboxed from each other, and the app distribution/installation infrastructure is totally geared towards individual, standalone app bundles. If you've got no shared place to put libraries where they'd actually be shared, there's not much point to using dynamic linking (unless you're going so far as to manually load/unload the libraries and link symbols yourself to keep from having to load them all up front, which is probably not very beneficial these days; it might be better to just link statically and avoid the fixups. :)
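That "load the library and bind symbols yourself" route is the dlopen/dlsym mechanism on Unix-likes; Python's ctypes wraps exactly that, so it makes a convenient sketch (assumes a Linux-ish system where libm is findable):

```python
# Manual runtime linking, dlopen/dlsym style, via ctypes: open a shared
# library by name, declare a symbol's signature, and call it -- no
# link-time dependency on libm at all.
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m")     # e.g. "libm.so.6" on Linux
libm = ctypes.CDLL(libm_path or "libm.so.6")  # dlopen(); fallback name is a guess

libm.cos.restype = ctypes.c_double            # bind the symbol's signature
libm.cos.argtypes = [ctypes.c_double]

result = libm.cos(0.0)
```

This is the flexibility you give up by linking statically -- though as the comment says, if nothing is actually shared between apps anyway, static linking with no load-time fixups may well be the better trade.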

(The survey wasn't limited to users of Titanium, but they did advertise it via Twitter etc.)

Your basic widgets are pretty straightforward to implement on multiple systems, but what eats up time and effort is indeed things like getting layout to feel like it fits in the system, and to integrate with native widget styles, dialogs, or UI conventions that are different. (Use a system icon there, a menu here; a nav bar at top here, submit/cancel buttons at the bottom there.)

For StatusNet Mobile, which we built with Titanium, we've had to do a lot of special-casing to get various parts of the UI looking and feeling a little more native on each system, and we've still got a number of dialogs that need more work. The majority of our UI though is in a webview, which is nicely universal. ;)

Tying into low-level platform integration can be a bit more difficult too; being able to 'share' messages out to other apps that accept the ACTION_SEND intent with text/plain data, for instance, required tossing in a low-level module to hook into the Android system code directly, which was more awkward than I'd prefer.