First, let me make one thing clear. This isn't constructive criticism. This is just criticism. It's directed at software that's so wrong-headed that there's no way to make it significantly better, and everyone involved would be much better spending their time doing something else instead of trying to fix any of what I'm about to describe. It's not worth it. Sit in a park or something instead. Meet new and interesting people. Take up a hobby that doesn't involve writing shell scripts for Linux. You'll be happier. I'll be happier. Everyone wins.

Anyway. I wrote about Automatix some time ago. It died and the world became a better place. More recently it's been resurrected as something called Ultamatix. In summary, don't bother. It's crap. And dangerous. But mostly crap. Again, I'm going to utterly ignore the UI code and just concentrate on what it runs.

In other words, "Remove a bunch of packages that might have nothing to do with anything Ultamatix has installed, and don't ask the user first. Oh, and assume yes when asked whether to do anything potentially damaging". This gets called 103 times in various bits of Ultamatix.

Oh, notice the sudo in there? Ultamatix is running as root already. Despite this, there are 429 separate calls to sudo.

It turns out that ia64 is not especially good at running x86_64 binaries. Never mind, eh?

rm -rf $AXHOME/.gstreamer-0.10
gst-inspect
sudo gst-inspect

Which translates as "Delete any self-installed plugins, run gst-inspect as root in an attempt to regenerate the plugin database, really run gst-inspect as root in an attempt to regenerate the plugin database". The flaws in this are left as an exercise for the reader.

sudo apt-get --assume-yes --force-yes remove --purge

Used 111 times. Will remove the packages it installed, but also any other packages the user has installed that happen to depend on them. Without asking.

The Swiftweasel install that checks your CPU type and then has some insane number of cut and paste code chunks that differ only by the filename of the tarball it grabs. Rather than, say, using a variable and writing the code once.
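The fix being suggested is a few lines of shell: derive the tarball name from the CPU type once, then share a single download-and-install path. A sketch - the file names and URL are entirely made up, not Swiftweasel's real ones:

```shell
# Pick the tarball from the CPU type once, rather than duplicating the
# whole install block per architecture. Names and URL are placeholders.
case "$(uname -m)" in
    i686)   tarball="swiftweasel-i686.tar.gz" ;;
    x86_64) tarball="swiftweasel-x86_64.tar.gz" ;;
    *)      tarball="swiftweasel-generic.tar.gz" ;;
esac
# A single shared code path then does the actual work:
echo "would fetch http://example.com/swiftweasel/$tarball"
```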

The cutting and pasting of the same code in order to install swiftdove.

Code that installs packages differently depending on whether they happened to be in your home directory to start with or whether it had to download them for you.

What, create a directory and then immediately delete it? How is this useful in any way whatsoever?

There's almost certainly more. I got bored. The worrying thing about this is that the Ultamatix author read my criticisms of Automatix and appears to have attempted to fix all of them. The problem with this is that there's clearly a complete lack of understanding of the fundamental problem in several cases. For example, one of my criticisms of Automatix:

sudo sed -i "s/^vboxusers\(.*\):$/vboxusers\1:$AXUSER/" /etc/group

- assumes that the system isn't using some sort of user directory service.

and the Ultamatix response:

Fixed...Got rid of Virtualbox

Except exactly the same problem is present at other points in Ultamatix, as noted above. Taking a bug list and slavishly fixing or deleting all the bugs isn't helpful if you then proceed to add the same bug back in 24 other places. In that respect, it's even worse than Automatix - the author's managed to produce a huge steaming pile of shite despite having been told how to avoid doing so beforehand. He may be no newbie to programming, but if not it's a perfect example of how experience doesn't imply competence.
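For the record, the underlying bug has a standard fix: don't edit /etc/group with sed at all. Ask the system's own tools to do it, so whatever NSS backend is in use (flat files, LDAP, NIS) stays consistent. A sketch, reusing Ultamatix's $AXUSER variable:

```shell
# Add a user to a group without touching /etc/group directly. Must run as
# root; AXUSER is the variable Ultamatix uses for the invoking user.
add_user_to_group() {
    gpasswd -a "$1" "$2" || usermod -a -G "$2" "$1"
}
# Usage (not executed here): add_user_to_group "$AXUSER" vboxusers
```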

Don't install this package. Don't let anyone else install this package. If you see anyone advocating the installation of this package, call them a fool. There's absolutely no excuse whatsoever for the existence of this kind of crap.

Minor update: The above was looking at 1.8.0-4. It turns out that there's a 1.8.0-5 that's not linked off the website. There's no substantive difference, but some of the numbers may be slightly different.

The Linux desktop currently receives (certain) key events in two ways. Firstly, the currently active session will receive keycodes via X. Secondly, a subset of input events will be picked up by hal and sent via dbus. This information is available to all sessions. Which method you use to obtain input will depend on what you want to do:

If you want to receive an event like lid close even when you are not the active session, use dbus

If you only want to receive an event when you are the active session (this is the common case), just use X events
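As a concrete example of the dbus route: hal broadcasts button events as Condition signals (e.g. "ButtonPressed" with argument "lid") on each device's object path on the system bus, which any session can watch. The match rule below is from memory, so treat it as a sketch:

```shell
# Watch hal's input events from any session, active or not. The interface
# and member names here are from memory, not checked against a live hal.
rule="type='signal',interface='org.freedesktop.Hal.Device',member='Condition'"
# On a live system you would run:
#   dbus-monitor --system "$rule"
echo "watching with match rule: $rule"
```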

The ACM have chosen my article on power management from Queue last year as a shining example of such things, and republished it in Communications where you may now peruse it at your leisure. Fanmail may be sent to the usual addresses.

I'll be in Boston from the 7th to 11th of December, and New York from the 11th to 15th. I will be endeavouring not to break any bones in the process. Might actually ensure I have travel insurance this time.

I'll be presenting at LCA next January. Current plans involve spending a week in Melbourne afterwards and a few days in San Francisco on the way back.

Things I want to do:

Visit Iceland. It sounds like it might be relatively cheap soon.

Make this I²C code work.

Get dynamic power state switching on ATOM-based Radeons working. This probably involves actually plugging the card in.

The open nature of the PC wasn't inherently what brought it greater success. The open nature of the PC meant that it could spawn an ecosystem of third party hardware vendors, sure. It also meant that it could be cheaply cloned by other manufacturers, ensuring competition that drove down the price of hardware. The net result? x86 is ubiquitous, sufficiently so that even Apple use a basically standard[1] x86 platform these days. Low prices and the wide availability of software that people wanted to run bought the PC the marketplace, with Microsoft being the real winners. Apple hardware remained more expensive for years, and the compelling MacOS software was mostly limited to areas like DTP. Nobody else had any incentive to buy a Mac.

Now, let's look at the phone market. Third party hardware vendors? No real distinction between the iphone and anything else. Sure, anything remotely clever has to plug into the dock port, but developing something to work with that also gets you into the ludicrously huge ipod market. Other phone accessories are either batteries, chargers or headphones. That's really not going to be what determines market success.

Competitive cheapness? When you have a multivendor OS like Android or Windows Mobile, you might expect there to be more opportunity to compete to undercut each other, offering equivalent platforms for less cost. But that's missing something. In the same way that the home computer market has basically consolidated towards PCs, the phone market has already consolidated. Your smartphone has an ARM in, probably along with an off the shelf GSM chip and some 3D core (generally something from Imagination, though in Android's case Qualcomm seem to have come up with their own core - I haven't been able to find out if it's derived from something else). There's no realistic way to make a phone with equivalent hardware functionality and quality to the iphone and sell it for significantly less money. And if you figure out how to, Apple get to take advantage of the same price reductions in their next generation hardware. And, being Apple, they'll probably find some compelling wonderful design feature that costs them nothing extra but makes you want it more anyway. So hardware competition probably isn't going to be what determines market success.

Which leaves two things - advertising and applications. Apple are good at marketing. This is unfortunate, because I'd really rather live in a world where everyone running MacOS was running Linux instead, but we seem to suck in comparison. The good news is that Microsoft also seem to, so maybe we'll have our act together some time between now and Apple crushing us to death. So, assuming current trends continue, Apple's marketing probably isn't going to kill the iphone. Which leaves one thing: applications.

The obvious argument against the iphone's success is that, as a closed software distribution platform, it's less attractive to developers. I don't think that's true. If we look at the console market, the gp2x was hardly a PSP killer. Or a DS killer. You could possibly argue it was a Gizmondo killer, but only if you ignore the Finnish mafia. Being an open platform doesn't immediately result in you killing closed platforms. You need developers, and you need applications. Otherwise nobody's going to buy your hardware, even if it costs $10 less than an iphone and has a few extra bits of plastic. What attracts developers? An attractive development environment and a revenue stream. Android has one real thing going for it here - it's not tied to Objective C, and so there's probably a larger number of potential developers. But let's be realistic. If you're a competent developer, you can move from C++ or Java to Objective C without too much effort. And if you're an incompetent developer, you're not going to be deciding the future of a platform.

Apple have made it easy for people to write applications that share the iphone's delightful[2] UI. There's almost active encouragement to write beautiful programs that integrate well. Sure, the platform limitations bite you in weird ways (like the no background running thing), but Apple have come up with hacks to smooth most of those over. The iphone is a wonderful device to develop for. Sufficiently delightful that there was a huge developer base even before Apple had released the SDK. What does that tell you? Developers actively want to write for the iphone. In fact, they wanted to even before there was a real revenue model. Mindshare means a lot.

What are we going to see in response from Android? To begin with, uglier applications. I'm sure that'll get better over time, but right now the Android UI just isn't as well put together[3]. It's functional, even attractive. But it's not beautiful. And lowering the bar to developer involvement means the potential for more My First Phone Application. Windows Mobile and Symbian have huge numbers of applications. They're mostly dreadful lashups of functionality you'd never want and a UI that's ugly enough to make you want to stab out your eyes, coupled with a nag screen asking for a $10 donation to carry on using it assuming it hasn't crashed before it got that far. To be fair, a lot of iphone stuff isn't much better. But proportionately? Right now the Apple stuff has it. I never want to see another listing of Symbian freeware.

At the moment, Apple wins at providing compelling applications. They may be a gatekeeper between the developer and the user, but right now that's not causing too many problems. Well. It wasn't. The recent fuss about Apple dropping applications because of perceived competition with their own software is an issue. If a developer is going to spend a significant amount of time and money on an application, they want a reasonable reassurance that they're going to be able to ship it. And, right now, Apple's not giving that. It remains to be seen whether this has long term consequences, but there's some danger of Apple alienating their developer base. If those developers move to another platform, and if they create compelling software, Apple might stand some real competition. At the moment? Apple has the hardware, the OS and the applications. They have the potential to take over broad swathes of the market. But they also have the opportunity to throw it away. And that's what's going to decide the success of the iphone - a closed platform is not inherently a problem, but it gives the vendor the option of removing one of their key advantages. If Apple get through this with their developer popularity intact, I don't see the open/closed distinction as having any real-world importance at all.

The relevance to Linux? We're not going to succeed by being philosophically better. We have to be better in the real world as well. Ignoring that in favour of our social benefits doesn't result in us winning.

[2] And yes, I genuinely do think that the iphone's UI is better than anything else on the market. There's no reason someone else, including us, couldn't have got there first. But we didn't, and now everyone gets to play catch up. Shit happens.

[3] I have no idea where Apple gets its UI engineers from, but someone needs to find the source and start waving huge piles of money to pick them up first.

Not being from Oakland, I find that meth has a relatively small impact on my life. Until I get a cold, at which point I get to curse the lack of effective drugs. What's surprising is that phenylephrine does have a noticeable effect. What's even more surprising is that it has this effect about thirty seconds after I've swallowed the capsules, indicating fairly strongly that it hasn't actually hit my bloodstream in any sensible way at that point (though I've had astonishing difficulty in finding figures on how long the plastic capsules take to dissolve in the stomach). So, even though I know it's got no measurable effect beyond that of a placebo, the placebo effect still works. Damn you, psychology.

It's the X Development Summit in Edinburgh this week, so I've been hanging out with the graphics crowd. There hasn't been a huge amount of work done in the field of power management in graphics so far - Intel have framebuffer compression and there's the lvds reclocking patch I wrote (I've cleaned this up somewhat since then, but it probably wants some refactoring to avoid increasing CPU usage based on its use of damage for update notifications). That still leaves us with some fun things to look at, though.

The most obvious issue is the gpu clock. Intel's chipset implements clock gating, where unused sections of chip automatically unclock themselves. This is pleasingly transparent to the OS, and we get power savings without any complexity we have to care about. However, there's no way to control the core clock of the GPU - it's part of the northbridge and fiddling with the clocking of that would be likely to upset things. Nvidia and Radeon hardware is more interesting in this respect, since we can control the gpu clock independently of anything else. The problem is trying to do so in a reasonable way.

In an ideal universe, we can change these clock frequencies quickly and without any visual artifacts. That way it's possible to leave it in the lowest state and clock it up as load develops. There are a couple of problems with this - non-ideal hardware, and the software itself. Jerome's been testing a little on Radeon and discovered that changing the memory clock through Atom results in visual corruption. It's conceivable that this is due to some memory transaction cycles getting corrupted as the clock gets changed. If we could ensure that the reclock happens during the vertical blank interval, that's something that could potentially be avoided (of course, then we have the entertainment of working out when the vertical blank interval actually is when you have a dual-head output...). The other problem is that 3D software tends to consume as many resources as are available. Games will produce as many frames per second as possible. Demand-based clocking will simply ramp the gpu to full speed in that situation, which isn't necessarily what you want in the battery case (as the number of frames per second goes up, so does the cpu usage - even more power draw) but is probably pretty reasonable in the powered case.
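The demand-based side is easy to model: clock up as soon as load appears, but only drop a level after several consecutive idle samples, which is roughly the hysteresis a real governor would want. Everything below - the states, thresholds and load figures - is invented for illustration, and a real driver would also defer the actual register writes to the vblank interval as discussed:

```shell
# Toy model of demand-based reclocking with hysteresis. Heavy load jumps
# straight to the top state; idleness only drops one level per sample,
# and only after three quiet samples in a row.
state=low
idle=0
for load in 95 90 10 5 5 5 5; do
    if [ "$load" -gt 80 ]; then
        state=high; idle=0
    elif [ "$load" -gt 30 ]; then
        [ "$state" = low ] && state=mid; idle=0
    else
        idle=$((idle + 1))
        if [ "$idle" -ge 3 ]; then
            case "$state" in
                high) state=mid ;;
                mid)  state=low ;;
            esac
        fi
    fi
    echo "load=$load state=$state"
done
# Ends in state=low: the gpu stayed at full speed through the first couple
# of idle samples, then stepped down once the idleness looked sustained.
```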

Handwavy testing suggests that this can save a pretty significant amount of power, so it's something that I'm planning on working on. Further optimisations include things like making sure that we're not running any PLLs that aren't being used at the time (oscillators chew power), not powering up output ports when you're not outputting to them and enabling any hardware-level features that we're currently ignoring. And, ideally, doing all of this without causing the machine to hang on a regular basis.