
Two big thoughts strike me as a result of the literature review I have just completed for my PhD:

Linux is not the centre of the universe, in fact it is a bit of an intellectual backwater;

The UK may have played as big a role in the invention of the electronic computer as the US, but these days it is hardly even in the game in many areas of computing research.

On the first point I am in danger of sounding like Andy “Linux is obsolete” Tanenbaum – but it is certainly the case that Linux is far from the cutting edge in operating system research. If massively parallel systems do break through to the desktop it is difficult to imagine they will be running Linux (or any monolithic operating system).

In fact the first generation may well run Linux – because nobody has anything else right now – but it will be a kludge in that case.

When I was doing my MSc, which focused on a Linux-related problem, it seemed to me that we had come close to “the end of history” in operating system research – i.e., the issue now was fine-tuning the models we had settled on. The big issues had been dealt with in the late 60s, the 70s and the 80s.

Now I know different. Operating systems research is very vibrant and there are lots of new models competing for attention.

But along the way the UK dropped out of the picture. Read some papers on the development of virtual memory and it will not be long before you come across the seminal experiment conducted on EMAS – the Edinburgh University Multiple Access System – which was still in use when I was there in 1984. Now you will struggle to find any UK university – with the limited exceptions of Cambridge and York (for real-time) – making any impact (at least that’s how it seems to me).

It’s not that the Americans run the show either – German, Swiss and Italian universities are leading centres of research into systems software.

I am not sure how or why the UK slipped behind, but it feels like a mistake to me – especially as I think new hardware models are going to drive a lot of software innovation in the next decade (well, I would say that, wouldn’t I?)


But, here goes: an update of my Raspberry Pi failed about a week ago. As I then went to Paris with the family for a few days I only got round to trying to fix it this weekend.

I managed to get the thing booting again by copying some files from the distributed boot partition into the boot partition on my RasPi, but on booting the networking failed (possibly because kernel modules it needed were missing as a consequence of the failed update – I am not sure). In any case, one of the biggest weaknesses of the RasPi is its flakiness when it comes to power: a small glitch there is likely to corrupt the SD card – and that is what happened to me.

But why, oh why, oh why does fsck.ext4 insist on a manual check? After an hour (with a figure of Robbie Burns resting on the enter key) I had only got a little way through this and had to give up.

I am a grown-up. If I want to risk fsck corrupting my partition, that should be up to me – there should be an automatic option.


The market for desktop computers is in desperate trouble (and that for laptops not much healthier) – the latest sign being the decision to take Dell private.

The issue is not that we don’t need desktops and laptops anymore, but rather that we do not need new ones: while Moore’s Law continues to increase the number of transistors we can fit on silicon, we cannot drive those transistors at ever faster rates as we cannot dissipate the heat.

So instead of shelling out for a new desktop (or laptop) to match the speed of our rivals’ machines, we can soldier on with the old machines; get a smaller, low-energy device (such as a tablet – Moore’s Law won’t deliver faster devices but will deliver smaller ones of equivalent computing power or lower power consumption); or maybe buy a multicore device (but these too have their limits – bus-based designs start to eat up power as they get more processors, and the speed increase from adding an extra processor falls far short of linear).
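That sub-linear speed-up from extra processors is usually put in terms of Amdahl’s law: if only a fraction p of a program can run in parallel, n cores give a speed-up of at most 1/((1−p) + p/n). A minimal sketch (the 90% parallel fraction is an illustrative assumption, not a measured figure):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: upper bound on speed-up with `cores` processors
    when only `parallel_fraction` of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a 90%-parallel program tops out far below linear scaling:
for n in (2, 4, 8, 16, 1024):
    print(n, round(amdahl_speedup(0.9, n), 2))
# 16 cores give only a 6.4x speed-up; 1024 cores stay below 10x.
```

The serial fraction dominates quickly, which is why piling on bus-attached cores pays off less and less even before the extra power draw is counted.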

We might, of course, opt for a machine that is no more powerful but simply looks better – something Apple has profited from.

In the end this means that the economics of desktop computers is likely to shift fundamentally and as the market consolidates prices may even start rising.

There are still technological advances that will drive performance improvements – faster storage is the obvious example. But the golden age of the PC is over – indeed it probably has been for a few years now.

Microsoft’s desperation to get Windows 8 out the door and across all the possible platforms is one of the reactions to this: but right now I have to wonder if Redmond’s finest will still be with us in ten years. Win8 seems to be something of a turkey: it is not making any headway in the smartphone/tablet world, and if we do not buy new machines every 24 months, why should we shell out for a new copy of “Office”? And, of course, Linux is still nibbling away.

In the longer term, new hardware designs – such as thousands of CPUs on a “network on a chip” – could turn things upside down again (I should be researching this now and not writing this blog), but to fully exploit the power of such systems we are going to need to rethink most of our software and programming models. And it’s still not clear to me whether those sorts of machines will ever get to the desktop (as opposed to powering an ever more capable internet of things through embedded computers).


My eldest daughter got a laptop for Christmas and, of course, it came with Windows 8 pre-installed.

This evening she asked me to replace that with Linux.

Having used Windows 8 on her machine I am left wondering what is the problem with Windows 7 that it is trying to fix? Maybe all those tiles work on a touchscreen but on a bigger computing device they just get in the way.


I have sort-of abandoned my Apple MacBook Air for serious work this last week – going back to a 2008/9 Toshiba laptop (another Morgan Computers purchase) running Linux.

The Apple is a lovely device to travel with and a beautiful, if extremely expensive, device with which to browse the web, but a decade of conditioning to Linux and its command-line power and orthogonal tool set means I am much happier even with a slower machine when it comes to doing things like drawing figures with Metapost.

But having extolled the power of the command line, I am wondering whether I should build a GUI for Metapost – essentially an editor panel coupled with an EPS display panel.

Metapost users seem thin on the ground – though maybe that is because a GUI tool doesn’t exist – but would anyone who does use it care to comment?