Thursday, August 31, 2006

Back in March I described problems compiling and installing new applications onto Linux and suggested that until this problem is solved, Linux will not take over the desktop; see On Linux. But in the last few weeks a colleague has introduced me to Ubuntu. It's a lovely distribution which feels clean, stable and nicely integrated. However, the real revelation of Ubuntu is the online package system, which means that finding and installing new applications is unbelievably straightforward. Of course this system doesn't work for every Linux application, just the (admittedly many) ones that have been placed on the Ubuntu package server, but it's clearly the way to go.

I strongly recommend Ubuntu, and commend the good people supporting this distribution who really do appear to live up to the ideals implied by the word "Ubuntu".

Thursday, June 15, 2006

The really great memes* don't just emerge by chance. They have to be thought of, discovered, invented, created; not necessarily intentionally, or purposefully, but certainly recognised as valuable, either by the meme-originator or meme-copier.

And surely many of those early world-changing memes are things that can't be half-invented or discovered bit by bit. Like how to make fire, for example.

From a meme-perspective, it's hard for us to imagine what it must have been like for early memes. At that time our ancestors were animal-smart, instinctive creatures, probably living much as we see modern higher primates: social groups of chimpanzee or gorilla. Back then development was slow, driven by gene- rather than meme-evolution. But there must have been a cusp, a point in evolutionary time when memes started to take hold and gene-meme coevolution started up. How long was that cusp? Thousands... tens of thousands of years, perhaps.

Think of that period. There must have been countless instances when one smart individual in a group hit upon something useful, but for any number of reasons that discovery died with its creator. Take fire-making as an example. Perhaps none of the other individuals in the group are smart enough to recognise the value, or utility, of fire-making. Or, worse still, they are so terrified of the fire-maker's magic that they banish or kill the unfortunate innovator. Alternatively, there may be one or two individuals who do see that this is not something to be feared, but valued. But what if they're just not smart enough to be able to mimic the actions of the fire-maker? To propagate, memes need meme-copiers just as much as meme-originators, and so the fire-making recipe is lost because no-one can copy it. Now consider the larger context. Imagine that one tribe has learned fire-making, and is able to refine and pass the technique from one generation to the next. But then another tribe, larger and stronger, wipes out the fire-maker tribe because of fear, or envy. Or they get wiped out anyway because of famine, or any number of other natural disasters.

Life was precarious then, and so it was for memes too. My point is that many memes were probably thought-of, discovered or created, only to be lost again. Then a few hundred or a few thousand years pass before they are thought-of all over again. How many times over did those early memes have to be re-invented before they finally found a foothold and became so widespread that only a major catastrophe affecting the whole population would threaten the meme?

One reason, I think, that this is hard for us to imagine (and construct models of), is that we are used to living in a time when life is easy for memes. Too easy perhaps. We are all surrounded by unbelievably expert meme-copiers. Indeed human beings have become so good at it that meme-copying is surely something that now characterises us as a species. Modern society, from a meme-perspective, is a rich and fertile substrate in which even the most inconsequential memes can thrive (like mobile phone ring-tones).

Wednesday, May 31, 2006

A few weeks ago we had a visitor to the lab who asked a seemingly straightforward question: "what is a robot?". A perfectly reasonable question, in fact, given that we are a robot lab.

Of course he got a different answer from everyone who offered a definition. No surprises there then. But that got me thinking: what is a robot? Of course the word has a well-known dictionary definition, from the Czech robota (meaning 'compulsory service'): the 'mechanical men and women' in Karel Čapek's play Rossum's Universal Robots.

In fact the OED gives a second definition: 'A machine devised to function in place of a living agent; one which acts automatically or with a minimum of external impulse'. For me, this definition is also somehow archaic, since it does not admit the possibility that a robot could have a function other than as a subservient machine. We can now contemplate robots that are not designed as servile machines, but are perhaps designed or evolved to exist because, well, they just exist.

Wikipedia gives a full definition for 'robot' which starts with the (for me) deeply flawed statement 'A robot is a mechanical device that can perform preprogrammed physical tasks'. The parts I have a problem with here are 'preprogrammed' and 'tasks'. There are many research robots that are not preprogrammed - their behaviours are learned, evolved or emergent (or some combination of those). My objection to 'tasks' is that some robots may not have tasks in any meaningful sense.

I believe we need a new definition for 21st-century robots, one that shakes off 20th-century notions of subservient machines performing menial tasks for lazy humans. A definition that instead encompasses the possibility of future robots as a form of artificial life, neither preprogrammed nor task-oriented.

Ok, so here's my attempt at a definition.

A robot is a self-contained artificial machine that is able to sense its environment and purposefully act within or upon that environment.

An important characteristic of a robot is, therefore, that its sense-action loop is closed through the environment in which it operates (the act of moving changes a robot's perception of its environment, thus giving the robot a fresh set of sense inputs). Thus, even simple robots may behave in a complex or unpredictable way, when placed in real-world environments. This is why designing robots for unsupervised operation in real-world environments is difficult.
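That closed loop is easy to sketch in a few lines of Python. The example below is a hypothetical one-dimensional robot holding station at a fixed distance from a wall; the controller gain and target are invented for illustration, not taken from any real robot:

```python
# A robot regulating its distance to a wall: each action changes the
# position, which changes the next sensor reading - so the loop is
# closed through the environment, not inside the controller.

def sense(position):
    return position  # sensed distance to the wall (metres)

def act(distance, target=1.0, gain=0.5):
    # simple proportional controller: move to reduce the error
    return gain * (distance - target)

position = 3.0
for _ in range(20):
    distance = sense(position)   # perception depends on where the robot is
    position -= act(distance)    # moving changes what it will sense next

print(round(position, 3))  # settles close to the 1.0 m target
```

Even in this trivial case the robot's behaviour is a joint product of controller and environment; put the same controller in a noisier or more complicated world and the behaviour quickly becomes hard to predict.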

A second characteristic of a robot is the degree of autonomy.

In respect of 'control' autonomy there is a spectrum, from none (i.e. a tele-operated robot) to full autonomy (no human supervision). A robot with a high level of control autonomy will require an embedded and embodied artificial intelligence so that it can choose the right actions and, perhaps, also adapt to or learn from changes in the environment or the consequences of its own actions.

Finally, and often overlooked, is 'energy' autonomy. A robot typically requires its own self-contained power source, but if sustained operation over extended periods is required then the robot will need the ability to autonomously replenish its power source.

Wednesday, March 08, 2006

I just spent a gruelling weekend installing the excellent open source Player/Stage/Gazebo robot simulation suite of programs. I know... sad or what. But Player/Stage/Gazebo really is an essential toolkit for hard-core roboticists.

Now I'm no Linux virgin. I first installed Linux on some real lab robots early in 1998. It was quite a challenge to shoehorn Linux into a 25MHz 386 processor with 4MB RAM and a first generation 80MB solid-state disk drive. We had first generation wireless LAN cards (well before the IEEE 802.11 aka WiFi specification was established), and the Linux drivers were somewhat experimental and needed a good deal of tender loving care to compile, install and coax into reliable operation. More by luck than judgement I used the excellent and highly respected Slackware distribution of Linux. Slackware's organisation into (floppy) disk sets made it very easy to install just the parts I needed. For example, the robots have no keyboard or display. Access is wireless, via telnet/ftp/http, so there is no need for X-windows or any of the usual GUI superstructure that desktop installations need. So Slackware lent itself to a lean, mean stripped-down embedded installation.

The other thing I liked (and still do) about Slackware is that it is not at the bleeding edge of Linux, but takes a very cautious and conservative approach to keeping up with new versions of the Linux kernel, libraries and so on. For this reason Slackware has a well-deserved reputation for reliability. It's an operating system you can install and forget. It was a good decision because the LinuxBots, as they then became known, have been used since in many multi-robot projects with very high reliability.

Having said all of that you may be surprised that it was only about two years ago that I switched from MS Windows to Linux on my trusty workhorse laptop. I tried with Windows, I really did. On my previous Toshiba Libretto Windows 95 was fine and reliable, but this HP laptop came with Windows 98 pre-installed. Hopeless. I migrated fairly quickly to Windows ME (even more hopeless) then to XP. It crashed inexplicably on average about once a week. I got used to that. I got used to having to worry about up-to-date virus checkers, and then windows security updates, and then spyware checkers. In retrospect it was amazing - I was nurse-maiding my computer's operating system! (Mostly because of one killer application: MS Outlook.) Finally after one crash that proved unrecoverable (FAT table corrupted) I gave up and installed Slackware.

Bliss. It boots in a quarter of the time. Gone are the inexplicable flurries of disk or network activity that happen when you've done nothing. Gone is the paranoia of worrying about viruses or spyware or security updates. Running Linux my laptop is sweeter, cooler, more responsive and, best of all, in two years it has never, yes never, crashed.

So why am I complaining?

Well the Achilles heel of Linux is that installing new software is not as straightforward as it should be. I should first explain that in Linux it's quite normal to download source code then compile and install; actually that's the easy part, since there are very simple command line scripts to automate the process. The problem is deeper. In fact there are two problems: pre-requisites and version dependencies.

Now Player, Stage and Gazebo are complex packages. Not surprisingly, therefore, they require other software (toolkits, libraries, and so on) to be installed first. These are the pre-requisites. Gazebo, for instance (which provides a 3-dimensional world, with physics modelling, in which the simulated robots run) required me to first install no fewer than five packages: the Geospatial Data Abstraction Library (GDAL), the Open Dynamics Engine (ODE), the Simplified Wrapper and Interface Generator (SWIG), the Python GUI wxPython and the OpenGL Utility Toolkit (GLUT). Phew! But wxPython is itself a complex package, with its own pre-requisites. The pre-requisites have pre-requisites!

And as if that isn't enough to contend with, when I tried to install Stage I discovered that it needs the GIMP toolkit GTK+ at version 2.4 or later. My GTK+ is only version 2.2. That's a version dependency.

These are the reasons Linux (marvellous as it is) isn't about to take over the world just yet.

What GNU/Linux needs is a distribution-independent universal installer that will analyse your existing system, figure out the pre-requisites and version dependencies for the new package you want to install (and do that recursively), then get on and do it while you take the weekend off. Maybe there's already a SourceForge project to do just that, in which case I say 'huzzah!'.
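The recursive part of the job is, at least, straightforward to sketch. Here is a toy resolver in Python; the dependency table is just my Gazebo saga written down (and is illustrative, not a complete or accurate picture of those packages), and a real installer would of course also have to fetch, build and check versions:

```python
# Hypothetical dependency metadata; a real installer would query a
# package repository for this (and for version constraints).
DEPS = {
    "gazebo": ["gdal", "ode", "swig", "wxpython", "glut"],
    "wxpython": ["gtk+"],
}

def install_order(pkg, deps, order=None):
    """Depth-first resolution: pre-requisites first (assumes no cycles)."""
    if order is None:
        order = []
    if pkg in order:
        return order          # already scheduled - don't install twice
    for dep in deps.get(pkg, []):
        install_order(dep, deps, order)   # the recursion: deps of deps
    order.append(pkg)
    return order

print(install_order("gazebo", DEPS))
# gtk+ comes out before wxPython, and gazebo itself comes last
```

This is essentially a topological sort of the dependency graph, which is exactly what the package managers of the big distributions do behind the scenes.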

But was it all worth the effort? As Keanu Reeves would say "hell yeah!". Player/Stage/Gazebo is a robot simulator of truly awesome power and versatility.

Friday, February 24, 2006

I came across the Edge website last week, whose home page declares the rather grand aim:

To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.

It appears that the Edge asks an Annual Question, "What is the answer to life, the universe, and everything", that sort of thing, and then publishes the answers by the contributing illuminati.

The 2006 question is "What is your dangerous idea?".

So it was with some excitement that I started to read the assembled responses of the great and the good. Very interesting and well worth reading but, I have to say, the ideas expressed are, er, not very dangerous. Quite dangerous, one might say, but by and large not the sort of ideas that had me rushing to hide behind the sofa.

So, I hear you say, "what's your dangerous idea?".

Ok then, here goes.

I think that Newton's interpretation of his first law of motion was wrong and that there is no such thing as a force of gravity. Let me say right away that this is not my idea: it is the result of a lifetime's work by my friend, the science philosopher Viv Pope. But I have played a part in the development of this work, so I feel justified in evangelising about it.

Recall your school physics. Newton's first law of motion states that every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it. In other words, that the 'natural' state of motion is in a straight line. Of course in an abstract sort of way this feels as if it is right. Perhaps that is why it has not been seriously challenged for the best part of 400 years (or it could be because Newton's first law has become so embedded in the way we think about the world that we simply accept it unquestioningly).

Consider an alternative first law of motion: the natural (force-free) state of motion is orbital, i.e. bodies continue to orbit unless an external force is applied. Now the Universe is full of orbital motion. From the micro-scale - electrons in orbit around nuclei - to the macro-scale - moons around planets, planets around stars, rotating galaxies, etc. If this alternative first law is true, it would mean that we don't need to invent gravity to account for orbital motion. This appeals to me, not least because it leads to a simpler and more elegant explanation (and I like Occam's Razor). It would also explain why - despite vast effort and millions of dollars worth of research - no empirical evidence (gravity waves or gravity particles) has yet been found for how gravity propagates or acts at-a-distance.

A common-sense objection to this idea is "well if there's no such thing as gravity what is it that sticks us to the surface of the earth - why don't we just float off?". The answer is (and you can show this with some pretty simple maths) that the natural (force-free) orbital radius for you, given the mass of your body, is quite a long way towards the centre of the earth from where you now sit. So there is a force that means that you weigh something; it's just not a mysterious force of gravity but the real force exerted by the thing that restrains you from orbiting freely, i.e. the ground under your feet.

This has all been worked out in a good deal of detail by Viv Pope and the mathematician Anthony Osborne, and it's called the Pope-Osborne Angular Momentum Synthesis, or POAMS.

Thursday, February 23, 2006

Software is remarkable stuff. Ever since writing my first computer program in October 1974* I have not lost that odd but exhilarating sense that writing a program is like working with pure mind stuff. Even now, over 31 years later, when I fire up Kylix** on my laptop and crack my fingers ready to start coding I still feel the excitement - the sense of engineering something out of nothing in that virtual mind-space inside the computer.

But there is an even more remarkable place that I want to talk about here, and that is the place where hardware and software meet. That place is called microcode.

Let me first describe what microcode is.

Most serious computer programming is (quite sensibly) done with high-level languages (C++, Java, etc), but those languages don't run directly on the computer. They have to be translated into machine-code, the binary 0s and 1s that actually run on the processor itself. (The symbolic version of machine code is called 'assembler' and hard-core programmers who want extreme performance out of their computers program in assembler.) The translation from the high-level language into machine-code is done by a program called a compiler and if, like me, you work within a Linux environment then your compiler will most likely be the highly respected GCC (the GNU Compiler Collection).

However, there is an even lower level form of code than machine-code, and that is microcode.

Even though a machine-code instruction is a pretty low-level thing, like 'load the number 10 into the A register', which would be written in symbolic assembler as LD A,10, and in machine-code as an unreadable binary number, it still can't be executed directly on the processor. To explain why I first need to give a short tutorial on what's going on inside the processor. Basically a microprocessor is a bit like a city where all of the specialist buildings (bank, garage, warehouse, etc) are connected together by the city streets. In a microprocessor the buildings are pieces of hardware that each do some particular job. One is a set of registers, which provide low-level working storage; another is the arithmetic logic unit (or ALU), which will perform simple arithmetic (add, subtract, AND, OR etc); yet another is an input-output port for transferring data to the outside world. In the microprocessor the city streets are called data busses. And, like a real city, data has to be moved around between, say, the ALU and the registers, by being routed. Also like a real city, data on the busses can collide, so the microprocessor designer has to take care to avoid this, otherwise data will be corrupted.

Ok, now I can get back to the microcode. Basically, each assembler instruction like LD A,10 has to be converted into a set of electrical signals (literally signals on individual wires) that will both route the data around the data busses, in the right sequence, and select which functions are to be performed by the ALU, ports, etc. These electrical signals are called microorders. Because the data takes time to get around on the data busses the sequence of microorders has to carefully take account of the time delays (which are called propagation delays) for data to get between any two places in the microprocessor. Thus, each assembler instruction has a little program of its own, a sequence of microorders (which may well have loops and branches, just like ordinary high level programs), and programming in microcode is exquisitely challenging.
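To make that concrete, here is a toy model in Python of what the microprogram for LD A,10 might look like. The signal names and the two-step sequence are invented for illustration; real microorders are literal wires, specific to each processor's internal layout:

```python
# Toy model: an assembler instruction expands into a sequence of
# microorders that route data over the bus and latch it into hardware.
registers = {"A": 0, "B": 0}
bus = 0

def microorder(signal, value=None):
    global bus
    if signal == "drive_literal_onto_bus":
        bus = value                # put the operand on the data bus
    elif signal == "latch_register":
        registers[value] = bus     # clock the bus contents into a register

# A hypothetical microprogram for 'LD A,10': two microorders, in strict
# sequence - the bus must carry the value *before* the register latches
# it (in real hardware, propagation delays dictate this timing).
for signal, value in [("drive_literal_onto_bus", 10),
                      ("latch_register", "A")]:
    microorder(signal, value)

print(registers["A"])  # 10
```

What the sketch can't show is the hard part: on real silicon each of those "signals" is a voltage on a wire, and getting the sequencing right against the propagation delays is exactly what makes microcoding so exquisitely challenging.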

Microcode really is the place where hardware and software meet.

----------------------------------------------------------------------------
* in Algol 60, on a deck of punched cards, to run on an ICL 1904 mainframe.
** which I am very sorry to see has now been discontinued by Borland.

Wednesday, February 15, 2006

Is it just me or has anyone else noticed a spate of predictions of human-equivalent or even super-human artificial intelligence (AI) in recent weeks?

For instance the article futurology facts (now there's an oxymoron if ever there was one) on the BBC world home page quoted the British Telecom 'technology timeline' including:

2020: artificial intelligence elected to parliament
2040: robots become mentally and physically superior to humans

A BT futurologist is clearly having a joke at the expense of members of parliament. Robots won't exceed humans intellectually until 2040 but it's presumably ok for a sub-human machine intelligence to be 'elected' to parliament in 2020. Hmmm.

First let me declare that I think machine intelligence equivalent or superior to human intelligence is possible (I won't go into why I think it's possible here - leave that to a future blog). However, I think the idea that this will be achieved within 35 years or so is wildly optimistic. The movie I, Robot is set in 2035; my own view is that this level of robot intelligence is unlikely until at least 2135.

So why such optimistic predictions (apart perhaps from wishful thinking)? Part of the problem I think is a common assumption that human level machine intelligence just needs an equivalent level of computational power to the human brain, and then you've cracked it. And since, as everyone knows, computers keep doubling in power roughly every two years (thanks to that nice man Gordon Moore), it doesn't take much effort to figure out that we will have computers with an equivalent level of computational power to the human brain in the near future.
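The extrapolation itself is just arithmetic. The figures below are illustrative assumptions of mine (estimates of the brain's raw processing rate vary wildly), but they show the shape of the optimists' reasoning:

```python
import math

brain_ops_per_sec = 1e16     # illustrative estimate of the brain's raw rate
desktop_2006 = 1e10          # illustrative estimate for a 2006 desktop
doubling_period_years = 2.0  # Moore's-law-style doubling

# How many doublings to close the gap, and how long that takes
years = doubling_period_years * math.log2(brain_ops_per_sec / desktop_2006)
print(round(years))  # roughly 40 years
```

So on those assumptions the raw computational power arrives on roughly the timescale BT's futurologists quote; which is precisely why the fallacy matters.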

That assumption is fallacious for all sorts of reasons, but I'll focus on just one.

It is this. Just having an abundance of computational power is not enough to give you human level artificial intelligence. Imagine a would-be medieval cathedral builder with a stockpile of the finest Italian marble, sturdy oak timbers, dedicated artisans and so on. Having the material and human resources to hand clearly does not make him into a cathedral builder - he also needs the design.

The problem is that we don't have the design for human-equivalent AI. Not even close. In my view we have only just started to scratch the surface of this most challenging of problems. Of course there are plenty of very smart people working on the problem, and from lots of different angles. The cognitive neuroscientists are by-and-large taking a top-down approach by studying real brains; the computer scientists build first-principles computational models of intelligence; and the roboticists take a bottom-up approach by building (at first) simple robots with simple brains. But it's an immensely hard problem because human brains (and bodies) are immensely complex.

Surely the really interesting question is not when we will have that design, but how. In other words, will it be by painstaking incremental development, or by a single monumental breakthrough? Will there (need to) be an Einstein of artificial intelligence? If the former then we will surely have to wait a lot longer than 35 years. If the latter then it could be tomorrow.

Perhaps a genius kid somewhere has already figured it out. Now there's a thought.

Monday, February 06, 2006

Consider that humblest of automata: the room thermostat. It has a sensor (temperature) and an actuator (boiler on/off control) and some artificial intelligence, to decide whether to switch the boiler on - if the room is getting cold, or off - if the room is getting too warm. (If the thermostat has hysteresis the 'on' temperature will be different to the 'off' temperature - but that's not important here.)

I said that the thermostat's AI 'decides' whether to switch the boiler on or off, which implies that it has free will. Of course it doesn't, because its artificial intelligence is no more than a simple rule: 'if temperature < 60 then switch boiler on; if temperature > 60 then switch boiler off', for example. So, depending on the temperature, what the thermostat decides is completely determined. With this simple deterministic rule the thermostat can't simply decide to switch the boiler off regardless of the temperature just for the hell of it.

Well all of that is true for 99.99..% of the time. But consider the situation when the temperature is poised on almost exactly the value at which the thermostat switches. The temperature is neither going up nor down but is balanced precariously at a value just a tiny fraction of a degree away from the switching value. Now what determines whether the thermostat will switch? The answer is noise. All electrical systems (actually all physical systems above absolute zero) are noisy. So, at any instant in time the noise will have the effect of adding or subtracting a tiny amount to the temperature value, either pushing it over the switching threshold, or not.

For 99.99..% of the time the thermostat is deterministic, but for the remaining 0.00..1% of the time it is stochastic: it 'decides' whether to switch the boiler on or off at random, i.e. 'just for the hell of it'.
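A few lines of Python make the point (the setpoint and noise level here are arbitrary choices, not the parameters of any real thermostat):

```python
import random

SETPOINT = 60.0

def decide(temperature, noise_sd=0.01):
    # every real sensor reading carries a little noise
    reading = temperature + random.gauss(0.0, noise_sd)
    return "on" if reading < SETPOINT else "off"

# Far from the threshold the noise never matters: fully deterministic.
print(decide(50.0), decide(70.0))  # on off

# Poised exactly on the threshold, the noise decides: a coin toss.
outcomes = {decide(SETPOINT) for _ in range(1000)}
print(outcomes)  # both 'on' and 'off' occur
```

Run it and the same function behaves as a determinist for any temperature clear of the threshold, and as a coin-tosser exactly on it.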

But, I hear you say, that's not free will. It's just like tossing a coin. Well, maybe it is. But maybe that's what free will is.

Consider now that oldest of choices. Fight or flee. Most of the time, for most animals, there is no choice. The decision is easy: the other animal is bigger, so run away; or smaller, so let's fight; or it's bigger but we're trapped in a corner, so fight anyway. Just like the thermostat, most of the time the outcome is determined by the rules and the situation, or the environment.

But occasionally (and probably somewhat more often than in the thermostat case) the choices that present themselves are perfectly evenly balanced. But the animal still has to make a choice and quickly, for the consequences of dithering are clear: dither and most likely be killed. So, how does an animal make a snap decision whether to fight or flee, with perfectly balanced choices? The answer, surely, is that the animal needs to, metaphorically speaking, toss a coin. On these rare occasions its fate is decided stochastically and brains, like thermostats, are noisy. Thus it is, I contend, neural noise that will tip the brain into making a snap decision when all else is equal - the neural equivalent of tossing a coin.

Monday, January 23, 2006

Just started reading Sue Blackmore's new book: Conversations on Consciousness. Great introduction - but I have stalled already on Q2 of the 1st conversation - Baars. He says "the primary function of the nervous system is to encode knowledge...".

Surely not.

Brains are control systems for bodies. Pretty amazing control systems of course, but control systems all the same. Bodies have sensors (senses) and actuators (muscles). In very simple animals the outputs from the senses are almost directly connected to the muscles, so that the animal always reacts reflexively to stimuli. More complex animals have more brain in between senses and muscles and so may deliberate before reacting. Of course even very complex animals still have reflexes - think of the classic reflex test on your knee.
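That layering is easy to caricature in code. The sketch below is a toy of my own, not anything from the book: the stimulus values, thresholds and action names are all invented for illustration:

```python
def reflex(stimulus):
    # hard-wired path: a strong stimulus goes straight to the muscles
    return "withdraw" if stimulus > 0.9 else None

def deliberate(stimulus, memory):
    # slower path: weigh the stimulus against past experience
    memory.append(stimulus)
    return "approach" if sum(memory) / len(memory) < 0.5 else "avoid"

def respond(stimulus, memory):
    # more brain between senses and muscles means more deliberation,
    # but the reflex layer is still there underneath
    return reflex(stimulus) or deliberate(stimulus, memory)

memory = []
print(respond(0.95, memory))  # withdraw  (reflex fires, memory untouched)
print(respond(0.10, memory))  # approach  (deliberated, memory updated)
```

The point of the caricature: knowledge (the memory) only enters on the deliberative path, and it exists in the service of control, not the other way around.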

Of course I accept entirely that complex brains do encode knowledge, probably in a multiplicity of ways, some of which we can discern - like the apparent spatial mapping of images onto the visual cortex - but many of which are not (yet) understood at all.

Sunday, January 01, 2006

Gosh this is interesting. Although an internet user since before there was an Internet*, I am completely new to blogging; a newbie blogger in fact - is there a word for that?

So, why blogging, and why now..? A number of reasons I guess.

First, a bit of vanity I suppose. I guess you have to be just a little bit vain to suppose that anyone else might read, or indeed be the slightest bit interested in your musings.

Second, as an academic and professional communicator, and someone who believes very strongly that ideas should be freely exchanged and communicated, I am interested in blogging as a medium for just that.

Third, because I think the internet is changing human culture in some deep and surprising ways. The internet is becoming a new kind of dynamic collective memory, allowing us to offload (or upload, to be more accurate) stuff that we used to either have to remember or carry around with us. A small thing, perhaps, but have you noticed that business cards have become pretty much obsolete (at least in academia)? People say "just google me". Wired telephones too: I think the one in my office doesn't get used more than once a week now, made more or less obsolete by email (and therein lies another blog entry!). Dictionaries, encyclopedias, libraries are all going the same way (but not books, interestingly). Even CDs. As I write this I am listening to internet radio (bach-radio.com), being sucked wirelessly from my broadband connection and played on my HiFi by an amazing Philips Streamium. It's weeks since I played a CD! But it's more than these things: the way that we work, play and interact has changed profoundly. For the better? Well, maybe - that's a moot point. So, this is a rather long-winded way of saying that blogging is, I think, a very interesting part of that change.

A final thought. Writing this feels strangely different to publishing web pages (which I've been doing since the web was invented). It feels much more personal, a kind of message in a virtual bottle. So, here it is, my first message cast out onto the ocean of the internet. Who knows where it will wash ashore.

*I recall sending emails to Caltech in c.1994 when you had to explicitly address the gateway between JANET and the ARPANET. It was quite a feat! I also built my research lab's first web server and hand-coded the lab's first web pages sometime in 1996. The amazing Wayback Machine doesn't quite go way back enough - but here is its earliest recorded IAS lab web page from April 1997.