In an interview with New Scientist, he explains: "Even when it feels like your computer is running all your software at the same time, it is just pretending to do that, flicking its attention very quickly between each program. Nature isn't like that. Its processes are distributed, decentralised and probabilistic. And they are fault tolerant, able to heal themselves. A computer should be able to do that."

So the pair set out to build new hardware and a new operating system capable of handling tasks differently from most current machines, which, even when they are "parallel", still deal with instructions sequentially.

The new machine couples instructions with data: each instruction-data pair specifies what to do when a certain set of data is encountered. These pairs are then distributed to multiple "systems", which are chosen at random to produce results. Each system carries its own redundant stack of instructions, so if one copy gets corrupted, the others can finish the work. And each system has its own memory and storage, so "crashes" due to memory or storage errors are eliminated.

As Prof. Bentley puts it, "The pool of systems interact in parallel, and randomly, and the result of a computation simply emerges from those interactions."
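To make the idea concrete, here is a minimal toy sketch in Python of that kind of architecture: an instruction replicated redundantly across a pool of independent systems, each with its own private data, driven by a pseudorandom scheduler so the answer emerges from the pool rather than from a single instruction stream. Everything here (the System class, the SUM_RULE instruction, the checksum-based integrity test) is an illustrative assumption, not the UCL team's actual hardware or code.

```python
import hashlib
import random

def checksum(code):
    """Detect corrupted instruction copies with a simple hash."""
    return hashlib.sha256(code.encode()).hexdigest()

# Hypothetical 'instruction' paired with each system's private data.
SUM_RULE = "result += value"
SUM_DIGEST = checksum(SUM_RULE)

class System:
    """One member of the pool: redundant instructions plus private memory."""
    def __init__(self, value, copies=3):
        self.instructions = [SUM_RULE] * copies   # redundant copies
        self.value = value                        # private memory
        self.done = False

    def step(self, shared_result):
        # Use the first instruction copy that still passes its integrity
        # check; a corrupted copy is skipped and another finishes the work.
        for code in self.instructions:
            if checksum(code) == SUM_DIGEST:
                self.done = True
                return shared_result + self.value
        return shared_result   # every copy corrupted: this system stalls

def run(values, corrupt_count=2):
    systems = [System(v) for v in values]
    # Fault injection: silently corrupt one instruction copy in a few systems.
    for s in random.sample(systems, corrupt_count):
        s.instructions[0] = "result -= value"

    result = 0
    # Pseudorandom 'scheduler': systems are picked in no particular order,
    # and the answer accumulates from their individual interactions.
    while any(not s.done for s in systems):
        s = random.choice(systems)
        if not s.done:
            result = s.step(result)
    return result

if __name__ == "__main__":
    # Sums 0..9 to 45 regardless of which instruction copies were corrupted.
    print(run(list(range(10))))
```

A real machine of this kind would distribute the accumulating result as well; the single shared variable here is only to keep the toy example readable.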

The results will be presented at an April conference in Singapore.

The team is currently working on coding the machine so that it can reprogram its own instructions in response to changes in its environment. That self-learning ability, combined with the redundant, pseudorandom nature of the system, would make it quite a bit more similar to a human brain than to a traditional computer.

Potential applications for such a system include military robotics, swarm robotics, and mission-critical servers. For example, if an unmanned aerial vehicle sustained damage or was hacked, it might be able to reprogram itself and, thanks to the redundancy, work around the resulting errors long enough to fly home.

IBM demonstrated 'uncrashable' computing with OS/2 by multi-threading the kernel into separate memory spaces. As soon as one became inconsistent it was eliminated, its memory space was recovered, and a kernel thread that passed integrity checks was reloaded. Performance was obviously low, but it eliminated the need for ECC memory and other hardware checks, since most integrity checking was done in software.

It was considered uncrashable, and many airports and mission-critical operations worldwide still run it 20 years later.

I remember running OS/2 Warp and OS/2 Merlin server in the '90s, and they were damn smooth OSes. Microsoft wouldn't license WIN32 compatibility, which killed WINOS/2 when Windows 95 and Windows NT emerged, so most non-corporate environments lost interest and IBM canned the project. It was a big F#(^-you to IBM from Microsoft, even though IBM had licensed Microsoft's operating system a decade earlier.