I have a titanium screw in my head. It is a dental implant (root-form endosseous) covered with a crown.

Note: This is a representative photo from Wikipedia, not my personal implant

Osseointegration (fusing implants with bone) is used for many things these days, such as bone-anchored hearing aids and bone-anchored leg prostheses.

This is cool, but there’s a major interface problem if you have a metal rod poking out of your skin: it’s basically a permanently open wound. Researchers have found a solution, however, inspired by deer antlers. It’s called ITAP (Intraosseous Transcutaneous Amputation Prosthesis), and it gets the skin to actually grow into the titanium.

Deer antlers go through the skin, like bone-anchored prosthetics

They do this by carefully shaping the titanium and putting lots of tiny holes in it. ITAPs are what the momentarily famous “bionic” cat Oscar received last June.

In these examples, biology is doing most of the work. Sure, the chemical properties of titanium make it compatible, but when will the artificial technology pull its weight? Where are the implants that integrate seamlessly with your body and with other implants? Where are the computer interfaces that automatically and robustly integrate with any person’s nervous system?

But there are a lot of things that don’t work well yet, such as direct neural interfaces, although there are glimmers of hope such as optical interfaces to the nervous system. And beyond medical technology, what about machines in general: why are they so inflexible and high-maintenance? It’s not just hardware, either; the software realm seems particularly far behind when it comes to “soft,” flexible interfaces.

In a recent article called “Building Critical Systems as a Cyborg,” software architect Greg Ball compares the von Neumann algorithmic approach of most conventional software to the cybernetics approach. He says:

Don’t assume those early cyberneticists would be impressed by our modern high-availability computer systems. They might even view our conventional approach to software as fatally arrogant, requiring a programmer to anticipate everything.

What if instead of fighting changes and new interactions, our software embraced them? A cybernetic approach to software would be more oriented around self-regulation, including parts that are added into the system from outside.

You might argue that regulation with feedback loops has been part of engineered systems for a long time. But we still have a lot of brittleness at the interfaces. It’s not easy to build systems out of components unless the interfaces match up perfectly, and in the software realm things are much the same. Most of our technology behaves very differently from biology in terms of interfacing, adaptation, learning and growth. Eventually we may do better than biology, but first we need to be as soft as biology. That will help us not only make machines that operate in the dynamic real world of humans, but also make devices that attach directly to humans.
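For contrast, the classic engineering version of regulation is a simple feedback loop: measure, compare against a setpoint, correct. This thermostat sketch is purely illustrative; the setpoint, gain, and linear update rule are all invented numbers:

```python
def thermostat_step(temp, setpoint, gain=0.5):
    """One pass through a proportional feedback loop:
    compute the error and apply a correction proportional to it."""
    error = setpoint - temp
    return temp + gain * error  # the "actuator" nudges temp toward the setpoint

temp = 10.0
for _ in range(20):
    temp = thermostat_step(temp, setpoint=21.0)
# temp has converged to within a hundredth of a degree of the setpoint
```

The loop self-regulates against the disturbances it was designed for; the brittleness this post worries about lives in everything the designer didn’t anticipate.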

Do We Need Fuzzy Substrates?

Computers are embedded in almost all of our devices, and most of them are digital: information at the low levels is stored as binary. Biology, in contrast, often makes use of analog systems. But does that matter? Take fuzzy logic, for example. Fuzzy logic techniques typically involve intermediate truth values between true and false; it’s a way of dealing with vagueness. But you don’t need a special computer for fuzzy logic: it’s just a program running on a digital computer like any other.
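To make that concrete, here is Zadeh-style fuzzy logic in a few lines of ordinary Python. The membership thresholds are made up; the point is that nothing here needs special hardware:

```python
def warm(t):
    """Fuzzy membership: how 'warm' is temperature t, graded from 0 to 1?
    (Arbitrary ramp between 15 and 25 degrees C.)"""
    return min(1.0, max(0.0, (t - 15.0) / 10.0))

# Zadeh's classic fuzzy connectives: AND is min, OR is max.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

print(warm(20.0))                         # 0.5 -- neither clearly cold nor warm
print(fuzzy_and(warm(20.0), warm(27.0)))  # 0.5
```

Truth becomes a matter of degree, yet the whole thing is an ordinary program on a binary substrate.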

Fuzzy logic, probability and other soft-computing approaches could go a long way to cover the role of adaptive interfaces in the computer code of a cyborg. But are adaptive layers running on digital substrates enough?

UCSD has been doing research on electronic neurons, which are built from analog computers. So unlike most computers, the substrate does not represent information with discrete values.

Joseph Ayers and his lab members at Northeastern University were at one point attempting to use these electronic neurons in biomimetic lobster robots. The electronic nervous system (ENS) would generate the behaviors of the robot, such as the pattern of signals that produces useful motion of the legs. The legs are powered by nitinol (a nickel-titanium alloy) wires, which contract and relax with temperature, causing movement.
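To give a flavor of what a rhythm-generating circuit does, here is a toy pattern generator built from a van der Pol relaxation oscillator, integrated with Euler steps. This is my own illustrative stand-in, not Ayers’s ENS, which uses electronic neuron models rather than this textbook oscillator:

```python
def simulate_cpg(mu=1.0, dt=0.01, steps=5000):
    """Van der Pol oscillator  x'' = mu*(1 - x^2)*x' - x  as a toy
    rhythm generator: x settles into a self-sustaining oscillation."""
    x, v = 0.1, 0.0  # small initial kick; the limit cycle does the rest
    trace = []
    for _ in range(steps):
        a = mu * (1.0 - x * x) * v - x  # acceleration
        x += dt * v
        v += dt * a
        trace.append(x)
    return trace

trace = simulate_cpg()
# in a leg controller, the sign of x could gate power stroke vs. return stroke
```

No schedule of leg positions is programmed anywhere; the rhythm is a stable attractor of the dynamics, which is why small disturbances get absorbed rather than accumulated.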

biomimetic lobster robot from Northeastern University

The robots already had a digital control system, so the main point of moving to the ENS was to get chaotic dynamics. As Ayers described the situation:

The present controller is inherently deterministic, i.e., the robot does what we program it to do. A biological nervous system however is self-organizing in a stimulus-dependent manner and can use neurons with chaotic dynamics to make the behavior both robust and adaptive. It is in fact this capability that differentiates robotic from biological movements and the goal of ENS-based controllers.

Besides the dynamic chaos in nervous systems, the aforementioned UCSD group also researches synchronized chaos. It sounds paradoxical, but it actually happens, and it could potentially be used for certain kinds of adaptable interfaces. For instance, synchronized chaos can achieve “asymptotic stability,” which means that two systems can quickly recover synchronization after an external disturbance knocks them out of sync.
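A minimal way to see synchronized chaos is with two logistic maps, one driving the other. This sketch is in the spirit of Pecora and Carroll’s drive-response coupling, not UCSD’s actual systems, and the coupling strength is chosen simply so that the synchronization error contracts:

```python
def f(u):
    return 4.0 * u * (1.0 - u)  # logistic map in its fully chaotic regime

def step(x, y, c=0.7):
    """Drive x evolves freely; response y is nudged toward the drive.
    c is the coupling strength: 0 = independent, 1 = exact copy."""
    return f(x), (1.0 - c) * f(y) + c * f(x)

x, y = 0.4, 0.9
for _ in range(200):
    x, y = step(x, y)
# x and y now trace the same chaotic orbit: abs(x - y) is essentially zero
```

Both trajectories remain chaotic, yet their difference shrinks geometrically. Knock y away from x and it falls back into lockstep, which is the asymptotic stability described above.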

I have given you a mere taste of soft cybernetics. Its use may well have to increase, although it is not yet clear whether we need new information substrates such as analog computers.

Some have pointed out a supposed increase in multitasking during recent decades. An overlapping issue is the increase in raw information that humans have access to. It is certainly a fascinating sociocultural change. However, humans are not capable of true multitasking. First I will describe what humans can do presently, and then I will discuss what future humans might be capable of.