Most of the work in my career has been developing hard real-time, embedded, and safety-relevant system designs. These have ranged from medical systems such as cardiac assist devices (e.g. pacemakers), monitoring equipment, anesthesia systems, and patient ventilators, to avionics systems such as OFPs (operational flight programs), navigation systems, and fire control, to automotive systems such as hybrid drive trains. Although I am a huge believer in model-based engineering (a good thing, since I'm on the MITRE MDE Steering Committee), I've recognized that disparate, unconnected models are a challenge because they easily fall out of sync. To this end, I've developed a UML profile for Safety Analysis, which is in use today at a number of customer sites and will be included in a future Rhapsody release.

One of the key advantages of using a profile for this, rather than a dedicated external tool, is the linkage from your safety analysis (done using FTA, FMECA, and Hazard Analysis, all within the Rhapsody UML/SysML environment) into your requirements and design models. This gives you strong traceability between these different models. Plus, you only need to work in one tool for both your analysis and your design.
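To make the traceability idea concrete, here is a minimal sketch of my own, not the profile's actual metamodel, showing how a hazard-analysis entry might be linked to requirements and design elements so that a trace query can walk from a hazard to everything that mitigates it. The class names, fields, and the example records are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str

@dataclass
class DesignElement:
    name: str
    satisfies: list[Requirement] = field(default_factory=list)

@dataclass
class Hazard:
    hazard_id: str
    description: str
    severity: str
    mitigated_by: list[Requirement] = field(default_factory=list)

def trace(hazard: Hazard, design: list[DesignElement]) -> dict:
    """Walk from a hazard to the design elements that realize its mitigating requirements."""
    req_ids = {r.req_id for r in hazard.mitigated_by}
    impl = [d.name for d in design if any(r.req_id in req_ids for r in d.satisfies)]
    return {"hazard": hazard.hazard_id, "requirements": sorted(req_ids), "design": impl}

# Hypothetical example: an over-infusion hazard traced to a flow-limiting requirement.
r1 = Requirement("REQ-042", "Pump shall limit infusion rate to the prescribed maximum.")
d1 = DesignElement("FlowController", satisfies=[r1])
h1 = Hazard("HAZ-007", "Over-infusion of medication", "Catastrophic", mitigated_by=[r1])
print(trace(h1, [d1]))
```

In the profile itself these links live as stereotyped relations in the model, so the same kind of query can be run directly against the requirements and design elements rather than against separate documents.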

Here is a series of three articles I wrote for Embedded.com on the topic.

As a result, at around 2040, humans will transcend biology and live forever.

Ray points to many different advances in different fields of technology, such as neuroscience (an area of interest of mine – more on that later), computer hardware, computer software, medicine, and nanotechnology. I should point out that Ray is one of the world’s leading inventors and has a history of successful prognostication about things such as when the Soviet Union would fall and when a computer would defeat the reigning (human) chess champion. This book, although dry in spots, is extremely compelling, largely because it points out how technology is advancing to the point where humans will be able to significantly augment themselves with technology and bypass biological limitations in performance, intelligence, and longevity. He points to the successful implantation of electronics in Parkinson’s patients’ brains to compensate for the failure of the dopaminergic neurons of the substantia nigra (although he fails to point out that this appears to help many but is not a cure for all patients). He points to the explosion in the computational power of supercomputers such as IBM’s Deep Blue. He further discusses the tremendous advances in nanotechnologies.

Ray paints a rosy picture, but I take him to task for what I believe is an overly optimistic prediction of the ability of nanotechnology to create detailed (micron-level resolution) scans of the brain from which to build a complete, simulatable brain model. The problem is not powering thousands of nanosensors small enough to cross the blood-brain barrier; glucose-driven power supplies can do that. The problem is localizing the sensors in real time (they won’t know precisely where they are). With thousands of sensors reporting local (submicron) conditions, a model can only be constructed if you know exactly where all the sensors are. Good luck with that one.
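To illustrate the localization problem with a toy example of my own (not from the book): suppose every sensor reports a perfectly accurate local reading, but we can only guess where each sensor sits. The reconstructed spatial map can then be badly wrong even though no individual reading is in error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "activity field" along a strip of tissue.
field = lambda x: np.sin(4 * np.pi * x) + 0.5 * np.cos(10 * np.pi * x)

true_pos = np.sort(rng.uniform(0, 1, 50))   # where the sensors really are
readings = field(true_pos)                   # each reading is perfectly accurate

grid = np.linspace(0, 1, 200)
truth = field(grid)

# Reconstruction with known positions: simple interpolation recovers the field well.
good = np.interp(grid, true_pos, readings)

# Reconstruction with *assumed* positions (say, evenly spaced): same readings,
# wrong locations, and a distorted map.
assumed_pos = np.linspace(0, 1, 50)
bad = np.interp(grid, assumed_pos, readings)

print("RMS error, known positions:   %.3f" % np.sqrt(np.mean((good - truth) ** 2)))
print("RMS error, guessed positions: %.3f" % np.sqrt(np.mean((bad - truth) ** 2)))
```

The point of the sketch is only that accurate readings without accurate positions do not add up to an accurate model; the real in-vivo problem is, of course, far harder than this one-dimensional caricature.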

Ultimately, this is a quibble, but an important one because of the prominence Ray places on the construction of a representative brain model capable of replicating human cognitive and autonomic functions. Back in 1980, I started my doctoral work in neurocybernetics (a field also known as “neuro-computation” in some circles) at the USD Medical School. The position held by the resident neurophysiologist was that the real problem was understanding the functionality of individual neurons; once the neuron was understood well enough, the remaining problems of neurophysiology would be simple. I felt that position was preposterous on the face of it, something that didn’t endear me to the professors in medical school. I ended up working with Dr. David Hastings, a biophysicist who was comfortable with my mathematical approach to understanding macro-level information processing in neural systems. I developed some mathematical tools for analyzing information from many neurons to quantify how information was being propagated and transformed. With this background, The Singularity is Near struck a responsive chord in my cortex. How can we understand neural processing well enough to extend it? How do we interface with it? Interestingly, advances have recently been made in the creation of artificial neurons (see this paper by Peter Fromherz of the Max Planck Institute for Biochemistry), in connecting artificial neurons to motor neurons of Hirudo medicinalis (the leech), and in connecting artificial photoreceptors enabling blind humans to see (I’m personally hoping for neural augmentation to improve my spelling).
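One standard way to quantify how information propagates between neurons (a common technique in the field, not necessarily the specific tools I built back then) is to estimate the mutual information between the binned spike counts of two cells: roughly, how much knowing one neuron's activity reduces your uncertainty about the other's. A minimal sketch, with made-up Poisson spike counts standing in for recorded data:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate mutual information (in bits) between two discretized signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()             # joint probability
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

# Hypothetical spike-count data: neuron B partially follows neuron A; neuron C is independent.
rng = np.random.default_rng(1)
a = rng.poisson(5, size=10_000)           # spike counts per time bin
b = a + rng.poisson(2, size=10_000)       # driven by A, plus noise
c = rng.poisson(5, size=10_000)           # unrelated neuron

print("I(A;B) = %.3f bits" % mutual_information(a, b))   # substantially above zero
print("I(A;C) = %.3f bits" % mutual_information(a, c))   # close to zero
```

Measures like this say nothing about *what* is being computed, but they do let you map where information flows and how much of it survives each stage of processing.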

There are many limitations on our intelligence: brain size, slow neural computation (thousands to millions of times slower than silicon transistors), and organizational structures that support pattern recognition far better than algorithmic computation. On the horizon are technological breakthroughs that will allow us to overcome all of these limitations by augmenting our slow biological systems with far faster technological ones. In addition, this technology won’t be limited by the size of the brain case, so the potential for enhancement is essentially unlimited. (I want to point out that while the organization of the brain has limitations, it also has significant benefits that we have not yet begun to realize in our software technologies. I’ll talk about that in a future blog.)

Do you want to know the really cool thing about all this? You (and I) get to make this happen. We are inventing the technologies and making the scientific advances that will power this transcendence of biology. Every time we improve an algorithm, have an epiphany about architecture, or invent a new way to compute something, we move towards the day when we can truly improve ourselves as a species. It’s heady stuff.