
Wednesday, May 20, 2009

(This post summarizes some points I made in conversation recently with an expert in reinforcement learning and AGI. These aren't necessarily original points -- I've heard similar things said before -- but I felt like writing them down somewhere in my own vernacular, and this seemed like the right place....)

Reinforcement learning, a popular paradigm for AI, economics and psychology, models intelligent agents as systems that choose their actions in such a way as to maximize their future reward. There are various ways of averaging future reward over various future time-points, but all of these implement the same basic concept.
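For concreteness, the most common such averaging scheme is exponential discounting: a reward t steps in the future is weighted by gamma^t, for some discount factor gamma between 0 and 1. A minimal sketch (the reward values here are made up purely for illustration):

```python
def discounted_return(rewards, gamma=0.9):
    """Exponentially discounted sum: r0 + gamma*r1 + gamma^2*r2 + ..."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def average_reward(rewards):
    """Simple average over a finite horizon -- another common scheme."""
    return sum(rewards) / len(rewards)

rewards = [1.0, 0.0, 5.0, 2.0]
print(discounted_return(rewards))  # weights near-term reward more heavily
print(average_reward(rewards))
```

For any gamma close to 1, an unbounded stream of large future rewards will swamp any finite near-term penalty -- which is relevant to the thought experiment below.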

I think this is a reasonable model of human behavior in some circumstances, but horrible in others.

And, in an AI context, it seems to combine particularly poorly with the capability for radical self-modification.

Reinforcement Learning and the Ultimate Orgasm

Consider for instance the case of a person who is faced with two alternatives

A: continue their human life as would normally be expected

B: push a button that will immediately kill everyone on Earth except them, but give them an eternity of ultimate trans-orgasmic bliss

Obviously, the reward will be larger for option B, according to any sensible scheme for weighting various future rewards.

For most people, there will likely be some negative reward in option B ... namely, the guilt that will be felt during the period between the decision to push the button and the pushing of the button. But, this guilt surely will not be SO negative as to outweigh the amazing positive reward of the eternal ultimate trans-orgasmic bliss to come after the button is pushed!

But the thing is, not all humans would push the button. Many would, but not all. For various reasons, such as love of their family, attachment to their own pain, whatever....

The moral of this story is: humans are not fully reward-driven. Nor are they "reward-driven plus random noise".... They have some other method of determining their behaviors, in addition to reinforcement-learning-style reward-seeking.

Reward-Seeking and Self-Modification: A Scary Combination

Now let's think about the case of a reward-driven AI system that also has the capability to modify its source code unrestrictedly -- for instance, to modify what will cause it to get the internal sensation of being rewarded.

For instance, if the system has a "reward button", we may assume that it has the capability to stimulate the internal circuitry corresponding to the pushing of the reward button.

Obviously, if this AI system has the goal of maximizing its future reward, it's likely to be driven to spend its life stimulating itself rather than bothering with anything else. Even if it started out with some other goal, it will quickly figure out that it should get rid of this goal, which does not lead to as much reward as direct self-stimulation.

All this doesn't imply that such an AI would necessarily be dangerous to us. However, it seems pretty likely that it would be. It would want to ensure itself a reliable power supply and defensibility against attacks. Toward that end, it might well decide its best course is to get rid of anyone who could possibly get in the way of its highly rewarding process of self-stimulation.

Not only would such an AI likely be dangerous to us, it would also lead to a pretty boring universe (via my current aesthetic standards, at any rate). Perhaps it would extinguish all other life in its solar system, surround itself with a really nice shield, and then proceed to self-stimulate ongoingly, figuring that exploring the rest of the universe would be expected to bring more risk than reward.

The moral of the above, to me, is that reward-seeking is an incomplete model of human motivation, and a bad principle for controlling self-modifying AI systems.

Goal-Seeking versus Reward-Seeking

Fortunately, goal-seeking is more general than reward-seeking.

Reward-seeking, of the sort that typical reinforcement-learning systems carry out, is about: Planning a course of action that is expected to lead to a future that, in the future, you will consider to be good.

Goal-seeking doesn't have to be about that. It can be about that ... but it can also be about other things, such as: Planning a course of action that is expected to lead to a future that is good according to your present standards.

Goal-seeking is different from reward-seeking because it will potentially (depending on the goal) cause a system to sometimes choose A over B even if it knows A will bring less reward than B ... because in foresight, A matches the system's current values.

Non-Reward-Based Goals for Self-Modifying AI Systems

As a rough indication of what kinds of goals one could give a self-modifying AI, that differ radically from reward-seeking, consider the case of an AI system with a goal G that is the conjunction of two factors:

1. Try to maximize the function F

2. If at any point T, you assess that your interpretation of the goal G at time T would be regarded by your self-from-time-(T-S) as a terrible thing, then roll back to your state at time T-S

I'm not advocating this as a perfect goal for a self-modifying AI. But the point I want to make is this kind of goal is something quite different from the seeking of reward. There seems no way to formulate this goal as one of reward maximization. This is a goal that involves choosing a near-future course of action to maximize a certain function over future history -- but this function is not any kind of summation or combination of future rewards.
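To make the flavor of this more concrete, here's a toy sketch in code. Everything here is invented for illustration -- in particular, the "judge" function stands in for the genuinely hard problem of asking a past self whether it would regard the current goal-interpretation as terrible:

```python
class RollbackAgent:
    """Toy sketch of the rollback goal: pursue F, but if the current
    interpretation of the goal would be judged terrible by the self
    from S steps ago, revert to that earlier self's interpretation."""

    def __init__(self, interpretation, S, judge):
        self.interpretation = interpretation
        self.S = S
        self.judge = judge  # judge(past, new) -> True if the past self finds `new` terrible
        self.history = []   # snapshots of past interpretations

    def reinterpret(self, new_interpretation):
        self.history.append(self.interpretation)
        if len(self.history) >= self.S:
            past = self.history[-self.S]
            if self.judge(past, new_interpretation):
                # Past self vetoes: roll back to it, discarding the
                # intervening history.
                self.interpretation = past
                self.history = self.history[:-self.S]
                return
        self.interpretation = new_interpretation

# Invented example: interpretations are numbers; the past self vetoes
# any interpretation that has drifted more than 2 units from its own.
drift_veto = lambda past, new: abs(new - past) > 2

agent = RollbackAgent(interpretation=0, S=2, judge=drift_veto)
agent.reinterpret(1)  # accepted
agent.reinterpret(2)  # accepted: the self from 2 steps ago (at 0) tolerates it
agent.reinterpret(5)  # vetoed by the self at 1: rolls back to 1
print(agent.interpretation)  # 1
```

The point of the sketch is purely structural: the agent's trajectory through goal-space is constrained by the verdicts of its own earlier states, which is not something expressible as a summation of future rewards.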

Limitations of the Goal-Seeking Paradigm

Coming at the issue from certain theoretical perspectives, it is easy to overestimate the degree to which human beings are goal-directed. It's not only AI theorists and engineers who have made this mistake; many psychologists have made it as well, rooting all human activity in goals like sexuality, survival, and so forth. To my mind, there is no doubt that goal-directed behavior plays a large role in human activity -- yet it also seems clear that a lot of human activity is better conceived as "self-organization based on environmental coupling" rather than as explicitly goal-directed.

It is certainly possible to engineer AI systems that are more strictly goal-driven than humans, though it's not obvious how far one can go in this direction without sacrificing a lot of intelligence -- it may be that a certain amount of non-explicitly-goal-directed self-organization is actually useful for intelligence, even if intelligence itself is conceived in terms of "the ability to achieve complex goals in complex environments" as I've advocated.

I've argued before for a distinction between the "explicit goals" and "implicit goals" of intelligent systems -- the explicit goals being what the system models itself as pursuing, and the implicit goals being what an objective, intelligent observer would conclude the system is pursuing. I've defined a "well aligned" mind as one whose explicit and implicit goals are roughly the same.

According to this definition, some humans, clearly, are better aligned than others!

Summary & Conclusion

Reward-seeking is best viewed as a special case of goal-seeking. Maximizing future reward is clearly one goal that intelligent biological systems work toward, and it's also one that has proved useful in AI and engineering so far. Thus, work within the reinforcement learning paradigm may well be relevant to designing the intelligent systems of the future.

But, to the extent that humans are goal-driven, reward-seeking doesn't summarize our goals. And, as we create artificial intelligences, there seems more hope of creating benevolent advanced AGI systems with goals going beyond (though perhaps including) reward-seeking, than with goals restricted to reward-seeking.

Crafting goals with reasonable odds of leading self-modifying AI systems toward lasting benevolence is a very hard problem ... but it's clear that systems with goals restricted to future-reward-maximization are NOT the place to look.

Wednesday, May 13, 2009

(This may seem a hackneyed topic, but there are some moderately original points near the end here, if you bear with me ...)

As a card-carrying, future-thinking transhumanist, I take it as obvious that most of the particulars of current religions are relics of earlier eras in human cultural development, which currently do a lot of harm along with doing some good.

But I still find it interesting to ask what aspects of religion reflect underlying phenomena that are essential, meaningful and necessary -- and are likely to continue as humanity transcends the traditional "human condition" and enters its next phase of development....

The basic point Stanley Fish makes, in his recent column discussing Terry Eagleton's new book, is that religion offers something science by its very nature cannot.

Eagleton acknowledges ... many terrible things have been done in religion’s name — but at least religion is trying for something more than local satisfactions, for its “subject is nothing less than the nature and destiny of humanity itself, in relation to what it takes to be its transcendent source of life.”

He notes that science cannot address what he calls "theological questions", where

By theological questions, Eagleton means questions like, “Why is there anything in the first place?”, “Why what we do have is actually intelligible to us?” and “Where do our notions of explanation, regularity and intelligibility come from?”

He also notes that the author is

... angry, I think, at having to expend so much mental and emotional energy refuting the shallow arguments of school-yard atheists like Hitchens and Dawkins.

I haven't read Eagleton's book and I'm unlikely to do so -- I have a long list of more interesting-looking reading material -- but Fish's summary did resonate with a paper I'm in the middle of writing (it's paused while I work on more urgent stuff) on the limits of science.

My basic point in that paper will be a simple one: science is based on finite sets of finite-precision observations. That is, all of scientific knowledge is based on some finite set of bits, comprising the empirical observations accepted by the scientific community.

To extrapolate beyond this bit-set, some kind of assumption is needed. To put it another way, some kind of "faith" is needed. Hume was the first one to make this point really clearly ... and we now understand the "Humean problem of induction" well enough to know it's not the kind of thing that can be "solved."

The Occam's Razor principle tries to solve it -- it says that you extrapolate from the bit-set of known data by making the simplest possible hypothesis. This leads to some nice mathematics involving algorithmic information theory and so forth. But of course, one still has to have "faith" in some measure of simplicity!
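One crude way to make the role of the simplicity measure concrete: use an ordinary compressor as a rough stand-in for algorithmic complexity, and prefer the extrapolation that keeps the overall data-set simplest. This is a toy illustration only -- zlib is standing in for the uncomputable Kolmogorov complexity, and the bit-strings are invented:

```python
import zlib

def description_length(bits: str) -> int:
    """Crude simplicity measure: compressed length in bytes.
    (zlib here is a rough stand-in for Kolmogorov complexity.)"""
    return len(zlib.compress(bits.encode()))

# The finite bit-set of "empirical observations":
observed = "01" * 50

# Two hypotheses about how the sequence continues:
continuation_a = "01" * 25                                              # keep alternating
continuation_b = "01101110010100011011001011101000110100101110001101"   # irregular

# Occam's Razor, in this crude compression-based form: prefer the
# continuation that keeps the whole (data + extrapolation) simplest.
score_a = description_length(observed + continuation_a)
score_b = description_length(observed + continuation_b)
print(score_a < score_b)  # True: the regular continuation wins
```

And the punchline of the paragraph above applies here too: nothing in the data forces the choice of zlib (or any other simplicity measure) -- that choice is itself the act of "faith."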

So: doing or using science requires, in essence, continual acts of faith (though these may be unconscious and routinized rather than conscious and explicit). To the extent that Dawkins, Hitchens or other anti-religion commentators de-emphasize this point, they're engaging in judicious marketing. (It's hard for me to feel too negative toward them about this, however, given the far more explicitly and dramatically dishonest marketing that religion has carried out over the past few millennia.)

My paper will focus on what the limits of science tell you about AI, machine consciousness and so forth -- and I'll save that for another blog post, or the paper itself. (Don't worry though, my conclusion is not that scientifically engineering AGI is impossible ... I haven't lost the faith!)

Anyway, I certainly agree with Fish and Eagleton that religion addresses very important questions that science cannot, by its nature, answer.

But I find it rather screwy that Eagleton refers to

“Why is there anything in the first place?”, “Why what we do have is actually intelligible to us?” and “Where do our notions of explanation, regularity and intelligibility come from?”

and so forth as theological questions.

Surely, these are philosophical questions.

One can answer them in various ways without invoking any deities or demons!

"Why does God exist?" is a theological question ...

"Why does anything exist?" is philosophical...

(Though, for the record, I don't think "Why does anything exist?" is a very useful philosophical question. I'm more interested in questions like

"Why do separate objects exist, instead of just one big fluid cosmic mass?"

"In what sense could the universe be considered compassionate?"

"How much ethical responsibility should I feel toward (which) other minds?"

"Why does my mind perceive such a small subset of the space of all possible patterns?"

"How much can a mind grow and expand without losing its sense of self and becoming, experientially, a 'fundamentally different being'?"

"What is it like to be a rock?"

etc.

)

Theology is one way of providing answers to philosophical questions ... but by no means the only way.

I think that religion addresses some very important questions that are beyond the scope of science -- and by and large provides these questions with extremely bad answers.

One of the many limitations of religion as conventionally conceived is indicated by the quote, given above, that religion's

“subject is nothing less than the nature and destiny of humanity itself....”

From a transhumanist perspective, the qualifier "nothing less than" is misplaced, as this is actually a very limiting subject. The nature and destiny of humanity are important; but one of the things that science has opened our minds to is the relative insignificance of humanity in the space of possible minds. I'm more interested in philosophies that address the nature and destiny of mind itself, rather than just the nature and destiny of one species on one planet.

It is of course a subtle matter to compare and judge different explanations to philosophical questions. You can't compare them using scientific or mathematical methods ... and of course the question of how to evaluate philosophical views becomes "yet another tough philosophical question", tied in with all the other ones.

A crude way to say it is that it comes down to an intuitive judgment ... which leads into questions of how one can refine and improve one's intuition ... and these questions, of course, possess numerous answers that depend on one's philosophical or religious tradition...

Science-synergetic philosophy

It does seem to me, though, that there is an interesting notion of science-synergetic philosophy lurking somewhere in all this.

Suppose we take for granted that doing science -- just like other aspects of living life -- relies on a constant stream of acts of faith, which can't be justified according to science....

One may then note that there are various systems for mentally organizing these acts of faith.

Religions are among them. But religions are quite detached from the process of doing science.

It seems sensible to think about philosophical systems -- i.e. systems for organizing inner acts of faith -- that are intrinsically synergetic with the scientific process. That is, systems for organizing acts of faith that:

1. when you follow them, help you to do science better

2. are made richer and deeper by the practice of science

One can broaden this a little and think about philosophical systems that are intrinsically synergetic with engineering and mathematics as well as science.

Now, one cannot prove scientifically that a "scientifically synergetic philosophy" is better than any other philosophy. Philosophies can't be validated or refuted scientifically.

So, the reason to choose a scientifically synergetic philosophy has to be some kind of inner intuition; some kind of taste for elegance, harmony and simplicity; or whatever.

One prediction I have for the next century is that scientifically synergetic philosophies will emerge into the popular consciousness and become richer and deeper and better articulated than they are now.

Because Fish and Eagleton are right about some things: people do need more than science ... they do need collective processes focused on the important philosophical questions that go beyond the scope of science.

But my prediction is that we are going to trend more toward philosophical systems that are synergetic with science, rather than ones that co-exist awkwardly with science.

What will these future philosophical systems be like?

There's nothing extremely new about the concept of science-synergetic philosophy, of course.

Plenty of non-religious scientists and science-friendly non-scientists have created personal philosophies that don't involve deities or other theological notions, yet do involve meaningful approaches to personally exploring the "big questions" that religions address.

Among the many philosophers to take on the task of creating comprehensive science-synergetic philosophical systems, perhaps my favorite is Charles Peirce (who also developed a nice philosophy of science, though one that IMO is significantly incomplete ... but I've discussed that elsewhere.)

Building on work by Peirce and loads of others, I tried to lay out a science-synergetic philosophical system in my book The Hidden Pattern -- but like Peirce's writings, that is a fairly academic work, not an informal tract designed to inspire the common human in their everyday life.

My friend Philippe van Nedervelde likes to talk about this sort of thing as a "TransReligion/ UNReligion", but I confess to not finding that terminology very compelling.

Philippe is interested in (among many other things!) developing vaguely religion-like rituals that coincide with some sort of science-synergetic philosophy. There has been talk about formulating a "TransReligion/ UNReligion" as an outgrowth of the futurist group now called "The Order of Cosmic Engineers." Which I think is an interesting idea ... yet I'm not really sure it's the direction things will (or should) go.

I'm not sure there will emerge any one "Bible of science-synergetic transhumanist philosophy" ... nor any science-synergetic-philosophy analogues of speaking in tongues, kneeling at the altar, or consuming the simulated blood and flesh of the Savior the Son of God who gave his life for our sins. Perhaps, science-synergetic philosophy may wind up being something that pervades human culture in more of a broad-based, implicit way.

Friday, May 08, 2009

Is the human brain, at the levels directly relevant for analysis of cognition, best modeled as a classical or quantum system?

(For instance, a baseball in some sense needs to be modeled as a quantum system -- in the sense that the way its molecules hold together can be described only using quantum not classical physics; but classical physics can be used to explain the normally relevant aspects of its macroscopic behavior. So at the levels directly relevant for analysis of a baseball game, a baseball is best modeled as a classical system. OTOH, at the levels directly relevant for analysis of the electromagnetic behavior of a Superconducting Quantum Interference Device (SQUID) -- a small but macroscopic device, used in magnetoencephalography machines, and demonstrating macroscopic quantum coherence in its magnetic field -- the SQUID is best modeled as a quantum system. Classical physics models just won't explain why the SQUID, a device you can hold and pinch between your fingers (though it only works when supercooled, which would freeze your fingers!), makes MEG machines work.)

Current brain theory indicates that for understanding its role in giving rise to the mind, the brain is most effectively modeled as a classical system (i.e. the brain is more like a baseball than a SQUID) ... but of course current brain theory could be incomplete.

(Even if the brain is a macroscopic quantum system, this of course doesn't prove that quantum dynamics are necessary for intelligence or consciousness or anything like that. Those are bigger and deeper questions, and I've argued in the past that sufficiently complex "classical" systems might need to be treated using quantum logic ... but this gets into a lot of deep issues that I don't want to digress onto here.)

Stuart Hameroff is one of the more vocal proponents of the "quantum brain" idea, and he has a new paper presenting a theory in this direction, arguing that dendro-dendritic synapses are mediated via macroscopic quantum dynamics, thus positing a quantum neural net that operates in complex coordination with the classical neural net formed by axonal-dendritic synapses.

I don't have a strong opinion on that particular theory of Hameroff's. I look forward to discussing it with him at the Toward a Science of Consciousness conference in Hong Kong next month.

But I was struck by one of the references at the end of his paper: a 2007 Nature paper that I had not noticed before, which is interesting because it gives solid evidence of macroscopic quantum coherence in a biological process.

To quote part of the abstract:

Here we extend previous two-dimensional electronic spectroscopy investigations of the FMO bacteriochlorophyll complex, and obtain direct evidence for remarkably long-lived electronic quantum coherence playing an important part in energy transfer processes within this system. The quantum coherence manifests itself in characteristic, directly observable quantum beating signals among the excitons within the Chlorobium tepidum FMO complex at 77 K. This wavelike characteristic of the energy transfer within the photosynthetic complex can explain its extreme efficiency, in that it allows the complexes to sample vast areas of phase space to find the most efficient path.

In the comments to an earlier edit of this blog post, someone pointed out this more recent paper, whose abstract reads:

The intricate biochemical processes underlying avian magnetoreception, the sensory ability of migratory birds to navigate using earth's magnetic field, have been narrowed down to spin-dependent recombination of radical-ion pairs to be found in avian species' retinal proteins. The avian magnetic field detection is governed by the interplay between magnetic interactions of the radicals' unpaired electrons and the radicals' recombination dynamics. Critical to this mechanism is the long lifetime of the radical-pair spin coherence, so that the weak geomagnetic field will have a chance to signal its presence. It is here shown that a fundamental quantum phenomenon, the quantum Zeno effect, is at the basis of the radical-ion-pair magnetoreception mechanism. The quantum Zeno effect naturally leads to long spin coherence lifetimes, without any constraints on the system's physical parameters, ensuring the robustness of this sensory mechanism. Basic experimental observations regarding avian magnetic sensitivity are seamlessly derived. These include the magnetic sensitivity functional window and the heading error of oriented bird ensembles, which so far evaded theoretical justification. The findings presented here could be highly relevant to similar mechanisms at work in photosynthetic reactions. They also trigger fundamental questions about the evolutionary mechanisms that enabled avian species to make optimal use of quantum measurement laws.

This of course is even more intriguing than the green sulphur bacteria stuff, because it has to do with perception in an intelligent macroscopic animal.

Hameroff's point in citing the paper on green sulphur bacteria (and it's a good one) seems to be: if long-lived quantum coherence can play an important role in photosynthesis, couldn't it also play a role in the brain somehow ... e.g. maybe via dendro-dendritic synaptic gap junctions?

The extrapolation from these other results to neuroscience is speculative, sure.... But this kind of result does make the possibility of quantum coherence impacting human cognition seem a bit less fanciful.

After all, I often recall that in the late 90's all the neuroscientists I talked to told me there was no neurogenesis or synaptogenesis in adult mammals. Oops. Now they've got new data and have changed their minds. My point isn't that quantum coherence is related to neurogenesis or synaptogenesis (though, who knows...), but rather that neuroscientists -- simultaneously with displaying the usual humility of biologists regarding the complexity of the systems they're studying -- have a long-standing habit of assuming the concept-set underlying their current understanding is much more adequate than it really is.

Our ignorance of the brain is why my own AI work is not based on trying to closely model the brain. Of course, it's possible that intelligence is fundamentally based on some freaky neuroquantum phenomenon, so that all digital-computer AI work is doomed by some intrinsic limitations ... but I doubt it. My own guess is that, even if the brain does involve macroscopic quantum coherence in some interesting sense, one can still make transhumanly intelligent systems using digital computers. And of course, if this doesn't work -- or if these transhumanly intelligent systems turn out to lack some crucial aspect of self-awareness as the quantum-consciousness advocates argue -- then we can always add some funky quantum computing chips into our AGI server farm!

In that article, as well as reviewing the film, I also recount some moderately interesting dialogue between me and Ray Kurzweil that occurred during the moderated discussion at the end of the film's premiere at the Tribeca Film Festival...

After that conversation with Ray I discuss at the end of the article, the discussion-moderator asked me another question (which I didn't put in the review article): he asked me what my goal was. What was I trying to achieve?

What I said was something like this: "I would like me, and any other human or nonhuman animal who wants to, to be able to increase our intelligence and wisdom gradually ... maybe, say, 37.2% per year ... so that we can transcend to higher planes of being gradually and continuously, and feel ourselves becoming gods ... appreciate the process as it unfolds."

Sunday, April 26, 2009

I thought a bit about innovative ways we might be able to communicate better with our cetacean planet-mates...

1. Teach Dolphins Lojban

A couple decades ago, efforts were made to teach dolphins simple English, without dramatic success. There were also discussions about creating some sort of species-independent interlingua, which humans and dolphins could use to communicate with each other.

It occurred to me that using Lojban for that interlingua could make sense. Potentially, one could create special Lojbanic vocabulary for the shared human/dolphin environment. Lojban grammar is simple and unambiguous, and certainly has less species-specificity than any human natural language.

Also, one could create a form of Lojban "phonology" that generally follows the sound-production patterns habitually used by dolphins, and speak to dolphins in this "Delphic Lojban" alongside the usual "human Lojban."

The biggest disadvantage of this approach is that it requires some human cetaceologists to learn Lojban.... But this cost seems worth paying, as the odds of success seem much higher than with human natural languages.

Note that there is no straightforward way to make a "phonologically Delphic" version of English. But because Lojban syntax is just a linearization of logical relationships, one could make a Delphic version of Lojban by translating those same logical relationships into sound in a wholly different way than is done in the human version of Lojban.
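To make the "Delphic Lojban" idea concrete in a toy way: since Lojban decomposes into a small fixed phoneme inventory, one could imagine mapping each phoneme to a distinct token in dolphins' preferred acoustic range -- say, a whistle pitch -- instead of to a human speech sound. Everything in the sketch below (the phoneme subset, the frequency assignments) is invented purely for illustration; a real design would need actual bioacoustics input:

```python
# Toy illustration: re-render Lojban phonemes as whistle frequencies.
# The frequency assignments are invented for this sketch, chosen
# arbitrarily within a plausible dolphin whistle range.
PHONEME_TO_HZ = {
    "c": 8000, "o": 9000, "i": 10000, "m": 11000,
    "u": 12000, "b": 13000, "a": 14000, "n": 15000,
    "j": 16000, "e": 17000,
}

def to_delphic(lojban_word: str) -> list:
    """Render a Lojban word as a sequence of whistle frequencies --
    the same logical structure, a wholly different 'phonology'."""
    return [PHONEME_TO_HZ[ch] for ch in lojban_word]

# "coi" is the Lojban greeting; "mi" means "I/me".
print(to_delphic("coi"))  # [8000, 9000, 10000]
print(to_delphic("mi"))   # [11000, 10000]
```

The structural point is that because Lojban grammar lives at the level of logical relationships rather than sounds, the same utterance can be re-rendered in a completely different acoustic medium without any change of meaning.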

2. Give Dolphins Prosthetic Hands

Inside a dolphin's flippers are bones that look like they should correspond to claws or fingers.

What if we created prosthetic fingers and thumbs for dolphins, and connected them to these bones ... and also connected them to the dolphin nervous system?

Admittedly, these modified dolphins would suffer impaired swimming ability, though one would hope this impairment could be mitigated via appropriate design. (For instance, perhaps the fingers could be made retractable, so the dolphin could retract them when it wanted to swim, and extend them when it wanted to manipulate objects.)

This would be a highly experimental adventure in Brain-Computer Interfacing. But, as BCI research advances in the context of human-enhancement applications, I see no reason why it shouldn't advance in the context of dolphin-enhancement applications in parallel.

My thinking is that much of what distinguishes human intelligence from cetacean intelligence is our focus on complex manipulation of tools, and building things (including advanced phenomena like tools that make tools, etc.). If a dolphin brain self-reorganized to adapt to its prosthetic fingers, then the dolphin would have the capability to use tools in a more humanlike way.

Since the cetaceans' evolutionary progenitors had claws of some sort, there may be some vestigial neural wiring in the dolphin brain that will ease the self-reorganization that the dolphin brain needs to go through to make use of the prosthetic fingers.

Another possibility would be to build in the capability for human operators to periodically "take over" the dolphin fingers using remote control. This would serve to show the dolphin what to do with the fingers, both on the conscious reflective level, and on the level of unconscious habituation.

Of course discussions of what to build with the fingers, and how to use tools, could be carried out using Lojban (human or Delphic) ;-D

...

Ahhh ... all the really fascinating research that would get funded if I happened to receive a billion-dollar inheritance from some long-lost uncle ;-p

This is just a brief follow-up to my last post, and a prelude to the one that will follow, which is already brewing in my brain....

In case some of y'all are wondering why I ... whose main intellectual obsession is the creation of AGI systems with general intelligence at the human level and beyond ... have suddenly started ranting about cetacean intelligence, I suppose I should be more explicit about my research-related motivation for digging into the topic....

Of course there's a personal motivation -- I love nonhuman animals ... at the moment as well as some humans I share my house with a parrot, 2 dogs and 5 bunnies; and there have been friends of a lot of other species in my life at various times.... In fact the parrot named Abaca might be classified as my best friend over the last few months ;-) ....

I've encountered wild dolphins up close in the shallows of the Indian Ocean when I lived in Western Australia 13-15 years ago, and was certainly struck by the experience, as brief and superficial as it was. I definitely wanted more (and would have sought out more if I hadn't left Western Oz to move to New York and start an AI company)

But what I want to focus on here is my intellectual motivation for, as an AGI researcher, finding cetacean intelligence important.

How often do I hear, among AGI researchers, words to the effect of "Of course we need to model our AGI systems on the human brain and mind, since after all it's the only example we have of a highly generally intelligent system."

I tend to resist this line of thinking ... I think we understand enough about the general mathematics and computer science of cognition that we can understand general intelligence in a manner going beyond the human-specific. My own AI work is an amalgam of aspects directly inspired by human intelligence, and aspects inspired by a broader understanding of intelligence.

But still, there is some point to the common observation that we only know one example of a highly generally intelligent system: the human brain.

But is it actually true?

What if the subset of cetacean intelligence researchers who believe cetaceans have general intelligence comparable to, or greater than, human intelligence ... are actually correct? (Which is my suspicion.)

Then in fact there's another example available -- and we're just not taking the trouble to study it as carefully and thoroughly as we should.

In my immediately previous blog post I gave some links into the cetacean intelligence literature, along with some speculations about what the broad nature of cetacean intelligence might be.

In my next blog post I'll discuss some cutting-edge approaches that we might take over the next couple decades to more thoroughly understand cetacean intelligence.

I'm not suggesting that resources be taken from AGI and redirected to cetacean cognitive science: I think that both areas are distressingly underfunded.

Even if we create AGI programs with superhuman general intelligence before we understand cetacean minds, I think we might still have something to learn from the minds of dolphins. Cetacean minds may embody a quite different form of intelligence from either ours or that of our AGI creations, and it's hard to tell in advance what may be learned by studying some advanced, fundamentally different incarnation of intelligence.

There could possibly be interesting implications for the study of AGI ethics here, for instance. Cetacea are certainly not optimally ethical creatures ... they're capable of violence just like most other mammals ... but based on what we can understand today, it seems their social organization may have fewer egregious ethical issues than ours. As one example, they seem to have achieved a large-scale, global social organization without warfare. (Evidence of the global nature of their social organization is tentative, but provided by observations such as the way repeated phrases in whale "song" tend to arise in one part of the globe, then spread through whales in all the world's oceans, then die out after a time, replaced by others.)

I'm certainly not suggesting that study of cetacean society will magically provide the answer to the AGI ethics problem, or the problem of generally understanding general intelligence, etc. However, I think it would be very interesting to understand how a fundamentally different sort of general intelligence works, and how it has approached the society/ethics problem, as an additional body of evidence to utilize as we shape the minds of the future.

Friday, April 24, 2009

I've been reading many of the writings of John Lilly lately, and also poring through the literature on cetacean intelligence ... and I have to say it's fascinating stuff ....

I'm fascinated by Lilly's cetacean intelligence/communication work, his isolation tank work, even his obsessive (and, apparently, excessive) experiments with ketamine injection leading to long conversations with various hallucinated (?) extraterrestrials ;-)

(I read his stuff a couple decades ago but I've been through a lot of experiences since, and I can read it with different eyes now. I remember how inspirational his book "Programming and Metaprogramming the Human Biocomputer" was for me, when I read it at age 13 or 15 or whatever.)

Anyway ... plenty of scientists by now have followed up Lilly's intuitions about the deep intelligence of dolphins and other cetaceans. A bunch of research papers by various scientists (not under the influence of ketamine ;-) are here:

but I've found no up-to-date comprehensive review book, so you really gotta read the journal literature and various books to understand what's known so far...

As of now there is no definitive scientific proof that cetaceans are extraordinarily intelligent ... though there's pretty solid proof that they're at least as clever as great apes, I would say (though different in mentality) ...

However, my qualitative impression from reviewing all the evidence is that they are, in some senses, dramatically more intelligent than great apes.

I will write something systematic on this topic at some point, when I get more time and have read the literature more thoroughly (obviously this is just a background interest for me, so my reading is going pretty slowly...)

What got me musing about this topic right now was thinking about how the naive physics of our everyday world has impacted human intelligence, and what this might mean for engineering and educating AGI.

Last month Allan Combs and I wrote a paper for the NASA CONTACT workshop, discussing how the radically different environments of extraterrestrials might impact their mind-states and varieties of intelligence:

And this is also related to a paper I wrote a couple months back, musing about how the lack of fluids, powders, fabrics and other such substances in virtual worlds may impact their utility as homes for humanlike artificial minds: http://goertzel.org/dynapsyc/2009/BlocksNBeadsWorld.pdf

(In that paper I also explored how it might be possible to enhance virtual worlds to largely remedy this shortcoming, using a special physics-engine technique I called "bead physics".)

In writing that NASA paper, I started wondering how it would impact a mind to evolve in an environment dominated by fluids rather than solids.

My speculation was that, in such a mind, notions of causing and building would be replaced by notions of flowing and shaping .... which would lead to all sorts of other differences.

All this has spurred me to some of my own entertaining speculations (synthesizing various speculations of Lilly and others) ... to wit:

... what if (as Lilly speculated) the everyday states of mind of cetaceans are more like the states of mind that humans get into while on psychedelic drugs, than they are like our everyday consciousness?

After all, these creatures are breathing deeply and rhythmically ... they're floating in liquid ... generally they're living the sort of physical life that would put humans in a deep semi-meditative state ...

What if their big neocortices are devoted essentially to collaboratively composing and improvising music for each other to listen to?

... but perhaps something more advanced and subtle than human music, reflecting intricate patterns of social interaction, and holistic observations about the state of the underwater ecosystem, and emergences between these social and ecosystem patterns...

This would be a type of intelligence not focused on building tools or solving puzzles in the humanlike sense....

As with human intelligence, the main spur for the evolution of such intelligence would be social. Once the composition/improvisation of this kind of communicative/depictive music became a critical aspect of membership in cetacean society, then there would be evolutionary force to compose/improvise more and more appealing music....

In this hypothesis, the crux of dolphin communication might not be one-to-one conversation, but rather multi-player musical improvisation, with both spatial and temporal aspects. Dyadic conversation with practical import might occur, yet have vastly less complexity and subtlety than other aspects of the musical communication...

One interesting thing about this speculation is that, if it were true, it would mean that probing cetacean intelligence using concepts and methods developed for studying human intelligence could push the researcher in badly wrong directions.

By analogy, imagine that a species whose main focus of intelligence was collaborative spatiotemporal music improvisation tried to judge and explore human intelligence. Most humans would be judged as hopelessly moronic ... and then a few gifted musicians might be viewed as moderately intelligent. Focused as they were on collaborative spatiotemporal music improvisation, the other species would miss what is really the crux of human intelligence: our dyadic linguistic communication, and our tool-building.

John Lilly wanted to probe cetacean communication with computer tech, back in the late 1970s and early 1980s. Computers are a lot better now, so someone could take a much better shot at it. But rather little research seems to be going on at the intersection of advanced AI pattern analysis and cetacean communication, at the moment. Too bad.
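To make the idea a bit more concrete: one simple form such pattern analysis could take is unsupervised clustering of acoustic features, so that a repertoire of call types emerges from the data rather than from human linguistic categories. The sketch below is purely illustrative -- the feature choices (duration, mean frequency) and the two "call types" are fabricated for the example, and real work would extract much richer features (whistle contours, click-train timing, etc.) from hydrophone recordings.

```python
# Illustrative sketch only: unsupervised grouping of vocalization features.
# The features and "call types" below are fabricated toy data.
import math
import random

random.seed(0)

def kmeans(points, k, iters=50):
    """Plain k-means: assign each point to its nearest centroid, then re-average."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(coord) / len(coord) for coord in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Toy "whistle" features: (duration in seconds, mean frequency in kHz),
# drawn from two fabricated, well-separated call types.
type_a = [(random.gauss(0.3, 0.05), random.gauss(8.0, 0.5)) for _ in range(30)]
type_b = [(random.gauss(1.2, 0.10), random.gauss(15.0, 0.8)) for _ in range(30)]

centroids, clusters = kmeans(type_a + type_b, k=2)
print(sorted(len(cl) for cl in clusters))  # sizes of the recovered groups
```

Plain k-means is just the simplest possible stand-in here; a more serious attempt would use models that capture temporal structure in the vocalization sequences, not just isolated feature vectors.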

More ambitiously, one can envision creating an AI that has both a humanlike body and a dolphin-like body, and letting it exist in both worlds.

Lilly did make a good point: we should probably take some of the $ we are spending on looking for alien lifeforms in space and devote it instead to trying to communicate with these alien intelligences that apparently exist in our oceans. If we can't even communicate with the other intelligences on our own planet, cracking the codes of the minds and languages of beings on alien planets may not be realistic yet (though, of course, there is massive uncertainty in all these domains...).

There is some inordinately silly stuff written about cetacean intelligence -- I read one book on the theme that "Jesus was a dolphin"!! And Lilly certainly complicated his message about cetacean intelligence by mixing it up with some of his other messages, for instance about extraterrestrials whom he felt he contacted while in isolation tanks and on ketamine. But all that is really beside the point. When you look at the scope of existing qualitative evidence about cetacean intelligence, the picture is striking....

Whether the speculations I've made above are on-point or not, I'm convinced there is something very interesting going on in cetacean minds and societies -- which we are not putting nearly enough effort into understanding.