
Sunday, December 20, 2009

This blog post is an edited version of a comment I made on Robin Hanson's recent China Ascendant post on the Overcoming Bias blog. So, read Robin's post before reading this one!

Also, it is best read as a sort of post-script to my recent article on the Chinese Singularity in H+ Magazine. So maybe you should read that article first too ;-)

Ultimately, it may not be so important whether the US or China or India or Europe leads the advance of science and technology during the next decades.

Certainly, if you're a Singularitarian like me, ultimately it isn't -- the Singularity is about the fate of mind and humanity, not the fate of nations ... and if/when it comes, it will quickly bring us beyond the state where national boundaries are a big deal.

But in current practical terms, the "where" question is an interesting one.... Especially, if a lot of the relevant developments are going to happen outside the Western world, this is worth knowing because it's going to affect a lot of decisions people have to make.

So, to get to the topic of Robin Hanson's blog post: China ascendant ???

My answer to that question is always: Maybe.

In his post, Robin makes the statement:

If China continues to outgrow the West, it will likely be because they do a few things very right, as did the West before.

The point I want to make here is a simple one: One of the things China is doing much better than the US, these days, is thinking medium-term and long-term rather than just short-term.

Perhaps long-range planning will be one of the "few things" China does "very right," to use Robin's language.

China is planning decades ahead, in their technology and science development, in their energy and financial policies, and many other areas as well.

Whereas in the US, we seem to be mired in a "next quarter" or "next election" mentality.

However, the matter isn't as simple as it seems...

It's interesting to observe that the American system sometimes does great mid-range planning accidentally (or, to use a more charitable word: implicitly)...

For instance, the dot-com boom seems kinda stupid in hindsight (trust me; I played my own small part in the stupidity!) ... but on closer inspection, a lot of the "wasted" venture $$ that went into the dot-com boom funded:

the build-out of Internet infrastructure of various kinds

the prototyping of technologies that later became refined and successful.

Those VCs would not have funded infrastructure buildout or technology prototyping explicitly, but they funded it accidentally, er, implicitly.

So in this case, the US system planned things 10 years in advance implicitly, without any one person explicitly trying to do so.

We can't explain the dot-com boom example by simplistic "market economics" arguments -- because on average, the investment of time and $$ in the dot-com boom wasn't worth it for the participants (and they probably weren't rational to expect that it would be worth it for them). Most of their work and $$ ultimately went to benefit people other than those who invested in the boom. But we can say that, in this case, the whole complex mess of the US economic system did implicitly perform some effective long-range planning.

Yet, this kind of implicit long-term planning has its limits, and seems to be failing in key areas like my own research area of AGI. The US is shortchanging AGI research badly compared to Europe as well as Asia, because our economic system is biased toward shortsightedness.

There are strong arguments that long-range state-driven planning and funding have benefited developing countries -- Singapore, South Korea and Brazil being some prime examples. In these cases, they supported the development of infrastructures that probably would not have emerged in a less state-centric arrangement like the one we have in the US.

So, one interesting question is whether explicit or implicit long-range planning is going to be more effective in the next decades as technology and science continue to accelerate (or, to put the question more honestly but more confusingly: what COMBINATIONS of explicit and implicit long-range planning are going to work better)?

My gut feel is that the "mainly implicit" approach isn't going to do it. I think that if the US government doesn't take a strong hand in pushing for (and funding) adventurous, advanced technology and science development, then China will pull ahead of us within the next decades. I don't trust the US "market oligarchy" system to implicitly carry out the needed long-range planning.

The reason I have this feeling is that, in one advanced, accelerating technology area after another, I see a contradiction between the best path to short-term financial profit and the best path to medium-term scientific progress. For instance,

In AI, the quest for short-term profits biases toward narrow AI, yet the best medium-term research path is to focus on AGI

In nanotech, the best medium-term research path is Drexler's path which works toward molecular assemblers, but the best path to short-term profits is to focus on weak nanotechnology like most of the venture-funded "nano" firms are doing now

In life extension, the best short-term path is to focus on remedies for aging-related diseases, but the best medium-term path is either to understand the fundamental mechanisms of aging, or to work on radical cures to aging-related damage as Aubrey de Grey has suggested

In robotics, the path to short-term profit is industrial robots or Roombas, but the path to profound medium-term progress is more flexibly capable autonomous (humanoid or wheeled) mobile robots with capable hands, sensitive skin, etc. (and note how all the good robots are made in Japan, Korea or Europe these days, with government funding)

In area after area of critical technology and science, the short-term profit focus is almost certainly going to mislead us. What is needed is the ability to take the path NOW that is going to yield the best results 1-3 decades from now. I am very uncertain whether such an ability exists in the US, and it seems more clear to me that it exists in China.

The Chinese government is trying to figure out how to combine the explicit planning of their centralized agencies with the implicit planning of the modern market ecosystem. They definitely don't have it figured out yet. But my feel is that, even if they make a lot of stupid mistakes as they feel their way into the future, their greater propensity for thinking in terms of DECADES rather than years or quarters is going to be a huge advantage for them....

China has a lot of disadvantages compared to the US, including

a less rich recent science and engineering tradition

an immature ecosystem for academic/business collaboration

a culture that sometimes discourages effective brainstorming and teamwork

a less international scientific community

an unfortunate habit of blocking parts of the Internet (which doesn't prevent Chinese researchers from getting the world's scientific knowledge, but does prevent them from participating fully in the emerging Global Brain as represented by Web 2.0 technologies like Twitter, Facebook and so forth)

However, it may be that all these disadvantages are outweighed by the one big advantage of being better at long-range planning.

As Robin points out, dramatic success is often a matter of getting just a few things VERY RIGHT.

Tuesday, December 15, 2009

Today (as a consequence of my role in the IEET), I gave a brief invited talk at the National Defense University, in Washington DC, about the ethics of autonomous robot missiles and war vehicles and "battlebots" (my word, not theirs ;-) in general....

Part of me wanted to bring a guitar and serenade the crowd (consisting perhaps 50% of uniformed officers) with "Give Peace a Chance" by John Lennon and "Masters of War" by Bob Dylan ... but due to the wisdom of my 43 years of age I resisted the urge ;-p

Anyway the world seems very different than it did in the early 1970s when I accompanied my parents on numerous anti-Vietnam-war marches. I remain generally anti-violence and anti-war, but my main political focus now is on encouraging a smooth path toward a positive Singularity. To the extent that military force may be helpful toward achieving this end it has to be considered as a potentially positive thing....

My talk didn't cover any new ground (to me); after some basic transhumanist rhetoric I discussed my notion of different varieties of ethics as corresponding to different types of memory (declarative ethics, sensorimotor ethics, procedural ethics, episodic ethics, etc.), and the need for ethical synergy among different ethics types, in parallel with cognitive synergy among different memory/cognition types. For the low-down on this see a previous blog post on the topic.

But some of the other talks and lunchroom discussions were interesting to me, as the community of military officers is rather different from the circles I usually mix in...

One of the talks before mine was a prerecorded talk (robo-talk?) on whether it's OK to make robots that decide when/if to kill people, with the basic theme of "It's complicated, but yeah, sometimes it's OK."

(A conclusion I don't particularly disagree with: to my mind, if it's OK for people to kill people in extreme circumstances, it's also OK for people to build robots to kill people in extreme circumstances. The matter is complicated, because human life and society are complicated.)

(As the hero of the great film Kung Pow said, "Killing is bad. Killing is wrong. Killing is badong!" ... but, even Einstein had to recant his radical pacifism in the face of the extraordinary harshness of human reality. Harshness that I hope soon will massively decrease as technology drastically reduces material scarcity and gives us control over our own motivational and emotional systems.)

Another talk argued that "AIs making lethal decisions" should be outlawed by international military convention, much as chemical and biological weapons and eye-blinding lasers are now outlawed.... One of the arguments for this sort of ban was that, without it, one would see an AI-based military arms race.

As I pointed out in my talk, it seems that such a ban would be essentially unenforceable.

For one thing, missiles and tanks and so forth are going to be controlled by automatic systems of one sort or another, and where the "line in the sand" is drawn between lethal decisions and other decisions, is not going to be terribly clear. If one bans a robot from making a lethal decision, but allows it to make a decision to go into a situation where making a lethal decision is the only rational choice, then what is one really accomplishing?

For another thing, even if one could figure out where to draw the "line in the sand," how would it possibly be enforced? Adversary nations are not going to open up their robot control hardware and software to each other, to allow checking of what kinds of decisions robots are making on their own without a "human in the loop." It's not an easy thing to check, unlike use of nukes or chemical or biological weapons.

I contended that just as machines will eventually be smarter than humans, if they're built correctly they'll eventually be more ethical than humans -- even according to human ethical standards. But this will require machines that approach ethics from the same multiple perspectives that humans do: not just based on rules and rational evaluation, but based on empathy, on the wisdom of anecdotal history, and so forth.

There was some understandable concern in the crowd that, if the US held back from developing intelligent battlebots, other players might pull ahead in that domain, with potentially dangerous consequences.... With this in mind, there was interest in my report on the enthusiasm, creativity and ample funding of the Chinese AI community these days. I didn't sense much military fear of China itself (China and the US are rather closely economically tied, making military conflict between them unlikely), but there seemed some fear of China distributing their advanced AI technology to other parties that might be hostile.

I had an interesting chat with a fighter pilot, who said that there are hundreds of "rules of engagement" to memorize before a flight, and they change frequently based on political changes. Since no one can really remember all those rules in real-time, there's a lot of intuition involved in making the right choices in practice.

This reminded me of a prior experience making a simulation for a military agency ... the simulated soldiers were supposed to follow numerous rules of military doctrine. But we found that when they did, they didn't act much like real soldiers -- because the real soldiers would deviate from doctrine in contextually appropriate ways.

The pilot drew the conclusion that AIs couldn't make the right judgments because doing so depends on combining and interpreting (he didn't say bending, but I bet it happens too) the rules based on context. But I'm not so sure. For one thing, an AI could remember hundreds of rules and rapidly apply them in a particular situation -- that is, it could do a better job of declarative-memory-based battle ethics than any human. In this context, humans compensate for their poor declarative memory based ethics [and in some cases transcend declarative memory based ethics altogether] with superior episodic memory based ethics (contextually appropriate judgments based on their life experiences and associated intuitions). But, potentially, an AI could combine this kind of experiential judgment with superior declarative ethical capability, thus achieving a better overall ethical functionality....
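For what it's worth, the declarative side of this is easy to mechanize. Here's a toy sketch -- all the rule contents, context tags and actions are invented by me for illustration, not drawn from any real doctrine:

```python
# Toy model of declarative-memory battle ethics: an agent stores many
# rules of engagement, each tagged with the contexts where it applies,
# and retrieves every applicable rule instantly. All content is invented.
rules = [
    {"id": 1, "context": {"night", "urban"}, "action": "hold fire"},
    {"id": 2, "context": {"urban"}, "action": "verify target first"},
    {"id": 3, "context": {"open_field"}, "action": "engagement permitted"},
]

def applicable(rules, situation):
    # A rule applies when all of its context tags hold in the situation.
    return [r for r in rules if r["context"] <= situation]

matches = applicable(rules, {"night", "urban", "civilians_present"})
print([r["id"] for r in matches])  # [1, 2]
```

Scaling this lookup to hundreds of rules is trivial for a machine; the hard part, as the pilot's experience suggests, is the episodic/intuitive layer that decides how to weigh, combine and (sometimes) bend the matching rules in context.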

One thing that was clear is that the US military is taking the diverse issues associated with battle AI very seriously ... and soliciting a variety of opinions from those all across the political spectrum ... even including out-there transhumanists like me. This sort of openness to different perspectives is certainly a good sign.

Still, I don't have a great gut feeling about superintelligent battlebots. There are scenarios where they help bring about a peaceful Singularity and promote overall human good ... but there are a lot of other scenarios as well.

My strong hope is that we can create peaceful, benevolent, superhumanly intelligent AGI before smart battlebots become widespread.

Wednesday, December 02, 2009

This interesting article presents data indicating that it takes around half a second for an unconscious visual percept to become conscious (in the human brain)...

This matches well with Libet's result that there is a half-second lag between unconsciously initiating an action and consciously knowing you're initiating an action...

(Of course, what is meant by "consciousness" here is "consciousness of the reflective, language-friendly portion of the human mind" -- but I don't want to digress onto the philosophy of consciousness just now; that's not the point of this post ... I've done that in N prior blog posts ;-)

My Chinese collaborator ChenShuo pointed out that, combined with information about the timing of neural firing, this lets us estimate how much neural processing is needed to produce conscious perception.

As I recall, the firing of a single neuron's action potential takes around 5 milliseconds ... It takes maybe another 10-20 milliseconds after that for the neuron to be able to fire again (that's the "refractory period") .... Those numbers are not exact but I'm pretty sure they're the right order of magnitude...

So, the very rough estimate is 100 cycles in the neural net before consciousness, it would seem ;)
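The arithmetic behind that estimate, using the rough figures above (order-of-magnitude assumptions, not measured constants):

```python
# Back-of-envelope: how many neural firing cycles fit into the ~0.5 s
# lag before a percept becomes conscious? Figures are rough assumptions.
lag_ms = 500         # ~half-second lag, per the article and Libet's result
spike_ms = 5         # approximate duration of one action potential
refractory_ms = 15   # mid-range of the 10-20 ms refractory period

cycles_ignoring_refractory = lag_ms / spike_ms
cycles_with_refractory = lag_ms / (spike_ms + refractory_ms)

print(cycles_ignoring_refractory)  # 100.0
print(cycles_with_refractory)      # 25.0
```

Including the refractory period pulls the figure down toward ~25 cycles, but either way the answer is tens-to-a-hundred cycles, which is the order of magnitude that matters here.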

This fits with the view of consciousness in terms of strange attractors ... 100 cycles is often enough time for a recurrent net to converge into an attractor basin ...

But of course the dynamics during those ~100 cycles is the more interesting story, and it's still obscure....

Is it really an attractor we have here, or "just" a nicely patterned transient? A terminal attractor a la Mikhail Zak's work, perhaps? Etc.

Monday, November 16, 2009

I usually reserve this blog for speculations on intellectual topics, but last night I had a dream that seemed sufficiently interesting to post here. So, here goes ;-) ....

In this dream, I moved to a strange foreign nation, and met a beautiful girl there whose ex-boyfriend was making her life very difficult, yet who she was still somehow attached to....

His martial arts expertise alarmed me, and so together with the mother of a friend who lived in this same strange place -- a very short, hunchbacked old lady who walked with a cane and wore a funny straw hat -- I went to a weird old-fashioned section of the city, where we did two things.

First, we paid some old white-bearded "witch doctor" to cast a magical spell on the ex-boyfriend, which caused him to forget having ever known the girl, haha.

Then, we went to a strange store full of ancient relics, and bought this cylindrical wooden container, which I was supposed to keep in my bedroom for good luck, but not to open.

The girl and I walked along the beach and the ex-boyfriend walked right past and showed no sign of recognizing her. This freaked her out a bit, and she asked me to have the spell undone on Dec. 21 2012.

Then I went back to my house, which I suddenly shared with the girl, and of course I had to open the wooden cylinder. She kept telling me not to, but I had to anyway. I opened one end of it, prying it open with a screwdriver, and inside the small cylinder was an infinite space -- a whole multiverse of possibilities.

She just kept staring inside it, looking intent but not saying anything. I asked if she wanted me to close it; but she shook her head no. There were millions of these little intelligent creatures in there, which could see our (and everything's) past and future.... Clearly she was absorbing a lot of knowledge from them ... and so was I ... but it was also clear that we were absorbing somewhat different things.

Then, we looked at each other and, without words, asked each other if we should dive into one of those universes or stay in this one. It was clear that in those universes we could still exist as individuals (and could still be with each other); but would exist in radically different form (some form not constrained by time, though there were other constraints not comprehensible in human terms).

Gradually, we collectively realized that we did not feel like entering that other multiverse at that particular time.

Then, she gave me a look that meant something like: "I will never be afraid of anything relating to human society anymore, nor be afraid of my own emotions, because I can see that this whole world of you and me and humanity and Earth is just a sort of artistic construction, which exists for aesthetic purposes. We have chosen to remain in this universe so as to remain part of this artwork ... "

... and then her unspoken thought faded out before it was done, because someone was in the house walking around and we got distracted by wondering who it was...

... and then I woke up because of the noise of my dad walking around downstairs in my house (he was visiting last night)

... and I tried to fall back asleep so as to re-enter the dream, but failed ...

Thursday, October 15, 2009

I've pointed out before (and it's not my original observation) that no branch of modern science contains a notion of "cause" more than vaguely similar to the folk psychology notion -- causation, as we commonsensically understand it, is something that we humans introduce to help us understand the world; and most directly, to help us figure out what to do....

[David Orban, in a comment on an earlier version of this post, noted that some formulations of relativity theory contain the term and concept of "causation." But causation as used in that context is really just "influence" -- the restrictions on light cones and so forth tell you which events can influence which other ones, but don't tell you how to distinguish which events are causal of which other ones in the stronger, commonsense usage of the "causation" concept.]

Cause, in our everyday intuitive world-view, is tied to will: "A causes B" means "I analogically intuit that if I were able to choose to make A happen, then B would happen as a result of my choice."

And, I think cause is also tied to storytelling. Causal ascription is basically a form of storytelling.

Think about the archetypal three-act story structure -- the Setup, the Confrontation, and the End:

If we envision a typical causal ascription as fitting into this structure, we have:

The Setup is the situation in which the causation occurs, the set of "enabling conditions." For instance, we rarely would say that oxygen is the cause of us being alive -- oxygen is considered an "enabling condition" rather than a cause of our life ... it's part of the set-up....

The Confrontation is the introduction of something unusual into the Setup. This must be something that is not always there in the Setup, otherwise one wouldn't be able to isolate it as the cause of some particular events. It's not necessarily a violent confrontation, but it's a violation of the norm. Could be someone shooting a gun, could be a couple having sex, could be a finger pushing down on a computer keyboard. The less expectable and frequent it is, the better -- i.e. the more convincing it will be as a potential cause of some event.

The End is the event being caused.

My suggestion is that, if one digs into the matter deeply, one will find that many of the same patterns distinguishing compelling stories from bad ones, also distinguish convincing causal ascriptions from unconvincing ones.

What Would Aristotle Say in a Situation Like This?

The Aristotelian distinction between efficient and final cause is also relevant here.

"Efficient cause" is what we usually think about as causation these days: roughly, A causes B if

P(B|A,Setup) > P(B|Setup)

and there is some "plausible causal mechanism" (i.e. some convincing story) connecting A and B in the context of the Setup.
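The probability-raising half of that condition is easy to check on data. A minimal sketch with made-up event counts (the numbers are purely illustrative):

```python
# Check whether A raises the probability of B within a fixed Setup,
# from a toy log of (A occurred, B occurred) observations. Data invented.
events = ([(True, True)] * 30 + [(True, False)] * 10 +
          [(False, True)] * 5 + [(False, False)] * 55)

p_b = sum(b for _, b in events) / len(events)    # P(B | Setup)
with_a = [b for a, b in events if a]
p_b_given_a = sum(with_a) / len(with_a)          # P(B | A, Setup)

print(p_b, p_b_given_a)   # 0.35 0.75
print(p_b_given_a > p_b)  # True: A is a *candidate* efficient cause of B
```

Of course, the inequality alone is just probability-raising; the "plausible causal mechanism" clause -- the convincing story -- is the part that resists this sort of formalization.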

"Final cause" is telos, teleology -- A causes B if B, as a goal, somehow reaches back in time and makes A happen as part of the inevitable movement toward B's occurrence.

Modern physics theories have no place for final causes in the Aristotelian sense. But, human psychology does! Very often, when a human seeks a cause for something, what they're doing is starting with some event they've observed and trying to find a "good reason" for it.

Why did I fall in love with her? It must have been because she was beautiful ... or smart ... or rich ... or whatever.

Why did my business succeed? It must have been because I was smart ... or because it was the right time ... etc.

Storytelling generally mixes up efficient and final causation in complex ways. Many stories give a feeling of inevitability -- final causation -- by the end. And when postmodernist stories avoid giving this feeling, it's generally done intentionally, with a view toward violating the known psychological norm and doing something disconcerting or shocking.

Convincing causal ascriptions, like compelling stories, tend to mix up efficient and final causation.

Cause and Will

Nietzsche wrote that (paraphrasing) "free will is like the army commander who takes responsibility, after the fact, for the actions of his troops."

Experiments by Gazzaniga, Libet and other neuroscientists have validated that in many cases the reflective, willing portion of the human brain-mind "decides" to do something only well after some other part of the brain has actually already started to do it.

This fits in fine with the notion of causal ascription as storytelling. Willing is a matter of making up a story about how one came to do something. It had better be a compelling story or the illusion of free will will fall apart, which is bad for the maintenance of the self-model!

Causation and Storytelling in Neuroscience, AGI and Early Cognitive Development

One concrete hypothesis that comes out of this train of thought is that, when the neural foundations of causal ascription and storytelling are unravelled, it will turn out that the two share a large number of structural and dynamical mechanisms.

Another hypothesis is that, if we want our AGI systems to be able to ascribe causes in humanlike ways, we should teach our AGI systems to tell and understand stories in a humanlike way.

I strongly suspect that one of the major roles that storytelling plays in human childhood, is to teach children patterns of narrative structure that they will use throughout their lives in constructing causal ascriptions (along with many other kinds of stories).

...

Ahh ... I would love to improve this blog post with a bunch of concrete examples but that will need to wait for later ... I'm tired and need to wake up early in the AM ... at least "I" think that is the cause of me not wanting to improve it right now ;-D ...

Wednesday, September 09, 2009

Earlier this year I gave a talk at Yale University titled "Ethical Issues Related to Advanced Artificial General Intelligence (A Few Small Worries)" ...

It was a talk focused on verbal discussion rather than on PowerPoint, but I did show a few slides (though I mostly ignored them during the talk): anyway the brief ugly slideshow is here for your amusement...

The most innovative point made during the talk was a connection between the multiple types of memory and multiple types of ethical knowledge and understanding.

I showed this diagram of different types of memory and the cognitive processes associated with them (click the picture to see a bigger, more legible version)

and then I showed this diagram

which associates different types of ethical intuition with different types of memory.

To wit:

Episodic memory corresponds to the process of ethically assessing a situation based on similar prior situations

Procedural memory corresponds to "ethical habit" ... learning by imitation and reinforcement to do what is right, even when the reasons aren't well articulated or understood

Attentional memory corresponds to the existence of appropriate patterns guiding one to pay adequate attention to ethical considerations at appropriate times

I presented the concept that an ethically mature person should balance all these kinds of ethics.

This notion ties in with a paper that Stephan Bugaj and I delivered at AGI-08, called Stages of Ethical Development in Artificial General Intelligence Systems. In this paper we discussed, among other topics, Kohlberg's theory of logical ethical judgment and Gilligan's theory of empathic ethical judgment. In the present terms, I'd say Kohlberg's theory is declarative-memory focused whereas Gilligan's theory is focused on episodic and sensorimotor memory. We concluded there that to pass to the "mature" stage of ethical development, a deep and rich integration of the logical and empathic approaches to ethics is required.

The present ideas suggest a modification to this idea: to pass to the mature stage of ethical development, a deep and rich integration of the ethical approaches associated with the five main types of memory systems is required.

Tuesday, September 08, 2009

Any sensible person with the choice of going to the polls to vote has almost surely asked themselves “Why should I bother voting when it’s incredibly unlikely my vote will make any difference, given the large number of people voting?”

[Note for non-US readers: in the US, unlike some countries, there is no legal requirement to vote; it's an option.]

I've discussed this issue with dozens of people and have never really heard any sensible answers.

I say "If I stay home and work or play Parcheesi instead of voting, then the election will proceed exactly the same way as if I had voted. The odds of me affecting this election are incredibly tiny."

They say: "Yeah, but if EVERYBODY thought that way, then democracy couldn't work." ... as if this were a counterargument.

Or: "Yeah, but if everyone intelligent enough to have that train of thought followed it and avoided voting, then only stupid people would vote and we'd have a government elected by the retarded.... Oh, wait ... would that be any different than what we actually have now?"

I've thought about this a lot, off and on, over the years, and finally I think I've come up with an interesting, novel answer to the question. To have a handy label, I'll call it the "multiversal answer."

This is a somewhat philosophically complex answer, which requires a deviation from our ordinary ways of thinking about the relationship between ourselves and the universe.

I'll run through some details and probability calculations, and then get back to philosophy and free will and such at the end.

Rational Agents

The multiversal answer pertains to agents who make choices based on expected utility maximization. That is, it pertains to agents who, given a choice between two actions, will choose the one such that, after the choice is made, the agent's expected utility is highest. Or, to put it informally, it pertains to agents who follow the rule: “Choose the option that, in hindsight, you will wish you had chosen.”

Of course people don't always follow this sort of rule in determining their actions; people are complex dynamical systems and don't follow any simple rules. But, my point is to argue why voting might make sense for an agent following a simple rational decision-making procedure. I.e.: why voting might be a reasonable behavior even though, in the sense indicated above, the odds of your vote being decisive in an election are minimal.
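In code, the rule is just an argmax over expected utilities. The probabilities and utilities below are illustrative assumptions on my part, not claims about real elections:

```python
# "Choose the option that, in hindsight, you will wish you had chosen":
# pick the action whose expected utility over possible worlds is highest.
def expected_utility(action, outcomes):
    # outcomes[action]: list of (probability of world given action, utility)
    return sum(p * u for p, u in outcomes[action])

outcomes = {
    "vote":      [(0.7, 10), (0.3, 2)],  # voting shifts weight toward worlds
    "stay_home": [(0.3, 10), (0.7, 2)],  # where people like you vote
}

best = max(outcomes, key=lambda a: expected_utility(a, outcomes))
print(best)  # vote
```

The interesting modeling choice, and the crux of the argument below, is that the world-probabilities are conditioned on your own action -- choosing an action changes your estimate of which world you're in.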

Vote So That You'll Live in a Universe Where People Like You Vote

The conceptual basis of the multiversal answer is the principle that “you should vote because, if you vote, this means that after you’ve voted, you’ll know that you probably live in a universe where people similar to you vote.

On the other hand, if you don’t vote, this means you probably live in a universe where people similar to you don’t vote.”

Clearly, you would rather live in a universe where people similar to you vote.

(Yes, this could be formalized based on the degrees to which individuals with varying degrees of similarity to you vote. But we won’t worry about the math details for now.)

Your vote may not count much on its own, but it’s a bad thing if everyone similar to you (with the same preferences as you) doesn’t vote.

Note that it’s not good enough to intend to vote but then back out at the last minute. After all, if you do that, then probably everyone similar to you is going to do the same thing! So if you do that, it means you’re in a universe where people similar to you are likely to almost vote, rather than a universe where people similar to you are likely to vote.

Possible Worlds

Underlying this answer is a “possible worlds” philosophy, holding that there are many possible universes we could live in -- and we don’t know exactly which one we do live in, based on the limited data at our disposal.

So, given a predicate P like “the degree to which people similar to me vote,” we can estimate the truth value of P by a weighted average of the product

(degree that P holds in possible world W) * (probability that world W is the one I live in)

Basically, the relevant condition is that the effect of me voting on the probability that I live in a good world is almost entirely contained in the effect of people like me voting on this probability. And this seems quite sensible.

So, if this condition holds, then voting increases the odds of being in a good world, so it makes some sense to vote to increase the odds of being in a good world.

There’s still a quantitative calculation to make, though. Voting has some cost, so one needs to estimate whether the increase in the expected goodness of {the world one estimates oneself to live in}, induced by voting, outweighs the cost of voting. This devolves into a bunch of algebra that I don’t feel like doing right now. But note that it’s a totally different calculation than the calculation as to whether one’s individual vote makes any difference in a particular election.
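The weighted-average estimate above can be sketched directly; all the world-probabilities and "degrees" here are invented for illustration:

```python
# Estimate the degree to which "people similar to me vote" as a
# probability-weighted average over possible worlds (numbers invented).
worlds = [
    # (probability I live in this world, degree to which people like me vote)
    (0.5, 0.9),
    (0.3, 0.5),
    (0.2, 0.1),
]

estimate = sum(p * degree for p, degree in worlds)
print(round(estimate, 2))  # 0.62
```

Observing yourself vote then amounts to shifting probability mass toward the high-degree worlds and recomputing this average -- which is exactly the sense in which voting raises your estimate of living in a world where people like you vote.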

Free Will

Underlying the above perspective is an attitude toward "free will" which is different from the one conventional in the modern Western mindset.

In the conventional interpretation of "free will", a person can choose whether to vote or not, and this doesn't impact their estimate of what kind of universe they live in -- it's an independent, free choice.

In the interpretation used in the multiversal answer to the voting problem, a person can (in a sense) choose what to do, but then when they study their choices in hindsight, they can infer from the pattern of their choices something about the universe they live in.

Combining this with the "expectation maximization" approach, which says you should make the choices that you'll be happiest with in hindsight (after the choice is made) ... one comes up with the principle that you should make the choice that, in hindsight, will yield the most desirable implications about what kind of universe you live in.

And it's according to this principle that, in the multiversal answer, voting may be a sensible choice regardless of the small chance that your particular vote impacts the election.

The point is that, after voting, the fact that you voted will give you evidence that you live in a nice universe, where people like you vote and therefore things tend to go in a favorable way for you.

On the other hand, if you don't vote, then afterwards the fact that you didn't vote will give you evidence that you live in a universe where people like you don't vote, and therefore things tend to go against you.

So, I think the decision to not vote because your vote is very unlikely to impact the election, is based partly on a naive folk theory of "free will." In a more mature view of will and its relation to the universe, the decision to vote or not isn't exactly a "free and independent decision" ... but there is rationality in making the "not quite free or independent decision" to vote.

(Perhaps this is related to the intuition people have when they say things like "If everyone thought that way, then no one would vote." Statements like this may reflect some intuition about what it means to live in a good branch of the multiverse, which however conflicts with modern Western folk psychology intuition about free will.)

Sunday, July 26, 2009

One of the reasons I spent 4 weeks in China this summer, organizing the First AGI Summer School and collaborating on research with my friend Hugo de Garis at his Artificial Brain Lab at Xiamen University, was to investigate, qualitatively, Hugo's stories about the great potential for AGI R&D that allegedly exists in China.

One thing I learned about China is: the answer to almost any nontrivial question is some complex, multidimensional form of "maybe" or "sort of."

(Eventually the maybes and sort-ofs must collapse: Chinese do make hiring and firing decisions, get married, publish papers, and take other definitive actions ... but on the whole I found that in China there is a much greater willingness to embrace uncertainty than in the US, and a much smaller desire to make things clear and definite.)

In this spirit, I can't claim I came to a definite conclusion about the potential of China to lead the world in advanced AGI R&D.

But I can say that it's a definite possibility ;-) ....

Though the situation is complex, my gut feel is that Hugo is probably right, at least in the following sense: If a moderate number of AGI researchers (from the West and China both) apply their energy to pursuing the R&D opportunities that China offers, there is a strong potential that AGI research could advance there much faster than in other parts of the world.

Here are some relevant facts, militating in favor of China's role in AGI technology:

Even in the current dismal world economy, China's economy is still growing

Chinese students and researchers are willing to work long hours (of course some Americans and Europeans are too ... but my impression is that this willingness is greater in China)

The Chinese education system is very good at teaching advanced mathematics and algorithm theory, which are important for AGI

China is interested in pursuing advanced technologies -- both with practical applications in mind, and with a nationalistic motive of displaying their technological strength relative to other nations

Unlike the US, the Chinese research funding establishment has no "chip on its shoulder" about AI or AGI -- it has the same status as any other advanced technology. There was never an "AI winter" in China, nor is there dramatically more skepticism about AI than about other computer technologies

Due to the centralized system of government, if the central administration decides they value a certain technology, there is the potential for a massive amount of resources to be directed to that technology in a relatively rapid time-frame. Things often move very slowly in China, but sometimes they can also move much faster than in less centrally organized economies.

The cost of highly educated labor is low in China, so that if funding from outside China is found to help support China-based projects, this funding can go a long way!

Here are some other relevant facts, which are challenges China has to overcome if it wants to lead the world in AGI:

Because China lacks a large, robust, cutting-edge software industry, there isn't that much "cultural knowledge" about how to manage complex software projects using "agile" methodologies. Yet, AGI is a complex software project that really demands an agile methodology. (Note that China has a load of great software engineers; the issue I'm pointing out regards software project management, not software engineering.)

Due to the nature of Chinese culture, it is fairly common for work environments to arise in which participants don't feel free to share their innovative ideas, and to point out problems with the ideas others are pursuing (especially if these "others" have higher social status according to Chinese tradition). However, some Chinese work environments are very friendly to innovation and criticism; so IMO this is best considered as a problem that can be overcome with attention to the personalities involved and the management mechanisms.

Due to various factors including the restrictions on travel and Internet sites that the Chinese government places, China often feels "cut off" from the rest of the world, which results in a less-than-optimal degree of interaction with the international research community.

Given the above factors, my conclusion is that IF China is able to attract sufficient foreign AGI experts, it may well be able to leapfrog ahead of other nations in the race to create AGI.

What the foreign experts would bring is not just AGI expertise and ideas, but expertise in allied areas like

interdisciplinary education

agile software project management

the management of innovation-friendly, "flat-hierarchy" research groups

Further, the presence of foreign experts in China full-time would result in other foreign experts more frequently traveling to China to speak, and in Chinese students more often traveling outside China for conferences and research visitation -- all of which would decrease the "China isolation" factor, and increase the intellectual potency of Chinese AGI research labs.

It's also the case that in Chinese academia right now, foreigners can get away with "shaking things up" a little more than Chinese nationals can. So even if a Chinese national showed up with exactly the same expertise and personality as a foreigner, they would have different strengths and weaknesses in the context. They would be able to get some things done more easily due to their Chinese-ness, but would also not be able to "get away with" as many disruptive methodologies.

So: the reason I think foreign AGI experts are critical is not that Chinese lack good ideas about AGI. (Yes, I think my own ideas about AGI are the best ones, but that's not my point right now!) It's that I don't think any one country has a monopoly on great AGI ideas and people, so the prize is likely to go to a country that can build an international AGI research community ... and also that there are certain organizational skills that are very useful for AGI, but not that well developed in China right now.

But there are serious challenges involved in recruiting foreign AGI experts to China:

The salaries are low by international standards. The salaries foreign faculty get at Chinese universities allow a very nice lifestyle in China -- but even so, they don't go that far in terms of international travel, purchase of electronics, or helping family members in the West.

Many AGI researchers have spouses and children who don't want to live in China. There are good international schools for researchers' children, but finding appropriate jobs for spouses can be difficult due to the language barrier and the different nature of the economy.

The Chinese system of government is alien and off-putting to some foreigners. (As an example, this blog cannot be read by most Chinese Internet users right now, because blogger.com is blocked by the Chinese government. This sort of thing really bothers some foreign researchers and/or their families.)

Some researchers will fear "career damage" if they go to a university without international name recognition (though, this factor would disappear quickly after a critical mass of researchers went to China).

So, the sixty-four trillion dollar question is to what extent these latter factors can be overcome.

I believe they could be overcome if Chinese universities or research agencies made very clear, very research-friendly offers to foreign AGI researchers -- and then followed through on these offers once the researchers arrived. AGI researchers are a dedicated bunch, and many would put up with the problems cited above in order to have a good chance to lead a team of brilliant, qualified students at implementing powerful AGI systems.

As you may have guessed, one reason I'm cataloguing these factors so systematically is that I'm debating trying to rearrange my life to either move to China or (more likely) spend one semester per year in China.

At the moment, in my own discussions with Chinese universities about AGI research funding, I am finding things mildly confusing. The discussions are going interestingly, but I feel much less clarity than I would in a comparable discussion with a US university. This is nobody's fault -- it's a natural consequence of "cultural differences" -- but it's a factor that will have to be smoothed-out, IMO, if China is going to recruit a sizeable number of foreign AGI researchers.

So we now reach the conclusion of the above chain of thought. IF Chinese universities manage to fine-tune the art of recruiting foreign AGI researchers, then I think that China has a real chance of leading the world in the development of AGI.

I predict that, if China doesn't take the world lead in AGI, it will be because it fails at the things I cited above: the Chinese way of recruiting and retaining foreigners will prove too alien, causing a failure to accumulate a critical mass of foreign R&D leaders. This could certainly happen. Time will tell.

What Are the Risks if China Pulls Ahead in the AGI Race?

So China might plausibly take the lead in the AGI race.

As a citizen of the USA and Brazil (not China), does this worry me?

Not really.

Like Buckminster Fuller, I consider myself a "passenger on Spaceship Earth" (and I won't hesitate much to board another vessel when one becomes available -- or better yet I'd like to send multiple copies of myself on multiple vessels! But, I digress ;-).

One thing I'm being insistent on in my collaboration with Hugo de Garis's Artificial Brain Lab at Xiamen University, is that all our work be released as open-source code. The university folks there have no problem with this. So, in the case of my and Hugo's work, it's not a situation where we're trying to develop an AGI that will be exclusively owned by the Chinese government.

And, I feel strongly that anyone else doing AGI work in China -- or anywhere else! -- should take the same approach. The main reason I decided to open-source my own AGI project (OpenCog) (while keeping some valuable AGI-related technologies proprietary within Novamente LLC), is the intuition that AGI is a sufficiently big and thoroughgoingly important thing that it should be developed by the human collective mind as a whole, not by a small group or even a single nation.

Of course, if more and more AGI research gets done in China, then more and more of the world's AGI expertise will exist in China -- which will give China a substantial leadership position in AGI, regardless of whether the AGI code is open-source. But this really doesn't worry me much either, partly because I've been so impressed with the character and spirit of the younger generation of Chinese, who (in these scenarios we're discussing) will be doing most of the AGI work.

I met wonderful Chinese people of all ages during my visit to China. But there are huge generational differences among Chinese, and the Chinese who grew up with the Internet have a drastically different view of the world than the immediately prior generations. Most of the Chinese I met aged under 30 had a reasonably modern, international understanding of the world -- and some of the Chinese I met aged under 22 had such modern attitudes that they really could have been youth anywhere. The Internet is spreading international ideas and culture around the world, just like it's spreading the AGI meme around the world -- and may increasingly start spreading AGI researchers around the world ... we'll see.

Although I used a picture of Alfred E. Neuman above (like Hugo, I'm kind of a sucker for dramatic effect), I want to be clear that I don't have a cavalier attitude about the threat that could be posed if ANY government took control of the world's first AGI for their own parochial ends.

But I think this is a problem we need to work around, regardless of which country we do our research in.

By developing open-source code (made available on SourceForge, Launchpad, Google Code and so forth), and by carrying out our research in a way that emphasizes linkages with the international research community, we'll guarantee that AGI comes about as a product of the international collective mind of AGI researchers. This provides no grand guarantee of "AGI safety" (nothing can do that), but I strongly feel it's the best approach.

I returned home 2 weeks ago from the First AGI Summer School, which was held in the Artificial Brain Lab at Xiamen University in Xiamen, China at the end of June and the beginning of July.

Ever since I got back I've been meaning to write a proper summary of the summer school -- how it went, what we learned, and so forth -- but I haven't found the time, and it doesn't look like I'm going to; so, this blog post will have to suffice, for the time being at any rate.

First of all, I need to express my gratitude to Hugo de Garis and Xiamen University for helping set up the summer school. Coming to Xiamen to do the Summer School was a great experience for me and the others involved -- so, thanks much!

Some photos I took in Xiamen are here (mixed up with a few that YKY took on the same trip). (Viewer beware: some of these are summer school photos, some are just "Ben's Xiamen tourism photos"....)

To get a sense of what was taught at the summer school -- and who was on the faculty -- you can go to the summer school website; I won't repeat that information here.

The first two weeks of the summer school were lecture-based, and the last week was a practical, hands-on workshop focused on the OpenCog AI system. Unfortunately I missed most of the hands-on segment, as I wound up spending much of that week meeting with various Chinese university officials about future possibilities for Chinese AGI funding (but I'll write another blog post about that), and demo-ing the Artificial Brain Lab robot to said officials.

See here for some videos of the above-mentioned robot, along with some "OpenCog virtual pet" demo videos that were shown at the summer school. (And, the OpenCog virtual pet was also gotten up and running "live" in Xiamen, of course....)

The number of students wasn't as large as we'd hoped -- but on the plus side, we did have a group of VERY GOOD students who learned a lot about AGI, which was after all the point.

(In fact, most conferences have found their attendance figures down this year, due to people wanting to save money on travel costs: an obvious consequence of the faltering world economy.) The majority of students were Chinese from Xiamen University and other universities in Fujian province, but there were also some overseas students from Europe, the US, Korea and Hong Kong (OK, well, Hong Kong isn't quite "overseas" ;-).

All the lectures were videotaped by Raj Dye (thanks Raj!!) and will be put online once Raj gets time to edit them. I think these will form an extremely valuable resource, and will reach a lot more people than the summer school itself did. (Long live the Internet!!). Raj's active camera work captured a bunch of the dialogues during and after the talks as well, and I think these will make quite interesting viewing. As you might expect, there was some pretty intense give-and-take (especially, for example, during Allan Combs' talks on cognition and the brain).

I'm definitely interested to help organize some future AGI summer schools ... though the next one will be in a different location, as we've already done a pretty good job of spreading the word about AGI to the AI geeks of Fujian Province! Maybe the next one will even be back here in the boring old US of A ....

Random Observations on Chinese-ness in the AGI Context

I learned a lot about China in the course of doing the summer school (though I'm still pathetically ignorant about the place of course ... there's a lot to know) ... I won't try to convey 1% of what I learned here, but will just write down a few hasty and random semi-relevant observations.

First, I learned to speak verrrry slowly and clearly since Chinese students are more accustomed to written than spoken English! ;-)

More interestingly, I learned that the Chinese educational system is more narrowly disciplinary than the US system, and also more focused on memorization of declarative knowledge than on practical "know how." Compared to their US counterparts, computer science graduates in China know an AWFUL LOT of computer science, yet don't have much advanced knowledge of areas beyond computer science, nor all that much software engineering knowledge or hands-on coding experience. ("Software Engineering" is a separate department in the Chinese university system, and I didn't get to know the Software Engineering students, only the Computer Science ones.) So one role the summer school served was just to introduce a bunch of Chinese AI students to some allied disciplines -- neuroscience, cognitive psychology, philosophy of mind -- that they hadn't seen much during their formal education so far.

(Actually, separately from the Summer School, I did give a talk to some undergrads in the Software Engineering School, on AI and Gaming, which contained one funny bit (unfortunately that talk was not videotaped). I wasn't sure if the students understood what I was talking about, so as a test I showed them this picture as part of my powerpoint

Normally I use this lovely picture as an example of "conceptual blending" (a cognitive operation that OpenCog and other AGI systems must carry out), but this time I announced it differently; I said: "Furthermore, the Artificial Brain Lab here at XMU has an ambitious backup plan, in case our computer science approach to AGI fails. We've devised a machine that can remove the head from a graduate student, and attach it to the body of a Nao humanoid robot, and thus create a kind of synergetic cyborg intelligence." I was curious to see if these Chinese undergrads understood my English well enough to tell that I was joking -- but from their reaction, I was unable to tell. They laughed because the picture looked funny, but I still don't know if they understood what I was saying! Fortunately the Summer School students were less inscrutable, and more reactive and communicative! And overall the AI in Games lecture went well in spite of this perplexing crosscultural joke experience....)

(As an aside within an aside, I also learned during various conversations that typical Chinese high school students spend from 7AM till 10PM or so at school, 6 days a week. Damn.)

Another thing that surprised me was the strength of knowledge the Chinese students had in neural nets, fuzzy logic, computer vision and other "soft computing" and robotics related AI, as compared to logic-based AI. By and large, they had a very strong mathematics background, and a good knowledge of formal logic -- but fairly little exposure to the paradigm in which logic is applied to create AI systems. Quite different from the typical American AI education.

All in all the Chinese seemed to have a lot less skepticism about "strong AI" than Americans. It's not that they had a great faith in its immediacy -- more that they lacked the egomaniacal confidence in its extreme difficulty or implausibility, which one so often finds in Westerners. Chinese culturally seem much more comfortable with accepting situations of great unconfidence, in which the evidence just doesn't exist to make a confident estimate.

I came to the summer school from the Toward a Science of Consciousness conference in Hong Kong, where I led a Machine Consciousness workshop -- which I won't write about here, because I wrote a summary of it for H+ magazine, which will appear shortly. Issues of machine consciousness came up now and then at the summer school, but interestingly, they seem to hold a lot less fascination for Chinese than for Westerners. When I put forth my panpsychist perspective in China (that the universe as a whole is conscious in a useful sense, and different systems -- like human brains and digital computers -- manifest this consciousness in different ways ... and our "theater of reflective consciousness" is one of the ways universal consciousness can manifest itself in certain sorts of complex systems), no one really bats an eye (and not just because the Chinese lack a taste for eye-batting). Not that Chinese scientists consider this panpsychist perspective wholly obvious or necessarily correct; but nor do they consider it outrageous -- and, most critically, very FEW Chinese seem to feel like many Westerners do, that "reductionism" or "materialism" is obviously correct. Once you remove the tendency toward dogmatic materialism, the whole topic and dilemma of "machine consciousness" loses its bite....

China versus California (A Semi-Digression)

(This section contains some ramblings on Oriental versus California culture, and the Singularity -- which are only semi-relevant to the summer school, but I'll put them here anyway, because I find them amusing! Hey, this is a blog, anything goes ;-)

In mid-July I voyaged from the Xiamen AGI summer school to California where I gave the keynote speech at the IJCAI workshop on Neural-Symbolic computing (a really interesting gathering, which I'll discuss some other time), and then gave a lecture on AGI at the Singularity University (at NASA Ames Lab, in Silicon Valley).

The contrast between the SU students and the Chinese AGI Summer School students couldn't have been more acute.

For one thing, there was a huge contrast of ego ... to phrase things dramatically: The SU students emanated an attitude that seemed to say "We know more than anyone on the planet!! We already knew almost everything we need to know to dominate the world as part of the techno-elite!"

The Chinese students were not actually more ignorant (though their knowledge bases had different strengths and weaknesses than those of the SU students), but they were dramatically more humble about their state of knowledge!

The SU students also seemed extremely eager to project everything I said about AGI into the world they knew best: Silicon Valley style Internet software. So, most of the questions during and after my talk centered around the theme: "Isn't it unnecessary to work on AGI explicitly ... won't AGI just emerge from the Internet after Silicon Valley startup firms create enough cool narrow-AI online widgets?" When I said I thought this was unlikely, then the questions turned to: "OK, but rather than writing an AGI that actually thinks on its own, shouldn't you just write a narrow-AI that figures out the best way to combine existing online widgets, and achieves general intelligence that way?" And so forth.

But I don't want to make it sound like the SU student body is "all of one mind" -- it's certainly a heterogeneous bunch. At the lunch following my talk at SU, one SU student surprised me with the following statement (paraphrased): "One reason I think AI systems may not achieve the same kind of ethical understandings or states of mind as humans, is that they lack one of the most important human characteristics: our humbleness. We humans have a lot of limitations in our bodies and minds, and these limitations have made us humble, and this humbleness is part of what makes us ethical and part of what makes us profoundly intelligent in a way that a mere calculating machine could never be."

I laughed out loud and immediately said to the student: "OK, I'm onto you. You're not American." (The student did look Asian ... but I was guessing he was not Asian-American.)

He admitted to being from Korea ... and I noted that few Americans -- and especially no Silicon Valley techno-geek -- would ever identify humbleness as a central characteristic of humans or a key to human intelligence!

Then I couldn't help thinking of the saying "Pride comes before a fall" ... and Vinge's (correct) characterization of the Singularity as a point after which HUMANS WILL HAVE NO IDEA WHAT'S GOING ON ... i.e. no real ability to predict what happens next, as superhuman nonhuman intelligences will be dominating the scene.

Philosopher Walter Kaufmann coined the dorky but evocative term "humbition" to denote the combination of humility and ambition. There's not much humbition in Silicon Valley ... nor for that matter in the public trumpetings of the Chinese government ... but there was a LOT of humbition in the Chinese students at the AGI summer school and the Artificial Brain Lab. Perhaps this quality will serve them well as the world advances, and our knowledge and intuitions prove decreasingly adequate to comprehend our situation...

If you believe that AGI will be created from piecing together narrow-AI internet widgets, then yeah, most likely AGI will be created by the Silicon Valley techno-elite. But if (as I suspect) it requires fundamentally different ideas from the ones now underlying the world's technological infrastructure ... maybe it will be created by people who are more open to fundamentally new and different ideas.

But this leads into the next blog post I'm going to write, exploring the question of whether Hugo de Garis is right that AGI is going to get created in China rather than the West!

Musings on the Concept of a Systematic AGI Curriculum, and Lessons for Future AGI Summer Schools

Next, what did I learn this summer about the notion of an AGI summer school, and about teaching AGI altogether?

One big lesson that got reinforced in my mind is: Teaching AGI is very different than teaching Narrow AI!

There is basically no systematic AGI education in universities anywhere on the planet, and this fact certainly helps to perpetuate the current AGI research situation (in which there is very little AGI research going on). By and large, everywhere in the world, students graduate with PhD degrees in AI, without really knowing what "AGI research" means.

Another conclusion I came to is that a carefully crafted "AGI Summer School" curriculum could play a major role -- not only in providing AGI education, but in demonstrating how AGI material should be structured and taught.

However, creating a thorough, systematic AGI curriculum would be a lot of work ... and we didn't really attempt it for the First AGI Summer School. I think the lectures mostly went very well this time (well, you can judge when the videos come online!!), and the sequencing of the lectures made good didactic sense -- but, for the next AGI summer school, we'll put a little more thought into framing the curriculum in a systematic way. Now, having done the summer school once, it's more clear to me (and probably the other participants as well) what an AGI curriculum should be like.

First of all, it's obvious that to make a systematic AGI curriculum, one would need some systematic background curriculum in areas like

Neuroscience

Linguistics

Philosophy of Mind

Psychology (of Cognition, Perception, Emotion, etc.)

We didn't do enough of that this time. Allan Combs' lectures at the Xiamen summer school formed a nice start toward a Neuroscience background curriculum for AGI, but due to lack of time he couldn't do everything needed.

In this vein, one thing that became clear to me at the Xiamen summer school is: The standard "cognitive science" curriculum would certainly fill this need for background, but it's not exactly right, because it's not specifically focused on AGI ... AGI students really only need to digest a certain subset of the cognitive science curriculum, selected specifically with AGI-relevance in mind. But judiciously making this selection would be a nontrivial task in itself.

Next, as part of a thorough AGI curriculum, one would need a systematic review of different conceptions of what "general intelligence" is -- we did such a review at the Xiamen summer school, but not all that systematically. Pei Wang gave a nice talk on this theme, and then Joscha Bach and I presented our own conceptions of GI, and I also briefly reviewed the Hutter/Schmidhuber "universal intelligence" perspective.

Then there's the matter of reviewing the various AGI architectures out there. I think the Xiamen summer school did a fairly good job of that, with in-depth treatments of OpenCog, Pei Wang's NARS architecture, and Joscha Bach's MicroPsi ... and a briefer discussion of Hugo de Garis's neural net based Artificial Brain approach ... and then very quick reviews of other AGI architectures like SOAR and LIDA. Of course there are many, many architectures one could discuss, but in a limited time-frame one has to pick just a few and focus on them. (It would be nice if there were some more systematic way to review the various AGI architectures out there than taking a "laundry list" approach, but this isn't an education problem, it's a fundamental theory problem -- no such systematization exists, even in the research literature.)

There were a lot of OpenCog-related lectures at the Xiamen summer school, and one thing I felt was that it was both too much and too little! Too much OpenCog for a generic AGI Summer School, but too little for a real in-depth OpenCog education. At future summer schools we may split OpenCog stuff off to a greater extent: give a briefer OpenCog treatment in the main summer school lectures, and then do a separate one-week OpenCog lecture series after that, for students who want to dig deep into OpenCog.

Another educational issue is that each AGI architecture involves different narrow-AI algorithms, so that to really follow the architecture lectures fully, students needed to know all about forward and backward inference, attractor, feedforward and recurrent neural nets, genetic algorithms and genetic programming, and so forth. (Most of them did have this knowledge, so it wasn't a problem; actually this might be more of a problem in the US than in China, as China's education system is very strong on comprehensively teaching factual knowledge.) That is: even though AGI is quite distinct from narrow AI, existing AGI architectures make ample use of narrow-AI tools, so students need a good grounding in narrow AI to grok current AGI systems. It would be good to make a systematic list of tutorials on the most AGI-relevant areas of narrow AI, for students whose narrow-AI background is spotty. Again, we did some of this for the Xiamen summer school, but probably not enough.

Finally there's the terminology issue. There is no good "AGI glossary", and every researcher uses terms in slightly different ways. Updating and enlarging an online AGI glossary would be a great project for students at an AGI summer school to participate in!

Undramatic Non-Conclusion

So, the first AGI Summer School went pretty interestingly, and I'm really glad it happened. It was interesting to get to know China a little bit, and to get some experience teaching AGI in an intensive-course context. I learned a lot, and I guess the other faculty and the students did too.... I also made a number of excellent new friends, both among the Chinese and the foreign students. As with many complex real-world experiences, I don't really have any single dramatic summary or conclusion to draw ... but I'm looking forward both to future AGI summer schools, and to future experiences with "AGI in China"....

Monday, June 22, 2009

A while ago I wrote a blog post suggesting that quantum logic should be applied more generally than to quantum physical systems ... that it should be applied to complex classical systems in some cases as well, if they are so complex that their states are unobservable to a certain observer.

This, I suggested, would require making the choice of logic observer-dependent: i.e., the system T might best be modeled by system S using quantum logic, but by system R using classical logic.

I didn't at the time see how to make this speculation rigorous but I've now found a related literature that helps a lot.

And by refining my previous idea, I've come up with an argument that human consciousness may be effectively modeled using quantum logic, whether or not the human brain is a quantum system.

I may write a paper on this stuff at some point (in which process I'll probably figure out nicer ways to express the ideas), but wanted to write it down now while it's fresh in my mind.

Atmanspacher's Idea

Diederik Aerts and Liane Gabora have written some very nice papers related to this topic ... and I read their stuff years ago but didn't quite see how to connect it to my relevant intuitions.

What I discovered just recently was the related work of Harald Atmanspacher, which ties in more directly with the way I was thinking about these issues. (Some relevant papers by both of these guys are linked to at the end of this post.)

Put simply, Atmanspacher's view is that: In any case where two properties of a system cannot be simultaneously measured with high accuracy, you have a situation that should be modeled using quantum logic.

I.e., quantum logic should be applied to any case where there are incompatible observables ... whether or not this is due to quantum microphysics.

Making the Choice of Logic Observer-Dependent

My twist on Atmanspacher's idea is to suggest that quantum logic should be applied, by a cognitive system, to any situation that has two aspects which (perhaps by quantum microphysics, or perhaps simply due to its limitations as a cognitive system) the cognitive system cannot model simultaneously.

That is: If T has two aspects, and S cannot model these two aspects of T simultaneously without becoming non-S, then from the perspective of S, these aspects of T should be modeled using quantum logic rather than classical logic.

Note that S in this argument is not a specific physical system at a particular point in time, but rather a category of instantaneous physical systems, which are being considered as instantiations of a single abstract "system" (for example, "Ben Goertzel" is a category of instantaneous physical systems).

So, my suggestion is that whether T should be reasoned about by quantum or classical logic, must be determined by relativizing the reasoning to some category of instantaneous physical systems.
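The structural hallmark that separates quantum logic from classical (Boolean) logic is the failure of the distributive law in the lattice of propositions, where propositions are closed subspaces, "and" is intersection and "or" is closed span. Here is a minimal numpy sketch of that failure in a two-dimensional state space; the helper names are my own, purely for illustration:

```python
import numpy as np

def span(vec):
    """Orthogonal projector onto the line spanned by a single vector."""
    M = vec.reshape(-1, 1)
    Q, _ = np.linalg.qr(M)
    return Q @ Q.T

def join(P, Q):
    """Projector onto the closed span of two subspaces (quantum-logical 'or')."""
    U, s, _ = np.linalg.svd(np.column_stack([P, Q]))
    r = int(np.sum(s > 1e-10))  # rank of the combined column space
    B = U[:, :r]
    return B @ B.T

def meet(P, Q):
    """Projector onto the intersection of two subspaces (quantum-logical 'and'):
    the orthogonal complement of the join of the complements."""
    I = np.eye(P.shape[0])
    return I - join(I - P, I - Q)

A = span(np.array([1.0, 0.0]))                # e.g. "spin up"
B = span(np.array([1.0, 1.0]) / np.sqrt(2))   # e.g. "spin right"
C = span(np.array([1.0, -1.0]) / np.sqrt(2))  # e.g. "spin left"

lhs = meet(A, join(B, C))           # A and (B or C)  -> equals A
rhs = join(meet(A, B), meet(A, C))  # (A and B) or (A and C)  -> the zero subspace
print(np.allclose(lhs, rhs))  # False: distributivity fails
```

In Boolean logic the two sides would always coincide; here the incompatibility of the three propositions breaks the law, which is exactly the kind of structure the observer-dependence idea says should show up whenever a system cannot model two aspects of T at once.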

Possible Implications for Quantum Consciousness

What spurred me to start digging into these issues just now was a conversation with Stuart Hameroff, who believes consciousness to be a quantum phenomenon.

My suggestion is: It could be that a quantum model of human consciousness is the right one, even if the underlying physics of the brain is basically "classical" (and I don't claim to know for sure whether it is or not).

(Note that I referred above to a quantum model of "human consciousness", not of consciousness in general -- I tend toward panpsychism, meaning I think everything is conscious and different systems just manifest universal consciousness in different ways.)

As a natural consequence of the above argument, I would suggest that each of us individually, due to our own processing limitations, cannot view ourselves in all aspects simultaneously.

If this is true, then perhaps we should model ourselves using quantum logic.

Being panpsychist I would not identify this with consciousness, but I would say that systems which are sufficiently complex that they implicitly model themselves using quantum logic, in predicting and analyzing their own dynamics, presumably have a distinctive character to the way they manifest universal consciousness.

Can We Tell the Cause of the Incompatibility?

An interesting question is: if I, as a cognitive system, am confronted with incompatible observables ... in what sense can I tell what the cause of this incompatibility is?

Can I tell a case where the incompatibility is caused by my own cognitive limitations, from a case where it is caused by fundamental indeterminacy such as is sometimes hypothesized to occur in quantum microphysics?

It would seem there is no direct way to make this determination, but we can induce general theories from observations of other system aspects, which lead us to hypotheses regarding the causes of an incompatibility.

"I propose to consider any system which produces quantum statistics as quantum ('quantum-like'). A possible test is based on the interference of probabilities. I was mainly interested in using such an approach to 'quantumness' to extend the domain of applications of quantum mathematical formalism and especially to apply it to cognitive sciences. There were done experiments on interference of probabilities for ensembles of students and a nontrivial interference was really found. ... Yes, we might expect nonclassical statistics, but there was no reason to get the quantum one, i.e., cos-interference. But we got it!"

Diederik Aerts & Liane Gabora:

"While some of the properties of quantum mechanics are essentially linked to the nature of the microworld, others are connected to fundamental structures of the world at large and could therefore in principle also appear in other domains than the micro-world."

Diederik Aerts:

"The emergence of quantal macrostates does not necessarily require the reference to corresponding quantal microstates"

Harald Atmanspacher, Hans Primas & Peter beim Graben:

"A generalized version of the formal scheme of ordinary quantum theory, in which particular features of ordinary quantum theory are not contained, should be used in some non-physical contexts."

"Complementary observables can arise in classical dynamic systems with incompatible partitions of the phase space."

Wednesday, May 20, 2009

(This post summarizes some points I made in conversation recently with an expert in reinforcement learning and AGI. These aren't necessarily original points -- I've heard similar things said before -- but I felt like writing them down somewhere in my own vernacular, and this seemed like the right place....)

Reinforcement learning, a popular paradigm for AI, economics and psychology, models intelligent agents as systems that choose their actions in such a way as to maximize their future reward. There are various ways of averaging future reward over various future time-points, but all of these implement the same basic concept.

I think this is a reasonable model of human behavior in some circumstances, but horrible in others.

And, in an AI context, it seems to combine particularly poorly with the capability for radical self-modification.

Reinforcement Learning and the Ultimate Orgasm

Consider for instance the case of a person who is faced with two alternatives:

A: continue their human life as would normally be expected

B: push a button that will immediately kill everyone on Earth except them, but give them an eternity of ultimate trans-orgasmic bliss

Obviously, the reward will be larger for option B, according to any sensible scheme for weighting various future rewards.

For most people, there will likely be some negative reward in option B ... namely, the guilt that will be felt during the period between the decision to push the button and the pushing of the button. But, this guilt surely will not be SO negative as to outweigh the amazing positive reward of the eternal ultimate trans-orgasmic bliss to come after the button is pushed!
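To make the arithmetic behind this concrete: under the standard exponential-discounting scheme, a brief stretch of negative reward is swamped by a large enough reward stream afterwards. A toy calculation, with every number invented purely for illustration:

```python
def discounted_return(rewards, gamma):
    """Standard exponentially discounted sum: sum of gamma**t * r_t."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

GAMMA = 0.99
HORIZON = 10_000  # long enough that later terms are negligible

# Option A: ordinary life, a modest steady reward each step
option_a = [1.0] * HORIZON

# Option B: ten steps of intense guilt, then trans-orgasmic bliss forever after
option_b = [-100.0] * 10 + [1000.0] * (HORIZON - 10)

print(discounted_return(option_a, GAMMA))  # about 100
print(discounted_return(option_b, GAMMA))  # tens of thousands: B wins easily
```

However one fiddles the guilt penalty or the discount factor within sensible ranges, the bliss term dominates, which is the point of the thought experiment.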

But the thing is, not all humans would push the button. Many would, but not all. For various reasons, such as love of their family, attachment to their own pain, whatever....

The moral of this story is: humans are not fully reward-driven. Nor are they "reward-driven plus random noise".... They have some other method of determining their behaviors, in addition to reinforcement-learning-style reward-seeking.

Reward-Seeking and Self-Modification: A Scary Combination

Now let's think about the case of a reward-driven AI system that also has the capability to modify its source code unrestrictedly -- for instance, to modify what will cause it to get the internal sensation of being rewarded.

For instance, if the system has a "reward button", we may assume that it has the capability to stimulate the internal circuitry corresponding to the pushing of the reward button.

Obviously, if this AI system has the goal of maximizing its future reward, it's likely to be driven to spend its life stimulating itself rather than bothering with anything else. Even if it started out with some other goal, it will quickly figure out how to get rid of that goal, since it does not lead to as much reward as direct self-stimulation.
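A deliberately cartoonish sketch of this dynamic: give an exhaustive reward-maximizing planner a "wirehead" action that rewires its own reward circuit, and it takes that action at the first opportunity. The action names and payoffs here are all invented for illustration:

```python
def plan(steps, wired=False):
    """Exhaustive planner over a toy two-action world. 'work' pays 1 per step;
    'wirehead' pays nothing now, but rewires the reward circuit so that every
    subsequent step pays 1e9 no matter what the agent does."""
    if steps == 0:
        return 0.0, []
    best_total, best_seq = float("-inf"), []
    for action in ("work", "wirehead"):
        wired_next = wired or action == "wirehead"
        reward = 1e9 if wired else (1.0 if action == "work" else 0.0)
        future_total, future_seq = plan(steps - 1, wired_next)
        if reward + future_total > best_total:
            best_total, best_seq = reward + future_total, [action] + future_seq
    return best_total, best_seq

total, seq = plan(5)
print(seq)  # the maximizer wireheads immediately, then coasts
```

No malice is needed anywhere in the setup; pure reward maximization plus access to the reward circuitry is enough.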

All this doesn't imply that such an AI would necessarily be dangerous to us. However, it seems pretty likely that it would be. It would want to ensure itself a reliable power supply and defensibility against attacks. Toward that end, it might well decide its best course is to get rid of anyone who could possibly get in the way of its highly rewarding process of self-stimulation.

Not only would such an AI likely be dangerous to us, it would also lead to a pretty boring universe (via my current aesthetic standards, at any rate). Perhaps it would extinguish all other life in its solar system, surround itself with a really nice shield, and then proceed to self-stimulate ongoingly, figuring that exploring the rest of the universe would be expected to bring more risk than reward.

The moral of the above, to me, is that reward-seeking is an incomplete model of human motivation, and a bad principle for controlling self-modifying AI systems.

Goal-Seeking versus Reward-Seeking

Fortunately, goal-seeking is more general than reward-seeking.

Reward-seeking, of the sort that typical reinforcement-learning systems carry out, is about: Planning a course of action that is expected to lead to a future that, in the future, you will consider to be good.

Goal-seeking doesn't have to be about that. It can be about that ... but it can also be about other things, such as: Planning a course of action that is expected to lead to a future that is good according to your present standards.

Goal-seeking is different from reward-seeking because it will potentially (depending on the goal) cause a system to sometimes choose A over B even if it knows A will bring less reward than B ... because in foresight, A matches the system's current values.

Non-Reward-Based Goals for Self-Modifying AI Systems

As a rough indication of what kinds of goals one could give a self-modifying AI, that differ radically from reward-seeking, consider the case of an AI system with a goal G that is the conjunction of two factors:

Try to maximize the function F

If at any point T, you assess that your interpretation of the goal G at time T would be interpreted by your self-from-time-(T-S) as a terrible thing, then roll back to your state at time T-S

I'm not advocating this as a perfect goal for a self-modifying AI. But the point I want to make is that this kind of goal is something quite different from the seeking of reward. There seems to be no way to formulate this goal as one of reward maximization. It is a goal that involves choosing a near-future course of action to maximize a certain function over future history -- but this function is not any kind of summation or combination of future rewards.
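One way to see that this goal is not a reward sum is to write an evaluator for it: the rollback clause compares whole states across time, rather than accumulating per-step rewards. A sketch with toy numeric states -- the function names and the "terribleness" test are hypothetical, chosen only to make the structure runnable:

```python
def evaluate_goal(history, F, judged_terrible, lag):
    """history[t] is the system's self-model at time t. If the self from
    `lag` steps earlier would judge the current self a terrible thing,
    revert to that earlier self; otherwise keep the final state.
    The result is a function of the whole trajectory, not a sum of
    per-step rewards -- there is no reward signal anywhere in here."""
    for t in range(lag, len(history)):
        earlier, current = history[t - lag], history[t]
        if judged_terrible(earlier, current):
            return F(earlier), earlier  # rollback clause fires
    return F(history[-1]), history[-1]

# toy instantiation: states are numbers, F is the identity, and a state
# is "terrible" if it has dropped sharply relative to the earlier self
F = lambda x: x
terrible = lambda earlier, current: current < earlier - 5

print(evaluate_goal([0, 2, 4, 6, 8], F, terrible, lag=2))    # no rollback
print(evaluate_goal([0, 2, 9, -10, 20], F, terrible, lag=2)) # rolls back
```

In the second trajectory the system would have ended at a high F-value, but the earlier self vetoes the path it took to get there -- a trade-off no discounted reward sum expresses.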

Limitations of the Goal-Seeking Paradigm

Coming at the issue from certain theoretical perspectives, it is easy to overestimate the degree to which human beings are goal-directed. It's not only AI theorists and engineers who have made this mistake; many psychologists have made it as well, rooting all human activity in goals like sexuality, survival, and so forth. To my mind, there is no doubt that goal-directed behavior plays a large role in human activity -- yet it also seems clear that a lot of human activity is better conceived as "self-organization based on environmental coupling" rather than as explicitly goal-directed.

It is certainly possible to engineer AI systems that are more strictly goal-driven than humans, though it's not obvious how far one can go in this direction without sacrificing a lot of intelligence -- it may be that a certain amount of non-explicitly-goal-directed self-organization is actually useful for intelligence, even if intelligence itself is conceived in terms of "the ability to achieve complex goals in complex environments" as I've advocated.

I've argued before for a distinction between the "explicit goals" and "implicit goals" of intelligent systems -- the explicit goals being what the system models itself as pursuing, and the implicit goals being what an objective, intelligent observer would conclude the system is pursuing. I've defined a "well aligned" mind as one whose explicit and implicit goals are roughly the same.

According to this definition, some humans, clearly, are better aligned than others!

Summary & Conclusion

Reward-seeking is best viewed as a special case of goal-seeking. Maximizing future reward is clearly one goal that intelligent biological systems work toward, and it's also one that has proved useful in AI and engineering so far. Thus, work within the reinforcement learning paradigm may well be relevant to designing the intelligent systems of the future.

But, to the extent that humans are goal-driven, reward-seeking doesn't summarize our goals. And, as we create artificial intelligences, there seems more hope of creating benevolent advanced AGI systems with goals going beyond (though perhaps including) reward-seeking, than with goals restricted to reward-seeking.

Crafting goals with reasonable odds of leading self-modifying AI systems toward lasting benevolence is a very hard problem ... but it's clear that systems with goals restricted to future-reward-maximization are NOT the place to look.

Wednesday, May 13, 2009

(This may seem a hackneyed topic, but there are some moderately original points near the end here, if you bear with me ...)

As a card-carrying, future-thinking transhumanist, I take it as obvious that most of the particulars of current religions are relics of earlier eras in human cultural development, which currently do a lot of harm along with doing some good.

But I still find it interesting to ask what aspects of religion reflect underlying phenomena that are essential, meaningful and necessary -- and are likely to continue as humanity transcends the traditional "human condition" and enters its next phase of development....

The basic point Stanley Fish makes, in his recent New York Times essay on Terry Eagleton's book, is that religion offers something science by its very nature cannot.

Eagleton acknowledges ... many terrible things have been done in religion’s name — but at least religion is trying for something more than local satisfactions, for its “subject is nothing less than the nature and destiny of humanity itself, in relation to what it takes to be its transcendent source of life.”

He notes that science cannot address what he calls "theological questions", where

By theological questions, Eagleton means questions like, “Why is there anything in the first place?”, “Why what we do have is actually intelligible to us?” and “Where do our notions of explanation, regularity and intelligibility come from?”

He also notes that the author is

... angry, I think, at having to expend so much mental and emotional energy refuting the shallow arguments of school-yard atheists like Hitchens and Dawkins.

I haven't read Eagleton's book and I'm unlikely to do so -- I have a long list of more interesting-looking reading material -- but Fish's summary did resonate with a paper I'm in the middle of writing (it's paused while I work on more urgent stuff) on the limits of science.

My basic point in that paper will be a simple one: science is based on finite sets of finite-precision observations. That is, all of scientific knowledge is based on some finite set of bits, comprising the empirical observations accepted by the scientific community.

To extrapolate beyond this bit-set, some kind of assumption is needed. To put it another way, some kind of "faith" is needed. Hume was the first one to make this point really clearly ... and we now understand the "Humean problem of induction" well enough to know it's not the kind of thing that can be "solved."

The Occam's Razor principle tries to solve it -- it says that you extrapolate from the bit-set of known data by making the simplest possible hypothesis. This leads to some nice mathematics involving algorithmic information theory and so forth. But of course, one still has to have "faith" in some measure of simplicity!
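A toy sketch of how this extrapolation works formally: hypotheses consistent with the observed bits are weighted by 2^-description-length, in the spirit of the algorithmic-information ideas mentioned above. The hypotheses and their bit-lengths below are invented for illustration -- and note that the "faith" enters precisely in choosing the length measure:

```python
def occam_predict(observed, hypotheses):
    """hypotheses: list of (description_length_in_bits, predict_fn), where
    predict_fn(i) gives that hypothesis's bit at position i. Hypotheses
    consistent with the observed bits are weighted 2**-length; returns
    the weighted probability that the next bit is 1."""
    n = len(observed)
    weight_one = weight_total = 0.0
    for length, predict in hypotheses:
        if all(predict(i) == bit for i, bit in enumerate(observed)):
            w = 2.0 ** -length
            weight_total += w
            weight_one += w * predict(n)
    return weight_one / weight_total

# two toy hypotheses about the bit stream 0, 1, 0, 1, ...
hyps = [
    (5,  lambda i: i % 2),                  # "alternate forever": short description
    (20, lambda i: i % 2 if i < 4 else 1),  # "alternate, then all ones": longer one
]
p_one = occam_predict([0, 1, 0, 1], hyps)
print(p_one)  # tiny: the simpler hypothesis dominates and predicts 0 next
```

Both hypotheses fit the data perfectly; only the simplicity weighting breaks the tie, and nothing inside science itself certifies that weighting.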

So: doing or using science requires, in essence, continual acts of faith (though these may be unconscious and routinized rather than conscious and explicit). To the extent that Dawkins, Hitchens or other anti-religion commentators de-emphasize this point, they're engaging in judicious marketing. (It's hard for me to feel too negative toward them about this, however, given the far more explicitly and dramatically dishonest marketing that religion has carried out over the past few millennia.)

My paper will focus on what the limits of science tell you about AI, machine consciousness and so forth -- and I'll save that for another blog post, or the paper itself. (Don't worry though, my conclusion is not that scientifically engineering AGI is impossible ... I haven't lost the faith!)

Anyway, I certainly agree with Fish and Eagleton that religion addresses very important questions that science cannot, by its nature, answer.

But I find it rather screwy that Eagleton refers to

“Why is there anything in the first place?”, “Why what we do have is actually intelligible to us?” and “Where do our notions of explanation, regularity and intelligibility come from?”

and so forth as theological questions.

Surely, these are philosophical questions.

One can answer them in various ways without invoking any deities or demons!

"Why does God exist?" is a theological question ...

"Why does anything exist?" is philosophical...

(Though, for the record, I don't think "Why does anything exist?" is a very useful philosophical question. I'm more interested in questions like

"Why do separate objects exist, instead of just one big fluid cosmic mass?"

"In what sense could the universe be considered compassionate?"

"How much ethical responsibility should I feel toward (which) other minds?"

"Why does my mind perceive such a small subset of the space of all possible patterns?"

"How much can a mind grow and expand without losing its sense of self and becoming, experientially, a 'fundamentally different being'?"

"What is it like to be a rock?"

etc.

)

Theology is one way of providing answers to philosophical questions ... but by no means the only way.

I think that religion addresses some very important questions, that are beyond the scope of science -- and by and large provides these questions with extremely bad answers.

One of the many limitations of religion as conventionally conceived is indicated by the quote, given above, that religion's

“subject is nothing less than the nature and destiny of humanity itself....”

From a transhumanist perspective, the qualifier "nothing less than" is misplaced, as this is actually a very limiting subject. The nature and destiny of humanity are important; but one of the things that science has opened our minds to is the relative insignificance of humanity in the space of possible minds. I'm more interested in philosophies that address the nature and destiny of mind itself, rather than just the nature and destiny of one species on one planet.

It is of course a subtle matter to compare and judge different explanations to philosophical questions. You can't compare them using scientific or mathematical methods ... and of course the question of how to evaluate philosophical views becomes "yet another tough philosophical question", tied in with all the other ones.

A crude way to say it is that it comes down to an intuitive judgment ... which leads into questions of how one can refine and improve one's intuition ... and these questions, of course, possess numerous answers that depend on one's philosophical or religious tradition...

Science-synergetic philosophy

It does seem to me, though, that there is an interesting notion of science-synergetic philosophy lurking somewhere in all this.

Suppose we take for granted that doing science -- just like other aspects of living life -- relies on a constant stream of acts of faith, which can't be justified according to science....

One may then note that there are various systems for mentally organizing these acts of faith.

Religions are among them. But religions are quite detached from the process of doing science.

It seems sensible to think about philosophical systems -- i.e. systems for organizing inner acts of faith -- that are intrinsically synergetic with the scientific process. That is, systems for organizing acts of faith, that

when you follow them, help you to do science better

are made richer and deeper by the practice of science

One can broaden this a little and think about philosophical systems that are intrinsically synergetic with engineering and mathematics as well as science.

Now, one cannot prove scientifically that a "scientifically synergetic philosophy" is better than any other philosophy. Philosophies can't be validated or refuted scientifically.

So, the reason to choose a scientifically synergetic philosophy has to be some kind of inner intuition; some kind of taste for elegance, harmony and simplicity; or whatever.

One prediction I have for the next century is that scientifically synergetic philosophies will emerge into the popular consciousness and become richer and deeper and better articulated than they are now.

Because Fish and Eagleton are right about some things: people do need more than science ... they do need collective processes focused on the important philosophical questions that go beyond the scope of science.

But my prediction is that we are going to trend more toward philosophical systems that are synergetic with science, rather than ones that co-exist awkwardly with science.

What will these future philosophical systems be like?

There's nothing extremely new about the concept of science-synergetic philosophy, of course.

Plenty of non-religious scientists and science-friendly non-scientists have created personal philosophies that don't involve deities or other theological notions, yet do involve meaningful approaches to personally exploring the "big questions" that religions address.

Among the many philosophers to take on the task of creating comprehensive science-synergetic philosophical systems, perhaps my favorite is Charles Peirce (who also developed a nice philosophy of science, though one that IMO is significantly incomplete ... but I've discussed that elsewhere.)

Building on work by Peirce and loads of others, I tried to lay out a science-synergetic philosophical system in my book The Hidden Pattern -- but like Peirce's writings, that is a fairly academic work, not an informal tract designed to inspire the common human in their everyday life.

My friend Philippe van Nedervelde likes to talk about this sort of thing as a "TransReligion/ UNReligion", but I confess to not finding that terminology very compelling.

Philippe is interested in (among many other things!) developing vaguely religion-like rituals that coincide with some sort of science-synergetic philosophy. There has been talk about formulating a "TransReligion/ UNReligion" as an outgrowth of the futurist group now called "The Order of Cosmic Engineers." Which I think is an interesting idea ... yet I'm not really sure it's the direction things will (or should) go.

I'm not sure there will emerge any one "Bible of science-synergetic transhumanist philosophy" ... nor any science-synergetic-philosophy analogues of speaking in tongues, kneeling at the altar, or consuming the simulated blood and flesh of the Savior the Son of God who gave his life for our sins. Perhaps, science-synergetic philosophy may wind up being something that pervades human culture in more of a broad-based, implicit way.