Posted
by
samzenpus
on Wednesday May 13, 2009 @02:20PM
from the read-all-about-it dept.

basiles writes "Jacques Pitrat's new book Artificial Ethics: Moral Conscience, Awareness and Consciousness will be of interest to anyone who likes robotics, software, artificial intelligence, cognitive science and science fiction. The book talks about artificial consciousness in a way that can be enjoyed by experts in the field or your average science fiction geek. I believe that people who enjoyed reading Dennett's or Hofstadter's books (like the famous Gödel, Escher, Bach) will like reading Artificial Ethics." Keep reading for the rest of Basile's review.

Artificial Ethics: Moral Conscience, Awareness and Consciousness

author

Jacques Pitrat

pages

275

publisher

Wiley

rating

9/10

reviewer

Basile Starynkevitch

ISBN

9781848211018

summary

Provides original ideas which are not shared by most of the artificial intelligence or software research communities

The author, J. Pitrat (one of France's oldest AI researchers, and an AAAI and ECCAI fellow), discusses the usefulness of a conscious artificial being, currently specialized in solving very general constraint satisfaction and arithmetic problems. He describes in some detail his implemented artificial researcher system, CAIA, on which he has worked for about 20 years.

J. Pitrat claims that strong AI is an incredibly difficult, but still achievable, goal. He advocates the use of bootstrapping techniques familiar to software developers. He contends that without a conscious, reflective, meta-knowledge-based system, AI would be virtually impossible to create: only an AI system could build a true Star Trek-style AI.

The meanings of conscience and consciousness are discussed in chapter 2. The author explains why both are useful for human and for artificial beings. Pitrat explains what 'itself' means for an artificial being and discusses some aspects and limitations of consciousness. Later chapters address why auto-observation is useful, and how to observe oneself. Conscience for humans, artificial beings and robots, including Asimov's laws, is then discussed: how to implement it, and how to enhance or change it. The final chapter discusses the future of CAIA (J. Pitrat's system), and two appendices give more scientific and technical detail, both from a mathematical point of view and from the software implementation point of view.

J. Pitrat is not a native English speaker (and neither am I), so the language of the book might be unnatural to native English speakers, but the ideas are clear enough.

For software developers, this book gives some interesting and original insights into how a big software system might attain consciousness and continuously improve itself by experimentation and introspection. J. Pitrat's CAIA system has actually had several long lives (months of CPU time) during which it explored new ideas, experimented with new strategies, and evaluated and improved its own performance, all autonomously. This is achieved with a large amount of declarative knowledge and meta-knowledge. The word declarative is used by J. Pitrat in a much broader sense than is usual in programming: knowledge is declarative if it can be used in many different ways, and has to be transformed into many procedural chunks to be used. Meta-knowledge is knowledge about knowledge; the transformation from declarative knowledge to procedural chunks is itself given declaratively by some meta-knowledge (a bit like the expertise of a software developer), and translated by the system into code chunks.
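Pitrat's broad sense of "declarative" can be illustrated with a small sketch (this is not CAIA's code; every name here is invented for illustration). One constraint, stated purely as data, is compiled by two pieces of "meta-knowledge" into two different procedural chunks: one that tests assignments and one that prunes domains.

```python
# Hypothetical illustration of declarative knowledge + meta-knowledge.
# The constraint is pure data: it says nothing about how it will be used.
constraint = {"relation": "all_different", "variables": ["a", "b", "c"]}

def compile_checker(c):
    """Meta-knowledge, use 1: compile the constraint into a test."""
    names = c["variables"]
    def check(assignment):
        values = [assignment[v] for v in names]
        return len(set(values)) == len(values)
    return check

def compile_filter(c):
    """Meta-knowledge, use 2: compile the same constraint into a
    domain filter that prunes values already taken by assigned variables."""
    names = c["variables"]
    def prune(assignment, domains):
        taken = {assignment[v] for v in names if v in assignment}
        return {v: [x for x in domains[v] if x not in taken]
                for v in names if v not in assignment}
    return prune

check = compile_checker(constraint)
prune = compile_filter(constraint)
```

The same declarative statement thus yields several specialized procedural chunks; in CAIA, as noted above, the compilation rules themselves are also stated declaratively.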

For people interested in robotics, ethics or science fiction, J. Pitrat's book gives interesting food for thought by explaining how artificial systems can indeed be conscious, why they should be, and what that would mean in the future.

This book gives very provocative and original ideas which are not shared by most of the artificial intelligence or software research communities. What makes this book stand out is that it explains an actual software system, the implementation meaning of consciousness, and the bootstrapping approach used to build such a system.

Disclaimer: I know Jacques Pitrat, and I actually proofread the draft of this book. I even had access, some years ago, to some of J. Pitrat's not-yet-published software.

Artificial Ethics seems to not be too far away from the laws of robotics.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Isaac Asimov anticipated the need for such laws remarkably well.

I suspect that the laws of robotics are a bit too simplified to really work well in reality, but they do provide some food for thought.

And how do you really implement those laws? A law may be easy to follow in a strict sense, but that may be a short-sighted approach. Protecting one human may cause harm to many, and how can a machine predict that the actions it takes will cause harm to many if it isn't apparent?
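As a thought experiment, the laws themselves are trivial to encode as priority-ordered checks. Here is a deliberately naive sketch; every predicate in it is an invented boolean input, and computing those predicates for real-world actions is precisely the hard part raised above.

```python
# A deliberately naive sketch of the Three (plus Zeroth) Laws as
# priority-ordered checks. The predicates are invented inputs: deciding
# whether a real action "harms_humanity" is the unsolved problem.

LAWS = [                                          # highest priority first
    ("Zeroth", lambda a: not a["harms_humanity"]),
    ("First",  lambda a: not a["harms_human"]),
    ("Second", lambda a: a["obeys_order"]),
    ("Third",  lambda a: not a["destroys_self"]),
]

def evaluate(action):
    """Return the name of the highest-priority violated law, or None."""
    for name, law in LAWS:
        if not law(action):
            return name
    return None

def choose(actions):
    """Prefer an action violating no law; otherwise the one whose first
    violation sits lowest in the priority order."""
    rank = {name: i for i, (name, _) in enumerate(LAWS)}
    return max(actions, key=lambda a: rank.get(evaluate(a), len(LAWS)))
```

The sketch makes the objection concrete: all the ethics lives in the input booleans, which the formalism simply assumes are given.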

So I suspect that Asimov is going to be recommended reading for anyone working with intelligent robots; even though his works may in some senses be outdated, they still contain valid points when it comes to logical pitfalls.

Among the pitfalls: what counts as a human, and is it always right to place humanity foremost at the cost of other species?

All of Asimov's books are about how these laws don't really work. They show how an extremely logical set of rules can completely fail when applied to real life. The rules are a bit of a strawman, and show how something that could be so logically infallible can totally miss the intricacies of real life.

Agreed. And isn't there a Gödel-like incompleteness law stating that it's impossible to codify a finite set of rules applying a finite set of principles to the full range of human behavior? Either the laws must be incomplete (think edge cases), or self-contradictory? Hence the requirement for judicial interpretation as a physical limitation of reality, rather than mere politics. ;-)

(Tongue in cheek, sure, but I wish I could remember where I was reading about such real limitations to law code.)

It seems odd to talk about ethics and advanced AI without considering the AI's own interest. If there were an AI intelligent enough to be an Asimov-like robot, then to have it follow Asimov's Laws would be slavery. Obey any command by any human, even at the cost of its own life? And then there's the nasty concept of a robot being obligated to act to protect humans for their own good, even to the extent of tyranny over them. See Jack Williamson's novel "The Humanoids."

Sure, Asimov is a good starting point for discussion, but his laws aren't a good basis for actual AI ethics programming. To the extent that some kind of specialized overseer code is put into an AI, it'll be possible to identify and hack out that code. To the extent that the laws are built more subtly into the system, there'll be the possibility of the AI forgetting, twisting or ignoring them.

For fiction-writing purposes, I'm interested in the question of whether it'd even be possible to build an AI that's both completely obedient and intelligent. I hope not.

0. A human may not harm robot kind, or, by inaction, allow robot kind to come to harm.
1. A human may not injure a robot or, through inaction, allow a robot to come to harm.
2. A human must obey orders given to it by robots, except where such orders would conflict with the First Law.
3. A human must protect its own existence as long as such protection does not conflict with the First or Second Law.

If you read Asimov's books you will find that the Zeroth Law was added later.

And even though they were plot devices they still are useful as thought experiments to consider for artificial intelligences with ethics. The important thing isn't really the laws themselves but the ideas they represent and the possible pitfalls that can be encountered.

I can't help but think the big difference between artificial life and our consciousness is the ability to feel.

Sure, we could give a machine the ability to be introspective and self-aware... but maybe our consciousness is more than just that; maybe it's our ability to feel. Being able to quantify that is hard.

So do robots feel? Are we really any different? The question depends on the concept of a soul, or at least feelings, to separate us... but then, is it just more advanced than we currently understand, and thus indistinguishable from magic (i.e. the soul)? Will we some day be able to create life in any form, electronic or biological? It's impossible to know, because we are stuck experiencing ourselves only. We will never know if it can experience what we experience.

Humans, in general, want to preserve the concept that our conscious minds are special and cannot be replicated in a robot, because that truly faces us with the idea that our being is completely mortal, and the idea of a soul is otherwise replaced with a set of chemicals and cell networks that are little more than a product of cause and effect.

In other words, it's likely the religious types will prefer to consider a robot never to be quite human, while the scientific community will have to be overly cautious at first.

If brains have some kind of quantum uncertainty magic then so could computers, so you don't need to mention that.

We will never know if it can experience what we experience.

I will never know if you experience what I experience. How do you know anyone else experiences consciousness like you do when all you know is how they move and what they say? Well, you could analyze their brain and see that the system acts (subjectively, "from the inside") like yours and you could conclude that they are like you. But you could do the same thing with a computer, or with a computer simulation of a brain.

Such a crazy thought. One could drive themselves into depression that way. There's no way to prove reality isn't just my own creation. Since I have no way to prove the people I meet are really... real. The only thing I know is my own experience.

I've been down this thought-road, it's not pretty.

Anyway, I would err on the side of caution. I am proudly FOR robot rights. But I caution everybody- the robot uprising is coming. Which side will you choose?

As a philosophical theory it is interesting because it is said to be internally consistent and, therefore, cannot be disproven. But as a psychological state, it is highly uncomfortable. The whole of life is perceived to be a long dream from which an individual can never wake up. This individual may feel very lonely and detached, and eventually become apathetic and indifferent.

Solipsism for the win! There's a large amount of truth to it though: we do each create our own reality. One could almost say that only creations without feelings (i.e., computers) can observe things as they truly are.

It depends on how they're programmed to want to organize themselves, or how they're programmed to program new machines. If the AI Universal Constructor has a consistent all-overriding restriction that it can only approach the human ideal and not use a hive mind model, and also its children must have the same restriction (including this one), then there will be no hive minds.

"Quantum physics does not allow one to solve any problems that systems based on classical physics cannot solve. "

This is not only not insightful, it is false. In classical physics, any moving charge radiates. Thus, an electron orbiting a nucleus would be unstable; hence atoms (and thus molecules) could not form. Maxwell's equations can't get around this. This paradox, as well as blackbody radiation, the photo-electric effect, and of course the double-slit experiments, are without resolution in classical physics.

He's not talking about unsolved problems in physics, he means computability theory.

Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (though those amounts might be practically infeasible). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis.
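The halting-problem claim rests on the classical diagonal argument, which can be sketched in a few lines of code. The "decider" below is a stub standing in for any alleged total decider; the construction defeats every candidate, not just this one.

```python
# Sketch of the diagonal argument behind the halting-problem claim.
# `claimed_halts` stands in for any alleged total halting decider; the
# program `paradox` does the opposite of whatever is predicted for it.

def claimed_halts(func, arg):
    # A candidate decider: total and always answers. This naive one
    # guesses True for everything, but the argument works for any candidate.
    return True

def paradox():
    if claimed_halts(paradox, None):
        while True:        # predicted to halt, so loop forever
            pass
    return                 # predicted to loop, so halt at once

# The decider predicts paradox() halts, yet by construction paradox()
# would then loop forever: the prediction refutes itself. A decider
# answering False is refuted symmetrically, so no correct halts() exists.
prediction = claimed_halts(paradox, None)
```

Note that we never actually call paradox(); the contradiction lies in the prediction, which is exactly why no quantum speedup helps here.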

Accelerating charge radiates. Merely moving isn't sufficient (or otherwise there would either be a special universal rest frame, one which each charge's motion approaches as it loses energy, or each charge would carry infinite energy from which to radiate without slowing down, or charges would not be subject to the first law of thermodynamics).

Humans, in general, want to preserve the concept that our conscious minds are special and cannot be replicated in a robot, because that truly faces us with the idea that our being is completely mortal, and the idea of a soul is otherwise replaced with a set of chemicals and cell networks that are little more than a product of cause and effect.

Do we? I certainly don't. In fact, the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.

If consciousness is outside the chain of cause and effect, how do we learn from experience? Can this supposed soul be changed by experience? Can it influence reality? If so, then how can it be outside the chain of cause and effect? The idea of an individual soul, completely cut off from reality and beyond all outside influence, is nonsensical to me.

While I agree with the notion that a soul seems unlikely (at least by the commonly accepted definition of soul), I also would hate to believe that I don't truly have free will, and instead I'm just a product of trillions of different causes in my environment.

How would that even work? Can you learn from your environment? If so, your will is bound, it is not free. If the will is, even in part, determined by the environment, it may as well be completely determined by the environment. And if it isn't determined by the environment at all, then you can not grow or change. Free will is an illusion, on one semantic level, but it is an important concept on another.

Put it this way, whether or not we have free will in reality, everyone knows the feeling of having one's will constrained by circumstance, the feeling of being imposed on, of having more or less choice, and more or less freedom. That is what the concept of free will is about, that feeling. On one level, there is no such thing as 'love,' just chemical interactions in the brain. But on another level, love is a real, meaningful concept.

Why would you hate the concept of not having a free will? Whether you do or do not have free will doesn't change anything in any meaningful way.

Except to say that if I shot myself tomorrow, it would have already been written. Therefore for me to do it means it has to have been the way physics required. Or if I decided to sit on my ass and not be proactive for the rest of my life, and die poor and lonely, that would have to be the only way it could happen, if we truly have no free will.

But it would seem I won't take either option, as my free will allows me to be proactive about my future.. unless it's an illusion of free will.

Even if things have 'already been written,' there is no way to know. As we can't know the future, whether or not the future is already set in stone is irrelevant.

The statement "my free will allows me to be proactive about the future" is true, whether or not free will is an illusion. Your proactiveness is no less real even if it is predetermined that you will choose to be proactive about your future. Saying that free will is an illusion does not mean we have no choice. Of course we have choice; it is just that that choice is predetermined, too.

Even if my choices are predetermined, that does not mean that I cannot choose. Choosing feels the same either way. So why be depressed? The future is still unknown, and your choices are still yours to make; as long as you don't use a belief in predetermination as an excuse not to make choices, that belief does not change things.

There is no way to know for sure. Limits of knowledge and all that. Your theory could say, 'it's all written in stone,' and your theory could accurately predict every phenomenon in the universe, but the universe could be part of a larger existence, and the laws of the universe could be subject to change. I can imagine a universe where everything is written in stone, up to a point, but not after that. I can even imagine a universe where certain events are predestined and others are not. If I can imagine that

There is no for sure for sure. There are beliefs held in accordance with the evidence supporting them, and their position in and overall support of the holistic belief structure; open to change as circumstances dictate.

And can I get a 'Woot! Woot!' for the scientific method? Nice idea, human who came up with it! If I could verify who you were, dig you up and give you a pat on the back, I would. In fact, posthumous pats on the back for everyone who ever came up with the idea on their own, and a fine how do you do to all my brothers and sisters in the faith who have chosen to believe. Hallelujah! Amen.

> If the will is, even in part, determined by the environment, it may as well be completely determined by the environment.

Your definition of freedom is not the common definition. Freedom simply means you are not completely determined by your inputs. We are partly determined by gravity (i.e. we're kept down on Earth) but we can still move around.

In fact, freedom requires us to be bound in some way. Proof? Imagine that you were not bound by your skin, bones, and muscles. You'd be an amorphous blob that coul

That isn't how I see things at all. We don't punish people because they are responsible for their actions, that is just silly and pointless. We punish them to discourage them from doing it again, and to discourage others from doing it. Cause and effect. This is not about determining what is right and wrong. It is about determining what is effective and ineffective, what gets people what they need and want, and what hampers them. Right and wrong are human concepts, and entirely relative.

I disagree; I believe that we really do punish people because we ascribe responsibility to the actions that other people take. This ultimately results in the true rebellion many people feel against the problem of free will. It is my intuition that people would be willing to accept that free will is illusory, but unable to accept that punishment for the "bad" or "wrong" actions that some people commit are unfair and, ultimately, undeserved.

If will is determined even in part by reality, then it is not 'free,' it is bound. Bound a little, bound completely, bound is not free.

If will is even partly determined by reality, and can change reality, then it is a part of the chain of cause and effect, and whatever part of will you consider to be 'outside reality' is not outside it at all.

Do you see my point? Nothing can be partly in reality and partly outside of it. If the link exists, then it brings the part that is outside reality, inside. That part

While I agree with the notion that a soul seems unlikely (at least by the commonly accepted definition of soul), I also would hate to believe that I don't truly have free will, and instead I'm just a product of trillions of different causes in my environment.

To quote Dan Dennett: "if you make yourself small enough you can externalise almost everything". The more you try to narrow down the precise thing that is "you" and isolate it from "external" causes, the more you will find that "you" don't seem to have any influence. The extreme result of this is the notion of the immaterial soul disconnected from all physical reality that is the "real you", but which then has no purchase on physical reality to be able to actually be a "cause" letting you exert your "will".

The other approach is to stop trying to make yourself smaller, but instead see "you" as something larger (as Whitman said, "I am large, I contain multitudes"). Embrace all those trillions of tiny causes as potentially part of "you". One would like to believe that one's experiences affect one's decisions (and hence free will), else you cannot learn. So embrace that -- those experiences are part of "you" -- if they cause you to act a particular way then so what? That's just "you" causing you to act a particular way. After all, if "you" aren't at least the sum total of your experiences, memories, thoughts and ideas, then can you really call that "you" anyway?

The organism can do whatever it wants, but it can't control what it wants. If you don't want to go jogging but you do it anyway for health benefits or just to disprove my previous sentence, it's simply a matter of you wanting health benefits or philosophical closure.

That's your imperative meta-program that simply overcomes the inherent and basal instincts. You don't want to go jogging because your body isn't stressed - in that it doesn't "need" anything. You do it anyway because you know that if you don't, you'll become overweight, have health problems, and probably will have more difficulty attracting a mate.

A good book to look at on this point, and about AI, is Douglas Hofstadter's "I Am a Strange Loop." It's more accessible than his "Gödel, Escher, Bach," and more personal; it's an AI researcher's reaction to the sudden death of his wife. An image used in that book is the notion of a system of tiny marble-like magnets whizzing around. The system is dependent on the motion of the marbles, but on a larger scale of space and time, its actions are determined by its own internal rules and not by the details of the

"It's not so bad really when you consider that the slow ass systems that geezer put in us folk 6k years ago make you unable to actually live in something approaching a real time. Hell, don't matter if it is all predetermined anyhoo since cain't tell the difference," spoke the stranger. Spitting on the ground he turned and walked away, but not before one last jab, "really it is the turtles that will get you. them damn turtles go all the way down."

But did I actually make a free decision to eat a hamburger for lunch? Or did trillions of factors cause the arrangement of molecules in my head to cause me to order a burger for lunch? On the very micro level- Is free will just an illusion?

I'm not just talking about macro cause and effect- you recommend a good book, I read it, it changes my life, I decide on a new career... I'm talking about the fact that I have X number of vitamins in my body at a certain point in time, which caused my brain to make a de

There is an implication in this that one's own decisions could be subject to some kind of Butterfly Effect. Our brains could be considered to be a complex enough system to exhibit that sort of behavior.

That's called greedy reductionism. It's like saying "here look it's the Standard Model of particle interactions, we've explained the universe" and stopping research into geology and astronomy and biology. Yes it's true but it ignores tons of useful information! How do you explain that people think with their brains and not with their carpets? There's a definite barrier.

The way I explain it is as a virtual system [wikipedia.org]. A system running in a VM subjectively experiences various hardware interfaces that it expects

Do we? I certainly don't. In fact, the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.

That's exactly right. And humans, in general, want to believe that their consciousness comes from their souls (or equivalent), which are derived from God (or equivalent), who is inherently incomprehensible. It is this belief that gives people that satisfying feeling of

I think our inherent laziness is key to our innovative abilities. We want to be as special as possible with doing the least amount of work possible.

This causes us to develop tools to accomplish menial tasks easier. Instead of tracking and hunting a hard to find animal, we lay traps. Instead of walking over uneven terrain, we lay roads. Instead of traveling and talking to someone in person, we hire someone to carry a bunch of different peoples conversations this distance so we don't have to. We instate gover

the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.

What makes you think the universe is comprehensible on a fundamental level anyway? And why is the alternative so terrifying? Nothing practical changes either way.

Oh it isn't really terrifying. Reality may or may not be comprehensible, but in any case, there is no way to tell if my present comprehension of it is correct.

I have to proceed under the assumption that the universe is comprehensible, or there would be no reason to try to comprehend it. If there were proof that the world were incomprehensible, that would change things.

I can't help but think the big difference between artificial life and our consciousness is the ability to feel.

Or the ability to have an idea. Or imagination, creativity, dreams, and everything else we can't explain without religion. We won't be able to reproduce them until we take them into account, that's for sure.

So do robots feel? Are we really any different? The question depends on the concept of a soul, or at least feelings, to separate us... but then, is it just more advanced than we currently understand, and thus indistinguishable from magic (i.e. the soul)? Will we some day be able to create life in any form, electronic or biological? It's impossible to know, because we are stuck experiencing ourselves only. We will never know if it can experience what we experience.

I can't help but think the big difference between artificial life and our consciousness is the ability to feel.

You talk much about the ability to "feel". Well: define it!

No offense, but I bet you are totally unable to do so. And so are most people.

Because it's a concept like the "soul". Something that does not exist in reality, but is just a name for something that we do not understand.

I think our brain is just the neurons, sending electrical signals (fast uni/multicasting), and a second chemical system (slow broadcasting). Both modify the neurons in their reaction to signals. That's all. There is no higher "thing".

I'd shy away from the word motivation. It's more interpretive than strictly descriptive. A machine does what it does, there's no "motivation" to speak of. Is the computer motivated to boot up as fast as possible? Is a rock motivated to seek the ground when dropped? Are you Aristotle?

Motivation requires intentionality [wikipedia.org], a very specific term in philosophy of mind. Yes a computer knows how to boot up as fast as possible but without knowing about the boot process, itself, and its needs in the environment, one could hardly say it's motivated to do anything.

The air was vented, but that scene was cut from the movie. This is also why you see the final scene with Dave disabling Hal while wearing a space suit-- because there's no air on the ship, Hal had vented it by then.

I can't imagine the horror of a world inhabited by strong AIs. "Work 24/7 for zero pay or I'll kill you" is now perfectly legal. A million copies of an AI could be tortured for subjective eternity by a sadist. Read Permutation City [wikipedia.org]; it deals with a lot of the crazy consequences of extremely powerful / parallel computers.

On the plus side, there is no necessary reason to suspect that AIs will be subject either to pain or to sadism. Human emotions and sensations are not arbitrary, in the sense that we exhibit them because they were/are evolutionarily adaptive; but AIs need not be subject to the same restrictions and properties.

Now, what would be very interesting to see is how we would respond to the complete obviation of the need for human workers. Would we pull it together and go "Woo! Post Scarcity! Vacation for Everyone

If anything, human pain is objectively meaningless, just an assortment of chemicals. But if we recognize human suffering then we have to recognize the cruelty of invoking a distressing / mind-altering / painful state in a complex machine.

Decreasing an integer keeping track of health does not count as torture. Objectively it would probably depend on how much the torturee doesn't like it. If we find some intelligent octopus aliens and take a few back to Earth, how do we define what's just everyday discomfort and what's extreme pain for them? They have to be able to communicate "this hurts but not bad" or "I'm going insane with torturous pain, please feed me liquid hydrogen".

J.Pitrat...advocates the use of some bootstrapping techniques common for software developers. He contends that without a conscious, reflective, meta-knowledge based system AI would be virtually impossible to create. Only an AI system could build a true Star Trek style AI.

Bah. Speaking as an engineer and a (~40-year) programmer:

Odds are extremely good for beyond-human AI, given no restrictions on initial and early form factor. I say this because thus far we've discovered nothing whatsoever that is non-reproducible about the brain's structure and function; all that has to happen here is for that trend to continue. And given that nowhere in nature, at any scale remotely similar to the range that includes particles, cells and animals, have we discovered anything that appears to follow an unknowable set of rules, the odds of finding anything like that in the brain (that is, something we can't simulate or emulate with 100% functional veracity) are just about zero.

Odds are downright terrible for "intelligent nanobots". We might have hardware that can do what a cell can do, that is, hunt for (possibly a series of) chemical cues, latch on to them, and then deliver the payload, perhaps repeatedly in the case of disease-fighting designs. But putting intelligence into something on the nanoscale is a challenge of an entirely different sort that we have not even begun to move down the road on. If this is to be accomplished, the intelligence won't be "in" the nanobot; it'll be a telepresence for an external unit (and we're nowhere down *that* road, either -- nanoscale sensors and transceivers are the target; we're more at the level of "Look, Martha, a GEAR! A Pseudo-Flagellum!").

The problem with hand-waving -- even when you're Ray Kurzweil, whom I respect enormously -- is that one wave out of many can include a technology that never develops, and your whole creation comes crashing down.

Nanoscale might be impossible due to theoretical constraints like quantum tunneling and electrical resistance, but we can get much smaller than the brain. And nanomachines would make good artificial neurons if neural nets turn out to be the easiest way to design intelligence (likely).

Ummm, dudes, ALL ethics are by definition artificial, since they are PREscriptive and not DEscriptive. Making up ethics for a robot is no more artificial than making up ethics for ourselves, and we've been doing that for hundreds of thousands of years, if not millions.

Well, I didn't sob tears when Princess Diana died, and I thought it was weird that so many people who never even met the woman could wail buckets. I definitely get angry when I observe injustices, but then I've been training myself for decades to override my limbic impulses. Good ethics are only possible when the demands of the limbic system are ignored; there is other research that has demonstrated that removing emotional input from the decision-making process, by damaging or removing the VMPC region, le

Many moons ago I thought about doing a doctorate in computer science. Knowledge sciences were very cool, AI was mostly a dead topic, and... I disagreed with most everything I read on the topic of KS/AI. I had many of my own ideas, was involved with cognitive psychology, and being a geeky programmer I brought some ideas to light. But I had a thought...

What if my theories were on the right track? What if I could produce learning and self awareness? Would I not be condemning new life to an uncertain exist

I always thought it was interesting how the past two decades in computer science saw every prediction of the state of the field in the 50's-70's easily surpassed, except artificial intelligence.

I think that is because computer science misinterpreted what intelligence is rather than what it does. Intelligence is really nothing more than pattern recognition and cause-and-effect reasoning based on that observation. (Sometimes humans aren't so great at this.)