Windwalker wrote:Consciousness as we perceive it may well be illusion -- but it is a real phenomenon, insofar as it arises from electrochemical changes that can be measured and which alter or cease during sleep, coma or death.

We can agree on that. And I believe that someday we will, or might, be able to "engineer" with consciousness in the same way we can with light. Some people (maybe SC?) might be dissatisfied with that, might want "more," and I would not try to dissuade them. As long as they don't insist that we be dissatisfied along with them!

"Results! Why, man, I have gotten a lot of results. I know several thousand things that won't work." --Thomas A. Edison

I'm not dissatisfied with where we are in understanding things, but I think it is important to clearly differentiate between the effects that can be measured, such as energy, brain activity, etc., and the "thingness" underlying those effects (strings on the one hand, possibly? who knows on the other). Just because something acts conscious doesn't make it conscious. The Turing test is only a test of the skill of a programmer and the perception of a judge--it has no ability to measure machine intelligence whatsoever. Just because something looks like a solid object doesn't mean we know what solidity ultimately is or what it's made up of.

While these distinctions may not matter in engineering, they certainly matter in issues of ethics and law. Ultimately, they may well matter in engineering as well. I don't think I'm alone in this, given the desire for a GUT (grand unified theory), or even the statements that everything is "knowable". These indicate a desire to actually know something to the root, and not just to measure and describe effects. I do think it is important to be able to say what consciousness is and how/when it manifests (not its effects--the actual experience of it), so we can make better decisions about the life and death of humans and other beings, just as I think it's important to produce more evidence about evolution to argue against Creationism.

Peace!
SC

PS> The Buddhist position is not that consciousness is illusion, but that our perceptions are largely delusional, especially those about the self. The soul is a delusion from a Buddhist perspective, but consciousness is self-apparent. I'm not a dogmatic believer in any metaphysics, but I find the Buddhist perspective healthy in that it accepts ignorance and is pragmatic about what to spend our time on (being compassionate and wise).

sanscardinality wrote: Just because something acts conscious doesn't make it conscious.

How do you distinguish between something that acts conscious and something that "is" conscious? Even philosophically? What is "real" consciousness? How do you know "true" consciousness even exists? How do I know that you all aren't just "acting" conscious? (That last question is not original to me.)

No, really, I'd like to know.

"Results! Why, man, I have gotten a lot of results. I know several thousand things that won't work." --Thomas A. Edison

We don't know what exactly consciousness is, so no one can say at present how to distinguish its mere outward signs from the actual occurrence. I feel confident that making Lego robots that pretend to be conscious in limited ways, or computer programs that are really good at pretending to be conscious, comes no closer than rocks carved into human forms. Perhaps that will change someday, and one of the interesting scenarios for determining what consciousness is would be to engineer one with better introspective capabilities and ask it. We don't seem to have that ability ourselves.

I don't think measuring more accurately how it uses hardware is likely to bring about actual knowledge of what it is (I am equally skeptical about a GUT that doesn't beg more questions).

Now could all the other consciousnesses be illusory? Sure, but it doesn't make much sense to me philosophically.

sanscardinality wrote:We don't know what exactly consciousness is, so no one can say at present how to distinguish its mere outward signs from the actual occurrence.

Speaking strictly for myself, I have a really difficult time being interested in arguing about the differences between "pretend" X and "actual" X if I don't know even approximately what X is.

Everyone else, of course, is welcome to do as they see fit.

Being unable to measure something or define it doesn't make it pretend. My main point is that it is a leap to say that because something seems conscious, it therefore is conscious. It may be the same thing, but there's no particular reason to think so any more than there is a reason to think two watches that keep similar time have similar mechanisms.

The conversation didn't strike me as an argument really - I don't think anyone has made a claim about the nature of consciousness nor has anyone refuted a claim about the nature of consciousness. Sorry if I came across as argumentative.

I believe SC was thinking along the lines of the Turing test analogy, which at this point would be "pretend" consciousness.

These questions involve the conclusions of the observer: how do we know that others have minds and perceive similarly to us? This is directly pertinent to the real world because of cognitive conditions, particularly autism, in which the ability to form a "theory of mind" is impaired. It will also come prominently into play if we meet other intelligent species.

I agree with SC that these questions impinge on ethics and law. From the biological viewpoint, consciousness is increasingly considered to be a gradient (in fact, a family of gradients), rather than a yes-no phenomenon. Scientists who use animals in experiments now must avoid pain and discomfort as much as possible. The protocols that are required for such work are lengthy and detailed and pass rigorous reviews by several committees.

For I come from an ardent race / That has subsisted on defiance and visions.

Oops. I mean "argument" as in a persuasive line of reasoning. Did not mean to slander you or accuse you of being, well, vulgar.

sanscardinality wrote: but there's no particular reason to think so any more than there is a reason to think two watches that keep similar time have similar mechanisms.

Well, there's the rub. Is consciousness the mechanism or the outcome? Two watches may have very different mechanisms, but they both keep time. Is one "real" time and the other "pretend" time? And this is my concern: conflating the mechanism and the outcome.

Let me give you an example. The philosopher John Searle has long been a critic of the possibility of AI--in essence he insists that it is, in principle, impossible for machines to think, have consciousness, etc. Here is one of his arguments, not verbatim, but close: he says that having an algorithm that appears to think (be conscious, self-aware, etc.) is a model of thinking but not "real" thinking. He further gives the example of digestion: a computer model of digestion is just a model, not real digestion.

But I say this is a false analogy. The real analogy would be a plastic-and-glass stomach that adds chemicals to process food and make it suitable for uptake. It might use different chemical processes, but in the end a source of energy and building blocks is broken down to be suitable for adding to the body. It's like SC's watch analogy: there may be different mechanisms, but if the food is digested, it's still real.

Look, the only thing I can judge by is the product. If you have an algorithm that persuasively acts as if it is conscious, self-aware, thinking, then frankly I don't see how someone could argue--sorry, put forward a persuasive line of reasoning--that it is not those things. And to address Athena's point about the Turing test--I acknowledge it won't be simple, and there will be arguments, er, persuasive lines of reasoning. But I am extremely uncomfortable with a priori assumptions that there is "real" thinking and "pretend" thinking when we don't really understand it. Maybe we will have a good criterion for distinguishing them. But maybe we won't.

Right now it's pretty easy to dismiss ELIZA and other programs that trick the unwary into thinking they are talking to a real person, and we are far, far, far from any artificial consciousness, real or pretend. But I'm willing to bet that if and when we make progress and get closer, it's going to be really hard to make a distinction that isn't simply on the basis of discrimination, e.g., it's only real thoughts if it's done with carbon and not with silicon. In fact, it is exactly because the issue is of importance to ethics and so on that I am, on principle, unwilling to concede differences between "real" and "pretend" consciousness when none of us has any basis whatsoever to make that distinction.

I'm sorry if I am being argumentative. But none of you have put forward a persuasive line of reasoning toward a distinction between "real" and "pretend" thinking. This is why I agreed with the Dijkstra comment that started this thread. I think (if you believe I am doing "real" thinking) that discussions of whether or not a submarine "swims" are, for me, as pointless as discussions of whether two watches with different mechanisms are both keeping "real" time. Discussing the differences in the mechanisms and how they achieve the same end is fascinating and fruitful. Discussing what consciousness is, and how it arises in biological systems, is fascinating. Speculation about how consciousness might arise in algorithmic systems, while premature, could also be fascinating. Even discussion of how we might be "fooled" into thinking an algorithmic system is conscious, as with the Turing test, is interesting.

Last edited by caliban on Tue Mar 27, 2007 4:00 pm, edited 1 time in total.

"Results! Why, man, I have gotten a lot of results. I know several thousand things that won't work." --Thomas A. Edison

caliban wrote:Well, there's the rub. Is consciousness the mechanism or the outcome?//But I'm willing to bet that if and when we make progress and get closer, it's going to be really hard to make a distinction that isn't simply on the basis of discrimination, e.g., it's only real thoughts if it's done with carbon and not with silicon. In fact, it is exactly because the issue is of importance to ethics and so on that I am, on principle, unwilling to concede differences between "real" and "pretend" consciousness when none of us have any basis whatsoever to make that distinction

I agree, as you already know from having read my book. In my various posts on this thread I was contemplating the ramifications, rather than disagreeing. Maybe I was too indirect.

For I come from an ardent race / That has subsisted on defiance and visions.

Windwalker wrote: In my various posts in this thread I was contemplating the ramifications

I think contemplating the ramifications would be fascinating. In fact, I would propose a new direction: what would be evidence of consciousness? How could we tell? The Turing Test is probably flawed, but it's a start. Is there a way to go further? And what might be the ramifications?

"Results! Why, man, I have gotten a lot of results. I know several thousand things that won't work." --Thomas A. Edison

caliban wrote:But I say this is a false analogy. The real analogy would be a plastic-and-glass stomach that adds chemicals to process food and make it suitable for uptake. It might use different chemical processes, but in the end a source of energy and building blocks is broken down to be suitable for adding to the body. It's like SC's watch analogy: there may be different mechanisms, but if the food is digested, it's still real.

Consciousness and self-awareness are tricky here. There is no known way to tell whether an apparent consciousness is in fact conscious. Normally this wouldn't be an issue, but we grant ethical consideration to conscious beings that we don't to unconscious objects. If the stomach's acids are removed and it no longer digests, then it no longer digests. But if a consciousness loses its ability to communicate, or even loses the ability to appear conscious by other measures, it may still be a consciousness. So while I agree with your analogy in general, I think it fails when it comes to self-awareness and consciousness. We build convincing simulacra all the time, but they don't show much incremental progress toward consciousness--even on the level of a simple animal.

caliban wrote:Look, the only thing I can judge by is the product. If you have an algorithm that persuasively acts as if it is conscious, self-aware, thinking, then frankly I don't see how someone could argue--sorry, put forward a persuasive line of reasoning--that it is not those things.

I'm not saying we'll never be able to tell the difference, but that for now we cannot. Perhaps when we get closer to making a machine that really appears to think, it'll tell us how to tell (it should have a *lot* more ability to retain information and work through very complex problems quickly).

My sensitivity to coming across in an argumentative way is simply that I don't like to alienate people - something I'm very capable of doing when I get going in a conversation like this. Please feel free to argue your case as strongly as you like! Short of personal insults, I don't get offended.

caliban wrote:And to address Athena's point about the Turing test--I acknowledge it won't be simple, and there will be arguments, er, persuasive lines of reasoning. But I am extremely uncomfortable with a priori assumptions that there is "real" thinking and "pretend" thinking when we don't really understand it. Maybe we will have a good criterion for distinguishing them. But maybe we won't.

This copy of Emacs does some thinking-like things, such as finding the beginning of the next sentence. It's not thinking. It's not approximating it or making incremental progress towards it--it's just a bunch of human thoughts that have been chained into logical structures. Even the most compelling game AIs are just machines; they can appear to think and react in uncannily lifelike ways, yet they are pretend thinkers. No one worries about leaving the PlayStation on because of all the little guys in there who die every time you turn it off, and I think for good reason. My cat, on the other hand, clearly thinks and is conscious. That is not to say that a computer cannot someday become conscious--I have no reason to think it can't. But just passing a Turing Test won't necessarily make it a consciousness. So I do think there is "real" and "pretend" thinking, but also that we cannot dismiss every non-biological appearance of thinking as pretend.
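To make the point concrete: the "find the next sentence" operation reduces to a few lines of mechanical pattern matching. This is my own rough sketch in Python, not Emacs's actual forward-sentence implementation, but it shows that no understanding is involved:

```python
import re

def next_sentence_start(text: str, pos: int) -> int:
    """Return the index where the next sentence begins after pos.

    Purely mechanical rule: sentence-ending punctuation, optional
    closing quotes/brackets, then whitespace. No comprehension of
    the text is involved.
    """
    match = re.search(r"[.!?]['\")\]]*\s+", text[pos:])
    if match is None:
        return len(text)  # no further sentence boundary found
    return pos + match.end()

text = "This is not thinking. It is pattern matching! See?"
print(next_sentence_start(text, 0))  # index of "It is pattern matching!"
```

The rule happily misfires on abbreviations like "Dr. Smith"--which is exactly the point: it manipulates characters, not meanings.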

caliban wrote:Right now it's pretty easy to dismiss ELIZA and other programs that trick the unwary into thinking they are talking to a real person, and we are far, far, far from any artificial consciousness, real or pretend. But I'm willing to bet that if and when we make progress and get closer, it's going to be really hard to make a distinction that isn't simply on the basis of discrimination, e.g., it's only real thoughts if it's done with carbon and not with silicon. In fact, it is exactly because the issue is of importance to ethics and so on that I am, on principle, unwilling to concede differences between "real" and "pretend" consciousness when none of us has any basis whatsoever to make that distinction. I'm sorry if I am being argumentative. But none of you have put forward a persuasive line of reasoning toward a distinction between "real" and "pretend" thinking. This is why I agreed with the Dijkstra comment that started this thread. I think (if you believe I am doing "real" thinking) that discussions of whether or not a submarine "swims" are, for me, as pointless as discussions of whether two watches with different mechanisms are both keeping "real" time. Discussing the differences in the mechanisms and how they achieve the same end is fascinating and fruitful. Discussing what consciousness is, and how it arises in biological systems, is fascinating. Speculation about how consciousness might arise in algorithmic systems, while premature, could also be fascinating. Even discussion of how we might be "fooled" into thinking an algorithmic system is conscious, as with the Turing test, is interesting.

I agree with all your points, but I think it cuts both ways. None of us has a basis to equate consciousness as we experience it with the effects of consciousness. The limitation of our ability to measure does not define the underlying reality.

Just to illustrate how far we are from confronting the issue head on, I pulled a bit of the transcript from the 2005 Loebner prize competition. This is the highest-scoring "AI" in the competition, whose strategy appears to be reciting nonsense and drawing the judge into its better areas by making leading statements. The transcript as a whole can be found at http://loebner.net/Prizef/2005_Contest/ ... ssion2.htm, but this was my favorite bit:

2005-09-18-10-53-22 PROGRAM: A situation can't be pointless, but the solutions will be very hard to implement.

2005-09-18-10-54-04 JUDGE: Solutions to what? And what about the damn hairbrush?
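For what it's worth, the entire trick behind ELIZA-style chatterbots fits in a few lines: a ranked list of pattern/template rules, plus canned deflections when nothing matches. This is a hypothetical minimal sketch, not the actual contest program:

```python
import re

# Ranked pattern/template rules: first match wins.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\b(mother|father|family)\b", re.I),
     "Tell me more about your family."),
]

# Stock leading statements used when no rule matches.
DEFLECTIONS = ["Please go on.", "What does that suggest to you?"]

def respond(utterance: str, turn: int = 0) -> str:
    """Echo the user's own words back via the first matching template,
    or fall back to a canned deflection. No model of meaning anywhere."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFLECTIONS[turn % len(DEFLECTIONS)]

print(respond("I am worried about the damn hairbrush"))
# → How long have you been worried about the damn hairbrush?
```

The program never represents what a hairbrush is; it reflects the judge's phrasing back and steers toward topics its rules cover, which is essentially the strategy on display in the transcript above.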

sanscardinality wrote: We build convincing simulacra all the time, but they don't show much incremental progress towards consciousness - even on the level of a simple animal.

I don't think they are very convincing at all--such as in your example. Clearly they are tricks, and take advantage of how people respond to social cues. How will we know it is not a trick? That is hard to answer.

Let me try to state my concern. I agree with you that today's efforts at AI fall far short, and that any appearance of consciousness is just a trick. But I am concerned that someone, not necessarily you, might conclude that any machine-based consciousness is a trick and not real. I am unwilling to dismiss the possibility of machine consciousness and intelligence a priori. (I am not asserting that you are dismissing the possibility.) This is the basis of my fervid comments.

Which leads us to: if we accept at least the possibility of machine consciousness, how can one distinguish between the, er, "real" case and the frauds like ELIZA and the competition you cite? I don't know, but I think this is an important question. I myself prefer this phrasing of the question.

"Results! Why, man, I have gotten a lot of results. I know several thousand things that won't work." --Thomas A. Edison

caliban wrote:I don't think they are very convincing at all--such as in your example. Clearly they are tricks, and take advantage of how people respond to social clues. How will we know it is not a trick? That is hard to answer.

Agreed. The convincing simulacra I was describing are more in specialized areas. There are certainly chat bots that fish for information on IMs and are convincing enough to fool teenagers into giving out personal information. Game AIs are also pretty convincing at doing things like squad tactics. Deep Blue is a pretty good simulacrum of a chess player. I agree that general-purpose AIs are pretty darn weak.

caliban wrote:Let me try to state my concern. I agree with you that today's efforts at AI fall far short, and that any appearance of consciousness is just a trick. But I am concerned that someone, not necessarily you, might conclude that any machine-based consciousness is a trick and not real. I am unwilling to dismiss the possibility of machine consciousness and intelligence a priori. (I am not asserting that you are dismissing the possibility.) This is the basis of my fervid comments.

I share your concerns. I actually think we see the issue somewhat similarly. My concern is more in the other direction - that by simulating consciousness, we'll demote the actual kind to something less/different than it is.

caliban wrote:Which leads us to: if we accept at least the possibility of machine consciousness, how can one distinguish between the, er, "real" case and the frauds like ELIZA and the competition you cite? I don't know, but I think this is an important question. I myself prefer this phrasing of the question.

I have the same answer as you - I don't know. I do think it's important as well. I suspect we'll know it when we see it, but we may not.

sanscardinality wrote:Deep Blue is a pretty good simulacrum of a chess player.

Deep Blue is an effective chess program, but it actually is lousy at simulating a human player. In fact it is an excellent illustration of how little we understand thought. Human players demonstrably "chunk" data--there are experiments which illustrate this--and explore only a few branches. Deep Blue and other chess programs look at millions of possible combinations, but are unable to chunk data the way humans do. If we could write a chess program that approaches chess the way humans do, by chunking data, we would have made huge progress.
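A back-of-the-envelope calculation shows the scale of the difference. The branching factors below are illustrative assumptions, not measurements: roughly 35 legal moves per chess position is a commonly quoted average, while the chunking studies suggest masters seriously consider only a handful of candidate moves.

```python
def nodes_searched(branching: int, depth: int) -> int:
    """Positions examined by a full-width search to the given depth,
    i.e. b + b^2 + ... + b^depth for branching factor b."""
    return sum(branching ** d for d in range(1, depth + 1))

# Machine-style: examine every legal move at every ply.
brute_force = nodes_searched(35, 5)
# Human-style: prune to a few chunk-suggested candidate moves per ply.
chunked = nodes_searched(4, 5)

print(f"full-width search: {brute_force:,} positions")  # tens of millions
print(f"chunked search:    {chunked:,} positions")      # about a thousand
```

Five plies of full-width search already visits tens of millions of positions, while the chunked searcher visits just over a thousand, which is why a program that matched human selectivity would represent real progress in understanding thought rather than raw computation.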

sanscardinality wrote:My concern is more in the other direction - that by simulating consciousness, we'll demote the actual kind to something less/different than it is.

Oh, I'm not worried about that. We humans have a healthy, or even unhealthy, self-regard. And, although this is almost certainly not what you meant, I am leery of arguments that such-and-such will demote human status. Although you did not intend it, it smells to me of what was said to Copernicus, Galileo, Darwin... Don't worry. We'll always think ourselves special. And always think the Other as unspecial. It's our gift and our curse.

"Results! Why, man, I have gotten a lot of results. I know several thousand things that won't work." --Thomas A. Edison

I'm sure it would fool me, but I'm a bad chess player. Now Go--there's a game I can get into.

caliban wrote:Oh, I'm not worried about that. We humans have a healthy, or even unhealthy, self-regard. And, although this is almost certainly not what you meant, I am leery of arguments that such-and-such will demote human status. Although you did not intend it, it smells to me of what was said to Copernicus, Galileo, Darwin... Don't worry. We'll always think ourselves special. And always think the Other as unspecial. It's our gift and our curse.

It isn't really what I meant, and I do agree with you. I was thinking more of our attitude towards "lesser" beings. I don't think any area of inquiry should be off limits to science.