From eric at m056832107.syzygy.com Mon Feb 1 01:03:29 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 1 Feb 2010 01:03:29 -0000
Subject: [ExI] How to ground a symbol
In-Reply-To: <975270.46265.qm@web36504.mail.mud.yahoo.com>
References: <20100131230539.5.qmail@syzygy.com>
<975270.46265.qm@web36504.mail.mud.yahoo.com>
Message-ID: <20100201010329.5.qmail@syzygy.com>
Gordon:
>This kind of processing goes on in every software/hardware system.
Yes, and apparently you didn't understand me. I already addressed this
issue later in the same message. It's at a different layer of
abstraction.
It's fine to ignore parts of messages that you agree with. It's
disingenuous to act as though a point hadn't been raised when you're
actually ignoring it.
>> Come back after you've written a neural network
>> simulator and trained it to do something useful.
>
>Philosophers of mind don't care much about how "useful" it may seem.
While I haven't actually written a neural network simulator, I have
written quite a few programs that are of similar levels of complexity.
I know from experience that things which seem simple, clear, and well
defined when thought about in an abstract way are in fact complex,
muddy, and ill-defined when one actually tries to implement them.
Until such a system has been shown to do something useful, it's
probably incomplete, and any intuition learned from writing it may
well be useless. That's why I stipulated usefulness.
>I think artificial neural networks show great promise as decision
> making tools.
Natural ones do too.
>But 100 billion * 0 = 0.
But 100,000,000,000 * 0.00000000001 = 1.
Your argument depends on the axiomatic assumption that the level of
understanding in a single simulated neuron is *exactly* zero. Even
the tiniest amount of understanding in a programmed device (like a
thermostat) devastates your argument. So you cling to the belief that
understanding must be a binary thing, while the universe around you
continues to work by degrees instead of absolutes.
Yes, philosophy deals with absolutes, but where it ignores shades of
gray in the real world it gets things horribly wrong.
-eric
From gts_2000 at yahoo.com Mon Feb 1 01:47:46 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 31 Jan 2010 17:47:46 -0800 (PST)
Subject: [ExI] multiple realizability
In-Reply-To: <20100201010329.5.qmail@syzygy.com>
Message-ID: <418148.11027.qm@web36501.mail.mud.yahoo.com>
--- On Sun, 1/31/10, Eric Messick wrote:
>> This kind of processing goes on in every
>> software/hardware system.
>
> Yes, and apparently you didn't understand me. I
> already addressed this issue later in the same message.
> It's at a different layer of abstraction.
The layer of abstraction does not matter to me. What does matter is the extent to which the system's supposed mental operations consist of computational processes operating over formal elements, i.e., the extent to which it operates by formal programs. To that extent, in my view, the system lacks a mind.
One can conceive of an "artificially" constructed neural network that is in every respect identical to a natural brain, in which case that machine has a mind. So let's be clear: my objection is not that strong AI cannot happen. It is that it cannot happen in software/hardware systems, networked or stand-alone.
To make my point even more clear: I reject the doctrine of multiple realizability. I do not believe we can extract the mind from the neurological material that causes the subjective mental phenomena that characterize it, as if one could put a mind on a massive floppy disk and then load that "mental software" onto another substrate. I reject that idea as nothing more than a 21st century version of Cartesian mind/matter dualism.
The irony is that people who don't understand me call me the dualist, and suggest that I rather than they posit the existence of some mysterious mental substance that exists distinct from brain matter. I hope Jeff Davis catches this message.
-gts
From stathisp at gmail.com Mon Feb 1 08:52:56 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Mon, 1 Feb 2010 19:52:56 +1100
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <491598.82004.qm@web36508.mail.mud.yahoo.com>
References: <491598.82004.qm@web36508.mail.mud.yahoo.com>
Message-ID:
2010/2/1 Gordon Swobe :
>> He is the whole system, but his intelligence is only a
>> small and inessential part of the system, as it could easily
>> be replaced by dumber components.
>
> Show me who or what has conscious understanding of the symbols.
The intelligence created by the system has understanding.
>> It's irrelevant that the man doesn't really
>> understand what he is doing. The ensemble of neurons doesn't
>> understand what it's doing either, and they are the whole system too.
>
> I have no objection to your saying that neither the system nor anything contained in it has conscious understanding, but in that case you need to understand that you don't disagree with me; you don't believe in strong AI any more than I do.
The system has understanding, but no part of the system either
separately or taken as an ensemble has understanding.
I've tried to explain this giving several variations on the CRA, none
of which you have directly responded to, so here they are again:
Suppose that each neuron has sufficient intelligence for
it to know how to do its job. No neuron understands language, but the
person does. There are many tiny specialised intelligences and one
large general intelligence, and the two don't communicate. This is
analogous to the extended CR.
Suppose that the neurons are connected as one entity with sufficient
intelligence to know when to make its constituent parts fire. This
entity doesn't understand language, but the person does. There are two
intelligences, one specialised and one general, and the two don't
communicate. This is analogous to the CR.
Suppose there are several men in the extended CR all doing their bit
manipulating symbols. The men don't understand language, but the
entity created by the system does. There are several small specialised
intelligences (their general intelligence is not put to use) and one
large general intelligence, and the two don't communicate. This is
analogous to a normal brain.
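(For what it's worth, here is a mundane, non-philosophical analogue of
the same structural point, sketched in Python. It says nothing about
understanding as such; it only illustrates that an ensemble can do
something none of its parts can do. The gates and weights are the usual
textbook ones for XOR, nothing specific to the CR.)

# No single threshold unit can compute XOR, but three of them wired
# together can: the capability belongs to the ensemble, not to any part.
def unit(w1, w2, bias):
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

or_gate   = unit(1, 1, -0.5)
nand_gate = unit(-1, -1, 1.5)
and_gate  = unit(1, 1, -1.5)

def xor(x1, x2):
    return and_gate(or_gate(x1, x2), nand_gate(x1, x2))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))   # prints 0, 1, 1, 0 for the four inputs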
--
Stathis Papaioannou
From stathisp at gmail.com Mon Feb 1 10:04:03 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Mon, 1 Feb 2010 21:04:03 +1100
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <732400.27938.qm@web36501.mail.mud.yahoo.com>
References: <20100131182926.5.qmail@syzygy.com>
<732400.27938.qm@web36501.mail.mud.yahoo.com>
Message-ID:
On 1 February 2010 06:22, Gordon Swobe wrote:
> --- On Sun, 1/31/10, Eric Messick wrote:
>> This was the start of a series of posts where you said that
>> someone with a brain that had been partially replaced with
>> programmatic neurons would behave as though he was at least partially
>> not conscious. You claimed that the surgeon would have to
>> replace more and more of the brain until he behaved as though he was
>> conscious, but had been zombified by extensive replacement.
>
> Right, and Stathis' subject will eventually pass the TT just as your subject will in your thought experiment. But in both cases the TT will give false positives. The subjects will have no real first-person conscious intentional states.
I think you have tried very hard to avoid discussing this rather
simple thought experiment. It has one premise, call it P:
P: It is possible to make artificial neurons which behave like normal
neurons in every way, but lack consciousness.
That's it! Now, when I ask if P is true you have to answer "Yes" or
"No". Is P true?
OK, assuming P is true, what happens to a person's behaviour and to
his experiences if the neurons in a part of his brain with an
important role in consciousness are replaced with these artificial
neurons?
I'll answer the first part for you: his behaviour must remain unchanged. It must
remain unchanged because the artificial neurons behave in a perfectly
normal way in their interactions with normal neurons, sensory organs
and effector organs, according to P. If they don't, then P is false,
and you said that P is true. Can you see a way that I haven't seen
whereby it might *not* be a contradiction to claim that the person's
neurons will behave normally but the person will behave differently?
OK, the person's behaviour remains unchanged, by definition if P is
true. What about his experiences? The classic example here is visual
perception. If P is true, then the person would go blind; but if P is
true, he is also forced to behave as if he has normal vision. So
internally, either he must not notice that he is blind, or he must
notice that he is blind but be unable to communicate it. The latter is
impossible for the same reasons as it is impossible that his behaviour
changes: the neurons in his brain which do the thinking are also
constrained to behave normally. That leaves the first option, that he
goes blind but doesn't notice. If this idea is coherent to you, then
you have to admit that you might right now be blind and not know it.
However, you have clearly stated that you think this is preposterous:
a zombie doesn't know it's a zombie, but you know you're not a zombie,
and you would certainly know if you suddenly went blind (as a matter
of fact, some people *don't* recognise when they go blind - it's
called Anton's syndrome - but these people also behave abnormally, so
they aren't zombies or partial zombies).
Where does that leave you? I think you have to say you were mistaken
in saying P is true. It isn't possible to make artificial neurons
which behave like normal neurons in every way but lack consciousness.
Can you see another way out that I haven't seen?
--
Stathis Papaioannou
From stefano.vaj at gmail.com Mon Feb 1 10:16:10 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 1 Feb 2010 11:16:10 +0100
Subject: [ExI] Religious idiocy (was: digital nature of brains)
In-Reply-To: <20100129192646.5.qmail@syzygy.com>
References:
<584374.10388.qm@web36504.mail.mud.yahoo.com>
<20100129192646.5.qmail@syzygy.com>
Message-ID: <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com>
On 29 January 2010 20:26, Eric Messick wrote:
> Meaning is attached to word symbols when the word symbols are
> associated with sense symbols, not with other word symbols.
Not all symbols are words - and in fact the word "three" can be
associated with the number "3" - but "sense symbols" sounds like a
dubious and redundant concept.
--
Stefano Vaj
From gts_2000 at yahoo.com Mon Feb 1 12:53:49 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 1 Feb 2010 04:53:49 -0800 (PST)
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To:
Message-ID: <641209.68117.qm@web36506.mail.mud.yahoo.com>
--- On Mon, 2/1/10, Stathis Papaioannou wrote:
> The system has understanding, but no part of the system
> either separately or taken as an ensemble has understanding.
> I've tried to explain this giving several variations on the
> CRA, none of which you have directly responded to
Because that answer doesn't make any sense to me, Stathis. Looks like you want to skirt the issue by asserting that the system understands things that the man, *considered as the system*, does not understand. You do this by imagining a fictional third entity that you call the "ensemble of neurons" that exists independently of the system. But the ensemble is the system.
Did you read the actual target article? Notice that the system AND the neurons "taken as an ensemble" understand the stories in English but they do not understand the stories in Chinese. Please explain why the ensemble and the system understand English but not Chinese. Why the difference?
-gts
From stathisp at gmail.com Mon Feb 1 13:14:09 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 2 Feb 2010 00:14:09 +1100
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <641209.68117.qm@web36506.mail.mud.yahoo.com>
References:
<641209.68117.qm@web36506.mail.mud.yahoo.com>
Message-ID:
On 1 February 2010 23:53, Gordon Swobe wrote:
> --- On Mon, 2/1/10, Stathis Papaioannou wrote:
>
>> The system has understanding, but no part of the system
>> either separately or taken as an ensemble has understanding.
>> I've tried to explain this giving several variations on the
>> CRA, none of which you have directly responded to
>
> Because that answer doesn't make any sense to me, Stathis. Looks like you want to skirt the issue by asserting that the system understands things that the man, *considered as the system*, does not understand. You do this by imagining a fictional third entity that you call the "ensemble of neurons" that exists independently of the system. But the ensemble is the system.
Could you respond to the specific examples I have used to demonstrate
this apparently non-obvious point? The neurons do not understand
language, they probably don't "understand" anything, and if they got
together on a day off to talk about it they still wouldn't understand
anything. And yet acting in concert, they produce this new entity, the
person, who does understand language. Note that it works both ways:
the person, who is very much more intelligent than the neurons,
doesn't have a clue what is going on in his head when he thinks
either. It's his head, so how is this possible?
> Did you read the actual target article? Notice that the system AND the neurons "taken as an ensemble" understand the stories in English but they do not understand the stories in Chinese. Please explain why the ensemble and the system understand English but not Chinese. Why the difference?
You have to acknowledge that there are different levels of
abstraction. The man understands English but that's completely
irrelevant to his mechanistic symbol manipulation. It could be that a
lone clever neuron in his frontal lobe understands Russian and recites
Pushkin while squirting its neurotransmitters, but that has nothing to
do with the man understanding Russian, since it does not in any way
impact on the operation of his language centre; and conversely, the
clever Russian-speaking neuron does not necessarily have any idea what
the man is up to, nor any knowledge of English or Chinese.
--
Stathis Papaioannou
From gts_2000 at yahoo.com Mon Feb 1 13:29:15 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 1 Feb 2010 05:29:15 -0800 (PST)
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To:
Message-ID: <452387.36160.qm@web36507.mail.mud.yahoo.com>
--- On Mon, 2/1/10, Stathis Papaioannou wrote:
>> Right, and Stathis' subject will eventually pass the
>> TT just as your subject will in your thought experiment. But
>> in both cases the TT will give false positives. The subjects
>> will have no real first-person conscious intentional
>> states.
>
> I think you have tried very hard to avoid discussing this
> rather simple thought experiment. It has one premise, call it P:
I didn't avoid anything. We went over it a million times. :)
> P: It is possible to make artificial neurons which behave
> like normal neurons in every way, but lack consciousness.
P = true if we define behavior as you've chosen to define it: the exchange of certain neurotransmitters into the synapses at certain times and other similar exchanges between neurons. I reject as absurd, for example, your theory that a brain the size of Texas constructed of giant neurons made of beer cans and toilet paper will have consciousness merely by virtue of those beer cans squirting neurotransmitters betwixt themselves in the same patterns that natural neurons do. I also reject, in the first place, your implied assumption that the neuron is necessarily the atomic unit of the brain.
> OK, assuming P is true, what happens to a person's
> behaviour and to his experiences if the neurons in a part of his
> brain with an important role in consciousness are replaced with these
> artificial neurons?
As I explained many times, because your artificial neurons will not help the patient have complete subjective experience, and because experience affects behavior in healthy people, the surgeon will need to keep re-programming the artificial neurons and most likely replacing and reprogramming other neurons until finally at long last he creates a patient that passes the Turing test. But that patient will not have any better quality consciousness than he started with, and may become far worse off subjectively by the time the surgeon finishes, depending on facts about neuroscience that in 2010 nobody knows.
Eric offered a more straightforward experiment in which he simulated the entire brain. You complicate the matter by doing partial replacements, but the principles that drive my arguments remain the same: formal programs do not have or cause minds. If they did, the computer in front of you this very moment would have a mind and would perhaps be entitled to vote like other citizens.
-gts
From stathisp at gmail.com Mon Feb 1 14:28:04 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 2 Feb 2010 01:28:04 +1100
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <452387.36160.qm@web36507.mail.mud.yahoo.com>
References:
<452387.36160.qm@web36507.mail.mud.yahoo.com>
Message-ID:
2010/2/2 Gordon Swobe :
>> P: It is possible to make artificial neurons which behave
>> like normal neurons in every way, but lack consciousness.
>
> P = true if we define behavior as you've chosen to define it: the exchange of certain neurotransmitters into the synapses at certain times and other similar exchanges between neurons.
Yes, that would be one aspect of the behaviour that needs to be reproduced.
> I reject as absurd, for example, your theory that a brain the size of Texas constructed of giant neurons made of beer cans and toilet paper will have consciousness merely by virtue of those beer cans squirting neurotransmitters betwixt themselves in the same patterns that natural neurons do.
That is a consequence of functionalism but at this point functionalism
is assumed to be wrong. All we need is artificial neurons that fit
inside the head (which excludes structures the size of Texas) and can
fool their neighbours into thinking they are normal neurons.
> I also reject, in the first place, your implied assumption that the neuron is necessarily the atomic unit of the brain.
OK, P can be made even more general by replacing "neuron" with
"component". The component could be subneuronal in size or a
collection of multiple neurons. It just has to behave normally in
relation to its neighbours.
>> OK, assuming P is true, what happens to a person's
>> behaviour and to his experiences if the neurons in a part of his
>> brain with an important role in consciousness are replaced with these
>> artificial neurons?
>
> As I explained many times, because your artificial neurons will not help the patient have complete subjective experience,
Yes, that's an essential part of P: no subjective experiences.
> and because experience affects behavior in healthy people, the surgeon will need to keep re-programming the artificial neurons and most likely replacing and reprogramming other neurons until finally at long last he creates a patient that passes the Turing test. But that patient will not have any better quality consciousness than he started with, and may become far worse off subjectively by the time the surgeon finishes, depending on facts about neuroscience that in 2010 nobody knows.
But how? We agreed that the artificial components BEHAVE NORMALLY.
That is their essential feature, apart from lacking consciousness. You
remove any normal component whatsoever, drop in the replacement, and
the behaviour of the whole brain MUST remain unchanged, or else the
replacement component is not as assumed. I can't believe that you
don't see this; after inconsistency, being disingenuous is the
worst sin you can commit in philosophical discussions.
> Eric offered a more straightforward experiment in which he simulated the entire brain. You complicate the matter by doing partial replacements, but the principles that drive my arguments remain the same: formal programs do not have or cause minds. If they did, the computer in front of you this very moment would have a mind and would perhaps be entitled to vote like other citizens.
You keep repeating it, but that doesn't make it so. I have assumed that
what you are saying is true and tried to show you that it leads to an
absurdity, but you respond by saying that if A behaves exactly the
same as B then A does not behave exactly the same as B, and carry on
as if no-one will notice the problem with this!
--
Stathis Papaioannou
From bbenzai at yahoo.com Mon Feb 1 14:15:33 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Mon, 1 Feb 2010 06:15:33 -0800 (PST)
Subject: [ExI] extropy-chat Digest, Vol 77, Issue 1
In-Reply-To:
Message-ID: <681115.27739.qm@web113615.mail.gq1.yahoo.com>
> From: Gordon Swobe
> To: ExI chat list
> Subject: Re: [ExI] How to ground a symbol
> Message-ID: <589903.82027.qm at web36507.mail.mud.yahoo.com>
> Content-Type: text/plain; charset=iso-8859-1
>
> --- On Sun, 1/31/10, Ben Zaiboc
> wrote:
>
> > In future, whenever the system sees a rose, it will know
> > whether it's a red rose or not, because there'll be a part
> > of its internal state that matches the symbol "Red".
>
> The system you describe won't really "know" it is red. It
> will merely act as if it knows it is red, no different from,
> say, an automated camera that acts as if it knows the light
> level in the room and automatically adjusts for it.
Please explain what "really knowing" is.
I'm at a loss to see how something that acts exactly as if it knows something is red can not actually know that. In fact, I'm at a loss to see how that sentence can even make sense.
You're claiming that something which not only quacks and looks like, but smells like, acts like, sounds like, and is completely indistinguishable down to the molecular level from, a duck, can in fact not be a duck. That if you discover that the processes which give rise to the molecules and their interactions are due to digital information processing, then, suddenly, no duck.
Ben Zaiboc
From bbenzai at yahoo.com Mon Feb 1 14:28:43 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Mon, 1 Feb 2010 06:28:43 -0800 (PST)
Subject: [ExI] multiple realizability
In-Reply-To:
Message-ID: <629922.52960.qm@web113610.mail.gq1.yahoo.com>
Gordon Swobe declared:
> The layer of abstraction does not matter to me.
Well, if that's the case, all your philosophising avails you nothing.
At all.
Levels of abstraction are vitally important, and if you dismiss them as irrelevant, you're chucking not just the baby, but the whole universe out with the bathwater.
If you honestly think levels of abstraction irrelevant, then everything is just a vast sea of gluons and quarks (or something even lower down), and there is no such thing as matter, planets, stars, water, trees, or people.
If levels of abstraction are irrelevant, you don't exist.
Ben Zaiboc
From hkeithhenson at gmail.com Mon Feb 1 17:28:56 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Mon, 1 Feb 2010 10:28:56 -0700
Subject: [ExI] Glacier Geoengineering
Message-ID:
On Mon, Feb 1, 2010 at 5:00 AM, Alfio Puglisi wrote:
> On Sun, Jan 31, 2010 at 10:52 AM, Keith Henson wrote:
>
>> The object is to freeze a glacier to bedrock.
snip
>
> Temperatures at the glacier-bedrock interface can be amazingly high. This
> article talks about bedrock *welding* with temperatures higher than 1,000
> Celsius:
>
> http://jgs.lyellcollection.org/cgi/content/abstract/163/3/417
>
> I guess the energy comes from the potential energy of the ice sliding down
> the terrain.
True. The article makes the point that it happened in a very short
time in a small volume though.
>> This is only enough to take out the heat coming out of the earth. Probably
>> need it somewhat
>> larger to pull the huge masses of ice in a few decades down to a
>> temperature where they would flow much slower.
>>
>
> If one also needs to remove the heat generated gravitationally, this could
> be potentially much larger than just the Earth's heat flux.
Good point. Let's put numbers on it. Take a square km of ice a km
deep. Consider the case of it sliding at 10 m/year down a 10 m/km
(1%) slope, so it drops 0.1 m per year. The energy released would be
Mgh: 1000 kg/cubic meter x 10^9 cubic m/cubic km x 9.8 m/s^2 x 0.1 m
= 9.8 x 10^11 J.
That is released over a year, so divide by the seconds in a year,
3.15 x 10^7, giving ~3.1 x 10^4 watts, which is about 31 kW.
So for this case of a fairly fast moving glacier, gravitationally
released heat would be of the same order as the geo heat. Of course
the heat from this motion would stop if the glacier was frozen to
the bedrock.
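Here is the same arithmetic as a quick Python sketch, using the figures
assumed above (a 1 cubic km block, 10 m/year sliding speed, 1% slope;
ice density rounded up to 1000 kg/m^3):

# Order-of-magnitude check of gravitational heating for a sliding glacier.
RHO = 1000.0              # kg/m^3, ice density rounded up to that of water
VOLUME = 1.0e9            # m^3, a block 1 km x 1 km x 1 km deep
G = 9.8                   # m/s^2
SLIDE = 10.0              # m/year travelled along the slope
SLOPE = 0.01              # 10 m of drop per km travelled
SECONDS_PER_YEAR = 3.15e7

mass = RHO * VOLUME                # ~1e12 kg
drop = SLIDE * SLOPE               # ~0.1 m of vertical descent per year
energy = mass * G * drop           # ~9.8e11 J released per year
power = energy / SECONDS_PER_YEAR  # ~3.1e4 W, i.e. roughly 31 kW

print(f"{energy:.2e} J/year  ~= {power / 1e3:.0f} kW")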
Keith
From jonkc at bellsouth.net Mon Feb 1 17:04:58 2010
From: jonkc at bellsouth.net (John Clark)
Date: Mon, 1 Feb 2010 12:04:58 -0500
Subject: [ExI] How not to make a thought experiment (was: How to ground a
symbol)
In-Reply-To: <304772.53589.qm@web36501.mail.mud.yahoo.com>
References: <304772.53589.qm@web36501.mail.mud.yahoo.com>
Message-ID:
On Jan 31, 2010, Gordon Swobe wrote:
> Let me know what you think.
> http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php
More of the same. You ask us to imagine a room too large to fit into the observable universe and then say that it acts intelligently but "obviously" it doesn't understand anything. You just refuse to consider two possibilities:
1) That you don't understand understanding as well as you think you do.
2) Even if you don't understand how it could understand, the room could still understand.
In fact if Darwin is right (and there is an astronomical amount of evidence that he is) then that room MUST have consciousness despite your or my lack of comprehension of the mechanics of it all. And even if Darwin is not right, every one of your arguments against consciousness existing in a robot could just as easily be used to argue against consciousness existing in your fellow human beings; but for some reason you seem unenthusiastic about pursuing that line of thought.
John K Clark
From stefano.vaj at gmail.com Mon Feb 1 18:13:12 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 1 Feb 2010 19:13:12 +0100
Subject: [ExI] Understanding is useless
In-Reply-To: <165704.91501.qm@web36502.mail.mud.yahoo.com>
References: <65FB1FA7-9C42-47BB-A32A-5B9B2C771FF9@bellsouth.net>
<165704.91501.qm@web36502.mail.mud.yahoo.com>
Message-ID: <580930c21002011013m7eb5e8b8r483ea67c719b304f@mail.gmail.com>
On 29 January 2010 20:25, Gordon Swobe wrote:
> Some people here might even call me a chauvinist of sorts for daring to claim that computers don't understand their own words. I suppose typewriters and cell phones should have civil rights too.
Why, do you suggest that unconscious human beings should lose their own? ;-)
--
Stefano Vaj
From eric at m056832107.syzygy.com Mon Feb 1 18:14:30 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 1 Feb 2010 18:14:30 -0000
Subject: [ExI] Religious idiocy (was: digital nature of brains)
In-Reply-To: <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com>
References:
<584374.10388.qm@web36504.mail.mud.yahoo.com>
<20100129192646.5.qmail@syzygy.com>
<580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com>
Message-ID: <20100201181430.5.qmail@syzygy.com>
Stefano:
>Eric:
>> Meaning is attached to word symbols when the word symbols are
>> associated with sense symbols, not with other word symbols.
>
>Not all symbols are words - and in fact the word "three" can be
>associated with the number "3" - but "sense symbols" sounds like a
>dubious and redundant concept.
I should probably explain what I mean by the phrase "sense symbols".
As a brain thinks, we can consider it as activating and processing
sequences of sets of symbols. This is analogous to a CPU having
various bit patterns active on internal busses, with the bit patterns
representing symbols.
Some of the symbols in the brain map 1 to 1 with words in a spoken
language, and we would refer to them as word symbols. Other brain
symbols appear within the brain as a direct result of the stimulation
of sensory neurons in the body, and this is what I mean by a "sense
symbol".
It's basically the internal representation of directly sensed external
events.
Actually, I was partially mistaken in saying that meaning cannot be
attached to a word by association with other words. A definition
could associate a new word with a set of old words, and if all of the
old words have meanings (by being grounded or by association) the new
one can acquire meaning as well.
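To make that a bit more concrete, here is a toy sketch in Python. The
symbol names and the little association graph are invented purely for
illustration; the point is just the rule: a symbol counts as meaningful
if it is grounded directly in sensory input, or if some chain of
associations eventually reaches a symbol that is.

# Toy grounding model: "sense symbols" are meaningful directly; word
# symbols can inherit meaning through association with grounded symbols.
grounded = {"sense:red-light", "sense:rose-scent"}   # set by sensory neurons

associations = {
    "red":   {"sense:red-light"},          # word tied to a sensory pattern
    "rose":  {"sense:rose-scent", "red"},  # grounded directly and via "red"
    "three": {"3"},                        # tied only to another ungrounded symbol
    "3":     {"three"},                    # ...which points straight back
}

def has_meaning(symbol, seen=frozenset()):
    """Meaningful if grounded, or if any association chain reaches a
    grounded symbol (circular definitions alone don't count)."""
    if symbol in grounded:
        return True
    if symbol in seen:
        return False
    return any(has_meaning(s, seen | {symbol})
               for s in associations.get(symbol, ()))

for sym in ("rose", "red", "three"):
    print(sym, has_meaning(sym))   # rose True, red True, three False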
-eric
From Frankmac at ripco.com Mon Feb 1 18:54:46 2010
From: Frankmac at ripco.com (Frank McElligott)
Date: Mon, 1 Feb 2010 13:54:46 -0500
Subject: [ExI] war is peace
Message-ID: <003f01caa370$0a889bb0$ad753644@sx28047db9d36c>
The largest exporter of oil is Russia, more than the Saudis.
Yet there was only one bidder. My, my, a dream auction.
Is this the real world we live in, or are we back in the days of Sherwood Forest with Robin Hood and the Sheriff?
The Wizard of Russia
By Michael Bohm
Michael Bohm is opinion page editor of The Moscow Times.
A year after former Yukos CEO Mikhail Khodorkovsky was arrested on fraud charges, Baikal Finance Group - a mysterious company with a share capital of only 10,000 rubles ($330) - acquired Yukos' largest subsidiary, Yuganskneftegaz, for $9.3 billion in an "auction" consisting of only one bidder. After Yuganskneftegaz was sold four days later to state-controlled Rosneft, Andrei Illarionov, economic adviser to then-President Vladimir Putin, called the state expropriation of Yukos "the Biggest Scam of the Year" in his annual year-end list of Russia's worst events. When Illarionov announced his 2009 list in late December, he should have added another award and given it to Putin: "the Best PR Project of the Decade."
The Yukos scam was "legal nihilism" par excellence, but most Russians have a completely different version of the event. The Kremlin's 180-degree PR spin on the Yukos nationalization should be a case study for any nation aspiring to create a Ministry of Truth. As Putin explained in his December call-in show, the Yukos affair was not government expropriation at all, but a way to give money that Yukos "stole from the people" back to the people by helping them buy new homes and repair old ones. Putin, it turns out, is also Russia's Robin Hood. War is peace. Ignorance is strength.
Oh, by the way, Obama's jobs program is going to cost 100 billion. Again, another Robin Hood :)
Frank
From jonkc at bellsouth.net Mon Feb 1 21:59:53 2010
From: jonkc at bellsouth.net (John Clark)
Date: Mon, 1 Feb 2010 16:59:53 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <394481.10295.qm@web36506.mail.mud.yahoo.com>
References: <394481.10295.qm@web36506.mail.mud.yahoo.com>
Message-ID: <1CC02E2F-6A82-4B99-A0B4-39BF26253BEC@bellsouth.net>
On Jan 31, 2010, Gordon Swobe wrote:
> digital models of human brains will have the real properties of natural brains if and only if natural brains already exist as digital objects
You've said that before, and when you did I said brains are not important, minds are; and minds are digital, although they are not objects. To save time and avoid needless wear and tear on electrons, the next time you have the urge to repeat that same remark yet again let's adopt the convention of you just saying "41", and my retort to your remark will be "42".
> Philosophers of mind don't care much about how "useful" it may seem.
And that's why philosophers of mind have never produced anything useful and probably never will; computer programmers have, mathematicians have, but philosophers of mind not so much.
> They do care if it has a mind capable of having conscious intentional states:
Unfortunately that is all philosophers of mind care about. If they spent just a little time considering what the mind in question actually does, regardless of what "intentional state" it is in, they would be much more successful. If they spent time taking a high school biology class they would be even better off. But they dislike getting their hands dirty conducting experiments other than the thought kind, and considering actual evidence is even more disagreeable to them.
Darwin contributed astronomically more to understanding what the mind is than any philosopher of mind that ever lived. And these two-bit philosophers act as if they've never heard of him; they deserve our contempt.
> Stathis. Looks like you want to skirt the issue by asserting that the system understands things that the man, *considered as the system*, does not understand.
Some might think that it was outrageous enough to propose a thought experiment that contained a room larger than the observable universe and that operated so slowly that the 13.7 billion year age of the universe is not nearly enough time for it to complete a single action, and then to confidently proclaim exactly what this bizarre amalgamation can and cannot understand; but no, Searle was just getting warmed up. Calling his next step ridiculous doesn't capture its true nature, it's more like ridiculous to the ridiculous power.
Piling absurdity on top of absurdity he now wants us to think about a "man" who "internalized" this contraption that is far too large and far too slow to fit in our universe. I don't know what sort of entity could do that, and I would be a fool to claim to know what that vastly improbable something could and couldn't do, and so would you, and so would Searle. I do know one thing: whatever it is, you can bet your life that it isn't a man.
> The system you describe won't really "know" it is red. It will merely act as if it knows it is red
Einstein didn't understand physics, he just acted like he understood physics. Tiger Woods didn't understand how to play golf, he just acted like he understood how to play golf. I've said it before and I'll say it again: understanding is useless!
John K Clark
From thespike at satx.rr.com Mon Feb 1 23:47:10 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 01 Feb 2010 17:47:10 -0600
Subject: [ExI] US "Air Force recognizes several distinct forms of
neo-paganism"
Message-ID: <4B6767FE.8080304@satx.rr.com>
http://www.foxnews.com/story/0,2933,584500,00.html
* Witches, Druids and pagans rejoice! The Air Force Academy in Colorado
is about to recognize its first Wiccan prayer circle, a Stonehenge on
the Rockies that will serve as an outdoor place of worship for the
academy's neo-pagans.*
Wiccan cadets and officers on the Colorado Springs base have been
convening for over a decade, but the school will officially dedicate a
newly built circle of stones on about March 10, putting the outdoor
sanctuary on an equal footing with the Protestant, Catholic, Jewish and
Buddhist chapels on the base.
"When I first arrived here, Earth-centered cadets didn't have anywhere
to call home," said Sgt. Robert Longcrier, the lay leader of the
neo-pagan groups on base.
"Now, they meet every Monday night, they get to go on retreats, and they
have a stone circle."
Academy officials had no tally of the number of Wiccan cadets at the
school of 4,500, but said they had been angling to set up a proper space
since the academic year began.
"That's one of the newer groups," said John Van Winkle, a spokesman for
the academy. "They've had a worship circle on base for some time and
we're looking to get them an official one."
The Air Force recognizes several distinct forms of neo-paganism,
including Dianic Wicca, Seax Wicca, Gardnerian Wicca, shamanism and
Druidism, according to Pagan groups that track the information.
It isn't nearly as comprehensive when it comes to sects within other
religions. The academy still does not recognize, for instance, the
massive gulfs between Catholics with guilt problems and those without;
or the distinct practices of Jews who keep kosher, those who eat bacon,
and those who secretly wish they could.
Since a 2004 survey of cadets on the base revealed dozens of instances
of harassment and intolerance, superintendent Michael Gould has made
religious tolerance a priority.
Yet Van Winkle, the academy spokesman, said he could not confirm whether
the school's superintendent or senior staff would attend the dedication
ceremony.
"(We) haven't gotten that far yet: First we have to get a date, and then
once we get a date for the dedication ceremony we'll see who's going to
be available for it," he told FoxNews.com.
"Once we get a date that's going to be the real driving force for who's
going to attend."
From msd001 at gmail.com Tue Feb 2 03:46:15 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Mon, 1 Feb 2010 22:46:15 -0500
Subject: [ExI] US "Air Force recognizes several distinct forms of
neo-paganism"
In-Reply-To: <4B6767FE.8080304@satx.rr.com>
References: <4B6767FE.8080304@satx.rr.com>
Message-ID: <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com>
On Mon, Feb 1, 2010 at 6:47 PM, Damien Broderick wrote:
> "When I first arrived here, Earth-centered cadets didn't have anywhere to
> call home," said Sgt. Robert Longcrier, the lay leader of the neo-pagan
> groups on base.
Earth-centered cadets... didn't have anywhere... to call home.
Is this in comparison to space cadets? Or is it illustrating a
problem with location or availability of communications equipment? Or
maybe it's about the alienation of earth-centered cadets feeling
isolated... from their earthican center?
> "Now, they meet every Monday night, they get to go on retreats, and they
> have a stone circle."
On Monday night, the earth-centered cadets go on retreats to a stone circle?
> Academy officials had no tally of the number of Wiccan cadets at the school
> of 4,500, but said they had been angling to set up a proper space since the
> academic year began.
I can tell you the "angling" of a stone circle is 360 degrees, no
matter how many bi-sects there are. Or maybe while on their Monday
night retreats they go fishing? I'm not sure how that is a productive
way to get work done.
> "That's one of the newer groups," said John Van Winkle, a spokesman for the
> academy. "They've had a worship circle on base for some time and we're
> looking to get them an official one."
What criteria are used to make a circle official? Is a qualified
someone going to measure diameter and circumference to a high degree
of precision before making a declaration?
> The Air Force recognizes several distinct forms of neo-paganism, including
> Dianic Wicca, Seax Wicca, Gardnerian Wicca, shamanism and Druidism,
> according to Pagan groups that track the information.
That's pretty impressive considering most of the time members of these
groups can hardly recognize each other.
> It isn't nearly as comprehensive when it comes to sects within other
> religions. The academy still does not recognize, for instance, the massive
> gulfs between Catholics with guilt problems and those without; or the
> distinct practices of Jews who keep kosher, those who eat bacon, and those
> who secretly wish they could.
And what would these groups be "officially" recognized as? Whole Guilt
Catholics vs. Skim Catholics, or Bacon Jews vs. Fakin' Bacon Jews?
> "(We) haven't gotten that far yet: First we have to get a date, and then
> once we get a date for the dedication ceremony we'll see who's going to be
> available for it," he told FoxNews.com.
>
> "Once we get a date that's going to be the real driving force for who's
> going to attend."
Much like high school students deciding if they'll attend a sophomore prom...
I wonder if we could get the Air Force to recognize our Holy
HotTub-based religion and declare an officially sanctioned meeting
place on base? You know, to be fair and completely "tolerant."
From thespike at satx.rr.com Tue Feb 2 03:54:15 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 01 Feb 2010 21:54:15 -0600
Subject: [ExI] US "Air Force recognizes several distinct forms
of neo-paganism"
In-Reply-To: <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com>
References: <4B6767FE.8080304@satx.rr.com>
<62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com>
Message-ID: <4B67A1E7.9020905@satx.rr.com>
On 2/1/2010 9:46 PM, Mike Dougherty quoth:
> said John Van Winkle
did make me wonder about a leg-pull... but he pops up on Google more
than once.
From moulton at moulton.com Tue Feb 2 05:42:01 2010
From: moulton at moulton.com (moulton at moulton.com)
Date: 2 Feb 2010 05:42:01 -0000
Subject: [ExI] US "Air Force recognizes several distinct forms of
neo-paganism"
Message-ID: <20100202054201.55588.qmail@moulton.com>
Here is some background info:
http://www.nytimes.com/2005/06/04/national/04airforce.html
http://www.militaryreligiousfreedom.org/
It looks like things are getting better than they were a few years ago.
From gts_2000 at yahoo.com Tue Feb 2 14:43:10 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Tue, 2 Feb 2010 06:43:10 -0800 (PST)
Subject: [ExI] Religious idiocy (was: digital nature of brains)
In-Reply-To: <20100201181430.5.qmail@syzygy.com>
Message-ID: <969214.43756.qm@web36506.mail.mud.yahoo.com>
--- On Mon, 2/1/10, Eric Messick wrote:
> Actually, I was partially mistaken in saying that meaning
> cannot be attached to a word by association with other words.
I think you make an excellent observation here, Eric. The mere association of a symbol to another symbol does not give either symbol meaning.
Symbols have derived intentionality, whereas people who use symbols have intrinsic intentionality. I'll try to explain what I mean...
Compare:
1) Jack means that the moon orbits the earth.
2) The word "moon" means a large object that orbits the earth.
In the scene described in 1), Jack means something by the symbol "moon". He has intrinsic intentionality. He has a conscious mental state in which *he means* to communicate something about the moon.
In sentence 2), we (English speakers of the human species) attribute intentionality to the symbol "moon", as if the symbol itself has a conscious mental state similar to the one Jack had in 1). We imagine for the sake of convenience that symbols mean to say things about themselves. We often speak of words and other symbols this way, treating them as if they have conscious mental states, as if they really do mean to tell us what they mean. We anthropomorphize our language.
The above might seem blindingly obvious (I hope so) but it has bearing on the symbol grounding question. Symbols have meaning only in the minds of conscious agents; that is, the apparent intentionality of words is derived from conscious intentional agents who actually do the meaning.
-gts
From bbenzai at yahoo.com Tue Feb 2 15:08:38 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Tue, 2 Feb 2010 07:08:38 -0800 (PST)
Subject: [ExI] Glacier Geoengineering
In-Reply-To:
Message-ID: <847034.14281.qm@web113617.mail.gq1.yahoo.com>
I need to ask a question here; please indulge me if the answer should be obvious:
What's the point of sticking glaciers to their bedrock?
Also, if you're going to build up stupendous amounts of potential energy like this, you'd better have a good scheme for dealing with it when it finally breaks loose.
Hm, maybe not. The frozen-to-bedrock layer will just become the new bedrock, and you'll be back to square one, surely?
Ben Zaiboc
From bbenzai at yahoo.com Tue Feb 2 15:10:42 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Tue, 2 Feb 2010 07:10:42 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To:
Message-ID: <263261.91221.qm@web113605.mail.gq1.yahoo.com>
Gordon wrote:
"the principles that drive my arguments remain the same: formal programs do not have or cause minds. If they did, the computer in front of you this very moment would have a mind and would perhaps be entitled to vote like other citizens"
This is a good example of a "straw man" argument. You are misrepresenting the claim that some formal programs can cause minds as a claim that *all* formal programs *must* cause minds. This is (or should be) obvious nonsense.
As many people now have said, directly and indirectly, many times, it's not the 'formal programness' that's important. That is completely irrelevant. What's important is information processing of a particular kind. This could be implemented by a biological system, an electronic or electromechanical system, a purely chemical system, a nanomechanical system or indeed by a massive array of beer cans and string. The fact that you find beer cans and string an unlikely substrate for intelligence is beside the point (I find it unlikely too, but for entirely different reasons, to do with practicality, not theoretical possibility).
These 'formal programs' that you keep going on about are just one subset among a large set of possible information processing systems that can give rise to minds, if set up and run in the right way.
Ben Zaiboc
From stefano.vaj at gmail.com Tue Feb 2 18:57:38 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 2 Feb 2010 19:57:38 +0100
Subject: [ExI] 1984 and Brave New World
In-Reply-To: <12411.23612.qm@web27003.mail.ukl.yahoo.com>
References:
<12411.23612.qm@web27003.mail.ukl.yahoo.com>
Message-ID: <580930c21002021057x1cb3d14ai15b3cdafd0d9282a@mail.gmail.com>
On 31 January 2010 16:09, Tom Nowell wrote:
> Brave New World reflects the utopian thinking of those who believed a technocratic elite could bestow happiness for all, and its focus on biological engineering of people and society reflects the early 20th century eugenicists. In a time when people were publicly advocating the sterilisation of undesirable types, and where people were using dubious biology to push forward their own political views, Huxley warns us of one way in which this could end up.
Mmhhh.
Where is the "warning"? Huxley does seem to see the Brave New World as
the unavoidable destination of the societal goals worth pursuing.
And where is "eugenics", at least in a transhumanist sense? The
different castes of BNW are kept as stable as possible; no effort to
improve, enhance or change their genetic makeup is in place.
--
Stefano Vaj
From gts_2000 at yahoo.com Tue Feb 2 23:44:33 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Tue, 2 Feb 2010 15:44:33 -0800 (PST)
Subject: [ExI] meaning & symbols
In-Reply-To:
Message-ID: <816357.45313.qm@web36501.mail.mud.yahoo.com>
--- On Tue, 2/2/10, Spencer Campbell wrote:
> According to Eric, association is the sole factor giving
> many symbols meaning in human minds. The only prerequisite is that at
> least one symbol in the web has meaning intrinsically; that is to
> say, it is a sense symbol. Meaning can effectively be shared between
> symbols, and is not diluted in the process.
I think I misread Eric's sentence. Thanks for pointing that out.
In any case I do not believe there exists any such thing as a "sense symbol".
Organisms with highly developed nervous systems create and ponder mental abstractions, aka symbols, about sense data and about other abstractions.
Simple organisms on the order of, say, fleas have eyes and other sense organs, so it seems likely that they have awareness of sense data. But because they lack a well developed nervous system it seems very improbable to me that they can do much in the way of forming symbols to represent that data.
I also do not believe any symbol of any kind can have "intrinsic meaning". Meaning always arises in the context of a conscious mind. X means Y only according to some conscious Z.
In casual conversation we sometimes speak about words as if they mean something, but they do not actually mean anything. Conscious agents mean things and they use words to convey their meanings.
-gts
From possiblepaths2050 at gmail.com Wed Feb 3 01:01:25 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Tue, 2 Feb 2010 18:01:25 -0700
Subject: [ExI] US "Air Force recognizes several distinct forms of
neo-paganism"
In-Reply-To: <20100202054201.55588.qmail@moulton.com>
References: <20100202054201.55588.qmail@moulton.com>
Message-ID: <2d6187671002021701v18de857bs184fa3e66265410f@mail.gmail.com>
I hope the Air Force Academy got a handle on the serious rape problem they
had, not so many years ago.
John
On Mon, Feb 1, 2010 at 10:42 PM, wrote:
>
> Here is some background info:
> http://www.nytimes.com/2005/06/04/national/04airforce.html
> http://www.militaryreligiousfreedom.org/
>
> It looks like things are getting better than they were a few years ago.
From gts_2000 at yahoo.com Wed Feb 3 02:32:09 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Tue, 2 Feb 2010 18:32:09 -0800 (PST)
Subject: [ExI] meaning & symbols
In-Reply-To:
Message-ID: <230474.84371.qm@web36502.mail.mud.yahoo.com>
--- On Tue, 2/2/10, Spencer Campbell wrote
> In the second paragraph I almost jumped on you again for
> misusing the concept of abstraction, but then I noticed you said "and
> about other" rather than "and other". You weren't saying that sense data
> are abstractions, if I understand correctly.
Right.
> When we get to the third paragraph, however, it sounds as
> if you believe that mankind discovered symbols rather than
> invented them.
No. Not sure why you would say that. I certainly do not believe we discover symbols. We create them.
> Here's the thing: the very idea of a symbol is, in and of
> itself, an abstraction. I suspect it's possible to form a
> coherent model of the mind (by today's standards) without ever
> mentioning symbols or anything like them. It may not be a particularly
> elegant model, but it would work as well as any other.
No matter what you may choose to call them, people do speak and understand word-symbols. The human mind thus "has semantics" and any coherent model of it must explain that plain fact.
The computationalist model fails to explain that plain fact. On that model minds do no more than run programs, and programs do no more than manipulate symbols according to rules of syntax. Nothing in the model explains how the mind can have conscious understanding of the symbols it manipulates. To make the model coherent, its proponents must introduce a homunculus: an observer/user of the supposed brain/computer who sees and understands the meanings of the symbols. But the homunculus fallacy proves fatal to the theory: how does the homunculus understand the symbols, if not by some means other than computation? And if that's so then why did we say the mind exists as a computer in the first place?
> You're correct in saying that sense symbols do not exist,
> but only insofar as there aren't any symbols which DO exist.
Hmm, I count 21 word-symbols in that sentence of yours.
> All I meant by "intrinsic" meaning was that some symbols in
> the field of all available within a given Z are meaningful
> irrespective of any other symbols. Eric explains that this is so
> because they are invoked directly by incoming sensory data: I see
> a dog, I think a dog symbol.
Yes, but when I look inside your head I see nothing even remotely resembling a digital computer. Instead I see a marvelous product of biological evolution.
-gts
From pharos at gmail.com Wed Feb 3 09:04:36 2010
From: pharos at gmail.com (BillK)
Date: Wed, 3 Feb 2010 09:04:36 +0000
Subject: [ExI] meaning & symbols
In-Reply-To: <230474.84371.qm@web36502.mail.mud.yahoo.com>
References:
<230474.84371.qm@web36502.mail.mud.yahoo.com>
Message-ID:
On 2/3/10, Gordon Swobe wrote:
> Yes, but when I look inside your head I see nothing even remotely
> resembling a digital computer. Instead I see a marvelous product
> of biological evolution.
>
>
Yes, you have told us all at great length that you very strongly
believe that only human brains (and other things which must be almost
identical to human brains) can do the magic human 'consciousness'
thing. That's fine, you are allowed to believe anything you like, but
it is only a belief that you cannot 'prove' is correct.
And much reasoning has been produced to show that it is probably a
mistaken belief.
We shall just have to wait until weak AI computers develop (probably
using new designs and different programming techniques) into machines
that apparently have strong AI, are more intelligent than humans and
have a type of 'consciousness'.
When these machines are out exploring the universe and reporting back
to the remaining humans trapped on earth who are being looked after by
similar intelligent machines, I fully expect you to say 'But they're
not "really* conscious'.
BillK
From stefano.vaj at gmail.com Wed Feb 3 13:04:44 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Wed, 3 Feb 2010 14:04:44 +0100
Subject: [ExI] US "Air Force recognizes several distinct forms of
neo-paganism"
In-Reply-To: <4B6767FE.8080304@satx.rr.com>
References: <4B6767FE.8080304@satx.rr.com>
Message-ID: <580930c21002030504m701fe685k6f0e955f191467b5@mail.gmail.com>
On 2 February 2010 00:47, Damien Broderick wrote:
> http://www.foxnews.com/story/0,2933,584500,00.html
>
> * Witches, Druids and pagans rejoice! The Air Force Academy in Colorado is
> about to recognize its first Wiccan prayer circle, a Stonehenge on the
> Rockies that will serve as an outdoor place of worship for the academy's
> neo-pagans.*
>
>
Good news... Even though I am somewhat wary of the orthodoxy of US
neopagans. ;-)
--
Stefano Vaj
From stefano.vaj at gmail.com Wed Feb 3 13:09:05 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Wed, 3 Feb 2010 14:09:05 +0100
Subject: [ExI] Religious idiocy (was: digital nature of brains)
In-Reply-To: <20100201181430.5.qmail@syzygy.com>
References:
<584374.10388.qm@web36504.mail.mud.yahoo.com>
<20100129192646.5.qmail@syzygy.com>
<580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com>
<20100201181430.5.qmail@syzygy.com>
Message-ID: <580930c21002030509m60aebf03s28104ee60540978b@mail.gmail.com>
On 1 February 2010 19:14, Eric Messick wrote:
> Some of the symbols in the brain map 1 to 1 with words in a spoken
> language, and we would refer to them as word symbols. Other brain
> symbols appear within the brain as a direct result of the stimulation
> of sensory neurons in the body, and this is what I mean by a "sense
> symbol".
>
So, is the integer "3" a word symbol or a sense symbol? And what about the
ASCII decoding of a byte? Or the rasterisation of the ASCII symbol?
And what difference exactly would it make?
--
Stefano Vaj
From stefano.vaj at gmail.com Wed Feb 3 13:15:28 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Wed, 3 Feb 2010 14:15:28 +0100
Subject: [ExI] war is peace
In-Reply-To: <003f01caa370$0a889bb0$ad753644@sx28047db9d36c>
References: <003f01caa370$0a889bb0$ad753644@sx28047db9d36c>
Message-ID: <580930c21002030515y34e2ec6ag9742e901071242b4@mail.gmail.com>
2010/2/1 Frank McElligott
> The Yukos scam was "legal nihilism" par excellence, but most Russians have
> a completely different version of the event. The Kremlin's 180-degree PR
> spin on the Yukos nationalization should be a case study for any nation
> aspiring to create a Ministry of Truth. As Putin explained in his December
> call-in show, the Yukos affair was not government expropriation at all, but
> a way to give money that Yukos "stole from the people" back to the people by
> helping them buy new homes and repair old ones. Putin, it turns out, is also
> Russia's Robin Hood. War is peace. Ignorance is strength.
>
>
I am really confused. Do you maintain that they were wrong to change their
views? Or that they should have left Yukos in the hands it had fallen into?
And why?
--
Stefano Vaj
From bbenzai at yahoo.com Wed Feb 3 13:52:04 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Wed, 3 Feb 2010 05:52:04 -0800 (PST)
Subject: [ExI] meaning & symbols
In-Reply-To:
Message-ID: <910459.5460.qm@web113613.mail.gq1.yahoo.com>
Gordon Swobe wrote:
> I do not believe there exists any such thing as
> a "sense symbol".
>
> Organisms with highly developed nervous systems create and
> ponder mental abstractions, aka symbols, about sense data
> and about other abstractions.
>
> Simple organisms on the order of, say, fleas have eyes and
> other sense organs, so it seems likely that they have
> awareness of sense data. But because they lack a well
> developed nervous system it seems very improbable to me that
> they can do much in the way of forming symbols to represent
> that data.
OK obviously this word 'symbol' needs some clear definition.
I would use the word to mean any distinct pattern of neural activity that has a relationship with other such patterns. In that sense, sensory symbols exist, as do (visual) word symbols, (auditory) word symbols, concept symbols, which are a higher-level abstraction from the above three types, and hundreds of other types of 'symbol', representing all the different patterns of neural activity that can be regarded as coherent units, like emotional states, memories, linguistic units (nouns, verbs, etc.), and their higher-level 'chunks' (birdness, the concept of fluidity, etc.), and so on.
But that's just me. Maybe I'm overstretching the use of the word.
What do other people mean by the word 'symbol', in this context?
Gordon points out that they are all meaningless in themselves, only taking on a meaning in the context of a system that can be called a conscious mind.
I'm not sure if the 'conscious' part is necessary, though. In any event, the 'meaning' arises as a result of the interaction of the symbols, grounded in the system's interaction with its environment.
To say that an organism's 'hunger', which results in it finding and consuming food, is meaningless unless the organism is conscious, is rather a silly statement, and calls into question what we mean by 'meaning'.
Ben Zaiboc
From jonkc at bellsouth.net Wed Feb 3 15:32:24 2010
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 3 Feb 2010 10:32:24 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <969214.43756.qm@web36506.mail.mud.yahoo.com>
References: <969214.43756.qm@web36506.mail.mud.yahoo.com>
Message-ID:
On Feb 2, 2010, Gordon Swobe wrote:
> The mere association of a symbol to another symbol does not give either symbol meaning.
> Symbols have derived intentionality, whereas people who use symbols have intrinsic intentionality.
Broken down to its smallest component parts, a symbol is something that consistently and systematically changes the state of the symbol reader. A Turing Machine does this when it encounters a zero or a one, and a punch card reader does this when it encounters a hole. You demand an explanation of human-style intentionality and say, correctly, that the examples I cite are far less complex and awe-inspiring, but if they were just as mysterious they wouldn't be doing their job. I honestly don't know what you want: you say you want an explanation, but when one is provided and it's split into parts small enough to comprehend, you say "I understand that, so it can't be the explanation."
Your retort is always I don't understand that or I do understand that so "obviously" that can't be right. Even in theory I don't see how any explanation would satisfy you.
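(A minimal sketch of that state-change picture, in Python; the tape contents and the transition table below are invented purely for illustration, not any particular machine:)

    # A toy Turing-machine loop: each symbol read consistently and
    # systematically changes the state of the reader.
    # transitions: (state, symbol read) -> (new state, symbol to write, head move)
    transitions = {
        ("start", 0): ("seen_zero", 1, +1),
        ("start", 1): ("seen_one", 0, +1),
        ("seen_zero", 0): ("seen_zero", 1, +1),
        ("seen_zero", 1): ("halt", 1, 0),
        ("seen_one", 0): ("halt", 0, 0),
        ("seen_one", 1): ("seen_one", 0, +1),
    }

    def run(tape, state="start", head=0):
        while state != "halt" and 0 <= head < len(tape):
            state, tape[head], move = transitions[(state, tape[head])]
            head += move
        return state, tape

    print(run([0, 0, 1]))   # the same input always drives the same state changes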
John K Clark
From jonkc at bellsouth.net Wed Feb 3 16:13:18 2010
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 3 Feb 2010 11:13:18 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <230474.84371.qm@web36502.mail.mud.yahoo.com>
References: <230474.84371.qm@web36502.mail.mud.yahoo.com>
Message-ID:
On Feb 2, 2010, at 9:32 PM, Gordon Swobe wrote:
> The human mind thus "has semantics" and any coherent model of it must explain that plain fact.
But before that can happen you must explain what you mean by explain.
>
> The computationalist model fails to explain that plain fact.
It explains it beautifully according to my understanding of the word. You want a theory that is simultaneously completely understandable and utterly mysterious, so naturally you have been disappointed.
> On that model minds do no more than run programs and programs do no more than manipulate symbols according to rules of syntax.
Correct me if I'm wrong but I seem to think you may have said something along those lines before, and I think I even remember people bringing up very good counter arguments against that argument that you have steadfastly ignored.
> Nothing in the model explains how the mind can have conscious understanding of the symbols it manipulates.
True, they are not comprehensible and incomprehensible at the same time.
> When I look inside your head I see nothing even remotely resembling a digital computer.
Then why are people spending hundreds of millions of dollars building digital computers that simulate larger and larger chunks of neurons?
> Instead I see a marvelous product of biological evolution.
How can you dare use the word "Evolution"!? YOUR VIEWS ARE 100% INCOMPATIBLE WITH EVOLUTION!
John K Clark
From ablainey at aol.com Wed Feb 3 19:54:58 2010
From: ablainey at aol.com (ablainey at aol.com)
Date: Wed, 03 Feb 2010 14:54:58 -0500
Subject: [ExI] meaning & symbols
In-Reply-To: <910459.5460.qm@web113613.mail.gq1.yahoo.com>
Message-ID: <8CC7321E5DB89A5-D54-12CB@webmail-d081.sysops.aol.com>
-----Original Message-----
Ben Zaiboc wrote
>OK obviously this word 'symbol' needs some clear definition.
>I would use the word to mean any distinct pattern of neural activity that has a
>relationship with other such patterns. In that sense, sensory symbols exist, as
>do (visual) word symbols, (auditory) word symbols, concept symbols, which are a
>higher-level abstraction from the above three types, and hundreds of other types
>of 'symbol', representing all the different patterns of neural activity that can
>be regarded as coherent units, like emotional states, memories, linguistic units
>(nouns, verbs, etc.), and their higher-level 'chunks' (birdness, the concept of
>fluidity, etc.), and so on.
>
>But that's just me. Maybe I'm overstretching the use of the word.
>
>What do other people mean by the word 'symbol', in this context?
>
>Gordon points out that they are all meaningless in themselves, only taking on a
>meaning in the context of a system that can be called a conscious mind.
>
>I'm not sure if the 'conscious' part is necessary, though. In any event, the
>'meaning' arises as a result of the interaction of the symbols, grounded in the
>system's interaction with its environment.
>
>To say that an organism's 'hunger', which results in it finding and consuming
>food, is meaningless unless the organism is conscious, is rather a silly
>statement, and calls into question what we mean by 'meaning'.
>
>Ben Zaiboc
I agree. The problem is that we are using linguistic symbols to which we give our own personal meaning
to debate a system that we do not fully understand and of which we cannot effectively articulate our personal view.
I would go along with the notion that there are sense symbols and many other kinds. So in that context of
"symbols" I don't think consciousness is necessary. Certainly not at a self-awareness level. Does this
exclude intelligence? I think our definitions need some tweaking.
From gts_2000 at yahoo.com Wed Feb 3 22:05:18 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Wed, 3 Feb 2010 14:05:18 -0800 (PST)
Subject: [ExI] meaning & symbols
In-Reply-To:
Message-ID: <837227.95744.qm@web36508.mail.mud.yahoo.com>
--- On Wed, 2/3/10, BillK wrote:
> You have told us all at great length that you very
> strongly believe that only human brains (and other things which must
> be almost identical to human brains) can do the magic human
> 'consciousness' thing.
I have no interest in magic. I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents (semantics).
Idealistic dreamers here on ExI take offense. Sorry about that.
-gts
From gts_2000 at yahoo.com Wed Feb 3 23:08:34 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Wed, 3 Feb 2010 15:08:34 -0800 (PST)
Subject: [ExI] meaning & symbols
In-Reply-To:
Message-ID: <642330.6805.qm@web36501.mail.mud.yahoo.com>
--- On Wed, 2/3/10, John Clark wrote:
> Broken down to its smallest component parts, a symbol is something that
> consistently and systematically changes the state of the symbol reader.
Many things aside from symbols can consistently and systematically change the state of the symbol reader.
This wikipedia definition seems better:
"A symbol is something such as an object, picture, written word, sound, or particular mark that represents something else by association, resemblance, or convention."
-gts
From hkeithhenson at gmail.com Thu Feb 4 00:26:16 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Wed, 3 Feb 2010 17:26:16 -0700
Subject: [ExI] Glacier Geoengineering
Message-ID:
On Wed, Feb 3, 2010 at 5:00 AM, Ben Zaiboc wrote:
> I need to ask a question here, please indulge me if the answer should be obvious:
>
> What's the point of sticking glaciers to their bedrock?
To slow them down. That way they don't run off into the sea or down
to lower altitudes where they melt.
> Also, if you're going to build up stupendous amounts of potential energy like this, you'd better have a good scheme for dealing with it when it finally breaks loose.
>
> Hm, maybe not. The frozen-to-bedrock layer will just become the new bedrock, and you'll be back to square one, surely?
No, they will still move, but much slower. Ice is like cold tar and
the colder you get it the slower it moves.
Keith
From ddraig at gmail.com Tue Feb 2 06:03:23 2010
From: ddraig at gmail.com (ddraig)
Date: Tue, 2 Feb 2010 17:03:23 +1100
Subject: [ExI] US "Air Force recognizes several distinct forms of
neo-paganism"
In-Reply-To: <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com>
References: <4B6767FE.8080304@satx.rr.com>
<62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com>
Message-ID:
On 2 February 2010 14:46, Mike Dougherty wrote:
> I can tell you the "angling" of a stone circle is 360 degrees, no
> matter how many bisects there are.
Maybe the stones lean inwards? Or outwards? Maybe the whole thing is
designed to be in a slow process of collapse, until you end with
something looking like a stone circle made of dominos?
>> "That's one of the newer groups," said John Van Winkle, a spokesman for the
>> academy. "They've had a worship circle on base for some time and we're
>> looking to get them an official one."
>
> What criteria is used to make a circle official?
Circles are, officially, circular, I believe.
> That's pretty impressive considering most of the time members of these
> groups can hardly recognize each other.
Sure they can. They are fat, and dress in black.
> I wonder if we could get the Air Force to recognize our Holy
> HotTub-based religion and declare an officially sanctioned meeting
> place on base? You know, to be fair and completely "tolerant."
Works for me. Is the HPS cute?
Dwayne
--
ddraig at pobox.com irc.deoxy.org #chat
...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e...
http://www.barrelfullofmonkeys.org/Data/3-death.jpg
our aim is wakefulness, our enemy is dreamless sleep
From lacertilian at gmail.com Mon Feb 1 00:54:57 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Sun, 31 Jan 2010 16:54:57 -0800
Subject: [ExI] How to ground a symbol
In-Reply-To: <975270.46265.qm@web36504.mail.mud.yahoo.com>
References: <20100131230539.5.qmail@syzygy.com>
<975270.46265.qm@web36504.mail.mud.yahoo.com>
Message-ID:
Gordon Swobe :
>Eric Messick :
>> The animations and other text at the site all indicate that
>> this is the type of processing going on in Chinese rooms.
>
> This kind of processing goes on in every software/hardware system.
No, it doesn't. That's only the result of the processing. I went over
this before. The processing itself is so spectacularly more
fine-grained that thinking about it as an "if this input, then this
output" rule is outright fallacious. Yes, you put that input in; yes,
you get that output out; but between these two points, a universe is
created and destroyed.
Gordon Swobe :
>Eric Messick :
>> Come back after you've written a neural network
>> simulator and trained it to do something useful.
>
> Philosophers of mind don't care much about how "useful" it may seem. They do care if it has a mind capable of having conscious intentional states: thoughts, beliefs, desires and so on as I've already explained.
The point isn't to have a useful product, it's to demonstrate a
minimal comprehension of how neural network simulations work. You left
out the crux of what Eric said:
"Then we'll see if your intuition still says that computers can't
understand anything."
Getting a neural network simulation to do anything useful is
sufficiently difficult that you will necessarily learn something about
them in the process, and this may change your intuitive impression of
what a computer is capable of.
Besides, we don't care what philosophers of mind think. We care what
computers think. Regrettably, we are forced to talk to the former in
order to learn about the latter.
From lacertilian at gmail.com Mon Feb 1 02:47:55 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Sun, 31 Jan 2010 18:47:55 -0800
Subject: [ExI] multiple realizability
In-Reply-To: <418148.11027.qm@web36501.mail.mud.yahoo.com>
References: <20100201010329.5.qmail@syzygy.com>
<418148.11027.qm@web36501.mail.mud.yahoo.com>
Message-ID:
Gordon Swobe :
> The layer of abstraction does not matter to me.
Bad move. You've been attacked before on the basis that you have
trouble comprehending the importance of abstraction. How far down
through the layers one can go before new complexities cease to emerge
is a tremendous component of the argument against formal programs
being capable of creating consciousness.
To prove this is trivial. All I have to do is invoke a couple of black boxes:
One box contains my brain, and another box contains a standard digital
computer running the best possible simulation of my brain. Both brains
begin in exactly the same state. A single question is sent into each
box at the same moment, and the response coming out of the other side
is identical.
This is the highest level of abstraction, turning whole brains into,
essentially, pseudo-random number generators. They carry states; an
input is combined with the state through a definite function; the
state changes; an output is produced.
Gordon has said before that in situations like these, it is impossible
to determine whether or not either box has consciousness without
"exploratory surgery". I assume Gordon is at least as good a surgeon
as Almighty God, and has unlimited computing power available to
analyze the resulting data instantaneously.
The point is that such surgery is precisely the sort of process which
reduces the level of abstraction. A crash course may be in order.
You are given ten thousand people. You ask, "How many have blue
eyes?". The number splits into two, becoming less abstract. You ask,
"How many are taller than I am?". Now there are four numbers, and one
quarter the abstraction. Eventually any question you ask will be
redundant, as you will have split the population into ten thousand
groups of one. But there is still some abstraction left: people are
not fundamental particles. So you ask enough questions to uniquely
identify every proton, neutron, electron, and any other relevant
components. Yet still your description is abstract, because you've
only differentiated the particles: you haven't determined their exact
locations in space.
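(A rough sketch of that question-by-question refinement, in Python; the population and the two questions are invented for illustration:)

    import random

    # An invented population: each "person" is reduced to two attributes.
    random.seed(0)
    people = [{"blue_eyes": random.random() < 0.3,
               "taller_than_me": random.random() < 0.5}
              for _ in range(10000)]

    def partition(groups, question):
        """Split every existing group by the answer to one more question,
        so the description of the population becomes less abstract."""
        new_groups = []
        for group in groups:
            yes = [p for p in group if question(p)]
            no = [p for p in group if not question(p)]
            new_groups.extend(g for g in (yes, no) if g)
        return new_groups

    groups = [people]                                          # one group: maximal abstraction
    groups = partition(groups, lambda p: p["blue_eyes"])       # at most two groups
    groups = partition(groups, lambda p: p["taller_than_me"])  # at most four groups
    print([len(g) for g in groups])   # group sizes shrink as the abstraction decreases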
And here, in a universe equipped with the Heisenberg uncertainty
principle, we find that you can't. The description is still abstract.
It can be made less so, as we expend greater and greater sums of
energy to pin down ever more precise configurations of human beings,
but to eliminate abstraction entirely would require infinite energy.
In this thread Gordon explicitly rejects the notion that a mind can be
copied, in whole, to another substrate without a catastrophic
interruption in subjective experience. I agree with this, but I think
it's for a completely different reason. I can't say for sure because
his clarification made things less clear.
Proposition A: a machine operating by formal programs cannot replicate
the human mind.
Proposition B: a neural network could conceivably replicate the human mind.
Logical Conclusion: an individual human mind cannot be extracted from
its neurological material.
This does not appear to follow, unless you were counting artificial
neural networks as "neurological material". I understood you to mean
the specific neurons responsible for instantiating the mind in
question originally. By my understanding, that one experiment in which
you replace each individual neuron with an artificial duplicate, one
by one, would preserve the same conscious mind you started with.
Actually I am kind of counting on this last point being true, so I
have a vested interest in finding out whether or not it is. If you can
convince me of my error before I actually act on it, Gordon, I would
appreciate it.
For the record, I am a dualist in the sense that I believe minds are
distinct entities from brains, as well as that programs are distinct
entities from computers. However, I do not believe that minds or
programs are composed of a "substance" in any sense. Both are
insubstantial. Software (which I say includes minds) is one layer of
abstraction higher than its supporting hardware (which I say includes
brains), and therefore one order of magnitude less "real".
I'm not sure what the radix is for that order of magnitude, but I am
absolutely confident that it is exactly one order!
From lacertilian at gmail.com Mon Feb 1 19:15:11 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Mon, 1 Feb 2010 11:15:11 -0800
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To:
References: <20100131182926.5.qmail@syzygy.com>
<732400.27938.qm@web36501.mail.mud.yahoo.com>
Message-ID:
Stathis Papaioannou :
> P: It is possible to make artificial neurons which behave like normal
> neurons in every way, but lack consciousness.
>
> That's it! Now, when I ask if P is true you have to answer "Yes" or
> "No". Is P true?
Yes.
But not for any reason relevant to the discussion. The proposition
doesn't illustrate your point. Ordinary neurons behave normally
without producing consciousness all the time! This state can be
produced with trivial effort: either fall asleep, faint, or get
somebody to knock you upside the head. Presto. An entire unconscious
brain, neurons and all.
Request that you clarify the constraints of the experiment.
Now, for the other thing that bothered me...
Gordon Swobe :
> P = true if we define behavior as you've chosen to define it: the exchange of certain neurotransmitters into the synapses at certain times and other similar exchanges between neurons.
Stathis has not chosen to define behavior this way.
Stathis Papaioannou :
> Yes, that would be one aspect of the behaviour that needs to be reproduced.
See, he's talking about behavior in full: walking, talking, thinking,
everything. I don't know why he didn't come right out and say that
when obviously it's a point of contention. I had to deduce it from
this cryptic reply.
It seems as if Gordon believes behavior has nothing to do with
consciousness, and Stathis believes consciousness is produced as a
direct result of behavior. Further, that the quantity of consciousness
is proportional to the intelligence of that behavior.
I'd be interested to hear from each of you a description of what would
constitute the simplest possible conscious system, and whether or not
such a system would necessarily also have intelligence or
understanding.
I haven't been able to figure out exactly what any of these three
words mean to either of you. I am pretty sure, however, that you each
have radically different definitions.
From lacertilian at gmail.com Mon Feb 1 19:49:23 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Mon, 1 Feb 2010 11:49:23 -0800
Subject: [ExI] extropy-chat Digest, Vol 77, Issue 1
In-Reply-To: <681115.27739.qm@web113615.mail.gq1.yahoo.com>
References:
<681115.27739.qm@web113615.mail.gq1.yahoo.com>
Message-ID:
Ben Zaiboc :
>Gordon Swobe :
>> The system you describe won't really "know" it is red. It
>> will merely act as if it knows it is red, no different from,
>> say, an automated camera that acts as if it knows the light
>> level in the room and automatically adjusts for it.
>
> Please explain what "really knowing" is.
>
> I'm at a loss to see how something that acts exactly as if it knows something is red can not actually know that. In fact, I'm at a loss to see how that sentence can even make sense.
Like so many other things, it depends on the method of measurement.
Gordon did not describe any such thing, but we can assume he had at
least a vague notion of one in mind.
It actually is possible to get that paradoxical result, and in fact
it's easy enough that examples of it are widespread in reality. See:
public school systems the world over, and their obsessive tendency to
test knowledge.
It's alarmingly easy to get the right answer on a test without
understanding why it's the right answer, but a certain mental trick is
required to notice when this happens. Basically, you have to
understand your own understanding without falling into an infinite
recursion loop. Human beings are naturally born into that ability, but
most people lose it in school because they learn (incorrectly) that
understanding doesn't make a difference.
Ben Zaiboc :
> You're claiming that something which not only quacks and looks like, but smells like, acts like, sounds like, and is completely indistinguishable down to the molecular level from, a duck, can in fact not be a duck. That if you discover that the processes which give rise to the molecules and their interactions are due to digital information processing, then, suddenly, no duck.
This is the standard method of measurement in philosophy: omniscience.
The only problem is, omniscience tends to break down rather rapidly
when confronted with questions about subjective experience. If you do
manage to pry a correct answer from your god's-eye view, it will
typically be paradoxical, ambiguous, or both.
Works great for ducks, though, and brains by extension. If you assume
the existence of consciousness in a given brain, and then you
perfectly reconstruct that brain elsewhere on an atomic level, the
copy must necessarily also have consciousness.
But then you have to ask whether or not it's the same consciousness,
and, in my case, I'm forced to conclude that the copy is identical,
but distinct. In the next moment, the two versions will diverge,
ceasing to be identical. So far so good.
However, Gordon usually does not begin with a working consciousness:
he tries to construct one from scratch, and he finds that when he uses
a digital computer to do so, he fails. I'm not sure yet whether this
is a fundamental limitation built into how digital computers work, or
if Gordon is just a really bad programmer. I tend to believe the
latter. Gordon believes the former, so he's extended the notion to
situations in which we DO begin with a working consciousness and then
try to move it to another medium.
Hope that elucidates matters for you.
Also, that it's accurate.
From lacertilian at gmail.com Tue Feb 2 18:03:01 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Tue, 2 Feb 2010 10:03:01 -0800
Subject: [ExI] Religious idiocy (was: digital nature of brains)
In-Reply-To: <969214.43756.qm@web36506.mail.mud.yahoo.com>
References: <20100201181430.5.qmail@syzygy.com>
<969214.43756.qm@web36506.mail.mud.yahoo.com>
Message-ID:
On Tue, Feb 2, 2010 at 6:43 AM, Gordon Swobe wrote:
> --- On Mon, 2/1/10, Eric Messick wrote:
>
>> Actually, I was partially mistaken in saying that meaning
>> cannot be attached to a word by association with other words.
>
> I think you make an excellent observation here, Eric. The mere association of a symbol to another symbol does not give either symbol meaning.
>
> Symbols have derived intentionality, whereas people who use symbols have intrinsic intentionality. I'll try to explain what I mean...
>
> Compare:
>
> 1) Jack means that the moon orbits the earth.
>
> 2) The word "moon" means a large object that orbits the earth.
>
> In the scene described in 1), Jack means something by the symbol "moon". He has intrinsic intentionality. He has a conscious mental state in which *he means* to communicate something about the moon.
>
> In sentence 2), we (English speakers of the human species) attribute intentionality to the symbol "moon", as if the symbol itself has a conscious mental state similar to the one Jack had in 1). We imagine for the sake of convenience that symbols mean to say things about themselves. We often speak of words and other symbols this way, treating them as if they have conscious mental states, as if they really do mean to tell us what they mean. We anthropomorphize our language.
>
> The above might seem blindingly obvious (I hope so) but it has bearing on the symbol grounding question. Symbols have meaning only in the minds of conscious agents; that is, the apparent intentionality of words is derived from conscious intentional agents who actually do the meaning.
>
> -gts
Gordon Swobe :
>> Actually, I was partially mistaken in saying that meaning
>> cannot be attached to a word by association with other words.
>
> I think you make an excellent observation here, Eric. The mere association of a symbol to another symbol does not give either symbol meaning.
This is exactly what Eric did not say.
The whole paragraph, in case you missed it, was:
Eric Messick :
> Actually, I was partially mistaken in saying that meaning cannot be
> attached to a word by association with other words. A definition
> could associate a new word with a set of old words, and if all of the
> old words have meanings (by being grounded or by association) the new
> one can acquire meaning as well.
According to Eric, association is the sole factor giving many symbols
meaning in human minds. The only prerequisite is that at least one
symbol in the web has meaning intrinsically; that is to say, it is a
sense symbol. Meaning can effectively be shared between symbols, and
is not diluted in the process.
From lacertilian at gmail.com Wed Feb 3 00:16:42 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Tue, 2 Feb 2010 16:16:42 -0800
Subject: [ExI] meaning & symbols
In-Reply-To: <816357.45313.qm@web36501.mail.mud.yahoo.com>
References:
<816357.45313.qm@web36501.mail.mud.yahoo.com>
Message-ID:
Gordon Swobe :
> In any case I do not believe there exists any such thing as a "sense symbol".
>
> Organisms with highly developed nervous systems create and ponder mental abstractions, aka symbols, about sense data and about other abstractions.
>
> Simple organisms on the order of, say, fleas have eyes and other sense organs, so it seems likely that they have awareness of sense data. But because they lack a well developed nervous system it seems very improbable to me that they can do much in the way of forming symbols to represent that data.
In the second paragraph I almost jumped on you again for misusing the
concept of abstraction, but then I noticed you said "and about other"
rather than "and other". You weren't saying that sense data are
abstractions, if I understand correctly. Nothing to disagree with
there.
When we get to the third paragraph, however, it sounds as if you
believe that mankind discovered symbols rather than invented them.
Here's the thing: the very idea of a symbol is, in and of itself, an
abstraction. I suspect it's possible to form a coherent model of the
mind (by today's standards) without ever mentioning symbols or
anything like them. It may not be a particularly elegant model, but it
would work as well as any other.
So, it's really just a matter of convenience to talk about symbols
instead of synapses. Fleas have synapses, if fewer than we do, so if
we wanted to we could easily say that they form and use symbols
(blood, not-blood) within their puny flea-minds. We wouldn't be wrong.
You're correct in saying that sense symbols do not exist, but only
insofar as there aren't any symbols which DO exist.
Gordon Swobe :
> I also do not believe any symbol of any kind can have "intrinsic meaning". Meaning always arises in the context of a conscious mind. X means Y only according to some conscious Z.
You're right, of course. It was a poor choice of words. I was trying
to convey Eric's theory, which I mostly agree with, in as lazy a
manner as possible.
All I meant by "intrinsic" meaning was that some symbols in the field
of all available within a given Z are meaningful irrespective of any
other symbols. Eric explains that this is so because they are invoked
directly by incoming sensory data: I see a dog, I think a dog symbol.
I have no control over whether or not this happens, except to avoid
looking at dogs. It's impossible to perceive, or even conceive, a
discrete object without simultaneously attaching a symbol to it. Or,
if you prefer, grounding a symbol on it.
(Assuming that we're considering information processing in terms of symbols.)
From lacertilian at gmail.com Wed Feb 3 03:44:54 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Tue, 2 Feb 2010 19:44:54 -0800
Subject: [ExI] meaning & symbols
In-Reply-To: <230474.84371.qm@web36502.mail.mud.yahoo.com>
References:
<230474.84371.qm@web36502.mail.mud.yahoo.com>
Message-ID:
Gordon Swobe :
> No matter what you may choose to call them, people do speak and understand word-symbols. The human mind thus "has semantics" and any coherent model of it must explain that plain fact.
I still can't see how the computationalist model fails here, but, more
significantly, I can't see why you think it does. Maybe if I went back
through the archives and read this whole discussion from the start,
but even I don't have that much free time.
Gordon Swobe :
>Spencer Campbell :
>> You're correct in saying that sense symbols do not exist,
>> but only insofar as there aren't any symbols which DO exist.
>
> Hmm, I count 21 word-symbols in that sentence of yours.
And I count 23, because those apostrophes denote points where I've
smashed two discrete words together to save space. I could also get 25
by treating "insofar" as the full three words it's composed of.
Then again, it's really just an arbitrary convention that allows me to
do this, so if I change the convention I could just as easily count
"in saying that" (ins'ingt), "but only" ('tonly), and "insofar as
there" (you get the idea) as single word-symbols as well. There's also
an uncompressed "do not" in there, and the concept of sense symbols
might catch on to such an extent that we start talking about
sensesymbols instead!
So I might just as well say that there are only 14 word symbols in
that sentence. Then again, I only chose those particular words because
it struck me that I almost always put them together in just those
sequences. I could make any two words into one, if I don't care about
efficiency. Therefore, I could squeeze the whole sentence into a
single, magnificently specific word that I'll never, ever have a
chance to use again.
Conclusion: spaces are not (aren't) the be-all end-all demarcation
method of choice, and this is why the word counter in my command-line
shell comes up with a slightly different answer than the one built
into Google Docs.
By the first method this message weighs in at 368 words, whereas the
second confidently gives me a figure of 371.
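(A small sketch of how two counting conventions can disagree, in Python; the sample sentence is a paraphrase made up for illustration:)

    import re

    sentence = ("You're correct in saying that sense symbols don't exist, "
                "insofar as there aren't any symbols which DO exist.")

    # Convention 1: a "word" is whatever sits between runs of whitespace,
    # roughly what a command-line word counter does.
    by_spaces = sentence.split()

    # Convention 2: a "word" is a maximal run of letters, so contractions
    # such as "aren't" fall apart into two pieces.
    by_letters = re.findall(r"[A-Za-z]+", sentence)

    print(len(by_spaces), len(by_letters))   # two different totals for one sentence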
From lacertilian at gmail.com Wed Feb 3 16:36:02 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Wed, 3 Feb 2010 08:36:02 -0800
Subject: [ExI] meaning & symbols
In-Reply-To: <910459.5460.qm@web113613.mail.gq1.yahoo.com>
References:
<910459.5460.qm@web113613.mail.gq1.yahoo.com>
Message-ID:
Ben Zaiboc :
> OK obviously this word 'symbol' needs some clear definition.
>
> I would use the word to mean any distinct pattern of neural activity that has a relationship with other such patterns.
> But that's just me. Maybe I'm overstretching the use of the word.
>
> What do other people mean by the word 'symbol', in this context?
About the same. It's a problematic definition in that *distinct*
patterns of neural activity are hard to come by, but I can't do any
better.
From lacertilian at gmail.com Wed Feb 3 16:44:16 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Wed, 3 Feb 2010 08:44:16 -0800
Subject: [ExI] How not to make a thought experiment
In-Reply-To:
References: <969214.43756.qm@web36506.mail.mud.yahoo.com>
Message-ID:
John Clark :
> Your retort is always I don't understand that or I do understand that so
> "obviously" that can't be right. Even in theory I don't see how any
> explanation would satisfy you.
Right now I'm thinking the only way to do it is by forming an
unbreakable line of similarity between Turing machines and human
brains. Not an easy task, for the same reason you hinted at: one is
very simple and easy to understand, whereas the other is very complex and
difficult to understand.
Basically, it all depends on what Gordon thinks is the simplest
conceivable object capable of intentionality.
From lacertilian at gmail.com Thu Feb 4 00:30:40 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Wed, 3 Feb 2010 16:30:40 -0800
Subject: [ExI] meaning & symbols
In-Reply-To: <642330.6805.qm@web36501.mail.mud.yahoo.com>
References:
<642330.6805.qm@web36501.mail.mud.yahoo.com>
Message-ID:
Gordon Swobe :
> Many things aside from symbols can consistently and systematically change the state of the symbol reader.
This isn't even remotely true of all symbol readers, real and
imaginary. Maybe you're talking about the human mind specifically.
But, if not:
Turing machines are not real things, but they are symbol readers. If
we imagine a Turing machine whose state can be changed, consistently
and systematically, by anything aside from symbols, we are not really
imagining a Turing machine anymore.
Gordon Swobe :
> This wikipedia definition seems better:
>
> "A symbol is something such as an object, picture, written word, sound, or particular mark that represents something else by association, resemblance, or convention."
More generally accurate, yes, but in this context not nearly as useful
as Ben Zaiboc's definition. Wikipedia's does not come anywhere near
explaining how one symbol can dynamically give rise to a chain of
other symbols, which, to my thinking, is the very essence of thought.
My guess is that no one here believes meaning can exist outside of
thought, or at least a thought-like process.
The only question is how thought-like the process has to be.
From gts_2000 at yahoo.com Thu Feb 4 01:34:32 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Wed, 3 Feb 2010 17:34:32 -0800 (PST)
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To:
Message-ID: <428505.81133.qm@web36503.mail.mud.yahoo.com>
--- On Mon, 2/1/10, Stathis Papaioannou wrote:
>> I reject as absurd for example your theory that a
>> brain the size of Texas constructed of giant neurons made of
>> beer cans and toilet paper will have consciousness merely by
>> virtue of those beer cans squirting neurotransmitters
>> betwixt themselves in the same patterns that natural neurons
>> do.
>
> That is a consequence of functionalism but at this point
> functionalism is assumed to be wrong.
??
Can a conscious Texas-sized brain constructed out of giant neurons made of beer cans and toilet paper exist as a possible consequence of your brand of functionalism? Or not?
-gts
From stathisp at gmail.com Thu Feb 4 03:03:08 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 4 Feb 2010 14:03:08 +1100
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <428505.81133.qm@web36503.mail.mud.yahoo.com>
References:
<428505.81133.qm@web36503.mail.mud.yahoo.com>
Message-ID:
On 4 February 2010 12:34, Gordon Swobe wrote:
> --- On Mon, 2/1/10, Stathis Papaioannou wrote:
>
>>> I reject as absurd for example your theory that a
>>> brain the size of Texas constructed of giant neurons made of
>>> beer cans and toilet paper will have consciousness merely by
>>> virtue of those beer cans squirting neurotransmitters
>>> betwixt themselves in the same patterns that natural neurons
>>> do.
>>
>> That is a consequence of functionalism but at this point
>> functionalism is assumed to be wrong.
>
> ??
>
> Can a conscious Texas-sized brain constructed out of giant neurons made of beer cans and toilet paper exist as a possible consequence of your brand of functionalism? Or not?
It would have to be much, much larger than Texas if it were to be
human-equivalent, and it probably wouldn't be physically possible due (among
other problems) to loss of structural integrity over the vast
distances involved. However, theoretically, there is no problem if
such a system is Turing-complete and if the behaviour of the brain is
computable.
As for the "??": I have ASSUMED that functionalism is wrong, i.e. that
is possible to make a structure which behaves like a brain but lacks
consciousness, to see where this leads. I have shown (with your help)
that it leads to a contradiction, eg. "the structure both does and
does not behave exactly like a normal brain", which implies that the
original assumption must be FALSE. It is like assuming that sqrt(2) is
rational, and then showing that this leads to contradiction, which
implies that sqrt(2) is not rational.
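(For anyone who wants the analogy spelled out, the classic argument has exactly the same shape, assume / derive a contradiction / reject the assumption:)

    Assume $\sqrt{2} = p/q$ with $p, q$ integers and $p/q$ in lowest terms.
    Then $p^2 = 2q^2$, so $p$ is even; write $p = 2r$.
    Then $4r^2 = 2q^2$, i.e. $q^2 = 2r^2$, so $q$ is even as well,
    contradicting the assumption that $p/q$ was in lowest terms.
    Hence the assumption fails and $\sqrt{2}$ is irrational.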
--
Stathis Papaioannou
From avantguardian2020 at yahoo.com Thu Feb 4 06:33:39 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Wed, 3 Feb 2010 22:33:39 -0800 (PST)
Subject: [ExI] How not to make a thought experiment
In-Reply-To:
References: <969214.43756.qm@web36506.mail.mud.yahoo.com>
Message-ID: <317129.32350.qm@web65616.mail.ac4.yahoo.com>
----- Original Message ----
> From: Spencer Campbell
> To: ExI chat list
> Sent: Wed, February 3, 2010 8:44:16 AM
> Subject: Re: [ExI] How not to make a thought experiment
>
> John Clark :
> > Your retort is always I don't understand that or I do understand that so
> > "obviously" that can't be right. Even in theory I don't see how any
> > explanation would satisfy you.
>
> Right now I'm thinking the only way to do it is by forming an
> unbreakable line of similarity between Turing machines and human
> brains. Not an easy task for the same reason you hinted at: one is
> very simple and easy to understand, whereas one is very complex and
> difficult to understand.
>
> Basically, it all depends on what Gordon thinks is the simplest
> conceivable object capable of intentionality.
If you equate intentionality with consciousness, one is left with the result that individual cells (of all types) are conscious. This is because cells demonstrate intentionality. It is one of the lesser known hallmarks of life. Survival is intentional and anything that left survival strictly to chance would quickly be weeded out by natural selection. One can clearly see that in this video posted earlier by Spike.
http://www.youtube.com/watch?v=JnlULOjUhSQ
The white blood cell is clearly *intent* on eating the bacterium. And the bacterium is clearly *intent* on evading the threat to its existence. Therefore a bacterium is the simplest conceivable object that I am confident is capable of intentionality, although viruses, being far simpler, may possibly also display intentionality if you interpret trying to hijack cells and evade the immune response as hallmarks of "intention". With regard to the ongoing discussion, I think that it may be an important first step to try to program a computer to be unequivocally "alive" even on the level of a bacterium. It would be far simpler than trying to create a "brain" from scratch and would lend a great deal of support to the functional case. Not to mention it would disprove vitalism once and for all, which would be a feather in the cap of functionalism.
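(A toy sketch, in Python, of how 'intent'-looking behaviour can fall out of a very simple rule; the one-dimensional nutrient gradient and the run-and-tumble rule are invented and are nothing like a real bacterium:)

    import random

    def food_level(x):
        """Invented nutrient gradient: more food the further right you go."""
        return x

    def run_and_tumble(steps=50):
        """Keep moving while things improve; otherwise pick a new random
        direction.  An observer could describe the resulting gradient
        climbing as the agent being 'intent' on finding food."""
        x = 0.0
        direction = random.choice([-1, 1])
        for _ in range(steps):
            before = food_level(x)
            x += direction
            if food_level(x) < before:            # things got worse: tumble
                direction = random.choice([-1, 1])
        return x

    print(run_and_tumble())   # usually ends up well on the food-rich side of the start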
Stuart LaForge
"Never express yourself more clearly than you think." - Niels Bohr
From bbenzai at yahoo.com Thu Feb 4 09:35:52 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 4 Feb 2010 01:35:52 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To:
Message-ID: <496675.14673.qm@web113614.mail.gq1.yahoo.com>
Spencer Campbell wrote:
> From my perspective, Gordon has been very consistent when it comes to
> what will and will not pass the Turing test. His arguments, implicitly
> or explicitly, state that the Turing test does not measure
> consciousness. This is one point on which he and I agree.
The Turing test was designed to answer the question "can machines think?".
It doesn't measure consciousness directly (we don't know of anything that can), but it does measure something which can only be the product of consciousness: the ability of a system to convince a human that it is itself human. This is equivalent to convincing them that it is conscious.
If this wasn't the case, people would have no real reason to believe that other people were conscious.
For this reason, I'd say that anything which can convincingly pass the Turing test should be regarded as conscious.
Obviously, you'd want to take this seriously, and not be satisfied with a five-minute conversation. It'd have to be over a period of time, involving many different domains of knowledge, before you'd be fully convinced. But if and when you were convinced that you were actually talking to a human, you'd have to admit either that you were talking to a conscious being, or that you think other humans aren't conscious.
Ben Zaiboc
From jameschoate at austin.rr.com Thu Feb 4 10:25:00 2010
From: jameschoate at austin.rr.com (jameschoate at austin.rr.com)
Date: Thu, 4 Feb 2010 10:25:00 +0000
Subject: [ExI] The digital nature of brains
In-Reply-To: <496675.14673.qm@web113614.mail.gq1.yahoo.com>
Message-ID: <20100204102500.86W8S.511470.root@hrndva-web26-z02>
No, it does not. It is a test which asks whether a human being can tell the difference, through a remote communications channel, between a machine and a human.
It says absolutely nothing about intelligence, thinking, or anything like that with regard to machines. These sorts of claims demonstrate that the claimant has an inverted understanding of the issue. The Turing Test has one, and only one, outcome: to measure the limits of human ability.
---- Ben Zaiboc wrote:
> The Turing test was designed to answer the question "can machines think?".
--
-- -- -- --
Venimus, Vidimus, Dolavimus
jameschoate at austin.rr.com
james.choate at g.austincc.edu
james.choate at twcable.com
h: 512-657-1279
w: 512-845-8989
www.ssz.com
http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu
http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center
Adapt, Adopt, Improvise
-- -- -- --
From stathisp at gmail.com Thu Feb 4 11:15:11 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 4 Feb 2010 22:15:11 +1100
Subject: [ExI] Semiotics and Computability (was: The digital nature of
brains)
In-Reply-To:
References:
Message-ID:
On 31 January 2010 14:07, Spencer Campbell wrote:
> Stathis Papaioannou :
>>Gordon Swobe :
>>> A3: syntax is neither constitutive of nor sufficient for semantics.
>>>
>>> It's because of A3 that the man in the room cannot understand the symbols. I started the robot thread to discuss the addition of sense data on the mistaken belief that you had finally recognized the truth of that axiom. Do you recognize it now?
>>
>> No, I assert the very opposite: that meaning is nothing but the
>> association of one input with another input. You posit that there is a
>> magical extra step, which is completely useless and undetectable by
>> any means.
>
> Crap! Now I'm doing it too. This whole discussion is just an absurdly
> complex feedback loop, neither positive nor negative. It will never
> get better and it will never end. Yet the subject matter is
> interesting, and I am helpless to resist.
>
> First, yes, I agree with Stathis's assertion that association of one
> input with another input, or with another output, or, generally, of
> one datum with another datum, is the very definition of meaning.
> Literally, "A means B". This is mathematically equivalent to, "A
> equals B". Smoke equals fire, therefore, if smoke is true or fire is
> true then both are true. This is very bad reasoning, and very human.
> Nevertheless, we can say that there is a semantic association between
> smoke and fire.
>
> Of course the definitions of semantics and syntax seem to have become
> deranged somewhere along the lines, so someone with a different
> interpretation of their meaning than I have may very well leap at the
> chance to rub my face in it here. This is a risk I am willing to take.
>
> So!
>
> To see a computer's idea of semantics one might look at file formats.
> An image can be represented in BMP or PNG format, but in either case
> it is the same image; both files have the same meaning, though the
> manner in which that meaning is represented differs radically, just as
> 10/12 differs from 5 * 6^-1.
>
> Another source might be desktop shortcuts. You double-click the icon
> for the terrible browser of your choice, and your computer takes this
> to mean instead that you are double-clicking an EXE file in a
> completely different place. Note that I could very naturally insert
> the word "mean" there, implying a semantic association.
>
> Neither of these are nearly so human a use of semantics, because the
> relationship in each case is literal, not causal. However, it is still
> semantics: an association between two pieces of information.
>
> Gordon has no beef with a machine that produces intelligent behavior
> through semantic processes, only with one that produces the same
> behavior through syntax alone.
>
> At this point, though, his argument becomes rather hazy to me. How can
> anything even resembling human intelligence be produced without
> semantic association?
>
> A common feature in Searle's thought experiments, and in Gordon's by
> extension, is that there is a very poor description of the exact
> process by which a conversational computer determines how to respond
> to any given statement. This is necessary to some extent, because if
> anyone could give a precise description of the program that passes the
> Turing test, well, they could just write it.
>
> In any case, there's just no excuse to describe that program with
> rules like: if I hear "What is a pig?" then I will say "A farm
> animal". Sure, some people give that response to that question some of
> the time. But if you ask it twice in a row to the same person, you
> will get dramatically different answers each time. It's a gross
> oversimplification, but I'm forced to admit that it is technically
> valid if one views it only as what will happen, from a very high-level
> perspective, if "What is a pig?" is the very next thing the Chinese
> Room is asked. A whole new lineup of rules like that would have to
> be generated after each response. Not a very practical solution.
> Effective, but not efficient.
>
> However, it seems to me that even if we had the brute processing power
> to implement a system like that while keeping it realistically
> quick-witted, it would still be impossible to generate that rule
> without the program containing at least one semantic fact, namely,
> "pig = farm animal".
>
> The only part syntactical rules play in this scenario is to insert the
> word "a" at the beginning of the sentence. Syntax is concerned only
> with grammatical correctness. Using syntax alone, one might imagine
> that the answer would be "a noun": the place at which "pig" occurs in
> the sentence implies that the word must be a noun, and this is as
> close as a syntactical rule can come to showing similarity between two
> symbols. If the grammar in question doesn't explicitly provide
> categories for symbols, as in English, then not even this can be done,
> and a meaningful syntax-based response is completely impossible.
>
> I started on this message to point out that Stathis had completely
> missed the point of A3, but sure enough I ended up picking on Searle
> (and Gordon) as well.
>
> In the end, I would like to make the claim: syntax implies semantics,
> and semantics implies syntax. One cannot find either in isolation,
> except in the realm of one's imagination. Like so many other divisions
> imposed between natural (that is, non-imaginary) phenomena, this one
> is valid but false.
I'm not completely sure what you're saying in this post, but at some
point the string of symbol associations (A means B, B means C, C means
D...) is grounded in sensory input. Searle would say that there needs
to be an extra step whereby the symbol so grounded gains "meaning",
but this extra step is not only completely mysterious, it is also
completely superfluous, since every observable fact about the world
would be the same without it. It's like claiming that a subset of
humans have an extra dimension of meaning, meaning*, which is
mysterious and undetectable, but assuredly there making their lives
richer.
--
Stathis Papaioannou
From stefano.vaj at gmail.com Thu Feb 4 11:59:50 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Thu, 4 Feb 2010 12:59:50 +0100
Subject: [ExI] Personal conclusions
Message-ID: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com>
On 3 February 2010 23:05, Gordon Swobe wrote:
> I have no interest in magic. I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents (semantics).
Yes, this is clear by now.
The bunch of threads of which Gordon Swobe is the star, which I have
admittedly followed on and off, also because of their largely
repetitive nature, has been interesting, albeit disquieting, for me.
Not really because he reiterates innumerable times that for whatever
reason he thinks that (organic? human?) brains, while obviously
sharing universal computation abilities with cellular automata and
PCs, would on the other hand somehow escape the Principle of
Computational Equivalence.
But because so many of the people who have engaged in the discussion of
the point above, while they may no longer believe in a religious
concept of "soul", seem to accept without a second thought that some
very poorly defined Aristotelian essences would per se exist
corresponding to the symbols "mind", "consciousness", "intelligence",
and that their existence in the sense above would even be an a priori
not really open to analysis or discussion.
Now, if this is the case, I sincerely have trouble finding a
reason why we should not accept, on an equal basis, the article of
faith that Gordon Swobe proposes as to the impossibility for a
computer to exhibit the same.
Otherwise, what we should perhaps reconsider a little is not really the AI
research programmes in place, but rather, say, the Vienna Circle,
Popper or Dennett.
--
Stefano Vaj
From stathisp at gmail.com Thu Feb 4 12:07:17 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 4 Feb 2010 23:07:17 +1100
Subject: [ExI] Personal conclusions
In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com>
References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com>
Message-ID:
On 4 February 2010 22:59, Stefano Vaj wrote:
> But because so many of the people who have engaged in the discussion of
> the point above, while they may no longer believe in a religious
> concept of "soul", seem to accept without a second thought that some
> very poorly defined Aristotelian essences would per se exist
> corresponding to the symbols "mind", "consciousness", "intelligence",
> and that their existence in the sense above would even be an a priori
> not really open to analysis or discussion.
Probably you and I believe the same things about "mind",
"consciousness" etc., but we use different words.
--
Stathis Papaioannou
From stefano.vaj at gmail.com Thu Feb 4 12:16:45 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Thu, 4 Feb 2010 13:16:45 +0100
Subject: [ExI] The digital nature of brains
In-Reply-To: <496675.14673.qm@web113614.mail.gq1.yahoo.com>
References:
<496675.14673.qm@web113614.mail.gq1.yahoo.com>
Message-ID: <580930c21002040416o54ebd748rceedf0dabe607034@mail.gmail.com>
On 4 February 2010 10:35, Ben Zaiboc wrote:
> For this reason, I'd say that anything which can convincingly pass the
> Turing test should be regarded as conscious.
In fact, I suspect that anything that can convincingly pass the Turing test
is simply conscious *by definition*, because it is the test that we
routinely apply to check whether the system we are in touch with is
conscious or not (say, when trying to decide whether some human being is
asleep or dead).
The simple question is: should something, in addition to being able to
perform as well as the average adult, alert human being in a Turing test,
have blue eyes, flesh limbs, a hairy head or a liver to qualify as
"conscious"? If we try to analyse any "intuition" we may have in this sense,
any such intuition evaporates quickly enough.
--
Stefano Vaj
From stefano.vaj at gmail.com Thu Feb 4 12:24:24 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Thu, 4 Feb 2010 13:24:24 +0100
Subject: [ExI] Semiotics and Computability (was: The digital nature of
brains)
In-Reply-To:
References:
Message-ID: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com>
On 4 February 2010 12:15, Stathis Papaioannou wrote:
> I'm not completely sure what you're saying in this post, but at some
> point the string of symbol associations (A means B, B means C, C means
> D...) is grounded in sensory input.
Defined as?
> Searle would say that there needs
> to be an extra step whereby the symbol so grounded gains "meaning",
> but this extra step is not only completely mysterious, it is also
> completely superfluous, since every observable fact about the world
> would be the same without it.
>
Which sounds pretty equivalent to saying that it does not exist, if one
accepts that one's "world" is simply the set of all observable phenomena,
and that a claim pertaining to the existence of something is meaningless
unless it can be disproved.
--
Stefano Vaj
From stathisp at gmail.com Thu Feb 4 13:03:26 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 5 Feb 2010 00:03:26 +1100
Subject: [ExI] Semiotics and Computability (was: The digital nature of
brains)
In-Reply-To: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com>
References:
<580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com>
Message-ID:
2010/2/4 Stefano Vaj :
> On 4 February 2010 12:15, Stathis Papaioannou wrote:
>>
>> I'm not completely sure what you're saying in this post, but at some
>> point the string of symbol associations (A means B, B means C, C means
>> D...) is grounded in sensory input.
>
> Defined as?
Input from the environment. "Chien" is "hund", "hund" is "dog", and
"dog" is the furry creature with four legs and a tail, as learned by
English speakers as young children.
>> Searle would say that there needs
>> to be an extra step whereby the symbol so grounded gains "meaning",
>> but this extra step is not only completely mysterious, it is also
>> completely superfluous, since every observable fact about the world
>> would be the same without it.
>
> Which sounds pretty equivalent to saying that it does not exist, if one
> accepts that one's "world" is simply the set of all observable phenomena,
> and that a claim pertaining to the existence of something is meaningless
> unless it can be disproved.
Yes, or you could create undetectable entities like this whenever the
fancy took you.
--
Stathis Papaioannou
From bbenzai at yahoo.com Thu Feb 4 13:38:59 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 4 Feb 2010 05:38:59 -0800 (PST)
Subject: [ExI] Mind extension
In-Reply-To:
Message-ID: <767464.48754.qm@web113618.mail.gq1.yahoo.com>
Spencer Campbell wrote:
> By my understanding, that one experiment in which
> you replace each individual neuron with an artificial duplicate, one
> by one, would preserve the same conscious mind you started with.
> Actually I am kind of counting on this last point being true, so I
> have a vested interest in finding out whether or not it is. If you can
> convince me of my error before I actually act on it, Gordon, I would
> appreciate it.
I've been pondering this issue, and it's possible that there's a way around the problem of confirming that consciousness can run on artificial neurons without actually removing existing natural neurons, and condemning the subject to death if it turns out to be untrue.
I'm thinking of a 'mind extension' scenario, where you attach these artificial neurons (or their software equivalent) to an existing brain using neural interfaces, in a configuration that does something useful, like giving an extra sense or an expanded or secondary short-term memory (of course all this assumes good neural interface technology, working artificial neurons and a better understanding of mental architecture than we have just now). Let the user settle in with the new part of their brain for a while, then they should be able to tell if they 'inhabit' it or if it's just like driving a car: it's something 'out there' that they are operating.
If they feel that their consciousness now partly resides in the new brain area, it should be possible to duplicate all the vital brain modules and selectively anaesthetise their biological counterparts without any change in subjective experience.
If the person says "Hang on, I blanked out there" for the period of time the artificial brain parts were operating on their own, we would know that they don't support conscious experience, and the person could say 'no thanks' to uploading, with their original brain intact.
The overall idea is to build extra room for the mind to expand into, and see if it really has or not. If the new, artificial parts actually don't support consciousness, you'd soon notice. If they do, you could augment your brain to the point where the original was just a tiny part, and you wouldn't even miss it when it eventually dies off.
Ben Zaiboc
From bbenzai at yahoo.com Thu Feb 4 13:13:12 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 4 Feb 2010 05:13:12 -0800 (PST)
Subject: [ExI] multiple realizability
In-Reply-To:
Message-ID: <764802.35206.qm@web113618.mail.gq1.yahoo.com>
I suspect that it's ignorance of the importance of levels of abstraction that can lead to ideas like "minds can come from neural networks, but not from digital programs". All you need to see is that a digital program can implement a neural network at a higher level of abstraction to demolish this idea.
That's an over-simplification of course, because the digital program/s would more likely implement a set of software objects that interact to implement individual neural nets that interact to implement sets of information processing mechanisms that interact to create a mind. That's 5 levels of abstraction in my probably over-simplistic concept of the process. There may well be several more in a realistic implementation.
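A toy sketch of the lowest couple of those levels in Python; the neuron model is a deliberately crude sigmoid unit, not a claim about real neurons, and the weights are arbitrary:

import math

class Neuron:
    """Crude model: weighted sum of inputs squashed by a sigmoid."""
    def __init__(self, weights, bias=0.0):
        self.weights = weights
        self.bias = bias

    def fire(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1.0 / (1.0 + math.exp(-total))

# A "network" of two such neurons: the neural-level description sits one
# level of abstraction above the program, which sits above the hardware.
layer = [Neuron([0.5, -1.2]), Neuron([2.0, 0.3], bias=-0.5)]
print([n.fire([1.0, 0.8]) for n in layer])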
Ben Zaiboc
From gts_2000 at yahoo.com Thu Feb 4 13:47:00 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 4 Feb 2010 05:47:00 -0800 (PST)
Subject: [ExI] Semiotics and Computability (was: The digital nature of
brains)
In-Reply-To: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com>
Message-ID: <595536.39512.qm@web36508.mail.mud.yahoo.com>
--- On Thu, 2/4/10, Stefano Vaj wrote:
Stathis wrote:
>> Searle would say that there
>> needs to be an extra step whereby the symbol so grounded gains
>> "meaning", but this extra step is not only completely mysterious, it
>> is also completely superfluous, since every observable fact about
>> the world would be the same without it.
No, he would remind you of the obvious truth that there exist facts in the world that have subjective first-person ontologies. We can know those facts only in the first person, but they have no less reality than those objective third-person facts that, as you say, "would be the same without it".
Real subjective first-person facts of the world include one's own conscious understanding of words.
Stefano wrote:
> Which sounds pretty equivalent to saying that it does not
> exist,
I think you want to deny the reality of the subjective. I don't know why.
-gts
From alfio.puglisi at gmail.com Thu Feb 4 13:56:39 2010
From: alfio.puglisi at gmail.com (Alfio Puglisi)
Date: Thu, 4 Feb 2010 14:56:39 +0100
Subject: [ExI] New NASA plans
Message-ID: <4902d9991002040556x5a5407c1r7a8e0bfee32f401a@mail.gmail.com>
Does anyone know if this article from the Economist about Obama's plans for
NASA:
http://www.economist.com/sciencetechnology/displayStory.cfm?story_id=15449787&source=features_box_main
is anywhere near accurate? The overall tone is more positive than I expected...
in particular, the elimination of "cost-plus" contracts seems a big step in
cleaning things up. And, well, I'm a huge fan of SpaceX :-)
Alfio
From gts_2000 at yahoo.com Thu Feb 4 13:32:16 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 4 Feb 2010 05:32:16 -0800 (PST)
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To:
Message-ID: <854080.27226.qm@web36504.mail.mud.yahoo.com>
--- On Wed, 2/3/10, Stathis Papaioannou wrote:
>> Can a conscious Texas-sized brain constructed out of
>> giant neurons made of beer cans and toilet paper exist as a
>> possible consequence of your brand of functionalism? Or
>> not?
>
> It would have to be much, much larger than Texas if it was
> to be human equivalent, and it probably wouldn't be physically possible due
> (among other problems) to loss of structural integrity over the
> vast distances involved. However, theoretically, there is no
> problem if such a system is Turing-complete and if the behaviour of
> the brain is computable.
Okay, I take that as a confirmation of your earlier assertion that brains made of beer cans and toilet paper can have consciousness provided those beer cans squirt the correct neurotransmitters between themselves at the correct times. This suggests to me that your ideology has a firmer grip on your thinking than does your sense of the absurd, and that no reductio ad absurdum argument will find traction with you.
I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them. :-)
-gts
From bbenzai at yahoo.com Thu Feb 4 14:26:44 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 4 Feb 2010 06:26:44 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To:
Message-ID: <438813.76903.qm@web113617.mail.gq1.yahoo.com>
claimed:
> ---- Ben Zaiboc
> wrote:
>
> > The Turing test was designed to answer the question
> "can machines think?".
> No it does not. It is a test which asks if a human being can
> tell the difference through a remote communications channel
> between a machine and a human.
>
> It says absolutely nothing about intelligence, thinking, or
> anything like that with regard to machines. These sorts of
> claims demonstrate that the claimant has an inverted
> understanding of the issue. The Turing Test has one, and
> only one outcome...to measure the limits of human ability.
Well, we're talking about different things. I said "it was designed to..", and you replied "no it does not". Both of these can be true.
The test was intended to test the abilities of a machine to convince a human, not to test the abilities of the human. Of course that may well be one of its side effects! (apparently a disturbingly high proportion of people - mostly teenagers I think - are convinced by some chatbots)
Ben Zaiboc
From stefano.vaj at gmail.com Thu Feb 4 14:30:37 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Thu, 4 Feb 2010 15:30:37 +0100
Subject: [ExI] Semiotics and Computability (was: The digital nature of
brains)
In-Reply-To:
References:
<580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com>
Message-ID: <580930c21002040630k1d8931dapd63f2ef62ff51491@mail.gmail.com>
On 4 February 2010 14:03, Stathis Papaioannou wrote:
> 2010/2/4 Stefano Vaj :
> > On 4 February 2010 12:15, Stathis Papaioannou
> wrote:
> >>
> >> I'm not completely sure what you're saying in this post, but at some
> >> point the string of symbol associations (A means B, B means C, C means
> >> D...) is grounded in sensory input.
> >
> > Defined as?
>
> Input from the environment. "Chien" is "hund", "hund" is "dog", and
> "dog" is the furry creature with four legs and a tail, as learned by
> English speakers as young children.
>
Mmhhh. "Dog" is a sound perceived with one's ears, subvocalised or
represented by the appropriate characters in a given typeface, Pluto may be
a design or icon of such an animal, the bits by which he is rasterised are
another symbol thereof, the pixel of the image of an actual dog on
somebody's retina is another symbol thereof. Symbols all the way down, all
of them "sensorial" after a fashion, for us as exactly as for any other
system.
OTOH, inputs and interfaces are of course crucial to the definition of a
given system. Mr. Jones is different from young baby Brown who is different
from a bat who is different from a PC with a SCSI scanner which is different
from an I-Phone...
--
Stefano Vaj
From bbenzai at yahoo.com Thu Feb 4 14:35:27 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 4 Feb 2010 06:35:27 -0800 (PST)
Subject: [ExI] The simplest possible conscious system
In-Reply-To:
Message-ID: <820642.67186.qm@web113613.mail.gq1.yahoo.com>
The simplest possible conscious system
Spencer Campbell asked:
> I'd be interested to hear from each of you a description of what would
> constitute the simplest possible conscious system, and whether or not
> such a system would necessarily also have intelligence or
> understanding.
Hm, interesting challenge.
I'd probably define Intelligence as problem-solving ability, and
Understanding as the association of new 'concept-symbols' with established ones.
I'd take "Conscious" to mean "Self-Conscious" or "Self-Aware", which almost certainly involves a mental model of one's self, as well as an awareness of the environment, and one's place in it.
I'd guess that the simplest possible conscious system would have an embodiment ('real' or virtual) within an environment, sensors, actuators, the ability to build internal representations of both its environment and itself, and by implication some kind of state memory. Hm, maybe we already have conscious robots and don't realise it!
This is just a stab in the dark, though, I may be way off.
As for possessing intelligence and understanding, the simplest possible conscious system almost certainly wouldn't have much of either, although by my definitions above, there would have to be *some* of both. Just not very much (it would need Intelligence only if it's going to try to survive).
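To make that guess a bit more concrete, here is a toy sketch in Python of the ingredients listed above (embodiment, sensors, actuators, a self-model, state memory); every name and the trivial "head toward the target" policy are invented for illustration, and nothing here is claimed to be conscious:

class ToyAgent:
    """Purely illustrative bundle of the ingredients named above."""
    def __init__(self):
        self.world_model = {}          # internal representation of the environment
        self.self_model = {"pos": 0}   # internal representation of itself
        self.memory = []               # state memory

    def sense(self, observation):
        self.world_model.update(observation)
        self.memory.append(observation)

    def act(self):
        # "actuator": move toward any sensed target, updating the self-model
        pos = self.self_model["pos"]
        target = self.world_model.get("target_pos", pos)
        step = 1 if target > pos else -1 if target < pos else 0
        self.self_model["pos"] += step
        return step

agent = ToyAgent()
agent.sense({"target_pos": 3})
print([agent.act() for _ in range(4)], agent.self_model)  # walks its self-model to the target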
Ben Zaiboc
From gts_2000 at yahoo.com Thu Feb 4 15:00:22 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 4 Feb 2010 07:00:22 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To: <20100204102500.86W8S.511470.root@hrndva-web26-z02>
Message-ID: <325635.76699.qm@web36508.mail.mud.yahoo.com>
Spencer:
> How would one determine, in practice, whether or not any
> given information processor is a digital computer?
I would start by looking for the presence of real physical digital electronic hardware and syntactic instructions that run on it. In some cases you will find those instructions in the software. In other cases you will find them coded into the hardware or firmware.
Another way to answer your question:
If you find yourself wanting to consult a philosopher about whether a given entity might in some sense exist at some level of description as a digital computer then most likely it's not really a digital computer. :)
>> Is it accurate to say that two digital computers,
>> networked together, may themselves constitute a larger digital computer?
Sure.
>> Is the Internet a digital computer? Or, equivalently,
>> depending on your definition of the Internet: is the Internet a
>> piece of software running on a digital computer?
I see the internet as a network of computers that run software. You could consider it one large computer if you like.
>> Finally, would you say that an artificial neural
>> network is a digital computer?
Software implementations of artificial neural networks certainly fall under the general category of digital computer, yes. However in my view no software of any kind can cause subjective experience to arise in the software or hardware. I consider it logically impossible that syntactical operations on symbols, whether they be 1's and 0's or Shakespeare's sonnets, can cause the system implementing those operations to have subjective mental contents.
The upshot is that 1) strong AI on digital computers is false, and 2) the human brain does something besides run programs, assuming it runs programs at all.
-gts
From jonkc at bellsouth.net Thu Feb 4 15:47:02 2010
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 4 Feb 2010 10:47:02 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <317129.32350.qm@web65616.mail.ac4.yahoo.com>
References: <969214.43756.qm@web36506.mail.mud.yahoo.com>
<317129.32350.qm@web65616.mail.ac4.yahoo.com>
Message-ID: <4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net>
On Feb 4, 2010, The Avantguardian wrote:
> a bacterium is the simplest conceivable object that I am confident is capable of intentionality.
Stripped to its essentials, intentionality means someone or something that can change its internal state, a state that predisposes it to do one thing rather than another; or at least that's what I mean by the word. I like it because it lacks circularity. So I would say that a punch card reader is simpler than a bacterium and it has intentionality. A Turing Machine is even simpler and it has intentionality too. Granted this underlying mechanism may seem a bit mundane and inglorious, but that's in the very nature of explanations: presenting complex and mysterious things in the smallest possible chunks in a way that is easily understood.
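A minimal sketch of that sort of device in Python; the transition table is an arbitrary example, but it shows an internal state that predisposes the machine to do one thing with a symbol rather than another, and a symbol that can in turn change that state:

# (state, symbol) -> (write, move, next_state); the table is an arbitrary example.
rules = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "A"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "halt"),
}

def run(tape, state="A", pos=0, max_steps=20):
    """Step the machine: what it does depends on its state, not just the symbol."""
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return tape

print(run([0, 0, 0, 0, 0], pos=2))   # [0, 1, 1, 1, 0]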
Gordon would disagree with me because for him intentionality means having consciousness, and having consciousness means having intentionality. A circle has no end so that may be why his thread has been going on for so long with no end in sight.
John K Clark
From jameschoate at austin.rr.com Thu Feb 4 16:43:16 2010
From: jameschoate at austin.rr.com (jameschoate at austin.rr.com)
Date: Thu, 4 Feb 2010 16:43:16 +0000
Subject: [ExI] The digital nature of brains
In-Reply-To: <438813.76903.qm@web113617.mail.gq1.yahoo.com>
Message-ID: <20100204164316.1CI9P.454213.root@hrndva-web06-z02>
This is a perfect example of my 'understanding inversion' claim...
First, we're not talking about different things. The Turing Test was suggested, not 'designed', as it's not an algorithm or mechanism. At best it's a heuristic. If you read Turing's papers and the period documentation, the fundamental question is 'can the person tell the difference?'. If the answer is 'yes' the -pre-assumptive claim- is that some level of 'intelligence' has been reached in AI technology. Exactly what that level is, is never defined specifically by the original authors. The second and follow-on generations of AI researchers have interpreted it to mean that AI has intelligence in the human sense. I would suggest, strongly, that this is a cultural 'taboo' that differentiates the mainstream from perceived cranks.
The way you flip the meaning of 'can the person tell the difference' to 'machine to convince' is specious and moot. The important point is the human not being able to tell the difference. You say it is not meant to test the ability of humans, but it is the humans who -must be convinced-.
I would say you're trying to massage the test to fit a preconceived cultural desire and not a real technical benchmark. It's about validating human emotion and not mechanical performance.
---- Ben Zaiboc wrote:
> Well, we're talking about different things. I said "it was designed to..", and you replied "no it does not". Both of these can be true.
>
> The test was intended to test the abilities of a machine to convince a human, not to test the abilities of the human. Of course that may well be one of its side effects! (apparently a disturbingly high proportion of people - mostly teenagers I think - are convinced by some chatbots)
--
-- -- -- --
Venimus, Vidimus, Dolavimus
jameschoate at austin.rr.com
james.choate at g.austincc.edu
james.choate at twcable.com
h: 512-657-1279
w: 512-845-8989
www.ssz.com
http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu
http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center
Adapt, Adopt, Improvise
-- -- -- --
From jameschoate at austin.rr.com Thu Feb 4 16:48:30 2010
From: jameschoate at austin.rr.com (jameschoate at austin.rr.com)
Date: Thu, 4 Feb 2010 16:48:30 +0000
Subject: [ExI] The simplest possible conscious system
In-Reply-To: <820642.67186.qm@web113613.mail.gq1.yahoo.com>
Message-ID: <20100204164830.VFDFM.454315.root@hrndva-web06-z02>
I would agree; however, there are a couple of issues that must be addressed before it becomes meaningful.
First, what is 'conscious'? That definition must not use human brains as an axiomatic measure. Otherwise we're arguing in circles and making an axiomatic assumption that humans are somehow fundamentally gifted with a singular behavior. This destroys our test on several levels. The point being that the theoretical structure must demonstrate that human thought is conscious and not be assumptive on that point. We can't use an a priori assumption that we are conscious; that we think we are does not make it so.
---- Ben Zaiboc wrote:
> The simplest possible conscious system
>
> Spencer Campbell asked:
>
> > I'd be interested to hear from each of you a description of what would
> > constitute the simplest possible conscious system, and whether or not
> > such a system would necessarily also have intelligence or
> > understanding.
>
> Hm, interesting challenge.
--
-- -- -- --
Venimus, Vidimus, Dolavimus
jameschoate at austin.rr.com
james.choate at g.austincc.edu
james.choate at twcable.com
h: 512-657-1279
w: 512-845-8989
www.ssz.com
http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu
http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center
Adapt, Adopt, Improvise
-- -- -- --
From jonkc at bellsouth.net Thu Feb 4 16:51:20 2010
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 4 Feb 2010 11:51:20 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <642330.6805.qm@web36501.mail.mud.yahoo.com>
References: <642330.6805.qm@web36501.mail.mud.yahoo.com>
Message-ID:
On Feb 3, 2010 Gordon Swobe wrote:
> Many things aside from symbols can consistently and systematically change the state of the symbol reader.
Like what? And if it consistently and systematically changes the state of the symbol reader exactly what additional quality do these "many things" have that disqualifies them as being symbols?
> This wikipedia definition seems better:
> "A symbol is something such as an object, picture, written word, sound, or particular mark that represents something else by association
Rather like a hole in a particular place on a punch card, and its association to a particular column in a punch card reader.
> I take that as a confirmation of your earlier assertion that brains made of beer cans and toilet paper can have consciousness provided those beer cans squirt the correct neurotransmitters between themselves at the correct times.
ABSOLUTELY!
> This suggests to me that your ideology has a firmer grip on your thinking that does your sense of the absurd, and that no reductio ad absurdum argument will find traction with you.
Before you use a reductio ad absurdum argument you must be certain it's logically contradictory; just being odd is not good enough.
> I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them.
It seems odd to us for beer cans and toilet paper to be conscious but in a beer can world it would seem equally odd for 3 pounds of grey goo to be conscious. Neither is logically contradictory.
> I have no interest in magic.
I'm sure you tell yourself that and I'm sure you believe it, but I don't believe it. Grey goo has magic but beer cans, computers and toilet paper don't; despite all the talk of semantics and syntax, that is the heart of your argument.
> I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents
And how did you learn of this very interesting fact? You certainly didn't prove it mathematically or find it in the fossil record, you must have learned of it magically. A magic stronger than Darwin.
John K Clark
From kanzure at gmail.com Thu Feb 4 17:50:01 2010
From: kanzure at gmail.com (Bryan Bishop)
Date: Thu, 4 Feb 2010 11:50:01 -0600
Subject: [ExI] Blue Brain Project film preview
Message-ID: <55ad6af71002040950o43a4825xa44e985b385c002c@mail.gmail.com>
Noah Sutton is making a documentary:
http://thebeautifulbrain.com/2010/02/bluebrain-film-preview/
"We are very proud to present the world premiere of BLUEBRAIN ? Year
One, a documentary short which previews director Noah Hutton?s 10-year
film-in-the-making that will chronicle the progress of The Blue Brain
Project, Henry Markram?s attempt to reverse-engineer a human brain.
Enjoy the piece and let us know what you think."
There's a longer video that explains what he's up to.
The Emergence of Intelligence in the Neocortical Microcircuit
http://video.google.com/videoplay?docid=-2874207418572601262&ei=lghrS6GmG4jCqQLA1Yz7DA
- Bryan
http://heybryan.org/
1 512 203 0507
From lacertilian at gmail.com Thu Feb 4 18:12:27 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 10:12:27 -0800
Subject: [ExI] Personal conclusions
In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com>
References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com>
Message-ID:
Stefano Vaj :
> Not really to hear him reiterate innumerable times that for whatever
> reason he thinks that (organic? human?) brains, while obviously
> sharing universal computation abilities with cellular automata and
> PCs, would on the other hand somewhat escape the Principle of
> Computational Equivalence.
Yeah... yeah.
He doesn't seem like the type to take Stephen Wolfram seriously.
I'm working on it. Fruitlessly, maybe, but I'm working on it. Getting
some practice in rhetoric, at least.
Stefano Vaj :
> ... very poorly defined Aristotelic essences would per se exist
> corresponding to the symbols "mind", "consciousness", "intelligence" ...
Actually, I gave a fairly rigorous definition for intelligence in an
earlier message. I've refined it since then:
The intelligence of a given system is inversely proportional to the
average action (time * work) which must be expended before the system
achieves a given purpose, assuming that it began in a state as far
away as possible from that purpose.
(As I said before, this definition won't work unless you assume an
arbitrary purpose for the system in question. Purposes are roughly
equivalent to attractors here, but the system may itself be part of a
larger system, like us. Humans are tricky: the easiest solution is to
say they swap purposes many times a day, which means their measured
intelligence would change depending on what they're currently doing.
Which is consistent with observed reality.)
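A rough sketch of how one might operationalise this in Python; the trial records and units are invented, and this is only one way of reading "average action":

# Hypothetical operationalisation: intelligence as the reciprocal of the
# average action (time * work) spent reaching a purpose, measured over
# trials that start maximally far from that purpose.
trials = [
    {"time_s": 12.0, "work_J": 30.0},   # made-up measurements
    {"time_s": 8.0,  "work_J": 45.0},
    {"time_s": 15.0, "work_J": 20.0},
]

def intelligence(trials):
    actions = [t["time_s"] * t["work_J"] for t in trials]   # action per trial
    return 1.0 / (sum(actions) / len(actions))              # inverse of the mean

print(intelligence(trials))   # higher means less action needed, on this definition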
I can't give similarly precise definitions for "mind" or
consciousness, and I wouldn't be able to describe the latter at all.
Tentatively, I think consciousness is devoid of measurable qualities.
This would make it impossible to prove its existence, which to my mind
is a pretty solid argument for its nonexistence. Nevertheless, we talk
about it all the time, throughout history and in every culture. So
even if it doesn't exist, it seems reasonable to assume that it is at
least meaningful to think about.
Stefano Vaj :
> Now, if this is the case, I sincerely have trouble finding a
> reason why we should not accept, on an equal basis, the article of
> faith that Gordon Swobe proposes as to the impossibility of a
> computer exhibiting the same.
Your argument runs like this:
We have assumed at least one truth a priori. Therefore, we should
assume all truths a priori.
No, sorry. Doesn't work that way. All logic is, at base, fundamentally
illogical. You begin by assuming something for no logical reason
whatsoever, and attempt to redeem yourself from there. That doesn't
mean reasoning is futile. There's a big difference between a logical
assumption (which doesn't exist) and a rational assumption (which
does).
Accepting at face value that we have minds, intelligence, and
consciousness, is perfectly rational. Accepting at face value that
computers can not, is not.
I can't say exactly why you should believe either of these statements,
of course. They aren't in the least bit logical. Make of them what you
will. I have to go eat breakfast.
From jonkc at bellsouth.net Thu Feb 4 18:32:10 2010
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 4 Feb 2010 13:32:10 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <595536.39512.qm@web36508.mail.mud.yahoo.com>
References: <595536.39512.qm@web36508.mail.mud.yahoo.com>
Message-ID: <4529BD17-B295-4C3A-B869-2C19DDC12F88@bellsouth.net>
On Feb 4, 2010, Gordon Swobe wrote:
>>>
> he [Searle] would remind you of the obvious truth there exist facts in the world that have subjective first-person ontologies. We can know those facts[...]
What's with this "we" business? You can know certain subjective facts about the universe from direct experience, and that outranks everything else even logic. But you have no reason to think that I or anybody else or a computer could do that too, and yet you do think that at least for the first two, you think other people are conscious when they are able to act intelligently; you do this because like me you couldn't function it you thought you were the only conscious being in the universe. But every one of the arguments you have used against the existence of consciousness in computers could just as easily be used to argue against the existence of consciousness in your fellow human beings, but you have never done so.
You could also use your arguments to try to show that even you are not conscious, but as I say direct experience outranks everything else; but you have no reason to believe that other people who act intelligently are fundamentally different from anything else that acts intelligently.
John K Clark
> only in the first-person but they have no less reality than those objective third-person facts that as you say "would be the same without it".
>
> Real subjective first-person facts of the world include one's own conscious understanding of words.
>
> Stefano wrote:
>> Which sounds pretty equivalent to saying that it does not
>> exist,
>
> I think you want to deny the reality of the subjective. I don't know why.
From avantguardian2020 at yahoo.com Thu Feb 4 18:24:10 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Thu, 4 Feb 2010 10:24:10 -0800 (PST)
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net>
References: <969214.43756.qm@web36506.mail.mud.yahoo.com>
<317129.32350.qm@web65616.mail.ac4.yahoo.com>
<4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net>
Message-ID: <565163.72063.qm@web65609.mail.ac4.yahoo.com>
From: John Clark
>To: ExI chat list
>Sent: Thu, February 4, 2010 7:47:02 AM
>Subject: Re: [ExI] How not to make a thought experiment
>
>
>On Feb 4, 2010, The Avantguardian wrote:
>
>a bacterium is the simplest conceivable object that I am confident is capable of intentionality.
>
>Stripped to its essentials, intentionality means someone or something that can change its internal state, a state that predisposes it to do one thing rather than another; or at least that's what I mean by the word. I like it because it lacks circularity.
While I understand your dislike of circularity, the definition you give is far too broad. Almost everything has an internal state that can be changed. The discovery of this and the mathematics behind it made Ludwig Boltzmann famous. A rock has a temperature, which is an "internal state". If the temperature of the rock is higher than that of its surroundings, its internal state predisposes the rock to cool down. FWIW evolution by natural selection is based on a circular argument as well. Species evolve by the differential survival and reproduction of the fittest members of the species. What is fitness? Those adaptations that allow members of a species to survive and reproduce.
>So I would say that a punch card reader is simpler than a bacterium and it has intentionality. A Turing Machine is even simpler and it has intentionality too.
While I will not discount the possibility that in the future a sufficiently complex program running on a computer may exhibit life or consciousness, that program does not currently exist. Currently the "intentionality" of existing software is completely explicit and vicarious. That is to say that all software currently in existence exhibits only the intentionality of the programmer and not any native or implicit intentionality of its own. By the same token, a mouse trap exhibits explicit intentionality as well, but lacks implicit intentionality. That is, we would say the mouse trap is *intended* to catch a mouse but we would not say the mouse trap is *intent* on catching a mouse. Now some people may think that is true of bacteria as well, but we laugh at intelligent design, don't we?
>Granted this underlying mechanism may seem a bit mundane and inglorious, but that's in the very nature of explanations: presenting complex and mysterious things in the smallest possible chunks in a way that is easily understood.
The way of reductionism is fraught with the peril of oversimplification. You can reduce an automobile to quarks but that doesn't give you any insight as to how an automobile works.
>Gordon would disagree with me because for him intentionality means having consciousness, and having consciousness means having intentionality.
Then Gordon must accept that a bacterium is conscious. I however would say that implicit intentionality is necessary for consciousness but not sufficient.
>A circle has no end so that may be why his thread has been going on for so long with no end in sight.
One can extrapolate insufficient data into any conclusion one likes. Two given points can lie on a straight line or on a drawing of a unicorn. Neither of these is likely the truth. Which is why I prefer empirical science to philosophy. I think experimentation is the only hope of settling this argument.
Stuart LaForge
"Never express yourself more clearly than you think." - Niels Bohr
From aware at awareresearch.com Thu Feb 4 18:30:39 2010
From: aware at awareresearch.com (Aware)
Date: Thu, 4 Feb 2010 10:30:39 -0800
Subject: [ExI] Personal conclusions
In-Reply-To:
References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com>
Message-ID:
On Thu, Feb 4, 2010 at 10:20 AM, Aware wrote:
> It's simply and necessarily how any system refers to references to
> itself. Yes, it's recursive, and therefore unfamiliar and unsupported
> by a language and culture that evolved to deal with relatively shallow
> context and linear relationships of cause and effect. Meaning is not
> as perceived by the observer, but in the response of the observer,
> determined by its nature within a particular context.
I left out a key word, sorry. Should have been:
> It's simply and necessarily how any system refers to references to
> itself. Yes, it's recursive, and therefore unfamiliar and unsupported
> by a language and culture that evolved to deal with relatively shallow
> context and linear relationships of cause and effect. Meaning is not
> as perceived by the observer, but in the observed response of the observer,
> determined by its nature within a particular context.
- Jef
From rpwl at lightlink.com Thu Feb 4 18:18:17 2010
From: rpwl at lightlink.com (Richard Loosemore)
Date: Thu, 04 Feb 2010 13:18:17 -0500
Subject: [ExI] ANNOUNCE: New "Artificial General Intelligence" discussion
list on Google Groups
In-Reply-To: <55ad6af71002040950o43a4825xa44e985b385c002c@mail.gmail.com>
References: <55ad6af71002040950o43a4825xa44e985b385c002c@mail.gmail.com>
Message-ID: <4B6B0F69.8090709@lightlink.com>
In response to the imminent closure of the AGI discussion list, I just
set this up as an alternative:
http://groups.google.com/group/artificial-general-intelligence
Full name of the group is "Artificial General Intelligence" but the
short name is AGI-group. (Note that there is already a google group
called Artificial General Intelligence, but it appears to be spam-dead.)
Its purpose is to encourage polite and well-informed discussion, so it
will be moderated to that effect.
Allow me to explain my rationale. In the past I felt like posting
substantial content to the AGI list because it seemed that there were
some people who were well-informed enough to engage in discussion. These
days, the noise level is so high that I have no interest, because I know
that the people who would give serious thought to real issues are just
not listening anymore.
I understand that Ben Goertzel is trying to solve this by setting up the
H+ forum on AGI. I wish him luck in this, of course, and I myself have
joined that forum and will participate if there is useful material
there. But I also prefer the faster, easier format of a discussion list
WHEN THAT LIST IS CONTROLLED. Consider this to be an experiment, then.
If it works, it works. If not, then not.
Anyone can join.
But if there are people who
(a) send ad hominem remarks
(b) rant on about fringe topics
(c) persistently introduce irrelevant material
... they will first be subjected to KILLTHREADs, and then if it does not
stop they will be banned. This process will be escalated slowly, and
anything as drastic as a ban will be preceded by soliciting the opinions
of the group if it is a borderline case.
Wow! That sounds draconian! Who is to say what is "fringe" and what is
way out there, but potentially valuable?
Well, the best I can offer is this. I have over 25 years' experience of
research in AI, physics and psychology, and I have also investigated
other "fringe" areas like scientific parapsychology, so I consider my
standards to be very tolerant when it comes to new ideas (after all, I
have some outlier ideas of my own), but also savvy enough to know when
someone is puncturing the envelope, rather than pushing it.
So here goes. You are all invited to join at the above address.
For the serious people: let's try to establish a standard early on.
Richard Loosemore
From aware at awareresearch.com Thu Feb 4 18:20:45 2010
From: aware at awareresearch.com (Aware)
Date: Thu, 4 Feb 2010 10:20:45 -0800
Subject: [ExI] Personal conclusions
In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com>
References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com>
Message-ID:
On Thu, Feb 4, 2010 at 3:59 AM, Stefano Vaj wrote:
> On 3 February 2010 23:05, Gordon Swobe wrote:
>> I have no interest in magic. I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents (semantics).
>
> Yes, this is clear by now.
>
> The bunch of threads of which Gordon Swobe is the star, which I have
> admittedly followed on and off, also because of their largely
> repetitive nature, have been interesting, albeit disquieting, for me.
Interesting to me too, as an example of our limited capability at
present, even among intelligent, motivated participants, to
effectively specify and frame disparate views and together seek a
greater, unifying context which either resolves the apparent
differences or serves to clarify them.
>
> Not really to hear him reiterate innumerable times that for whatever
> reason he thinks that (organic? human?) brains, while obviously
> sharing universal computation abilities with cellular automata and
> PCs, would on the other hand somewhat escape the Principle of
> Computational Equivalence.
Gordon exhibits a strong reductionist bent; he seems to believe that
Truth WILL be found if only one can see closely and precisely enough
into the heart of the matter.
Ironically, to the extent that he parrots Searle his logic is
impeccable, but he went seriously off that track when engaging with
Stathis in the neuron-replacement thought experiment. Most who engage
in this debate fall into the same trap of defending functionalism, and
this is where the Chinese Room Argument gets most of its mileage, but
functionalism, materialism and computationalism are really not at
issue. Searle quite clearly and coherently shows that syntax DOES NOT
entail semantics, no matter how detailed the implementation.
So at the sophomoric level representative of most common objections,
the debate spins around and around, as if Searle were denying
functionalist, materialist, or computationalist accounts of reality.
He's not, and neither is Gordon. The point is that there's a paradox.
[And paradox is always a matter of insufficient context. In the
bigger picture, all the pieces must fit.]
John Clark jumps in to hotly defend Truth and his simple but circular
view that consciousness is a Fact, it obviously arrived via Evolution
thus Evolution is the key. And how dare you deny Evolution--or Truth!?
Stathis patiently (he has plenty of patients, as well as patience)
rehashes the defense of functionalism which needs no defending, and
although Gordon occasionally asserts that he doesn't disagree (on
this) he doesn't go far enough to acknowledge and embrace the apparent
truth of functionalist accounts WHILE highlighting the ostensible
paradox presented by Searle.
Eric and Spencer jump in (late in the game, if a merry-go-round can be
said to have a "later" point in the ride) and contribute the next
layer after functionalism: If we accept that we have "consciousness",
"and unquestionably we do", and we accept materialist, functionalist,
computationalist accounts of reality, then the answer is not to be
found in the objects being represented, but in the complex
associations between them. They too are correct (within their
context) but their explanation only raises the problem another level,
no closer to resolution.
> But because so many of the people having engaged in the discussion of
> the point above, while they may not believe any more in a religious
> concept of "soul", seem to accept without a second thought that some
> very poorly defined Aristotelic essences would per se exist
> corresponding to the symbols "mind", "consciousness", "intelligence",
> and that their existence in the sense above would even be an a priori
> not really open to analysis or discussion.
Yes, many of our "rationalist" friends decry belief in a soul, but
passionately defend belief in an essential self--almost as if their
self depended on it. And along the way we get essential [qualia,
experience, intentionality, free-will, meaning, personal identity...]
and paradox.
And despite accumulating evidence of the incoherence of consciousness,
with all its gaps, distortions, fabrication and confabulation, we hang
on to it, and decide it must be a very Hard Problem. Thus inoculated,
and fortified by the biases built in to our language and culture, we
know that when someone comes along and says that it's actually very
simple, cf. Dennett, Metzinger, Pollack, Buddha..., we can be sure,
even though we can't make sense of what they're saying, that they must
be wrong.
A few deeper thinkers, aiming for greater coherence over greater
context, have suggested that either all entities "have consciousness"
or none do. This is a step in the right direction. Then the
question, clarified, might be decided in simply information-theoretic
terms. But even then, more often they will side with Panpsychism
(even a rock has consciousness, but only a little) than to face the
possibility of non-existence of an essential experiencer.
> Now, if this is the case, I sincerely have trouble finding a
> reason why we should not accept, on an equal basis, the article of
> faith that Gordon Swobe proposes as to the impossibility of a
> computer exhibiting the same.
>
> Otherwise, we should perhaps reconsider not so much the AI
> research programmes in place, but rather, say, the Circle of Vienna,
> Popper or Dennett.
Searle is right, in his logic. Wrong, in his premises. No formal
syntactic system produces semantics. Further, to the extent that the
human brain is formally described, no semantics will be found there
either. We never had it, and don't need it. "It" can't even be
defined in functional terms. The notion is incoherent, despite the
strength and seductiveness of the illusion.
It's simply and necessarily how any system refers to references to
itself. Yes, it's recursive, and therefore unfamiliar and unsupported
by a language and culture that evolved to deal with relatively shallow
context and linear relationships of cause and effect. Meaning is not
as perceived by the observer, but in the response of the observer,
determined by its nature within a particular context. Yes, it may
feel like a direct attack on the sanctity of Self, but it's not. It
destroys nothing that ever existed, and opens up thinking on agency
just as valid, extending beyond the boundaries of the cranium, or the
skin, or the organism plus its tools, or ...
Oh well. Baby steps...
- Jef
From lacertilian at gmail.com Thu Feb 4 18:57:01 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 10:57:01 -0800
Subject: [ExI] The digital nature of brains
In-Reply-To: <325635.76699.qm@web36508.mail.mud.yahoo.com>
References: <20100204102500.86W8S.511470.root@hrndva-web26-z02>
<325635.76699.qm@web36508.mail.mud.yahoo.com>
Message-ID:
Gordon Swobe wrote:
> Spencer:
>> Finally, would you say that an artificial neural network is a digital computer?
>
> Software implementations of artificial neural networks certainly fall under the general category of digital computer, yes.
I could easily have guessed you would say that, but my question
pertains to hard, non-simulated artificial neural networks.
This brings up another point of interest: you seem to place computer
programs within the category of digital computers. This isn't how I
use the term. I would say:
Firefox is not a digital computer, it is instantiated by a digital
computer. All computers are physical objects in reality; if they are
not, they should be explicitly designated as virtual computers.
As a side note, are all computers effectively digital computers? It'd
save me some time, and the Internet some bandwidth, if so. Personally
I could go either way. When I want to be fully inclusive, I usually
say "information processor", which denotes brains just as well as
laptops.
From gts_2000 at yahoo.com Thu Feb 4 19:09:39 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 4 Feb 2010 11:09:39 -0800 (PST)
Subject: [ExI] Principle of Computational Equivalence
In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com>
Message-ID: <578968.53151.qm@web36506.mail.mud.yahoo.com>
--- On Thu, 2/4/10, Stefano Vaj wrote:
> Not really to hear him reiterate innumerable times that for
> whatever reason he thinks that (organic? human?) brains, while
> obviously sharing universal computation abilities with cellular
> automata and PCs, would on the other hand somewhat escape the Principle
> of Computational Equivalence.
I see no reason to consider the so-called Principle of Computational Equivalence of philosophical interest with respect to natural objects like brains.
Given a natural entity or process x and a computation of it c(x), it does not follow that c(x) = x. It does not matter whether x = an organic apple or an organic brain.
c(x) = x iff x = a true digital artifact. It seems to me that we have no reason to suppose, except as a matter of religious faith, that any x in the natural world actually exists as a digital artifact.
For example we might in principle create perfect computations of hurricanes. It would not follow that hurricanes do computations.
-gts
From hkeithhenson at gmail.com Thu Feb 4 19:09:49 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 4 Feb 2010 12:09:49 -0700
Subject: [ExI] Space based solar power again
Message-ID:
(reply to a discussion on another list about power satellites)
How you get the energy down from GEO is a problem with a couple of
known solutions.
What has to be solved is getting the parts to GEO (or the parts to LEO
and the whole thing to GEO if you build it in LEO).
Even at a million tons per year (what's needed for a decent sized SBSP
project) the odds are against the cost being low enough for power
satellites to make sense (i.e., undercut coal and nuclear) if you try
to transport the parts with chemical rockets.
You either have to go to some non reaction method, magnet launcher,
cannon, launch loop or space elevator, or you have to go to an exhaust
velocity higher than what the energy of chemical fuels will give you.
The non-reaction methods are extremely difficult engineering problems,
partly because we live at the bottom of a dense atmosphere, partly
because of the extreme energy needed.
The rule of thumb from the rocket equation is that mass ratio 3 will
get the vehicle up to the exhaust velocity and a mass ratio 2 will get
it to a bit under 0.7 of the exhaust velocity. Beyond mass ratio 3
the payload fraction rapidly goes to zero.
So to get to LEO on a mass ratio of 3 means an average exhaust velocity
of around 9.5 km/sec.
The Skylon gets about 10.5 km/sec equivalent Ve in air breathing mode.
Laser heated hydrogen will give up to 9.8 km/sec.
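A quick numerical check of that rule of thumb with the ideal rocket
equation (delta-v = Ve * ln(mass ratio)), in Python; the figures are the
textbook ideal only, ignoring gravity and drag losses:

import math

def delta_v(exhaust_velocity_km_s, mass_ratio):
    """Tsiolkovsky rocket equation: delta-v = Ve * ln(initial mass / final mass)."""
    return exhaust_velocity_km_s * math.log(mass_ratio)

# ln(3) ~ 1.10 and ln(2) ~ 0.69, so mass ratio 3 buys roughly the exhaust
# velocity and mass ratio 2 a bit under 0.7 of it.
print(delta_v(1.0, 3))   # ~1.10 x Ve
print(delta_v(1.0, 2))   # ~0.69 x Ve

# With an effective exhaust velocity around 9.5 km/s and mass ratio 3,
# the ideal delta-v comes out around 10.4 km/s.
print(delta_v(9.5, 3))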
So much for the physics, on to the engineering! :-)
Keith
From lacertilian at gmail.com Thu Feb 4 19:11:07 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 11:11:07 -0800
Subject: [ExI] Mind extension
In-Reply-To: <767464.48754.qm@web113618.mail.gq1.yahoo.com>
References:
<767464.48754.qm@web113618.mail.gq1.yahoo.com>
Message-ID:
Ben Zaiboc :
> (lots and lots of neat stuff)
Ever since Stathis put me on the spot to state my feelings on
uploading and "brain prostheses", and in fact for several years prior,
I've been thinking about pretty much the same thing. At the time I
sent my response I actually thought this is what he was talking about,
but later decided he probably meant a full brain transplant.
Judging by the craziness Alex is pulling off with
that EPOC EEG, I'm thinking it would be trivially easy to add new
"wings" to our brains. We might be able to do it right now, with
lab-grown neurons, if we can figure out a way to increase skull
capacity.
I hadn't considered a few of the tests you propose, though.
Specifically, temporarily "turning off" the organic hemisphere to see
if the synthetic hemisphere keeps working. I can imagine a lot of
problems with putting that into practice. Certainly, we couldn't even
try it until we start creating neurons via building rather than via
growing.
Hmm!
From Frankmac at ripco.com Fri Feb 5 19:14:07 2010
From: Frankmac at ripco.com (Frank McElligott)
Date: Fri, 5 Feb 2010 14:14:07 -0500
Subject: [ExI] war is peace
Message-ID: <004001caa697$66d15db0$ad753644@sx28047db9d36c>
In Russia, strong leaders are held in high esteem. If you want to take over an oil company, just throw the CEO in jail for tax evasion and the people will say he deserved it and our Putin knows best. Then to create a legal auction with one bidder to have the state take over the company is OK; it was legal, wasn't it?
Russia is a different place: rules are set by strong leaders and the people go along with it. What is legal is what Putin says is legal.
As an example from the US, if Obama decided that Goldman Sachs' bonus plan was out of the realm of what was good for the country, he could arrest the CEO, stopping him from doing God's work, and then have the government take over the company, thus screwing the shareholders out of hundreds of millions.
Sorry, that's a bad example; I should have used AIG instead. Last year that's what the US Gov't did with AIG, except they have not arrested anyone YET.
What I could have used was the EU taking over the books of Greece, because we all know the Greek Gov't is corrupt and for the good of the EU we must stop those Greeks from cheating.
If Greece falls, so does Spain, so does Portugal, and must I say Italy as well?
Greece is in trouble because they lied, AIG was in trouble on account of their greed, and in Russia it's tax problems.
If your Government tells you it is right, you accept it, as we did with Bush and his Weapons of Mass Destruction; in Russia they are no different, nor in the EU for that matter.
So my friend, War is Peace, here in the US, in Russia, and now even in the Eurozone.
Hope that helps
Frank
From lacertilian at gmail.com Thu Feb 4 19:19:47 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 11:19:47 -0800
Subject: [ExI] multiple realizability
In-Reply-To: <764802.35206.qm@web113618.mail.gq1.yahoo.com>
References:
<764802.35206.qm@web113618.mail.gq1.yahoo.com>
Message-ID:
Ben Zaiboc :
> I suspect that it's ignorance of the importance of levels of abstraction that can lead to ideas like "minds can come from neural networks, but not from digital programs". All you need to see is that a digital program can implement a neural network at a higher level of abstraction to demolish this idea.
Yep.
But, I'm sure Gordon has already been there. My guess is he took a
plane instead of walking or driving, though, and most likely missed
all of the cultural flavor by persistently following a tour guide. I
wouldn't be surprised if he just stayed in a hotel the whole time.
More as an experiment than anything else, I've been trying to figure
out how to take him step-by-step into Abstraction City and show him
everything he missed. Right now I'm stuck on black boxes.
We have a long way to go.
From lacertilian at gmail.com Thu Feb 4 19:43:45 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 11:43:45 -0800
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <854080.27226.qm@web36504.mail.mud.yahoo.com>
References:
<854080.27226.qm@web36504.mail.mud.yahoo.com>
Message-ID:
Gordon Swobe :
>Stathis Papaioannou :
>> ... loss of structural integrity over the
>> vast distances involved. However, theoretically, there is no
>> problem if such a system is Turing-complete and if the behaviour of
>> the brain is computable.
> I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them. :-)
Whenever I've seen Stathis use the term "an absurdity", I've mentally
translated it to "a paradox".
A beer can brain is absurd, but not paradoxical.
An unconscious brain which is exactly identical to a conscious brain
is paradoxical, but not absurd.
From bbenzai at yahoo.com Thu Feb 4 20:07:37 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 4 Feb 2010 12:07:37 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To:
Message-ID: <356658.52755.qm@web113601.mail.gq1.yahoo.com>
wrote:
---- Ben Zaiboc wrote:
>
> > Well, we're talking about different things. I
> said "it was designed to..", and you replied "no it does
> not". Both of these can be true.
> This is a perfect example of my 'understanding inversion'
> claim...
>
> First, we're not talking about different things. The Turing
> Test was suggested, not 'designed' as it's not a algorithm
> or mechanism. At best it's a heuristic. If you read Turing's
> papers and the period documentation the fundamental question
> is 'can the person tell the difference?'. If the answer is
> 'yes' the -pre-assumptive claim- is that some level of
> 'intelligence' has been reached in AI technology. Exactly
> what that level is, is never defined specifically by the
> original authors. The second and follow on generations of AI
> researchers have interpreted it to mean that AI has
> intelligence in the human sense. I would suggest, strongly,
> that this is a cultural 'taboo' that differentiates main
> stream from perceived cranks.
>
> They way you flip the meaning of 'can the person tell the
> difference' to 'machine to convince' are specious and moot.
> The important point is the human not being able to tell the
> difference. You say it is not meant to test the ability of
> humans, but it is the humans who -must be convinced-.
>
> I would say you're trying to massage the test to fit a
> preconceived cultural desire and not a real technical
> benchmark. It's about validating human emotion and not
> mechanical performance.
Um, I think 'understanding inversion' is right. I don't actually understand what you're trying to say.
Ben Zaiboc
From gts_2000 at yahoo.com Thu Feb 4 20:39:00 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 4 Feb 2010 12:39:00 -0800 (PST)
Subject: [ExI] Personal conclusions
In-Reply-To:
Message-ID: <469838.61714.qm@web36504.mail.mud.yahoo.com>
--- On Thu, 2/4/10, Aware wrote:
> So at the sophomoric level representative of most common
> objections, the debate spins around and around, as if Searle were
> denying functionalist, materialist, or computationalist accounts
> of reality. He's not, and neither is Gordon.
On the contrary, I most certainly do deny the functionalist and computationalist (but not so much the materialist) accounts of reality.
By the way, to make things as clear as mud: 1) computationalism is a species of functionalism, not a theory that competes with it as suggested, and 2) functionalism is not about making artificial neurons, per se, and 3) nobody here in recent months has articulated a true functionalist or functionalist/computationalist account of mind or reality.
-gts
From lacertilian at gmail.com Thu Feb 4 21:22:22 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 13:22:22 -0800
Subject: [ExI] Semiotics and Computability (was: The digital nature of
brains)
In-Reply-To:
References:
Message-ID:
Stathis Papaioannou wrote:
> I'm not completely sure what you're saying in this post, but at some
> point the string of symbol associations (A means B, B means C, C means
> D...) is grounded in sensory input.
I'm talking about syntax and semantics, but especially syntax. In the
context of this discussion, you're making a statement about semantics.
One assumption (or conclusion, it's hard to tell) made by the
notorious Gordon Swobe is that digital computers are capable of
syntax, but not of semantics. I made that post to explore the question
of whether or not that's even possible in theory.
If I was vague and difficult to understand (I was), that might be due
to the fact I have a very fuzzy idea of what Gordon means when he
talks about syntax, and his is the definition I tried to use. I
wouldn't describe your typical CPU as performing syntactical
operations normally, but here I would do so without hesitation.
From lacertilian at gmail.com Thu Feb 4 21:41:48 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 13:41:48 -0800
Subject: [ExI] Semiotics and Computability
Message-ID:
Gordon Swobe :
> Stathis wrote:
>> Searle would say that there
>> needs to be an extra step whereby the symbol so grounded gains
>> "meaning", but this extra step is not only completely mysterious, it
>> is also completely superfluous, since every observable fact about
>> the world would be the same without it.
>
> No, he would remind you of the obvious truth there exist facts in the world that have subjective first-person ontologies. We can know those facts only in the first-person but they have no less reality than those objective third-person facts that as you say "would be the same without it".
You're both wrong! Only I am right! Me!
From my limited research, it appears Searle has never said anything
about some unknown extra step necessary to produce meaning. If you
think his arguments imply any such thing, that's your extrapolation,
not his. The Chinese room argument isn't chiefly about meaning: it's
about understanding. They're extremely different things. We take
meaning as input and output, or at least feel like we do, but we
simply HAVE understanding.
And no, it isn't a substance. It's a measurable phenomenon. Not easily
measurable, but measurable nonetheless.
Secondly, "facts with subjective first-person ontologies" is a
nightmarishly convoluted phrase. Does the universe even have facts in
it, technically speaking? I suppose what I'm meant to do is pick a
component of my subjective experience, say, my headache, and call it a
fact.
Then I say the fact of my headache has a subjective first-person
ontology. But that's redundant: all subjective things are first-person
things, and vice-versa. And "ontology" actually means "the study of
existence". I don't think the fact of my headache has any kind of
study, let alone such an esoteric one. Gordon must have meant
"existence", not "ontology".
Searle uses that same terminology. It makes things terribly difficult.
So to say something has "subjective first-person ontology" really
means it "exists only for the subject". There are facts (my headache)
which exist only for the subject (me). Ah! Now it makes sense. I even
have a word for facts like that: "delusions".
It's a low blow, I know. It shouldn't be, but it is.
Really, it just means we're too hard on especially delusional people.
We need delusions in order to function. They aren't inherently bad.
Who was it that wrote the paper describing how a delusion of self is
unavoidable when implementing a general-purpose consciousness such as
myself? I liked that paper. It appealed to my nihilistic side, which
is also the rest of me.
Ugh, this is going to drive me crazy. I have to remember some keywords
to search for. He used a very specific term to refer to that delusion.
"Distributed agent" was used in the paper, I think, but not the
message that linked to the paper...
From gts_2000 at yahoo.com Thu Feb 4 22:17:15 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 4 Feb 2010 14:17:15 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To:
Message-ID: <938103.15549.qm@web36508.mail.mud.yahoo.com>
--- On Thu, 2/4/10, Spencer Campbell wrote:
> Secondly, "facts with subjective first-person ontologies"
> is a nightmarishly convoluted phrase.
Sorry for the jargon.
> Does the universe even have facts in
> it, technically speaking? I suppose what I'm meant to
> do is pick a component of my subjective experience, say, my headache,
> and call it a fact.
Exactly right.
> Then I say the fact of my headache has a subjective
> first-person ontology.
Yes.
> But that's redundant: all subjective things are first-person
> things, and vice-versa. And "ontology" actually means "the
> study of existence".
I make the distinction because subjective experiences exist both epistemically and ontologically in the first person. That is, we can investigate their causes epistemically (why do you have a headache?) in the same sense that we investigate any third-person objective phenomena, yet they also have their *existence* in the first person and thus a first-person ontology.
Some people, especially my materialist friends, seem wont to deny the first-person ontology of consciousness. They deny its existence altogether and attempt to reduce it to something "material", not realizing that in doing so they use the same dualistic vocabulary as those with whom they want to disagree. Theirs is an over-reaction; we can keep consciousness as a real phenomenon without bringing Descartes back from the dead.
-gts
From lacertilian at gmail.com Thu Feb 4 22:24:59 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 14:24:59 -0800
Subject: [ExI] The simplest possible conscious system
In-Reply-To: <820642.67186.qm@web113613.mail.gq1.yahoo.com>
References:
<820642.67186.qm@web113613.mail.gq1.yahoo.com>
Message-ID:
Ben Zaiboc :
> Hm, interesting challenge.
>
> I'd probably define Intelligence as problem-solving ability, and
> Understanding as the association of new 'concept-symbols' with established ones.
>
> I'd take "Conscious" to mean "Self-Conscious" or "Self-Aware", which almost certainly involves a mental model of one's self, as well as an awareness of the environment, and one's place in it.
Somehow I was expecting people to radically disagree on these
definitions, but your conceptions of consciousness, intelligence and
understanding are actually very similar to my own.
Understanding is notably different in my mind, though: I'd say to have
a mental model of a thing is to understand that thing. Symbols don't
really enter into it, except that we use them as shorthand to refer to
understood models.
The more closely your model's behaviour matches that of the target
system, the better you understand that system!
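To put a rough number on what I mean, here's a minimal Python sketch: "understanding" of a system is scored by how well an internal model predicts the system's behaviour. The toy system and the two candidate models are invented purely for illustration.

    # A toy "target system" whose behaviour we want to understand.
    def target_system(x):
        return 3 * x + 1

    # Two candidate mental models of that system.
    def good_model(x):
        return 3 * x + 1.1      # behaves almost identically

    def poor_model(x):
        return x                # behaves very differently

    def understanding(model, system, probes):
        """Score in (0, 1]: 1 means the model tracks the system perfectly."""
        mean_error = sum(abs(model(x) - system(x)) for x in probes) / len(probes)
        return 1.0 / (1.0 + mean_error)

    probes = range(-10, 11)
    print(understanding(good_model, target_system, probes))   # close to 1
    print(understanding(poor_model, target_system, probes))   # much lower

Nothing deep there; it just makes "better understanding" a number you can compare.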
Ben Zaiboc :
> I'd guess that the simplest possible conscious system would have an embodiment ('real' or virtual) within an environment, sensors, actuators, the ability to build internal representations of both its environment and itself, and by implication some kind of state memory. Hm, maybe we already have conscious robots, and don't realise it!
I can conceive of a disembodied consciousness, interacting with its
environment only through verbal communication, which would be simpler.
Top that!
:
> I would agree; however, there are a couple of issues that must be addressed before it becomes meaningful.
>
> First, what is 'conscious'? That definition must not use human brains as an axiomatic measure.
I agree. The only problem is that, if consciousness exists, any
English definition of it would at least be inaccurate, if not outright
incorrect. We can only approximate the speed of light using feet, but
we can describe it exactly with meters.
I'm not even sure if consciousness is better considered as a binary
state, present or absent, or if we should be talking about degrees of
consciousness. Certainly, intelligence and understanding are both
scalar quantities. Is the same true of consciousness?
My current theory is that consciousness requires recursive
understanding: that is, understanding of understanding.
Meta-understanding. I don't know if it exhibits any emergent
properties over and above that, though, or if there are any other
prerequisites.
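I can't formalise meta-understanding properly, but here's a toy Python sketch of the flavour I have in mind (the system, the model's flaw, and the thresholds are all invented): one model predicts the system, and a second, "meta" model predicts where the first model goes wrong.

    # First-order understanding: a model of an external system.
    def system(x):
        return x * x

    def model(x):
        return x * x + (2 if x > 5 else 0)    # imperfect beyond x = 5

    # Second-order understanding: a model of the first model's own error.
    def meta_model(x):
        return 2 if x > 5 else 0

    for x in range(10):
        actual_error = abs(model(x) - system(x))
        predicted_error = meta_model(x)
        print(x, actual_error, predicted_error)

    # The meta-model "knows what the model doesn't know" -- that, and
    # nothing more, is the recursive flavour I'm gesturing at.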
From stathisp at gmail.com Thu Feb 4 22:27:52 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 5 Feb 2010 09:27:52 +1100
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <854080.27226.qm@web36504.mail.mud.yahoo.com>
References:
<854080.27226.qm@web36504.mail.mud.yahoo.com>
Message-ID:
On 5 February 2010 00:32, Gordon Swobe wrote:
> --- On Wed, 2/3/10, Stathis Papaioannou wrote:
>
>>> Can a conscious Texas-sized brain constructed out of
>>> giant neurons made of beer cans and toilet paper exist as a
>>> possible consequence of your brand of functionalism? Or
>>> not?
>>
>> It would have to be much, much larger than Texas if it was
>> to be human equivalent, and it probably wouldn't be physically possible due
>> (among other problems) to loss of structural integrity over the
>> vast distances involved. However, theoretically, there is no
>> problem if such a system is Turing-complete and if the behaviour of
>> the brain is computable.
>
> Okay, I take that as a confirmation of your earlier assertion that brains made of beer cans and toilet paper can have consciousness provided those beer cans squirt the correct neurotransmitters between themselves at the correct times. This suggests to me that your ideology has a firmer grip on your thinking than does your sense of the absurd, and that no reductio ad absurdum argument will find traction with you.
>
> I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them. :-)
I take it that you are aware of the concept of "Turing equivalence"?
It implies that if a digital computer can have a mind, then any Turing
equivalent machine can also have a mind. If a beer can computer is
Turing equivalent then you don't gain anything philosophically by
pointing to it and saying that it's "absurd"; that's more like a
politician's subterfuge than a philosophical argument.
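To make the point concrete, here is a toy Python sketch (the threshold rule and the lookup encoding are invented for illustration, and this is only a crude illustration of the idea, not the formal definition): the same function realised on two deliberately different "substrates" produces identical outputs on every input.

    # Substrate 1: ordinary arithmetic on a CPU.
    def neuron_arith(a, b):
        return 1 if a + b >= 2 else 0    # a crude two-input threshold "neuron"

    # Substrate 2: the same rule realised as a dumb lookup table --
    # the software analogue of beer cans and toilet paper.
    LOOKUP = {(a, b): (1 if a + b >= 2 else 0) for a in (0, 1) for b in (0, 1)}

    def neuron_cans(a, b):
        return LOOKUP[(a, b)]

    # Both substrates compute exactly the same function.
    for a in (0, 1):
        for b in (0, 1):
            assert neuron_arith(a, b) == neuron_cans(a, b)
    print("identical behaviour on every input")

Pointing at the second implementation and calling it absurd tells you nothing about what it computes.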
The absurdity I was referring to, on the other hand, is logical
contradiction. Spencer Campbell suggested that these may not be the
same thing but that is what I meant; see
http://en.wikipedia.org/wiki/Proof_by_contradiction.
The logical contradiction is the claim that, for example, artificial
brain components can be made which both do and do not behave exactly
the same as normal neurons. Not even God can make it so that both P
and ~P are true; however, God could easily make a beer can and toilet
paper computer or a Chinese Room. It is a difference in kind, not a
difference in degree.
--
Stathis Papaioannou
From stathisp at gmail.com Thu Feb 4 22:43:49 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 5 Feb 2010 09:43:49 +1100
Subject: [ExI] Mind extension
In-Reply-To: <767464.48754.qm@web113618.mail.gq1.yahoo.com>
References:
<767464.48754.qm@web113618.mail.gq1.yahoo.com>
Message-ID:
On 5 February 2010 00:38, Ben Zaiboc wrote:
> I've been pondering this issue, and it's possible that there's a way around the problem of confirming that consciousness can run on artificial neurons without actually removing existing natural neurons, and condemning the subject to death if it turns out to be untrue.
>
> I'm thinking of a 'mind extension' scenario, where you attach these artificial neurons (or their software equivalent) to an existing brain using neural interfaces, in a configuration that does something useful, like giving an extra sense or an expanded or secondary short-term memory (of course all this assumes good neural interface technology, working artificial neurons and a better understanding of mental architecture than we have just now). Let the user settle in with the new part of their brain for a while; then they should be able to tell if they 'inhabit' it or if it's just like driving a car: something 'out there' that they are operating.
>
> If they feel that their consciousness now partly resides in the new brain area, it should be possible to duplicate all the vital brain modules and selectively anaesthetise their biological counterparts without any change in subjective experience.
>
> If the person says "Hang on, I blanked out there" for the period of time the artificial brain parts were operating on their own, we would know that they don't support conscious experience, and the person could say 'no thanks' to uploading, with their original brain intact.
>
> The overall idea is to build extra room for the mind to expand into, and see if it really has or not. If the new, artificial parts actually don't support consciousness, you'd soon notice. If they do, you could augment your brain to the point where the original was just a tiny part, and you wouldn't even miss it when it eventually dies off.
An important point is that if you noticed a difference, not only would
that mean the artificial parts don't support normal consciousness; it
would also mean they do not exactly reproduce the objectively
observable behaviour of the natural neurons.
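That criterion can be stated quite precisely. A rough Python sketch, with a made-up three-unit circuit: swap in replacement units one at a time and check that every observable output stays the same. Any discrepancy would mean the replacements had, by definition, failed to reproduce the original behaviour.

    # A made-up three-unit circuit: each unit sums its inputs and thresholds.
    def biological_unit(inputs):
        return 1 if sum(inputs) >= 2 else 0

    def artificial_unit(inputs):
        return 1 if sum(inputs) >= 2 else 0    # claimed functional duplicate

    def run_circuit(units, stimulus):
        a = units[0](stimulus)
        b = units[1](stimulus)
        return units[2]([a, b])

    stimuli = [[0, 0, 0], [1, 1, 0], [1, 1, 1], [0, 1, 1]]
    original = [biological_unit] * 3

    for i in range(3):                         # replace one unit at a time
        hybrid = list(original)
        hybrid[i] = artificial_unit
        for s in stimuli:
            assert run_circuit(hybrid, s) == run_circuit(original, s)
    print("no observable difference at any replacement step")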
--
Stathis Papaioannou
From stathisp at gmail.com Thu Feb 4 22:53:23 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 5 Feb 2010 09:53:23 +1100
Subject: [ExI] The digital nature of brains
In-Reply-To: <325635.76699.qm@web36508.mail.mud.yahoo.com>
References: <20100204102500.86W8S.511470.root@hrndva-web26-z02>
<325635.76699.qm@web36508.mail.mud.yahoo.com>
Message-ID:
On 5 February 2010 02:00, Gordon Swobe wrote:
> Software implementations of artificial neural networks certainly fall under the general category of digital computer, yes. However in my view no software of any kind can cause subjective experience to arise in the software or hardware. I consider it logically impossible that syntactical operations on symbols, whether they be 1's and 0's or Shakespeare's sonnets, can cause the system implementing those operations to have subjective mental contents.
Let's be clear: it is not LOGICALLY impossible that syntax can give
rise to meaning. There is no LOGICAL contradiction in the claim that
when a symbol is paired with a particular type of input, then that
symbol is grounded, and grounding of the symbol is sufficient for
meaning. You don't like this idea because you have a view that there
is a mysterious extra layer to provide meaning, but that is a claim
about the way the world is (one that is not empirically verifiable),
not a LOGICAL claim.
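The claim can at least be written down without contradiction. A minimal Python sketch, with invented symbol names and "sensor" features: a symbol counts as grounded once it has been paired with examples of a class of input, and "meaning" here is modelled as nothing more than the ability to classify new instances of that class.

    # Ground symbols by pairing them with examples of sensory input.
    groundings = {}                    # symbol -> list of feature vectors

    def ground(symbol, percept):
        groundings.setdefault(symbol, []).append(percept)

    def interpret(percept):
        """Read a new percept as the symbol whose stored examples it most resembles."""
        def distance(symbol):
            return min(sum((a - b) ** 2 for a, b in zip(example, percept))
                       for example in groundings[symbol])
        return min(groundings, key=distance)

    # Pretend sensor data: (roundness, redness) pairs -- entirely made up.
    ground("apple", (0.9, 0.8))
    ground("apple", (0.8, 0.9))
    ground("banana", (0.2, 0.1))

    print(interpret((0.85, 0.85)))     # -> "apple"

Whether that thin notion of meaning is the whole story is exactly what is in dispute; the point is only that stating it involves no logical contradiction.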
--
Stathis Papaioannou
From ablainey at aol.com Thu Feb 4 23:13:42 2010
From: ablainey at aol.com (ablainey at aol.com)
Date: Thu, 04 Feb 2010 18:13:42 -0500
Subject: [ExI] Pig Symbol
In-Reply-To:
References: <20100204102500.86W8S.511470.root@hrndva-web26-z02>
<325635.76699.qm@web36508.mail.mud.yahoo.com>
Message-ID: <8CC7406D403F237-55FC-520D@webmail-m027.sysops.aol.com>
Following on from Symbols, AI and especially the robot seeing a pig.
Here is a website that pretends to be a personality test based upon your drawing of a pig.
More than 2 million pigs have been drawn so far. It seems to me that you could get some
interesting insight into AI image recognition by feeding in 2 million-plus drawings of pigs.
The site owner is asking for suggestions of what to do with the drawings. Does anyone
have an AI that needs some training data?
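If anyone does want to experiment, the crudest possible starting point might look like the Python below. The file name, image size and scoring are all invented, and it assumes the drawings have already been rasterised to equal-sized grayscale arrays.

    import numpy as np

    # Hypothetical file: N flattened 64x64 grayscale pig drawings, values in 0..1.
    drawings = np.load("pig_drawings.npy")       # shape (N, 4096)

    mean_pig = drawings.mean(axis=0)             # the "average" pig

    def piggishness(image):
        """Crude score: how close a new drawing is to the average pig."""
        return -np.linalg.norm(image - mean_pig)

    def most_similar(image, k=5):
        """Indices of the k stored drawings most like this one."""
        distances = np.linalg.norm(drawings - image, axis=1)
        return np.argsort(distances)[:k]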
Alex
From gts_2000 at yahoo.com Thu Feb 4 23:18:34 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 4 Feb 2010 15:18:34 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To:
Message-ID: <523017.16837.qm@web36507.mail.mud.yahoo.com>
--- On Thu, 2/4/10, Stathis Papaioannou wrote:
>> Software implementations of artificial neural networks
>> certainly fall under the general category of digital
>> computer, yes. However in my view no software of any kind
>> can cause subjective experience to arise in the software or
>> hardware. I consider it logically impossible that
>> syntactical operations on symbols, whether they be 1's and
>> 0's or Shakespeare's sonnets, can cause the system
>> implementing those operations to have subjective mental
>> contents.
>
> Let's be clear: it is not LOGICALLY impossible that syntax
> can give rise to meaning.
I think it is logically impossible.
> There is no LOGICAL contradiction in the
> claim that when a symbol is paired with a particular type of input,
> then that symbol is grounded, and grounding of the symbol is
> sufficient for meaning.
I take it that on your view a picture dictionary understands the nouns for which it has pictures, since it "pairs" its word-symbols with sense-data, grounding the symbols in the same way that a computer + webcam can pair and ground symbols.
How about a lunch menu? Does it understand sandwiches? :-)
-gts
From eric at m056832107.syzygy.com Fri Feb 5 00:10:05 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 5 Feb 2010 00:10:05 -0000
Subject: [ExI] Religious idiocy (was: digital nature of brains)
In-Reply-To: <580930c21002030509m60aebf03s28104ee60540978b@mail.gmail.com>
References:
<584374.10388.qm@web36504.mail.mud.yahoo.com>
<20100129192646.5.qmail@syzygy.com>
<580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com>
<20100201181430.5.qmail@syzygy.com>
<580930c21002030509m60aebf03s28104ee60540978b@mail.gmail.com>
Message-ID: <20100205001005.5.qmail@syzygy.com>
Stefano writes:
>So, is the integer "3" a word symbol or a sense symbol?
The integer 3 is a concept for which a brain probably has a symbol.
That symbol will be distinct from the symbol for the word "three", and
both are distinct from the impressions (represented by sense symbols)
generated when someone views a hand with three fingers held up. All
those symbols are related to each other, and activation of any one is
likely to make it easier to activate any of the others.
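A toy Python rendering of that picture, just to make "related symbols that prime each other" concrete (the symbol names, the link structure, and the spreading rule are all invented):

    # Distinct symbols for the concept, the word, and the sense impression,
    # linked so that activating any one partially activates its neighbours.
    links = {
        "concept:3":            ["word:three", "sense:three-fingers"],
        "word:three":           ["concept:3"],
        "sense:three-fingers":  ["concept:3"],
    }
    activation = {symbol: 0.0 for symbol in links}

    def activate(symbol, strength=1.0, spread=0.5):
        activation[symbol] += strength
        for neighbour in links[symbol]:
            activation[neighbour] += strength * spread   # one step of spreading

    activate("sense:three-fingers")   # seeing three raised fingers...
    print(activation)                 # ...partially activates the concept symbol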
> And what about the ASCII decoding of a byte?
I'm not sure exactly what you're asking here. ASCII maps byte values
to stereotypical glyphs, so I'm assuming you're referring to the glyph
'3' as a decoding of the byte value 0x33. When you look at that
glyph, a particular sense symbol will be activated, which will likely
lead to activation of the corresponding concept and word symbols
mentioned above.
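The ASCII end of that chain is at least trivial to pin down, e.g. in Python; everything beyond it is the brain's business:

    assert chr(0x33) == '3'    # byte value 0x33 decodes to the glyph '3'
    assert ord('3') == 0x33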
> Or the rasterisation of the ASCII symbol?
Again, I'm not sure exactly what you're getting at. Is that
rasterisation what shows up on your video monitor when the computer
displays the '3' glyph? I could think about the concept of that
occurring, or I could look at the result (see above).
>And what difference would it exactly make?
Not much, really. They're just names for things, so we can talk about
them. The brain probably uses similar mechanisms to process all those
symbols. That processing is likely confined to different areas of the
brain for each type of symbol, though. I don't think anyone knows yet
how the brain does any of this processing. We don't even know much
about how the symbols might be encoded, although theories do exist.
I happen to like William Calvin's theory as presented in "The Cerebral
Code":
http://williamcalvin.com/bk9/
I don't think we're yet at the point where we can put that theory to
the test.
We do know a good deal about the low level processing, but things get
complicated as we climb the abstraction ladder.
-eric
From eric at m056832107.syzygy.com Fri Feb 5 00:25:12 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 5 Feb 2010 00:25:12 -0000
Subject: [ExI] meaning & symbols
In-Reply-To: <910459.5460.qm@web113613.mail.gq1.yahoo.com>
References:
<910459.5460.qm@web113613.mail.gq1.yahoo.com>
Message-ID: <20100205002512.5.qmail@syzygy.com>
Ben writes:
>OK obviously this word 'symbol' needs some clear definition.
>
>I would use the word to mean any distinct pattern of neural activity
> that has a relationship with other such patterns. In that sense,
> sensory symbols exist, as do (visual) word symbols, (auditory) word
> symbols, concept symbols, which are a higher-level abstraction from
> the above three types, and hundreds of other types of 'symbol',
> representing all the different patterns of neural activity that can
> be regarded as coherent units, like emotional states, memories,
> linguistic units (nouns, verbs, etc.), and their higher-level
> 'chunks' (birdness, the concept of fluidity, etc.), and so on.
This sounds exactly like what I mean when I use the term "symbol" in
this context.
The question came up about how hard it might be to tease apart
*distinct* patterns of neural activity. I agree that this is likely
to be tricky. I expect many symbols will be active in a brain at the
same time, and differentiating them could be hard. They may change
representation with the brain region they are active in. I do expect
that a symbol is simpler than a global neural firing pattern, though.
If a firing pattern in one part of the brain triggers a similar firing
pattern in another part of the brain, is the same symbol active in
both areas, or are there two distinct symbols? I don't think we have
a good enough handle on this to answer such questions yet.
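If we ever do get such a handle, one crude way the question might become empirical is sketched below in Python, with random stand-in data: represent each region's firing pattern as a binary vector and measure the overlap between the two.

    import random

    random.seed(0)

    def firing_pattern(n=100, p=0.2):
        """Stand-in for a region's activity: which of n neurons are firing."""
        return [1 if random.random() < p else 0 for _ in range(n)]

    def similarity(a, b):
        """Jaccard overlap of the two sets of active neurons."""
        both = sum(1 for x, y in zip(a, b) if x and y)
        either = sum(1 for x, y in zip(a, b) if x or y)
        return both / either if either else 1.0

    region_a = firing_pattern()
    region_b = firing_pattern()
    print(similarity(region_a, region_b))   # high overlap might count as "same symbol"

Real firing patterns would of course need far more than a binary snapshot, but it gives the question a shape.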
-eric
From possiblepaths2050 at gmail.com Fri Feb 5 00:50:11 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Thu, 4 Feb 2010 17:50:11 -0700
Subject: [ExI] "Supreme Court Allows Corporations To Run For Political
Office, " Onion parody article
Message-ID: <2d6187671002041650i3cb96621u60890b5970ebbef4@mail.gmail.com>
This is both funny and creepy...
http://www.theonion.com/content/news_briefs/supreme_court_allows
John : )
From aware at awareresearch.com Fri Feb 5 02:33:29 2010
From: aware at awareresearch.com (Aware)
Date: Thu, 4 Feb 2010 18:33:29 -0800
Subject: [ExI] Personal conclusions
In-Reply-To: <469838.61714.qm@web36504.mail.mud.yahoo.com>
References:
<469838.61714.qm@web36504.mail.mud.yahoo.com>
Message-ID:
On Thu, Feb 4, 2010 at 12:39 PM, Gordon Swobe wrote:
> On the contrary, I most certainly do deny the functionalist and computationalist, (but not so much the materialist), accounts of reality.
Well, sometimes in effect you do; sometimes you don't. You seem to
enjoy the polemics more than you do the opportunity to encompass a
greater context of understanding.
> By the way, to make things as clear as mud: 1) computationalism is a species of functionalism, not a theory that competes with it as suggested,
Seems to me that { Materialism { Functionalism { Computationalism}}}.
Your "clear as mud" is clearly appropriate.
> and 2) functionalism is not about making artificial neurons, per se,
Stathis would argue, I think, that such was the point of that part of
his discussion with you.
> and 3) nobody here in recent months has articulated a true functionalist or functionalist/computationalist account of mind or reality.
My central point (were you paying attention?) is that there can be no
"true functionalist/computationalist account of mind..."
notwithstanding the legitimacy of all three of these isms in their
appropriate contexts.
Finally, Gordon, I'd like to thank you for your characteristically
thorough and thoughtful reply to my comments...
- Jef
From x at extropica.org Fri Feb 5 03:08:29 2010
From: x at extropica.org (x at extropica.org)
Date: Thu, 4 Feb 2010 19:08:29 -0800
Subject: [ExI] Semiotics and Computability
In-Reply-To:
References: