And I only got into micros 'cos I saw the 1802 manual had a
chapter all about SEX. Yes, SEX in capitals. I thought, 'allo,
that's the go. Sucked in badly........oooh, bugger

Seriously, I do not see where Prof Hawking is coming from.
There may well come a day when computer systems are
self-aware and Prof Hawking turns out to be an unsung
prophet of doom. Maybe a HAL 9000 scenario is possible
("Open the pod door, HAL") on an individual basis, but I
think/hope it unlikely that on a global scale computers could
do physical harm. In which case, who's really in charge?
It would be unpalatable to go back to living without computers,
but if they proved to be such a scourge then turn them off,
pick up the pieces and learn a lesson

> Seriously, I do not see where Prof Hawking is coming from.
> There may well come a day when computer systems are
> self-aware and Prof Hawking turns out to be an unsung
> prophet of doom. Maybe a HAL 9000 scenario is possible
> ("Open the pod door Hal") on an individual basis but I
> think/hope unlikely that on a global scale computers could
> do physical harm. In which case who's really in charge ?
> It would be unpalatable to go back living without computers
> but if they proved to be such a scourge then turn them off,
> pick up the pieces and learn a lesson

I agree that Hawking has lost the plot, if the version of this in our
local newspaper is true to his comments.
He is not alone in this idea though - AFAIR one of the world's foremost
AI authorities says that this is a real risk and that development in the
field should be curtailed for this reason!

However ludicrous this may seem, one should not discount EFFECTIVE
self-awareness in computers in the foreseeable medium term, all other
things being equal. The article cited Hawking citing a doubling of
capability every 18 months, which is "Moore's law" and originally
related only to the number of transistors in a design. Extrapolating it
from the earliest PC days for hard disks, RAM and processor power
produces some fairly good straight lines.
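The 18-month doubling is easy to play with numerically. A minimal sketch (the starting figure and time span are arbitrary illustrative assumptions, not data from the article):

```python
# Project a quantity that doubles every `doubling_months` months.
# The starting value and span below are made-up illustrative numbers.

def capability(start, years, doubling_months=18):
    """Return `start` scaled by one doubling per `doubling_months`."""
    return start * 2 ** (years * 12 / doubling_months)

# e.g. 1 (arbitrary unit) of RAM, projected 12 years out:
print(capability(1, 12))  # 8 doublings -> 256.0
```

Eight doublings in 12 years is a factor of 256; plotted on log paper, that is exactly the kind of "fairly good straight line" the extrapolations produce.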

Where a "self-aware" computer could do inestimable damage is in the
first part of the Terminator scenario, where a computer makes decisions
for "its own benefit" which lead to the use of nuclear weapons or
whatever is the flavour-of-the-day equivalent. It seems to me that this
is just the sort of area in which some of the most advanced computer
systems are liable to be in use. IF a self-serving, self-aware computer
system ever did come to fruition, more by accident than on purpose, then
the odds are that it would have internet access. Once it has that,
anything that can be done via the net is within its grasp - a super
hacker. If a nuclear weapons delivery system is accessible from the net,
no matter how complex the path to it, then there will be "trouble" (to
switch computer/movie metaphors).

Lest something that at least passes as apparent self-awareness seem like
extreme SciFi, consider what it would take to write a program that had
at least rudimentary aspects of it. Many on this list could write
programs which have traces of apparent self-awareness - probably down at
the sub-1-year-old human level in most cases (no reflection on the
human-level capabilities of the participants :-) ). We would immediately
recognise interchanges with such "beings" as not being with a self-aware
being, as we know that babies with rudimentary concepts of self-awareness
do not use computers. If there were some means for babies to convey
their minimal capabilities via the internet, and if it were normal for
them to do this, then we would probably easily see programs passing the
Turing test at this level. (I sleep, wake, hunger, feed, defecate, cry,
sleep, ... therefore I am.) From there the path to APPARENT 2-year-old
self-awareness seems continuous, albeit very very steep and complex. But
maybe not. Apparent self-awareness and actual self-awareness are very
likely to be separated by a bottomless chasm. But apparent self-awareness
and vested apparent self-interest should be enough to start a nuclear
war.
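The sub-1-year-old loop above is little more than a fixed cycle of states. A hedged sketch (the state names and ordering are my own illustration; nobody is claiming this is self-awareness, only the apparent behaviour):

```python
# "I sleep, wake, hunger, feed, defecate, cry, sleep, ... therefore I am"
# rendered as a trivial fixed-cycle state machine.

CYCLE = ["sleep", "wake", "hunger", "cry", "feed", "defecate"]

def run(steps):
    """Return the 'baby' state reported at each step of its fixed cycle."""
    return [CYCLE[i % len(CYCLE)] for i in range(steps)]

print(run(8))
# ['sleep', 'wake', 'hunger', 'cry', 'feed', 'defecate', 'sleep', 'wake']
```

A Turing test at this level would need only the cycle plus some noise; the bottomless chasm is everything this sketch leaves out.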

> Asimov's prime directive is, ISTR, "do not harm humans",
> and I think one of the others is self-preservation, but not
> at the expense of breaking the prime directive
>
Yes, but if you remember, when the human asked the robot "give me your
hand", it disconnected its arm and gave it. The man took it and struck
his enemy (who was also a human) with it. So the First Law wasn't
broken, but a man died.

<quote>
The Three Laws of Robotics:
1. A robot may not injure a human being or, through in-
action, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings
except where such orders would conflict with the First
Law.
3. A robot must protect its own existence as long as such
protection does not conflict with the First or Second
Law.
</quote>

Well, that's Asimov's 2 cents, or at least what his "United States
Robots and Mechanical Men, Inc." thought.

m.craig wrote:
>
> <quote>
> The Three Laws of Robotics:
> 1. A robot may not injure a human being or, through in-
> action, allow a human being to come to harm.
> 2. A robot must obey the orders given it by human beings
> except where such orders would conflict with the First
> Law.
> 3. A robot must protect its own existence as long as such
> protection does not conflict with the First or Second
> Law.
> </quote>

Ha! You forgot law number 0.5:

A robot must use a PIC, and in so using a PIC, must
use the least possible code words to perform a
given task (preferably 12 or less code words), and
use the minimum hardware to perform that task, making
all possible use of Microchip datasheets, so that
maximum performance with minimum hardware is obtained
within the official Microchip specification...

Asimov was well aware of law number 0.5, but chose to
leave it out of the book as Microchip were not in
business at that time. Even the great visionaries
had to respect the space/time continuum... ;o)
-Roman

Jinks wrote:
>One of the signs of a "life-form" being self-aware is self-
>preservation. Even the most brainless bug will try to stay
>alive, even if it doesn't know, (in our terms), why
>

Jinks, do you really think that because something has some
built-in sensors/reflexes that allow it to evade predators
really means it is self-aware and has an urge for self-
preservation?

Build a mechanical bug that seeks out the darkest place it
can find whenever its CdS photoreceptor perceives a large
change in light intensity not caused by its own movement.
Is it self-aware?

Grey Walter did this with a photocell and a 1-celled brain
[ie flip-flop] 50 years ago.
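That Walter-style reflex really is only a comparison and a branch. A hedged sketch (the 1-D "world" of light levels, the threshold and the sensor model are all illustrative assumptions):

```python
# Dark-seeking reflex: if the light on the bug's own cell changes by more
# than `threshold`, scuttle one step toward the darker neighbour.
# No model of "self" appears anywhere in the rule.

def react(world, pos, prev_light, threshold=10):
    """Return the bug's new position after one sensor reading."""
    if abs(world[pos] - prev_light) <= threshold:
        return pos                                  # no startling change
    left = world[pos - 1] if pos > 0 else world[pos]
    right = world[pos + 1] if pos < len(world) - 1 else world[pos]
    nxt = pos - 1 if left < right else pos + 1      # head for the dark
    return min(max(nxt, 0), len(world) - 1)         # stay inside the world

world = [90, 40, 80, 20, 60]
print(react(world, 2, 10))  # light jumped 10 -> 80, darker side is right: 3
```

The point stands: the bug's "evasion" is one threshold test and one branch, which is exactly why it says nothing about self-awareness.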
=============

Other matters:

Hawking arrives a little late to the show. AI types have
been discussing the idea of robots taking over for years.

Hans Moravec wrote a book 10 years or so ago called "Mind
Children", where he actually prophesied that the "next"
step in e___lution [word blanked out on purpose] was for
robotic inventions of man to take over in man's place.

Francis Crick is a similar interloper [to Hawking]. Once
he got the Nobel for DNA, he started studying neurophysiology,
and decided he was the reigning expert on same:

The Astonishing Hypothesis: The Scientific Search for the
Soul - Francis Crick, 1994

The worst of the crew is physicist Roger Penrose - who once
studied quantum mechanics. He decided that human "consciousness"
is a quantum phenomenon and started looking for a "quantum
something" in the brain - and came up with microtubules being
quantum, and thus being the seat of consciousness [microtubules,
BTW, transport nutrients/etc from cell bodies out to the ends
of long cell processes, and also form the basis of cell growth
cones]:

A few other scientists/nobelers like John Eccles in their old
age lost their grip on reality, and started trying to explain
science in terms of non-science [ie, personal belief]:

The Self and Its Brain, by John C. Eccles, Karl Popper 1993

Other people listen to these guys spouting in areas they know
nothing about, because once they all had something important
to say in something they did know something about. Apparently,
Hawking now joins the same club - unfortunately.

> >One of the signs of a "life-form" being self-aware is self-
> >preservation. Even the most brainless bug will try to stay
> >alive, even if it doesn't know, (in our terms), why
> >
>
> Jinks, do you really think that because something has some
> built-in sensors/reflexes that allow it to evade predators
> really means it is self-aware and has an urge for self-
> preservation?

Yes and no. Bugs are dimwits and often don't try to evade a predator;
aphids, for example, will let a ladybird walk right up and eat them
instead of running away and hiding. Perhaps they can't see that well,
and I think you'd find most bugs do the same (although the praying
mantis must need to stalk slowly for some reason). However, once
caught, bugs will flail around to try to escape, even though they have
no previous experience of being grabbed and eaten. But
whether that's self-awareness - probably not. Self-preservation -
I think so, whatever the mechanism. Flies are an obvious example
if you've ever tried to swat one. You could extend this right through
the food chain. Birds become inured to scarecrows, animals are
domesticated, even though at one time in the animal's life both
scarecrows and humans posed possible threats. And our own
species has the saying "familiarity breeds contempt", which in
this context could be taken as meaning that although something
becomes familiar and we are comfortable or tolerant of it, such
as electricity, it is still dangerous. Cars, water, fire, etc etc are
a part of our lives and yet many people are killed by them. And
we consider ourselves self-aware and self-preserving. You could
extend this debate in many directions

Agree with your other point about experts. It's akin to celebrity
endorsements in a way. What film star would you buy a PIC off ?
(Double-gakkk or AJ ?)

Jinx wrote:
.........
However, once
>caught, bugs'll flail around to try and escape, even though they have
>no previous experience of being grabbed and eaten before. But
>whether that's self-awareness - probably not. Self-preservation -
>I think so, whatever the mechanism. Flies are an obvious example
>if you've ever tried to swat one.
..........

More likely, built-in reflexes. Point is, you can make a robobug
with a couple of sensors that responds identically. In the movie
AI, it is pretty easy to see how one could create Pinocchio's
"love" circuit - basically little more than a programmed reflex.
The really hard part with Pinocchio would have been giving him the
same level of "consciousness" as the teddy bear had; the easy
part would have been the love reflex - at least the way Spielberg
portrayed it [solid 'C' level forgettable work].
===========

>Agree with your other point about experts. It's akin to celebrity
>endorsements in a way. What film star would you buy a PIC off ?
>(Double-gakkk or AJ ?)
>

Well, now that "I" am a world-renowned expert [speaking for
Hawking and the others], I can hold forth on anything and
people will take notice - no matter how ridiculous the
blathering is.

> There may well come a day when computer systems are
> self-aware and Prof Hawking turns out to be an unsung
> prophet of doom. Maybe a HAL 9000 scenario is possible
> ("Open the pod door Hal") on an individual basis but I

You mean like computers that upon the command "Open the pod door Hal"
respond with a small modal window that blocks any other possible
interaction and displays a message like: "You need to reboot the ship for
your command to take effect. Please click one: 1.Reboot 2.Cancel", with
the Cancel button being grayed ?

> One of the signs of a "life-form" being self-aware is self-
> preservation. Even the most brainless bug will try to stay
> alive, even if it doesn't know, (in our terms), why

Do lemmings count? How about over-multiplying yeast (e.g. the kind
that makes bread, beer or wine when left to its own devices)?

It is very hard to tell whether something is self-aware if it decides
not to communicate with you. Maybe suicide is a part of its tradition?

In Conway's Game of Life, all the configurations that lead to a stable
population involve suicide by some cells (or their being killed by
their neighbours of the same kind). In a modified Conway universe the
cells also age and die of old age; the dynamics of the universe are
then much slower. It would be interesting to pitch the two kinds of
cells against each other and see who wins in the short and in the long
term.
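For anyone who wants to try the experiment, here is a minimal sketch of such a modified Conway universe (the `max_age` cutoff is my own illustrative assumption, not a rule from any published variant):

```python
from collections import Counter

# One generation of Life over a sparse grid, with optional death by old age.
# live: dict mapping (x, y) -> age of the cell at that coordinate.

def step(live, max_age=None):
    """Return the next generation as a {(x, y): age} dict."""
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    nxt = {}
    for cell, n in neigh.items():
        if n == 3 and cell not in live:
            nxt[cell] = 0                      # birth
        elif n in (2, 3) and cell in live:
            age = live[cell] + 1
            if max_age is None or age < max_age:
                nxt[cell] = age                # survival, unless too old
    return nxt

# A blinker oscillates forever in plain Life:
blinker = {(0, 0): 0, (1, 0): 0, (2, 0): 0}
print(sorted(step(blinker)))  # [(1, -1), (1, 0), (1, 1)]
```

Pitching the two kinds of cells against each other would just mean tagging each cell with its kind and applying the aged rule to one population and the plain rule to the other.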

> It is very hard to tell whether something is self-aware if it
> decides not to communicate with you

I think that's the heart of the matter. How can you tell what's
going on in a bug's mind ? A Venus Flytrap is an example
of apparent smarts that is really just an elaborate biological
hydraulic system

> This is a very large can of worms imho and I won't touch it

It's helpful to discuss though as a background to AI in even
a basic bot. If you want to give the bot some smarts at some
point you have to stop with the engineering and take a look
at imprinting it with some physiological behaviour

Jinx wrote:
>> It is very hard to tell whether something is self-aware if it
>> decides not to communicate with you
>
>I think that's the heart of the matter. How can you tell what's
>going on in a bug's mind ? A Venus Flytrap is an example
>of apparent smarts that is really just an elaborate biological
>hydraulic system
>
>> This is a very large can of worms imho and I won't touch it
>
>It's helpful to discuss though as a background to AI in even
>a basic bot. If you want to give the bot some smarts at some
>point you have to stop with the engineering and take a look
>at imprinting it with some physiological behaviour
>

BTW, believe it or not Mr Ripley, I was reading parts of the
following book a couple of weeks ago, which discussed many of
these same issues. It described many of the experiments that
have been done in this area, and alternate interpretations of
the results - [in case you really want to take the plunge
into whimsy - IOW, psychology]:

If a lion could talk : animal intelligence and the evolution of
consciousness / Stephen Budiansky, 1998.

Personally, I happen to think that everything in the animal
kingdom has a certain "amount" of consciousness - with the
amount varying from very little up to what humans have. Kind
of like looking thru a shutter where humans are at F/1 and
worms/roaches about F/16. Rough analogy.

OTOH, I also think it is a waste of time to worry about this
overly much in the here and now. Just get on with the job of
building what we can now, and go from there - one step at a time.
==================

BTW, if you want to check out Grey Walter's robot tortoises
from the 1950s, look here [cool]:

Some of the apparent tests for self-awareness involve whether an animal
is aware of its own existence. One such famous test involves showing an
animal its reflection in a mirror, then putting a spot of dye actually
on the animal. If the animal successfully attempts to remove the spot
from its own body, then it has a degree of self-awareness at least
approaching that of a human infant.

Of course, it is difficult to get an insect to observe its own
reflection ;-)

Dan Lloyd wrote:
>Some of the apparent tests for self-awareness involve whether an animal
>is aware of its own existence. One such famous test involves showing an
>animal its reflection in a mirror, then putting a spot of dye actually
>on the animal. If the animal successfully attempts to remove the spot
>from its own body, then it has a degree of self-awareness at least
>approaching that of a human infant.
>
>Of course, it is difficult to get an insect to observe its own reflection

Dan, that experiment was also in the book I cited - "If a Lion ..."
Certainly, one of the more clever tests. But as Peter noted, this
is all a can of worms. Was the monkey actually "self-aware" or
did it just realize that something was different from what was
stored in its memory banks? Psychologists have about painted
themselves into one big corner, I think, with trying to come up
with "hard and rigorous" definitions for things - ie, trying to
emulate physics.

I have a nice analogy for you and Jinx ... - solve this:

DNA : life :: brain : ????

[consciousness, mind, soul, ... ??]

If you get that one, now ask: DNA --???--> life

These problems are the big ones - what is life, mind, soul,
how was the universe created, etc ? --> "solutions" prolly not
in our lifetimes. Luckily, we get to play with simple things
like infinite loops and stack overflows in our code.

Here's another interesting experiment in a book I am reading:

You see a picture of a cow looking at a picture of several other
"life-size" cows, and several scientists nearby holding writing
pads. The experiment is obvious. Is the cow looking at the other
cows or just pointing in that direction - possibly because of
perception of known colors? Can the cow understand that it is
seeing a picture of other cows, or does the cow think they
actually are other cows?

What really makes this example cool is the "big joke" it is playing
on the observer [ie, person looking at the picture]. You think
immediately, it is an experiment of trying to ascertain whether
a cow can perceive other cows.

But no - in fact, "you" are the test subject here. It is really
a "picture" of several cows, and not a real experiment. It only
has meaning due to what is stored in your memory banks, and not
in real-time reality.

Likewise, regards the cow in the picture - is it really thinking
it is seeing other cows, or just playing back something from its own
memory banks?

Oh well - what is life given that DNA is just a molecule? - it never
ends - back to electronics in the here and now.

Well, it is quite simple to fool the brain as it does not bother to
comprehend certain material. I read an article in New Scientist where they
had an actor approach people and ask them for directions. During the
conversation, some "workmen" passed between the two parties with an opaque
sheet. At the same time, the actor was replaced with another person. I seem
to remember that 40% of people did NOT notice that the person they were
talking to had changed. When people did notice, it was usually because both
actors were of their own age and/or social grouping. If the "victim" was
young......but both actors were old, the brain simply classifies the person
as "old person" and does not really bother to do much more than
that.....which you could argue is like your point of relying on your memory
banks to produce a good proportion of the environment you *believe* that
you are currently interacting with.

This also occurred with a gorilla-suited man crossing a basketball
court......when people were shown a video of this and told to count the
ball bounces, most did not notice the suited man. Only on the replay, where
they were free from counting did they notice......then they said they could
not believe that it was the same video.

So, you could argue that most of what you experience from day to day is
simply what the brain is creating from memory mainly because it does not
have enough processing power to create every scene that it sees. Scale this
down and you might have "simpler" animals only being able to process very
limited amounts of visual information - they actually have to use their
imagination or memory much more to recreate what they are seeing (although
they are not, as they probably never saw it in much detail in the first
place). (Then again, do you need to have vision to be self-aware? I think
not)

I wouldn't like to get into conjectures on what is "mind" and "soul"
because it will probably end up with a debate on religion. I will say,
though, that "lower" forms of life (say cats/dogs/pigs etc) can be said
to have a distinct personality (arguably a soul).....as life becomes
more simple, you could argue that "personality" begins to approach
programmed responses to stimulus (maybe these life forms do not have a
soul?). On that basis, do humans just have a more complicated set of
programmed responses?

As usual, this is just an open ended discussion and no one can ever be
correct without accepted proof or evidence - there can only be conjecture.
Still, that is half the fun.

Dan Lloyd wrote:
>Hi,
>
>Well, it is quite simple to fool the brain as it does not bother to
>comprehend certain material. I read an article in New Scientist where they
>had an actor approach people and ask them for directions. During the
>conversation, some "workmen" passed between the two parties with an opaque
>sheet. At the same time, the actor was replaced with another person. I seem
>to remember that 40% of people did NOT notice that the person they were
>talking to had changed.
........

Well, there "are" many aspects to the issue. This one is mainly about
paying attention.
=============

>
>This also occurred with a gorilla-suited man crossing a basketball
>court......when people were shown a video of this and told to count the
>ball bounces, most did not notice the suited man. Only on the replay, where
>they were free from counting did they notice......then they said they could
>not believe that it was the same video.
>

This one is also about paying attention, plus the fact that, in many
ways, people can only do one thing at a time - specifically things
like thinking >1 thought or holding >1 conversation at a time.

The fact that you can drive, listen to the radio, and talk on the
cell phone simultaneously just means some things are committed to
memory/habit [driving] while others cannot be [carrying on a realtime
conversation]. After all, you certainly could not do all of these
things simultaneously when you were first learning how to drive.
Overload.

Plus you are not always "consciously" paying attention to everything
you are doing - rather things come and go. If a dog runs in front of
the car, then you stop talking and pay attention to the driving/etc,
then go back to talking after the event is over.

Interesting that, despite our having trillions of parallel neuronal
computations, we can really only give our attention to one thing at
a time. [this is prolly a "crucial" point too].
=============

>So, you could argue that most of what you experience from day to day is
>simply what the brain is creating from memory mainly because it does not
>have enough processing power to create every scene that it sees.

Actually, I think most of what you "experience" is from direct sensory
input bathing the sensory areas of the cortex, but it is simultaneously
compared against memory to determine whether it makes any sense or not
- or is a new and unique experience or not.

Related to this, I am reading a book called "The Creative Loop"
by Erich Harth, where he talks about "blindsight" among other things.
In people with a damaged visual cortex, they presented lights/objects
in the blind areas of the visual field, and the people were actually
able to both point to and recognize the objects, even though they
said they "saw" nothing.

This is not a miracle, but rather due to the fact that projections from
the eye go to several parts of the brain. What "is" important is
that the subjects had no "conscious" awareness of the objects even
though the information was received. [so this says something about how
consciousness actually works].

This brings up the whole idea that 90% or so of what the brain does
is totally below the level of consciousness or awareness. So maybe
what this says is that "consciousness" itself isn't really all that
magical a thingie. It is really just the ability to sense new
input [visual, tactile, whatever] and compare it against stored
memories, and to simultaneously "realize" you are doing it.

At any rate, Harth makes a case that this last is really what
consciousness/awareness amounts to. There is really not a "little
man" [ie, homunculus] inside your brain looking at a movie screen.
==============

Scale this
>down and you might have "simpler" animals only being able to process very
>limited amounts of visual information - they actually have to use their
>imagination or memory much more to recreate what they are seeing (although
>they are not, as they probably never saw it in much detail in the first
>place). (Then again, do you need to have vision to be self-aware? I think
>not)
>

Actually, I think it is exactly the opposite.

Lower animals have essentially "no" stored memories, but simply
process input in realtime, in the limited way allowable by their
limited neuroanatomy. They simply react and react and react, with
very little "comparison" or "reflection" to past activities taking
place. Some animals do have "limited" learning abilities, and in
those cases there is some comparison, and the amount is probably
graded in relation to where the animal is in the animal hierarchy.
===============

>I wouldnt like to get into the conjectures on what is "mind" and "soul"
>because it will probably end up with a debate on religion. I will say,
>though, that "lower" forms (say cats/dogs/pigs etc) of life can be said to
>have a distinct personality (arguably a soul).....as life becomes more
>simple, you could argue that "personality" begins to approach programmed
>responses to stimulus (maybe these life forms do not have a soul?). On the
>basis of that, do humans just have a more complicated set of programmed
>responses?
>

As mentioned before, I think there is a continuum of consciousness,
awareness, whatever you want to call it, in the animal kingdom. The
level is [roughly] analogous to looking at the world thru a shutter
with different F/stops. Lower animals have a tiny view, we have a
relatively large view.

As Harth says, the ability to manipulate realtime input along many
modalities, and compare it in realtime to "stored" memories is the
key to why we are so intelligent compared to lower animals, which
do not have the same anatomy. They lack the large parallel sensory
bandwidth, as well as the large central store of past memories, as
well as the ability to manipulate and compare same, and take
successful actions. With them it is more a sense-react hardwired
pathway. Apparently, all 3 capabilities develop roughly in parallel
as you go up the ladder in the animal kingdom.
========

>As usual, this is just an open ended discussion and no one can ever be
>correct without accepted proof or evidence - there can only be conjecture.
>Still, that is half the fun.

> ==============
>
>
> Scale this
> >down and you might have "simpler" animals only being able to process very
> >limited amounts of visual information - they actually have to use their
> >imagination or memory much more to recreate what they are seeing (although
> >they are not, as they probably never saw it in much detail in the first
> >place). (Then again, do you need to have vision to be self-aware? I think
> >not)
> >
>
> Actually, I think it is exactly the opposite.
>
> Lower animals have essentially "no" stored memories, but simply
> process input in realtime, in the limited way allowable by their
> limited neuroanatomy. They simply react and react and react, with
> very little "comparison" or "reflection" to past activities taking
> place. Some animals do have "limited" learning abilities, and in
> those cases there is some comparison, and the amount is probably
> graded in relation to where the animal is in the animal hierarchy.
> ===============

Here I would disagree. I have in my kitchen a water snail, which has been around for over a year and has learned to distinguish me from others in the house and beg for food. Snails don't like being up at the surface - it exposes them to birds - but this snail comes up. I recently returned from a two-week vacation, and the silly thing came up out of the water to the lip of the bowl to beg. I had the definite impression it had missed me and recognized me when I returned.

Although the thing has eyes, sort of, I believe it perceives by sound/vibration, touch, and smell more than sight alone.

>
>
> >I wouldnt like to get into the conjectures on what is "mind" and "soul"
> >because it will probably end up with a debate on religion. I will say,
> >though, that "lower" forms (say cats/dogs/pigs etc) of life can be said to
> >have a distinct personality (arguably a soul).....as life becomes more
> >simple, you could argue that "personality" begins to approach programmed
> >responses to stimulus (maybe these life forms do not have a soul?). On the
> >basis of that, do humans just have a more complicated set of programmed
> >responses?
> >
>
> As mentioned before, I think there is a continuum of consciousness,
> awareness, whatever you want to call it, in the animal kingdom. The
> level is [roughly] analogous to looking at the world thru a shutter
> with different F/stops. Lower animals have a tiny view, we have a
> relatively large view.
>
> As Harth says, the ability to manipulate realtime input along many
> modalities, and compare it in realtime to "stored" memories is the
> key to why we are so intelligent compared to lower animals, which
> do not have the same anatomy. They lack the large parallel sensory
> bandwidth, as well as the large central store of past memories, as
> well as the ability to manipulate and compare same, and take
> successful actions. With them it is more a sense-react hardwired
> pathway. Apparently, all 3 capabilities develop roughly in parallel
> as you go up the ladder in the animal kingdom.
> ========

Well, my snail appears to have stored memories, and the ability to manipulate me into giving it food. This behaviour was learned, it is definitely not hardwired. It clearly uses multimodal sensory inputs, and has available a number of malleable behavioural options. The sense-react pathways do not appear much different from my learning how to work the wipers on a rental car. I don't really see much difference except in scale. I think what distinguishes humans is the hardware and software for a lot of symbolic processing and manipulation, and extreme behavioural plasticity. In other words, we have a greater capacity to learn and create new responses than the competition.

>
> >As usual, this is just an open ended discussion and no one can ever be
> >correct without accepted proof or evidence - there can only be conjecture.
> >Still, that is half the fun.
>
> Right on.
>
> [BTW, this does all have something to do with designing bots].
>
> - dan
> ===========
>
> --
> http://www.piclist.com hint: The list server can filter out subtopics
> (like ads or off topics) for you. See http://www.piclist.com/#topics
>
>
>

Dan:
> > Lower animals have essentially "no" stored memories, but simply
> > process input in realtime, in the limited way allowable by their
> > limited neuroanatomy. They simply react and react and react, with
> > very little "comparison" or "reflection" to past activities taking
> > place. Some animals do have "limited" learning abilities, and in
> > those cases there is some comparison, and the amount is probably
> > graded in relation to where the animal is in the animal hierarchy.
> > ===============

Alice:
> Here I would disagree. I have in my kitchen a water snail, been around
for over a year, who has learned to distinguish me from others in the house
and beg for food. Snails don't like being up at the surface, it exposes
them to birds, but this snail comes up. I recently returned from a two-week
vacation, and the silly thing came up out of the water to the lip of the
bowl to beg. I had the definite impression it had missed me and recognized
me when I returned.
>
> Although the thing has eyes, sort of, I believe it perceives by
sound/vibration, touch, and smell more than sight alone.

I agree that "lower" animals can have abilities which are not normally
attributed to them.
Over the years I have frequently been told that goldfish have an attention
span of under 10 seconds and can "survive" their limited environment because
of this.
I have a large outdoor tank (like a rectangular pond with a glass wall in
one end) which contains a relatively small number of goldfish. At one stage,
for reasons irrelevant to this discussion, I produced an area attached to
the tank which contained water which heated during the day relative to the
rest of the tank. Access to this area was via a 3-dimensional path which was
only a few fish diameters wide and which had several possible entrances and
exits. Something like a simple fish maze although that was not the object.

While it was possible for the area to be discovered by "random walk" (with
or without feet :-) ) it was a fairly unlikely entrance to stumble across.

For several days after implementation no fish entered the area. Then one
fish did. Thereafter it returned daily. For quite some days (1 week plus?)
it was the only fish that did so. Then another did and then in a relatively
short period (a day or two) ALL the fish did. After that EVERY day ALL the
fish would enter the area when it was warm, spend most of the day there and
then leave when it cooled down.

It seems highly likely that

- The first fish found the area randomly.
- Thereafter it returned daily purposefully.
- The other fish learned from the first fish.
- Thereafter they all returned daily purposefully.
- The fish preferred the warmer area.
- They chose to go to it when they knew it would be warm and left when they
knew the main tank would be more desirable.
- They develop and retain memories with spans of 1 day plus and probably
rather longer.

There are still many explanations of the fine mechanism.
eg the "following" fish may be working on sensory detection of their
fellows.
"Follow the leader" pertains - this seems unlikely as they do not do so in
the main tank.


> > Over the years I have frequently been told that goldfish
> > have an attention span of under 10 seconds

I'm sure my fish recognise me (don't think the snails do -
perhaps Alice has some empathy with molluscs). Each
morning they hover with their tiny hungry faces pressed
up to the glass watching me eat my toast at the bench.
They know as soon as I've finished they get the crumbs.
Ditto with the sparrows. They congregate outside the
window waiting for the toast crust to go up onto the
shed roof. Now a bot that did this waiting for a reward
would be something. What could you reward a bot with
that it would "appreciate" ?

.......
>> Some animals do have "limited" learning abilities, and in
>> those cases there is some comparison, and the amount is probably
>> graded in relation to where the animal is in the animal hierarchy.
>> ===============

>Here I would disagree. I have in my kitchen a water snail, been around
for over a year, who has learned to distinguish me from others in the
house and beg for food.
............

>> They lack the large parallel sensory
>> bandwidth, as well as the large central store of past memories, as
>> well as the ability to manipulate and compare same, and take
>> successful actions. With them it is more a sense-react hardwired
>> pathway. Apparently, all 3 capabilities develop roughly in parallel
>> as you go up the ladder in the animal kingdom.
>> ========

>Well, my snail appears to have stored memories, and the ability to
manipulate me into giving it food. This behaviour was learned, it
is definitely not hardwired.
.........
It clearly uses multimodal sensory inputs, and has available a number
of malleable behavioural options. The sense-react pathways do not
appear much different from my learning how to work the wipers on a
rental car. I dont really see much difference except in scale.
.............
In other words, we have a greater capacity to learn and create
new responses than the competition.
>

Dear E/L,

I don't think you have said anything here counter to what I said
previously.

Good example. How about the following explanation ??????
[doesn't require any learning whatsoever, BTW].

>
>It seems highly likely that
>
>- The first fish found the area randomly.

it sensed the warmth flowing out of the region "immediately",
but it took 2 days for it to randomly find its way in.

>- Thereafter it returned daily purposefully.

it went towards the warm area and then followed the "pheromone"
trail it had laid down the day before.

>- The other fish learned from the first fish.

they followed the first fish's pheromone trail.

[one might assume fish lay down pheromones of different
chemical composition for different instances - acidic for bad
places, nicely sweet smelling and alkaline for warm cosy places,
etc - Alice's slugs or their cousins, BTW, lay down a heavy
slime trail where they travel]

>- Thereafter they all returned daily purposefully.

they all followed their own pheromone trails laid down the day
before.

>- The fish preferred the warmer area.

instinct - ie, hard wiring.

>- They chose to go to it when they knew it would be warm and left when they
>knew the main tank would be more desirable.

they didna know nuttin - they simply move towards the temperature
they feel most comfortable in.

>- They develop and retain memories with spans of 1 day plus and probably
>rather longer

the pheromones stay in place at least until they come and
re-do the spots the next day.

>There are still many explanations of the fine mechanism.
>eg the 'following" fish may be working on sensory detection of their
>fellows.
>"Follow the leader" pertains - this seems unlikely as they do not do so in
>the main tank.
>

they only lay down pheromones when it is advantageous to do so.
============
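[Dan's pheromone account is what the robotics crowd calls stigmergy: coordination through marks left in the environment, with no memory needed in the animal at all. A quick sketch - every name and number here is invented by me, not taken from any fish study - showing how one lucky discovery plus a shared trail reproduces Russell's "one fish, then all the fish, every day" observation:]

```python
import random

def simulate(days, n_fish, decay=0.5, deposit=1.0, find_chance=0.05):
    """Stigmergy sketch: the fish have no memory; the trail does the work.

    Each day a fish enters the warm area if it either stumbles in at
    random (find_chance) or detects a trail left by any earlier visitor.
    Every visitor re-marks the trail, so a single discovery snowballs
    until the whole tank commutes daily.
    """
    trail = 0.0                          # pheromone strength at the entrance
    history = []
    for day in range(days):
        visitors = 0
        for fish in range(n_fish):
            follows = trail > 0.1        # detection threshold
            lucky = random.random() < find_chance
            if follows or lucky:
                visitors += 1
        trail = trail * decay + visitors * deposit   # evaporate, then re-mark
        history.append(visitors)
    return history

random.seed(2)
print(simulate(days=14, n_fish=8))
```

[The point is the same as Dan's: the "learning" lives in the water, not in the fish - the day after any fish visits, the trail is strong enough that every fish follows it.]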

Salmon, of course, swim upstream 1000s of miles to the place of
their birthing several years after leaving same and going out to
the open sea. They apparently do this by being able to "taste"
the unique chemicals in the water and follow it upstream - or
some similar mechanism.

[following a poop trail 10 feet in a tank may be a slightly
easier job].

> Quiet Joe wrote:
> Now a bot that did this waiting for a reward
> >would be something. What could you reward a bot with
> >that it would "appreciate" ?
> >
>
> 4 amp-hr of battery charge.

Remember those pocket animals you had to care for that
made basket cases out of so many kids ? Can't recall the
name, but they were banned from schools everywhere. How
about a bot like that (my friend's a psychologist - trying to
drum up some biz for her). There might be something
like it on the toyshop shelves, but not PIClist designed

> >- The fish preferred the warmer area.
>
> instinct - ie, hard wiring.
>
> >They chose to go to it when they knew it would be warm and left
> > when they knew the main tank would be more desirable.
>
> they didna know nuttin - they simply move towards the temperature
> they feel most comfortable in.

<snip>
>Now a bot that did this waiting for a reward
> would be something. What could you reward a bot with
> that it would "appreciate" ?

A can of oil ?, A new battery?, Being "plugged" into a bot of the other sex
?? ;o)

Regards,

Kat.


Jinx wrote:
>> >- The fish preferred the warmer area.
>>
>> instinct - ie, hard wiring.
>>
>> >They chose to go to it when they knew it would be warm and left
>> > when they knew the main tank would be more desirable.
>>
>> they didna know nuttin - they simply move towards the temperature
>> they feel most comfortable in.
>
>How can you tell ? Fish are cold-blooded
>

Exakly, Mr. Auckland CityBoy. If you lived in the country, you
would know that first thing every morning all the warm- and
cold-blooded critters come out to warm up in the 1st rays of
the sun. Also, at nite, many animals become roadkill cause
after dark they go and sit on the warm [road] pavement. If
you live literally "immersed" in water, then you prolly
try to find some that suits your favor - as Russell said.

All potentially plausible explanations, but one could reduce much of human
behaviour to similar explanations.
Somewhere along the way Occam's Razor (== "keep it simple, stupid")
requires us to jump from "hardwiring is the likely solution" to "software is
driving this". Occam's razor doesn't, of course, always produce the correct
answer.

With the fish, while hardwiring COULD explain all this, I feel a bit of
learning makes more sense. When it comes to our two Burmese cats, the amount
of hard wiring required to explain away their behaviour would be vastly more
costly than a little software. Also, my wife is sure our cats are self
aware. I'm not certain. (I purr, therefore I am ?)


I find the simple explanation just as awe inspiring (in its simplicity and
effectiveness) as the complex one.

Nice work Dan. Mark Tilden, the Beam guys, and ...er... the MIT dude who is
building COG and who built Attila... can't remember his name, blow me away
by showing what wonderful, high level things can be done with very simple
behaviors and rules.

Russell McMahon wrote:
>All potentially plausible explanations, but one could reduce much of human
>behaviour to similar explanations.
>Somewhere along the way Occam's Razor (== "keep it simple, stupid")
>requires us to jump from "hardwiring is the likely solution" to "software is
>driving this". Occam's razor doesn't, of course, always produce the correct
>answer.
>

Russell, I think you are applying Occam's Razor backwards here. In this
case the "simple" solution actually would be hardwiring [ie, instinct],
and not learning. Since fish live in an aqueous environment, "smells"
and tastes and other chemical clues would play a huge role in their
perceptions of the world, and how they get on in it in their daily
lives. This is built-in [ie, instinct].

Think again of the salmon example I cited. How likely is it that
the salmon "remember" their route to the sea and how to get back
after 3 years in the deep briny? Zero. How likely is it they
use some built-in instincts [such as follow the chemical path]?
About 100%.
===============

>With the fish, while hardwiring COULD explain all this, I feel a bit of
>learning makes more sense. When it comes to our two Burmese cats, the amount
>of hard wiring required to explain away their behaviour would be vastly more
>costly than a little software. Also, my wife is sure our cats are self
>aware. I'm not certain. (I purr, therefore I am ?)
>

Your cats are so much higher on the animal intelligence scale wrt
fish that the situation "is" much different. In their case, learning
does play a huge role in everyday life, as with other higher
vertebrates, but this doesn't necessarily reflect backwards down to
fish. The entire point of the book I mentioned previously, "If a
Lion Could Speak ...." was how easy it is to come to the incorrect
conclusion about animal intelligence, and how difficult it is
to determine the real answers.

They had a program on TV last week about animal intelligence.
It turns out pigeons are very easy to train to discriminate
visual scenes. They actually "trained" some to tell the difference
between paintings by Picasso/etc [cubism] and Monet/etc
[impressionism]. The pigeons scored 100%.

Then they presented some very simple geometric figures, and
trained the pigeons again, and also gave the same test to a
class full of psychology college students. All of the students
got the cubism/impressionism test correct but most "failed"
the simple geometry test - in terms of getting the same answer
as the pigeons.

Turns out the pigeons were simply responding to "area" of the
shapes while the students were "reading" all sorts of meaning
into them [like Rorschach]. Shows how easy it is to misinterpret
the experiments.

OTOH, it is true that many lower animals possess "some" learning
capabilities. Alice's slugs may actually do this - to "some"
extent. But given that the entire brain of the slug only contains
100 or 200 neurons, its overall capabilities are always going
to be limited greatly - living at F/16.

At 09:17 AM 9/6/01 -0700, you wrote:
>I find the simple explanation just as awe inspiring (in its simplicity and
>effectiveness) as the complex one.
>
>Nice work Dan. Mark Tilden, the Beam guys, and ...er... the MIT dude who is
>building COG and who built Attila... can't remember his name, blow me away
>by showing what wonderful, high level things can be done with very simple
>behaviors and rules.
>

James, that was also the point in the link I cited of
Grey Walter's tortoises from the 1950s. The bottom of one
of the pages shows the tortoise's "brain" - a flip-flop
made out of 2 triode vacuum tubes - yet they could find
their charger devices, flee from or approach bright lights,
or go in the corner and "ruminate".
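[The whole tortoise control loop really does fit in a handful of lines. Here's a sketch of the Braitenberg-style wiring that gives you approach, flee, and charger-seeking from almost nothing - sensor scaling, thresholds, and the battery trick are my own invented numbers, not Walter's actual circuit values:]

```python
def tortoise_step(left_light, right_light, battery=1.0):
    """One control tick: return (left_wheel, right_wheel) speeds.

    Moderate light attracts (crossed wiring: each sensor drives the
    OPPOSITE wheel, steering toward the source). Glare repels (straight
    wiring steers away). A low battery raises the glare limit, so a
    "hungry" tortoise pushes right up to the bright lamp on its charging
    hutch - no memory, no map, just wiring.
    """
    total = left_light + right_light
    if total < 0.1:
        return (1.0, 0.6)                   # near-dark: arc around, scanning
    glare_limit = 2.5 if battery < 0.2 else 1.5
    if total > glare_limit:
        return (left_light, right_light)    # straight wiring: veer away
    return (right_light, left_light)        # crossed wiring: steer toward

# light on the left, moderate: right wheel spins faster, robot turns left
print(tortoise_step(0.8, 0.2))              # → (0.2, 0.8)
```

[Two triodes' worth of behavior, more or less.]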

> Lower animals have essentially "no" stored memories, but simply
> process input in realtime, in the limited way allowable by their
> limited neuroanatomy. They simply react and react and react, with
> very little "comparison" or "reflection" to past activities taking
> place. Some animals do have "limited" learning abilities, and in
> those cases there is some comparison, and the amount is probably
> graded in relation to where the animal is in the animal hierarchy.

> >> ===============
>
> >Here I would disagree. I have in my kitchen a water snail, been around
> for over a year, who has learned to distinguish me from others in the
> house and beg for food.
> ............
>
> They lack the large parallel sensory
> >> bandwidth, as well as the large central store of past memories, as
> >> well as the ability to manipulate and compare same, and take
> >> successful actions. With them it is more a sense-react hardwired
> >> pathway. Apparently, all 3 capabilities develop roughly in parallel
> >> as you go up the ladder in the animal kingdom.
> >> ========
>
> >Well, my snail appears to have stored memories, and the ability to
> manipulate me into giving it food. This behaviour was learned, it
> is definitely not hardwired.
> .........
> It clearly uses multimodal sensory inputs, and has available a number
> of malleable behavioural options. The sense-react pathways do not
> appear much different from my learning how to work the wipers on a
> rental car. I dont really see much difference except in scale.
> .............
> In other words, we have a greater capacity to learn and create
> new responses than the competition.
> >
>
>
> Dear E/L,
>
> I don't think you have said anything here counter to what I said
> previously.

au contraire, Mr Botboard,

I believe what Russ, Jinx and I are telling you is that these creatures DO appear to have memories and DO remember things and people, and act on this knowledge. Here is a proposed mechanism:

The brainstem of a creature controls autonomic functions, for example breathing, heartbeat, etc, and also is the location of the limbic system, which is the seat of emotion. Now an emotional response to something need not be complex, just a fear hormone, adrenalin or analog, and a happy hormone, serotonin or analog. These allow a neuron, or bit, to be set under a certain set of circumstances. If a certain dark loomy shape is associated with crumbs, happy bit is set. More happy, stronger response. See loomy shape, happy, come over. Likewise, light loomy shape hits creature, fear bit set, see light loomy shape, flee.

Snail episode: Snail sees dark loomy shape, happy bit set, starts coming up to surface. But, Snail was left with strangers for 2 weeks, and becomes hypervigilant for dark loomy shape. Snail overreacts to dark loomy shape, and as a result overshoots top of water and comes up out of tank.

This demonstrates a perfectly sound reason to have an emotion bit set, since it tends to reinforce a behaviour that was rewarded in the past, and allows for a gradation in response. An overall increase in a seeking activity when a reward is delayed would be a good thing, and should only be allowed to be overwritten (forgotten) with difficulty. This, by the way, is a classic 'training' response. This is not consistent with the idea that these creatures don't have memory, and shows that, to the contrary, even in tiny creatures, emotion-based memory is desirable and highly useful.
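[The happy-bit mechanism is simple enough to sketch in a few lines. This little model - learning rate, decay, and the hunger term are all numbers I made up, not measured on any actual mollusc - reproduces both the training and the overshoot-the-bowl episode:]

```python
class EmotionBit:
    """The proposed mechanism, roughly: a graded stimulus-reward
    association that strengthens on reward, decays only slowly,
    and whose drive RISES while an expected reward is overdue.
    """

    def __init__(self, learn=0.3, forget=0.01):
        self.happy = 0.0        # association strength, 0..1 (the "happy bit")
        self.hunger = 0.0       # grows while the reward fails to arrive
        self.learn = learn
        self.forget = forget

    def see_shape(self, rewarded):
        if rewarded:
            # reinforce: push the association toward 1, reset the craving
            self.happy += self.learn * (1.0 - self.happy)
            self.hunger = 0.0
        else:
            # reward delayed: memory fades only slowly, seeking ramps up
            self.happy -= self.forget * self.happy
            self.hunger = min(1.0, self.hunger + 0.1)

    def response(self):
        # graded begging; a delayed reward makes the snail OVERreact
        return self.happy * (1.0 + self.hunger)

snail = EmotionBit()
for _ in range(10):
    snail.see_shape(rewarded=True)    # daily crumbs: association builds
trained = snail.response()
for _ in range(14):
    snail.see_shape(rewarded=False)   # two weeks with strangers
print(snail.response() > trained)     # → True: hypervigilant, up over the lip
```

[Hard to forget, quick to relearn, and stronger begging after an absence - exactly the training response described above.]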

Of course it ended - I bought it next month. I just used it to
come back and tell you

BTW, NZ scientists did actually invent a machine that could
predict the future, but the government confiscated it. Which
didn't really upset the scientists - a spokesman said they
knew that would happen

Hard to see why it strains credulity that "simple" animals
exhibit behavior implying memory, of a sort - These "simple"
nervous systems are far more complex than PICs, which possess
memory, likely for the same reason: Utility.

Alice.C wrote:
>Mr.Botboard said:
>
>> Lower animals have essentially "no" stored memories, but simply
> > process input in realtime, in the limited way allowable by their
> > limited neuroanatomy. They simply react and react and react, with
> > very little "comparison" or "reflection" to past activities taking
> > place. Some animals do have "limited" learning abilities, and in
> > those cases there is some comparison, and the amount is probably
> > graded in relation to where the animal is in the animal hierarchy.
>

>> Dear E/L,
>>
>> I don't think you have said anything here counter to what I said
>> previously.
>
>
>au contraire, Mr Botboard,
>
>I believe what Russ, Jinx and I are telling you is that these creatures DO
appear to have memories and DO remember things and people, and act on this
knowledge. Here is a proposed mechanism:
>

So, now you and Russell and Jinkalong are ganging up on me. 3:1.
[I'll bet Jinx has a few duck and sheep stories which he hasn't
related]. But ha - you will notice that science does not work by
majority vote, young lady. Just because you guys "think" you
have found sentient slugs, ha .......... ;-)

Also, if you check closely my paragraph above, you will notice that
I contradicted myself even as I emitted a single thought. Actually, I
did not really contradict myself because I added so many qualifiers
- "essentially", "limited", "probably graded", etc. If you steer to
those, rather than to the single "no", then I really see no
contradictions with what you said about your slug with 140 IQ.
================

>The brainstem of a creature controls autonomic functions, for example
breathing, heartbeat, etc, and also is the location of the limbic system,
which is the seat of emotion. Now an emotional response to something need
not be complex, just a fear hormone, adrenalin or analog, and a happy
hormone, serotonin or analog. These allow a neuron, or bit, to be set under
a certain set of circumstances. If a certain dark loomy shape is associated
with crumbs, happy bit is set. More happy, stronger response. See loomy
shape, happy, come over. Likewise, light loomy shape hits creature, fear
bit set, see light loomy shape, flee.
>

hmmm, do your slugs have brainstems and a limbic system? I understand the rest.
However, I am a little uncertain about your use of "certain" as in "certain dark
loomy shape". Do you have any idea how many photoreceptors there are in a
slug's eyes - does it have eyes? [and I take it "light loomy shape" is #1
daughter what terrorizes the slug].
==============

>Snail episode: Snail sees dark loomy shape, happy bit set, starts coming
up to surface. But, Snail was left with strangers for 2 weeks, and becomes
hypervigilant for dark loomy shape. Snail overreacts to dark loomy shape,
and as a result overshoots top of water and comes up out of tank.
>

Ho boy, you musta just got back from seeing AI, the movie. Happy puppets
and happy slugs. Note - this scenario didn't work there either - [however,
Spielberg certainly could use a new writer or some new ideas].
==============

>This demonstrates a perfectly sound reason to have an emotion bit set,
since it tends to reinforce a behaviour that was rewarded in the past, and
allows for a gradation in response. An overall increase in a seeking
activity when a reward is delayed would be a good thing, and should only be
allowed to be overwritten (forgotten) with difficulty. This, by the way, is a
classic 'training' response. This is not consistent with the idea that
these creatures don't have memory, and shows that, to the contrary, even in
tiny creatures, emotion-based memory is desirable and highly useful.
>

As I stated several times, some animals do have a certain amount of
plasticity. If a slug only has "1" cell in its brain, then maybe it
would help its survival if it had some.

OTOH, your argument [and Russell's] would be more compelling
if you were able to train a veritable army of slugs/fish - then it
might pass the point of "anecdotal" evidence. It actually would be
interesting to know whether anyone else has been able to train
slugs and fishes.

[BTW, did you have to take your slug to Dr. Slug S. Shrink after you
left it with strangers for 2 weeks?].

Jinx wrote:
>> Perhaps this is needed to keep the discussion on track. Pity the auction
>> ended before we found it :)
>>
>> <cgi.ebay.com/aw-cgi/eBayISAPI.dll?ViewItem&item=1633430392>
>>
>> --
>
>Of course it ended - I bought it next month. I just used it to
>come back and tell you
>
>BTW, NZ scientists did actually invent a machine that could
>predict the future, but the government confiscated it. Which
>didn't really upset the scientists - a spokesman said they
>knew that would happen
>

BTW, over on the Yahoo AI Group, there is a guy who uses
bayesian probability combined with fuzzy logic to predict
the stock market "2" weeks in advance. Another guy claims
he has solved the AI problem and knows how to build a
conscious robot. Both are looking for venture backing to
take their stuff public, Jinx - [in case you happened to rob
a bank up there in the future, and brought the goods back to
where they cannot get you].

>
>au contraire, Mr Botboard,
>
>I believe what Russ, Jinx and I are telling you is that these creatures DO
appear to have memories and DO remember things and people, and act on this
knowledge.
.........

Hey, guys, here is someone who may provide evidence to support all
3 of you - [you guys might have some credibility yet]:

> In order to "appreciate" a reward, it would have to "desire."
>
> In what possible way could it develop "desire" except to be endowed with
> such? (by its designer...)

Very simply.
If you program it to generate new attributes independently of your
programming the attributes per se, it would acquire attributes which were net
beneficial - as long as you did it right. As this amounts to the
"generation" of information, doing it right could be difficult :-)
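In code, that idea is just hill-climbing over attributes: mutate at random, keep what is net beneficial. A minimal sketch in Python - `net_benefit` is a hypothetical stand-in for "did it right", not anything from the discussion:

```python
import random

def net_benefit(attributes):
    """Hypothetical fitness score: rewards attributes near a target of 0.5."""
    return -sum((a - 0.5) ** 2 for a in attributes)

def evolve(generations=200, n_attrs=5, seed=0):
    """Generate new attribute values the designer never programmed, and
    keep a mutation only if it is a net benefit. All the difficulty of
    'doing it right' is hidden inside net_benefit()."""
    rng = random.Random(seed)
    attrs = [rng.random() for _ in range(n_attrs)]
    for _ in range(generations):
        candidate = list(attrs)
        i = rng.randrange(n_attrs)
        candidate[i] += rng.uniform(-0.1, 0.1)  # an unprogrammed new value
        if net_benefit(candidate) > net_benefit(attrs):
            attrs = candidate  # net beneficial: the attribute is acquired
    return attrs
```

The information "generated" is really smuggled in through the benefit function, which is exactly where doing it right gets hard.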

No - you just have to adjust your paradigms.
It looks just like what I need to extend my current capabilities.
I've sent him this response -

_____________________________

I have so far missed out on the bidding on your time machine.
I would like to buy it last month and will arrange for this transaction to
be formalised once I get the machine working. You should check your old bank
account records for payment over a six month period or so as I am uncertain
of the temporal accuracy that I will be able to achieve initially. I assume
that the device also has geographic translocation capability - this will be
necessary as I am in New Zealand. Is the inverted direction of the Coriolis
forces liable to cause problems?

I only have Dilithium Hydrochloride.6H2O available. This works OK with my
present homebuilt system but the temporal range is extremely limited (too
short for getting back to a valid purchase date). If this is not suitable
for your system can you please recommend a possible source and a time when
the price was right?

At 03:11 PM 9/6/01 -0700, you wrote:
>Hard to see why it strains credulity that "simple" animals
>exhibit behavior implying memory, of a sort - These "simple"
>nervous systems are far more complex than PICs, which possess
>memory, likely for the same reason: Utility.
>

Three people trading anecdotal evidence, and ganging up on
someone who presents counter possibilities does not science
make. Science is not done by democracy vote.

> >How can you tell ? Fish are cold-blooded
> >
>
> Exakly, Mr. Auckland CityBoy. If you lived in the country, you

Well excuse me all over the place ;-) Hey, I know where the
country is. I've heard about it you know. It's that big green
thing down the end of the motorway. Mud, cows, trees, s**t
like that

> would know that first thing every morning all the warm- and
> cold-blooded critters come out to warm up in the 1st rays of
> the sun.

I thought the point of being a cold-blooded fish is that you don't
have to warm up. Fish live in different temperature waters,
Antarctic cod have anti-freeze blood, little neons live in warm
Amazonian water. Neither of those comes to the surface to
warm up. My goldfish don't come to the glass on winter mornings
holding up pictures of an electric heater or earmuffs and pointing
to them with Little Tiny Tim faces. Especially since I took their
crayons away.

> Also, at nite, many animals become roadkill cause
> after dark they go and sit on the warm [road] pavement. If
> you live literally "immersed" in water, then you prolly
> try to find some that suits your favor - as Russell said.

Mammals are roadkill. When was the last time you ran over a
herring ? Fish already live where it suits them. And ponds are
liable to range from frozen over to sweltering in a summer sun,
yet the fish cope

I would have a link to http://www.findu.com/cgi-bin/find.cgi?KC6ETE-9 here
in my signature line, but due to the inability of sysadmins at TELOCITY to
differentiate a signature line from the text of an email, I am forbidden to
have it.

>> Also, at nite, many animals become roadkill cause
>> after dark they go and sit on the warm [road] pavement. If
>> you live literally "immersed" in water, then you prolly
>> try to find some that suits your favor - as Russell said.
>
>Mammals are roadkill. When was the last time you ran over a
>herring ?

...... rattlesnakes, horny toads, lizards, splatterbugs
=============

>Fish already live where it suits them. And ponds are
>liable to range from frozen over to sweltering in a summer sun,
>yet the fish cope
>

..... course if the fish could talk, they'd all say they
want to live in the warm end of Russell's tank.

Dave.VH wrote:
>>
>>Three people trading anecdotal evidence, and ganging up on
>>someone who presents counter possibilities does not science
>>make. Science is not done by democracy vote.
>
>Indeed.
>
>Science is that which works, even when you don't believe in it. :)
>
>Try the pendulum proof, for an excellent illustration!
>--

pendulum proof - please explain ???????
======

At any rate, I especially did like all the shows and books [plus
the example of the entire psychology class misinterpreting what
the pigeon was responding to] that illustrate that people tend
to read much more into these things than may actually exist.

[I wonder, do snails "really" jump out of their tanks to show
their emotion? - stranger than fiction - I'm still chewing on
that one ;-)]

> Three people trading anecdotal evidence, and ganging up on
> someone who presents counter possibilities does not science
> make. Science is not done by democracy vote.

"True" science is supposed to be the study of what is. Unfortunately, as
I understand it there are a lot of politics and jealousies in the field
that hamper that aim, and sometimes a legitimate alternate theory is
laughed out or squelched by those that believe in the "established"
theory, probably even to this day. It takes a while for the alternate
theory to be proven and accepted, so the story won't be told until after
the fact.

(btw, do you know the last thing to go through a splatterbug's
mind when it hits your windscreen ?)

> Fish already live where it suits them. And ponds are
> liable to range from frozen over to sweltering in a summer sun,
> yet the fish cope
>
> ..... course if the fish could talk, they'd all say they
> want to live in the warm end of Russell's tank.

Oh you. Now you're just being difficult. Have you been to the
toilet today ? ;-)

Now, using a very large winch and chain to haul this back to bots,
an environmentally-sensitive bot would be a novelty. Hides in
corners or runs to the light, scoots to the front door to greet you,
curls up in front of the fire like a cat, sits by the back door waiting
to go out for a run........... The Japanese are big on virtual pets,
any idea what theirs are capable of ?

> > >How can you tell ? Fish are cold-blooded
> > >
> >
> > Exakly, Mr. Auckland CityBoy. If you lived in the country, you
>
> Well excuse me all over the place ;-) Hey, I know where the
> country is. I've heard about it you know. It's that big green
> thing down the end of the motorway. Mud, cows, trees, s**t
> like that

We indoor geeks refer to it as the "Big Blue Room" or BBR for short.

> > would know that first thing every morning all the warm- and
> > cold-blooded critters come out to warm up in the 1st rays of
> > the sun.

So THAT'S why the lawyers all stand in the window at Starbuck's in the
morning! Does make for a better sight picture, though.

Dale
--
Hallo, this is Linus Torvalds and I pronounce Linux as Leennuks.
Hallo, this is Bill Gates and I pronounce 'crap' as 'Windows'.

> It's helpful to discuss though as a background to AI in even
> a basic bot. If you want to give the bot some smarts at some
> point you have to stop with the engineering and take a look
> at imprinting it with some physiological behaviour

After the little I have come to know about the world I live in, most
attempts by humans to endow their robotic creations with 'intelligence'
(of the kind that allows it to cross the street or buy groceries) will
likely fail. I think that bug-style simple logic and the capability to
learn (carrot or stick button and some form of evolution capable AI
system, like a neural network), is what is needed for a start, and then
many, many, many thousands or tens of thousands of real world interactions
to build a workable set of rules. Once you have a bunch of working rules
it is easy to transfer them to other robots. Of course someone would have
to pay for this, and then no-one would insure any kind of robot based on a
heuristic system that cannot be proved to be 'safe' (because it has
learned what it can do in the same way as people have, who *are* life and
accident insured by the same companies). Yet I believe that this is the
only way to gather information about what those rules should really be.
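A toy version of that carrot-or-stick idea can be sketched in a few lines of Python. The world, situations and actions here are invented for illustration; the point is that thousands of rewarded/punished interactions build a table of working rules, and transferring those rules to another robot is then just a copy:

```python
import random

def train(episodes, rng):
    """Carrot-or-stick learning: the bot tries an action in a situation
    and the world answers +1 (carrot) or -1 (stick). Many interactions
    build up a table of rule values."""
    def world(situation, action):
        # Hypothetical world: obstacles punish 'forward', reward 'turn'.
        good = {("obstacle", "turn"), ("clear", "forward")}
        return 1 if (situation, action) in good else -1

    rules = {}  # (situation, action) -> learned value
    for _ in range(episodes):
        s = rng.choice(["obstacle", "clear"])
        a = rng.choice(["forward", "turn"])
        r = world(s, a)
        q = rules.get((s, a), 0.0)
        rules[(s, a)] = q + 0.1 * (r - q)  # nudge estimate toward reward
    return rules

def best_action(rules, situation):
    """Pick whichever action the learned rules rate highest."""
    return max(["forward", "turn"], key=lambda a: rules.get((situation, a), 0.0))

rules = train(5000, random.Random(1))
clone = dict(rules)  # "easy to transfer them to other robots"
```

Of course the insurance problem remains: nothing in the table proves the learned rules are safe, any more than a driving record proves a person is.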

F.ex. Asimov's robot laws would not allow a robot autopilot or defense
system to save 1 million people by killing one, for example a terrorist
who has stolen an aircraft with nuclear weapons on board, en route for a
major city. Okay this is extreme, but there are other scenarios. Like
three unconscious people in a submarine after an accident, one each in an
automatically sealed compartment, with oxygen on board just enough for two
to live until rescue is scheduled to come. Can the robot decide who is to
live? Will he kill all three by not killing one? Will he be 'blamed' if he
does? If he does not?

> At 03:11 PM 9/6/01 -0700, you wrote:
> >Hard to see why it strains credulity that "simple" animals
> >exhibit behavior implying memory, of a sort - These "simple"
> >nervous systems are far more complex than PICs, which possess
> >memory, likely for the same reason: Utility.
> >
>
> Three people trading anecdotal evidence, and ganging up on
> someone who presents counter possibilities does not science
> make. Science is not done by democracy vote.

Hmm, if ganging up is talking to someone and expressing an opinion maybe we
should stop ganging up on Dan ? :-)

Expressing greater or lesser agreement with opinions from one or other side
of a discussion is in fact something vaguely approaching some of the woolly
procedures that count as Science these days
((There, that should attract comment from Dan re his response, Alice re
science and Jinx re sheep. Who wants a quiet life anyway ? :-) )).

Science sure ain't democracy but science will generally lead to a majority
understanding of what constitutes reality, as the hypothesising, testing and
revision will lead to a model which most people will not be able to falsify.
The model will, of course, be wrong (all models are) but it will last for a
while until one of the holdouts finds the loose thread that leads in due
course to the new better model. Often not before the hold-out has been
pilloried mercilessly for one to thirty years and sometimes beaten to death.

Re fish and thinking - I suggested that a stage is reached in terms of
intelligence in creatures where "software" is a simpler explanation than
hardware. I suggested AFAIR (AAADDS lurks) that this is true for cats. I
didn't say it was (AFAIR) so for fish but it may be. By the time you get to
people (most people anyway) software is the overwhelmingly obvious simplest
solution. If I (or Dan) could do what we do on hardwiring and Pheromones the
creation would be even more amazing than it is now. Working down the list of
"intelligences", at what stage do you get to a fully hard wired solution?
Fish? And why should we think that memory/software/thinking of some thought
is a hard part of a design compared with hard wiring? And, if a fish fires a
Neuron or several to initiate a conditioned or innate response is this
thinking at some level?

Jinx wrote:
...........
>> ..... course if the fish could talk, they'd all say they
>> want to live in the warm end of Russell's tank.
>
>Oh you. Now you're just being difficult. Have you been to the
>toilet today ? ;-)
>

In the famous words of Olin.L: "I only came here to make trouble ;-)"

[someone's got to get you literal engineer types to widen your
perspective, after all ....]
=================

>Now, using a very large winch and chain to haul this back to bots,
>an environmentally-sensitive bot would be a novelty. Hides in
>corners or runs to the light, scoots to the front door to greet you,
>curls up in front of the fire like a cat, sits by the back door waiting
>to go out for a run........... The Japanese are big on virtual pets,
>any idea what theirs are capable of ?
>

So you think a cat-emulator would be environmentally-sensitive?
[only if it could sort the recyclables from the non-recyclables].

IIAC, Aibo the robo-chihuahua is the most advanced Japanese simu-pet
at present. It is environmentally-friendly [never poops on the
carpet] and does a lot of puppy-type things... rollover [?], play
dead, beg. Big in Japan [>130,000 sold the 1st year], but very
few were apparently sold outside Japan - the rest of the world
apparently doesn't think $2500 is a good price for a toy.

The specs on the Aibo are impressive. Computing power similar
to a playstation, etc.

Russell McMahon wrote:
..........
>Re fish and thinking - I suggested that a stage is reached in terms of
>intelligence in creatures where "software" is a simpler explanation than
>hardware. I suggested AFAIR (AAADDS lurks) that this is true for cats. I
>didn't say it was (AFAIR) so for fish but it may be. By the time you get to
>people (most people anyway) software is the overwhelmingly obvious simplest
>solution. If I (or Dan) could do what we do on hardwiring and Pheromones the
>creation would be even more amazing than it is now.
.............

All of this makes good sense, and is indeed what has been implemented
in nature. Again - just looking at the vertebrate kingdom [no 140 IQ
jumping snails allowed ;-)] - lower vertebrates are more hard-wired
and have less learning capability, whereas in higher vertebrates
learning is a major aspect of life, if not "the" major aspect.

This is not to say that there is no learning at all in lower vertebrates
- there may be some - but it is mainly to say again that there is a
spectrum with all of brain size, perceptual input bandwidth, processing
power, learning, reasoning power, and apparently consciousness expanding
greatly as you go up the food chain.

You can probably find some lower vertebrates with the ability to
learn some things better than other lower vertebrates. However, this
would still be rather low-level as compared with monkeys and humans.
As I understand it, frogs are especially non-gifted when it comes
to learning. Some fish may do better.
==========

>Working down the list of
>"intelligences", at what stage do you get to a fully hard wired solution?
>Fish? And why should we think that memory/software/thinking of some thought
>is a hard part of a design compared with hard wiring?
.......

As I see it, it is not so much that we think memory/etc is the hard
part compared to hardwiring, but that this seems to be the way the animal
kingdom works - at least in vertebrates.

An interesting part of this relates to brain anatomy. Lower vertebrates
[fish, amphibians] have tiny brains relative to humans, and no cerebral
cortex. At the intermediate level [birds], they still have relatively
simple brains but also increased learning abilities [I am not sure if
they have a cerebral cortex, per se]. In mammals, the cortex exists and
gets larger and larger as you go up the chain, in direct correlation
with expanded mental capabilities. Similarly, lower vertebrates have
limited ability to learn, while higher animals have great ability.
The rise of the cortex is apparently the key to higher function.
==========

>And, if a fish fires a
>Neuron or several to initiate a conditioned or innate response is this
>thinking at some level?

Personally, I think there is a continuum of levels in all mental
aspects commensurate with the continuum of brain development described.
This is what I have been saying all week.

Peter wrote:
>> It's helpful to discuss though as a background to AI in even
>> a basic bot. If you want to give the bot some smarts at some
>> point you have to stop with the engineering and take a look
>> at imprinting it with some physiological behaviour
>
>After the little I have come to know about the world I live in most
>attempts by humans to endow their robotic creations with 'intelligence'
>(of the kind that allows it to cross the street or buy groceries) will
>likely fail. I think that bug-style simple logic and the capability to
>learn (carrot or stick button and some form of evolution capable AI
>system, like a neural network), is what is needed for a start,
..........

Many current workers in robotics think that the old top-down AI
approach of the past 50 years will never succeed, and the new
bottom-up approaches are the way to go. Start with the simple problems
rather than the most difficult. Add "machine evolution" to that,
where you keep adding layers of processing power on top - similar to
what you see when going up the vertebrate animal hierarchy.

Rodney Brooks is one of the main pioneers of this, and it is very
telling that his ideas were viewed as so heretical by the old-time
AIers that it took several years before the establishment would
condescend to publish his papers. Today his stuff is viewed as
revolutionary - at least by newcomers.
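The bottom-up layering can be sketched very simply. This is a toy in the spirit of Brooks' subsumption architecture, not his actual code: simple behaviours always run, and a higher-priority layer may subsume (override) a lower one. The layer names and sensor fields are invented:

```python
def avoid(sensors):
    """Lowest layer: a reflex that fires on contact."""
    return "turn-away" if sensors.get("bump") else None

def seek_light(sensors):
    """Higher layer: subsumes wandering when a goal is visible."""
    return "head-to-light" if sensors.get("light") else None

def wander(sensors):
    """Default layer: amble about when nothing better to do."""
    return "wander"

# Earlier entries win; reflexes outrank goals, goals outrank wandering.
LAYERS = [avoid, seek_light, wander]

def act(sensors):
    """Each cycle, take the command of the highest-priority layer
    that has an opinion about the current sensor readings."""
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command
```

Adding "machine evolution" would just mean stacking new layers on top of working ones - much like the vertebrate hierarchy described below.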

>many, many, many thousands or tens of thousands of real world interactions
>to build a workable set of rules. Once you have a bunch of working rules
>it is easy to transfer them to other robots. Of course someone would have
>to pay for this, and then no-one would insure any kind of robot based on a
>heuristic system that cannot be proved to be 'safe' (because it has
>learned what it can do in the same way as people have, who *are* life and
>accident insured by the same companies). Yet I believe that this is the
>only way to gather information about what those rules should really be.
>

The way to make something like this "safe" is to limit its power via
hardcoding of certain aspects. If everything is literally learnable,
then you never know, but if some safeguards are hardwired in, then
you can limit the range of actions - at least until the bot learns
to override its hardwiring.
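Structurally, the safeguard is just a fixed filter wrapped around the learner, so the hardwired part gets the last word no matter what was learned. A minimal sketch, with an invented forbidden list and action names:

```python
# Hardwired safeguards: these entries are fixed at build time and are
# deliberately not part of anything learnable.
HARDWIRED_FORBIDDEN = {"full-speed-near-human", "disable-estop"}

def safe_act(learned_policy, observation):
    """Run whatever policy the bot has learned, but veto anything on
    the hardwired list and fall back to a known-safe action."""
    action = learned_policy(observation)
    if action in HARDWIRED_FORBIDDEN:
        return "halt"
    return action
```

The weak point is exactly the one noted above: this only holds as long as the learner cannot rewrite the filter itself.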
===========

>F.ex. Asimov's robot laws would not allow a robot autopilot or defense
>system to save 1 million people by killing one, for example a terrorist
>who has stolen an aircraft with nuclear weapons on board, en route for a
>major city..........

Asimov's 3 Laws only apply to those bots where they are built in.
Certainly the military is working on robotic soldiers, and they
have no intent of following Asimov's lead. If you're gonna replace
a soldier with a bot, it had darn well better know how to use a gun.

Jinx wrote:
......
>Now, using a very large winch and chain to haul this back to bots,
>an environmentally-sensitive bot would be a novelty. Hides in
>corners or runs to the light, scoots to the front door to greet you,
>curls up in front of the fire like a cat, sits by the back door waiting
>to go out for a run........... The Japanese are big on virtual pets,
>any idea what theirs are capable of ?
>

Captain.J, if you want something to love and cherish, you might
try the Hootbot:

It doesn't take a very complex critter to display an impressive amount of
behavior (more than any of us really understands). If you're interested in
looking over some of the real stuff (science, that is) check out:

http://www.wormbase.org/

Lots of folks are trying to figure out C. elegans, a nematode with 959
somatic cells, of which 300 are neurons and only 81 are muscle cells. It
manages to display rudimentary learning, which certainly involves some sort
of memory.

Don Hyde wrote:
>It doesn't take a very complex critter to display an impressive amount of
>behavior (more than any of us really understands). If you're interested in
>looking over some of the real stuff (science, that is) check out:
>
>http://www.wormbase.org/
>


> F.ex. Asimov's robot laws would not allow a robot autopilot or defense
> system to save 1 million people by killing one, for example a terrorist
> who has stolen an aircraft with nuclear weapons on board, en route for a
> major city. Okay this is extreme, but there are other scenarios. Like
> three unconscious people in a submarine after an accident, one each in an
> automatically sealed compartment, with oxygen on board just enough for two
> to live until rescue is scheduled to come. Can the robot decide who is to
> live? Will he kill all three by not killing one? Will he be 'blamed' if he
> does? If he does not?
>
> Peter

I think there was an arctic expedition that had just this problem. The
Commander (Greeley?) insisted on dividing supplies absolutely equally.
As a result almost all of them died shortly before rescue.

What would give one the right to decide who should live and who should die?
It seems to me that there is a fundamental difference between this and the
terrorist example, as the terrorist is actually intending to kill people,
whereas in this case, everyone is just trying to survive.

Sean

At 02:38 PM 9/7/01 -0400, you wrote:
>I think there was an arctic expedition that had just this problem. The
>Commander (Greeley?) insisted on dividing supplies absolutely equally.
>As a result almost all of them died shortly before rescue.
>
>Sherpa Doug


Mr.Botboard said:
>>I believe what Russ, Jinx and I are telling you is that these
>>creatures DO appear to have memories and DO remember things and
>>people, and act on this knowledge. Here is a proposed mechanism:

>So, now you and Russell and Jinkalong are ganging up on me. 3:1.
>[I'll bet Jinx has a few duck and sheep stories which he hasn't
>related]. But ha - you will notice that science does not work by
>majority vote, young lady. Just because you guys "think" you
>have found sentient slugs, ha .......... ;-)

As you well know, science begins by careful observation.
Humans are notorious in seeing what they want to see, or are
told to see. I was merely reporting what I saw, and I will admit I
was surprised, because I had never thought of snails as having
that much of a personality.

>Also, if you check closely my paragraph above, you will notice that
>I contradicted myself even as I emitted a single thought. Actually, I
>did not really contradict myself because I added so many qualifiers
>- "essentially", "limited", "probably graded", etc. If you steer to
>those, rather than to the single "no", then I really see no
>contradictions with what you said about your slug with 140 IQ.

================
Only because I'm feeling generous will I let you weasel out of
this one.

>>The brainstem of a creature controls autonomic functions, for example
>>breathing, heartbeat, etc, and also is the location of the limbic system,
>>which is the seat of emotion. Now an emotional response to something need
>>not be complex, just a fear hormone, adrenalin or analog, and a happy
>>hormone, serotonin or analog. These allow a neuron, or bit, to be set under
>>a certain set of circumstances. If a certain dark loomy shape is associated
>>with crumbs, happy bit is set. More happy, stronger response. See loomy
>>shape, happy, come over. Likewise, light loomy shape hits creature, fear
>>bit set, see light loomy shape, flee.

>hmmm, do your slugs have brainstems and a limbic system? I
>understand the rest. However, I am a little uncertain about your
>use of "certain" as in "certain dark loomy shape". Do you have any
>idea how many photoreceptors there are in a slug's eyes - does it
>have eyes? [and I take it "light loomy shape" is #1 daughter what
>terrorizes the slug].
==============
Well, I took a gander at the Apple Snail page, http://www.applesnail.net/
and they have several ganglia, their eyespots have about
40-pixel resolution, and they have chemo and tactile receptors,
as well as a 3-axis motion/attitude detection system analogous
to our inner ear. I didn't see a neuron count, but I estimate it to
be around 1K (most of their nerves are in their smell and skin
receptors). Seems about right for the complexity of their
behaviour.

>Snail episode: Snail sees dark loomy shape, happy bit set, starts
>coming up to surface. But, Snail was left with strangers for 2 weeks,
>and becomes hypervigilant for dark loomy shape. Snail overreacts to
>dark loomy shape, and as a result overshoots top of water and comes
>up out of tank.
>

Ho boy, you musta just got back from seeing AI, the movie. Happy
puppets and happy slugs. Note - this scenario didn't work there
either - [however, Spielberg certainly could use a new writer or
some new ideas].
==============

Well, it's pretty generic behaviour theory, I don't quite
understand why this seems farfetched. Perhaps my description of
the snail's walk above the waterline may have sounded more,
err, umm, energetic than I intended. In reality, rather than a
bit-level memory, something more like byte-level is involved,
with levels as well as simple boolean states.
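That byte-level idea fits in a few lines. A sketch, with invented stimulus names and thresholds: each stimulus gets a small signed "emotion byte" that training nudges up (crumbs) or down (getting hit), and the response is graded rather than all-or-nothing:

```python
from collections import defaultdict

class SnailMemory:
    """Byte-level associative memory: a clamped signed level per
    stimulus instead of a single happy/fear bit."""

    def __init__(self):
        self.level = defaultdict(int)  # stimulus -> -128..127

    def experience(self, stimulus, reward):
        """reward > 0 nudges the happy direction, < 0 the fear direction."""
        self.level[stimulus] = max(-128, min(127, self.level[stimulus] + reward))

    def react(self, stimulus):
        """Graded response: stronger level, stronger behaviour."""
        v = self.level[stimulus]
        if v > 20:
            return "approach"
        if v < -20:
            return "flee"
        return "ignore"
```

Repeated crumbs push "dark loomy shape" above the approach threshold; one good whack from "light loomy shape" is enough to set flee - which is the classic training response described in the quote.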

>>This demonstrates a perfectly sound reason to have an emotion bit set,
>>since it tends to reinforce a behaviour that was rewarded in the past, and
>>allows for a gradation in response. An overall increase in a seeking
>>activity when a reward is delayed would be a good thing, and should only be
>>allowed to be overwritten (forget) with difficulty. This, by the way, is a
>>classic 'training' response. This is not consistent with the idea that
>>these creatures don't have memory, and shows that, to the contrary, even in
>>tiny creatures, emotion-based memory is desirable and highly useful.

>As I stated several times, some animals do have a certain amount of
>plasticity. If a slug only has "1" cell in its brain, then maybe it
>would help its survival if it had some.

It appears that they have a pretty sophisticated nervous system
for a non-vertebrate, did I mention that Apple Snails are about
the largest of the nonmarine snails?

>OTOH, your argument [and Russell's] would be more compelling
>if you were able to train a veritable army of slugs/fish - then it
>might pass the point of "anecdotal" evidence. It actually would be
>interesting to know whether anyone else has been able to train
>slugs and fishes.
>

AHA! A PIC challenge. Fine, how about a PIC project: I believe
this snail could be trained to push a lever to obtain food, set up
to work if a light were blinking, and incorporating some variable
delay between tries so the snail doesn't clean out the food
supply. The tricky part is coming up with a switch/lever system
that a snail could push - probably something like a sipper switch
(Russell? how do these work?). The PIC could handle the
lighting and a relay to drop fish flakes into the tank. The whole
thing could operate late at night when no people are around,
these snails are not really obligate daytime creatures. After
training the behaviour, I could see how long the snail could
remember it.
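The rig's control loop is simple enough to simulate before committing it to a PIC. A Python sketch of the logic only - `light_on`, the lever input and the feeder pulse are stand-ins for the real I/O pins, and the delay range is a guess:

```python
import random

def rig_step(state, lever_pressed, rng):
    """One pass of the training-rig loop. Returns (state, feed_now).
    The variable lockout delay stops the snail cleaning out the food."""
    if state["lockout"] > 0:
        state["lockout"] -= 1      # still inside the variable delay
        state["light_on"] = False  # reward not armed; light stays off
        return state, False
    state["light_on"] = not state["light_on"]  # blink: reward is armed
    if lever_pressed and state["light_on"]:
        state["lockout"] = rng.randint(50, 200)  # random delay before next try
        return state, True                       # pulse the feeder relay
    return state, False
```

Pressing the lever only pays off while the light is on, which is exactly the contingency you'd want the snail to learn; logging the presses over weeks would show how long it remembers.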

>[BTW, did you have to take your slug to Dr. Slug S. Shrink after
>you left it with strangers for 2 weeks?].

Oh I know that it IS done, I'm just questioning whether it SHOULD be
done. It sounded to me as if some people were considering it a fault that
some people do not want to decide if others should die or live.

> Usually the guy in "command" - ie, highest ranking military
> officer, your president, the accountant at your HMO, etc.
> Don't look now, but they make these kinds of decisions every
> day.
>
>
> At 03:06 PM 9/7/01 -0400, you wrote:
> >What would give one the right to decide who should live and who should die?
> >It seems to me that there is a fundamental difference between this and the
> >terrorist example, as the terrorist is actually intending to kill people,
> >whereas in this case, everyone is just trying to survive.
> >
> >Sean
> >
> >At 02:38 PM 9/7/01 -0400, you wrote:
> >>I think there was an arctic expedition that had just this problem. The
> >>Commander (Greeley?) insisted on dividing supplies absolutely equally.
> >>As a result almost all of them died shortly before rescue.
> >>
>

They claim 110lbs (50kg) for a schoolkid, which sounds a bit large.
Perhaps they should have put the time into finding out why they are
so pudgy. Anyway, no doubt film of 1,000,000 fat kids jumping up
and down will be all over the TV news today

> >[BTW, did you have to take your slug to Dr. Slug S. Shrink after
> >you left it with strangers for 2 weeks?].
>
> - dan
> ===========
>
> No, I just sang to it for a while till it calmed down :)
>
>
> alice

All you needed to keep it happy in your absence were a few of Dave King's
singing potatoes

Russell.M wrote:
>> >[BTW, did you have to take your slug to Dr. Slug S. Shrink after
>> >you left it with strangers for 2 weeks?].
>>
>> - dan
>> ===========
>>
>> No, I just sang to it for a while till it calmed down :)
>>
>>
>> alice
>
>
>All you needed to keep it happy in your absence were a few of Dave King's
>singing potatoes
>

Interesting. As the Apple Snail site indicated, snails have poor
or no hearing, and minimal vision capabilities, but do have
excellent smell.

One wonders whether, were Alice to make a rigorous test, she might
find the snail responding to her deodorant and/or hand lotion,
and not to her smile or singing.

I think that from the point of view of the robot there would be no
difference between the submarine and the terrorist situation. In both
cases it would have to decide the 'value' of life. Note that I did not
imply that the terrorist would suicide by bombing the city.

I have no idea and no certain opinion on this. Douglas Butler said that he
knows about an arctic expedition where almost everyone died due to food
being distributed equally.

I remember the case of a crash landed aircraft in the Andes where the
survivors practiced cannibalism to stay alive (for two months ?) until
they were rescued. I have also read some more or less direct allusions to
the same subject among survivors of sunken ships in WW2 and elsewhere.

Hawking has said science could increase the
complexity of DNA and "improve" human
beings in order to stay ahead of electronic
artificial intelligence, going on to suggest
human-machine interconnects be developed
to bridge the widening gap between man and machine.

The voice synthesizer, however, urged humans to remain complacent,
assuring them that they would have nothing to fear. "What you may think of
as robotic overlords today will seem like Palm Pilots in comparison to
what's on the way," the electronic voice box added.

The 8 year old voice synthesizer is carried by a motorized wheelchair and a
59 year old with Lou Gehrig's Disease.

> The voice synthesizer, however, urged humans to remain
> complacent, assuring them that they would have nothing to
> fear.

Hey, I'm cool. The human race in The Matrix were quite
happy in their little cocoons until Keanu Reeves stuck his
oar in. Ignorance CAN be bliss. As long as you don't know
you're ignorant of course. Which is Ignorance^2. Or even
Ignorance^3 if you've seen The 13th Floor

Greeley was widely criticized for not taking more decisive action. Also
since he was one of the few survivors people assumed he got more than
his fair share of food. There was also some evidence of cannibalism,
though no evidence that Greeley knew about it. This was sometime in the
1800's.

I feel that our society is not good at making such decisions and chooses
to simply deny that they need to be made.