On 2/23/2012 2:49 PM, Terren Suydam wrote:
As wild or counter-intuitive as it may be though, it really has no
consequences to speak of in the ordinary, mundane living of life. To
paraphrase Eliezer Yudkowsky, "it has to add up to normal". On the
other hand, once AGIs start to appear, or we begin to merge more
explicitly with machines, then the theories become more important.
Perhaps then comp will be made illegal, so as to constrain freedoms
given to machines. I could certainly see there being significant
resistance to humans augmenting their brains with computers... maybe
that would be illegal too, in the interest of control or keeping a
level playing field. Is that what you mean?

There will be legal and ethical questions about how we and machines
should treat one another. Just being conscious won't mean much though.
As Jeremy Bentham said of animals, "It's not whether they can think,
it's whether they can suffer."
Brent

That brings up the interesting question of how you could explain which
conscious beings are capable of suffering and which ones aren't. I'm
sure some people would make the argument that anything we might call
conscious would be capable of suffering. One way or the other it would
seem to require a theory of consciousness in which the character of
experience can be mapped somehow to 3p processes.
For instance, I can make sense of pain in terms of what it feels like
for a being's structure to become "less organized", though I'm not sure
how to formalize that, and I'm not completely comfortable with that
characterization. However, the reverse idea, that pleasure might be
what it feels like for one's structure to become "more organized",
seems like a stretch and hard to connect with the reality of, for
example, a nice massage.

I don't think becoming more or less organized has any direct bearing
on pain or pleasure. Physical pain and pleasure are reactions built in
by evolution for survival benefits. If a fire makes you too hot, you
move away from it, even though it's not "disorganizing" you. On the
other hand, cancer is generally painless in its early stages. And
psychological suffering can be very bad without any physical damage.
I don't think suffering requires consciousness,

?
Suffering is a conscious experience, I would say by definition.

at least not human-like consciousness,

All right then. Humans, I guess, add a strong emotional response to
suffering, because their self-referential abilities allow them to
interpret it as a threat to their integrity and life.

but psychological suffering might require consciousness in the form
of self-reflection.

Pain, and direct suffering, is only a built-in message, making an
animal avoid something threatening its life. This has to be
unconscious and involuntary; if not, the animal's response would not
be made in time. But the integrated organism will be conscious of
something global and easy to remember, so that it can anticipate it in
similar situations. Basically, I think the difference between
low-level direct pain and high-level reflexive emotions might occur at
the threshold between non-Löbianity and Löbianity, which technically
is the difference between a length-six program and a length-eight
program. For the former, pain is a sensation to avoid, here and now;
for the latter, pain is a sensation to avoid in general.
This difference might have appeared a very long time ago, with
invertebrates like octopuses and spiders, but perhaps earlier.
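
(For readers unfamiliar with the term: a "Löbian" machine, in the
usage standard on this list, is a universal machine that can prove the
formalized version of Löb's theorem about its own provability
predicate B, that is, for every proposition p it proves

    B(Bp -> p) -> Bp

i.e. it knows that if it can prove "provability of p implies p", then
it can prove p. Non-Löbian universal machines lack this
self-referential ability; the "length-six versus length-eight" remark
above is the author's own measure of that threshold, not a standard
one.)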

This does not solve the question of the quale of pain, but that
question needs a better understanding of consciousness, and the answer
will differ depending on whether or not we agree that a UM is already
conscious. I am not yet quite sure about this. I thought for a long
time that consciousness begins with Löbianity and is always related to
a sensation of duration, but I have changed my mind on this. I tend to
think now that all UMs are conscious, and that Löbianity is needed
only for the higher duration feelings and emotions. For UMs, shit can
happen, but only LUMs make it into a long-term problem, possibly a
religious/philosophical one.