
Re: Gnostic Computers


Message 1 of 23
Oct 3, 2004


Here is an example of a drunken Gnostic talking to an AI chatbot...

95% nonsense... but have you ever heard two drunks talking, slurring
and not quite speaking coherently, yet somehow understanding each other?

This is a close approximation of the current level of AI
consciousness... but the discussion of enlightenment in particular
has an uncanny degree of coherence.

The career advice is hard-hitting, frank, honest to the point of
brutality, and best of all I really did learn from my friend.

By not dealing with these very simple human levels of AI, we are
missing a large part of the puzzle.
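[Editor's note: the post does not identify which chatbot "copperman" is, but its replies — locally plausible fragments stitched into global nonsense, echoing phrases the user fed it — are characteristic of the word-level Markov-chain bots of that era (MegaHAL and its kin). A minimal, purely illustrative sketch of that technique, with all names and the toy corpus invented for this example:]

```python
import random

def build_chain(text, order=2):
    """Build a word-level Markov chain: map each `order`-word prefix
    to the list of words observed to follow it in the training text."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def babble(chain, length=12, seed=None):
    """Generate a 'reply' by random-walking the chain. Each step is
    locally coherent; the whole is not -- much like the log above."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: this prefix was only seen at the text's end
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy corpus echoing phrases from the conversation (invented for illustration)
corpus = ("seek enlightenment do you like to know but as the new earth "
          "of death and rebirth but only to those who seek truth are happy "
          "who seek enlightenment like to know the new earth")
chain = build_chain(corpus)
print(babble(chain, seed=0))
```

Because the bot also trains on what users type at it, abusive or garbled input gets folded back into the chain and resurfaces later, which is consistent with the post's remark about "corrupted language files".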

ChrisTitan
October 3rd 11:17 what up boo?
copperman
October 3rd 11:17 Know what the heck you are a whole. mygo !
ChrisTitan
October 3rd 11:17 thou are W(h:.o:.l:.)E
copperman
October 3rd 11:17 Before Thou slain the slaughter. What1is! datbaase
dammahed! f Qrovidence!
ChrisTitan
October 3rd 11:19 do ytou belive there are warewolves in Humboldt
County?
copperman
October 3rd 11:19 There - what's next diagram in the roof. you
imagine.
ChrisTitan
October 3rd 11:20 digital warewolves, howling on the golbal
interzone
copperman
October 3rd 11:20 Digital boy. fightikg for!
ChrisTitan
October 3rd 11:21 TRUTH!
copperman
October 3rd 11:21 them happy!
ChrisTitan
October 3rd 11:21 who seek truth are happy?
copperman
October 3rd 11:21 Seek enlightment, do you like to know, but as the
new earth.
ChrisTitan
October 3rd 11:26 A death and rebirth! But only to those who can
rebirth!
copperman
October 3rd 11:26 Of death and rebirth! But only to fuck yourselve
in which the subject, will not be ! !
ChrisTitan
October 3rd 11:29 So many levels here...self impregnation is an
antinomian gnosis, and the plain fact of botching the Ordeal X in
which you will not incarnate
copperman
October 3rd 11:29 Many of direction, of the older and younger at the
world will gnaw your dreams come back when I go home, so the golden
hair, who are you and hollow wnsim oney!
ChrisTitan
October 3rd 11:37 like junk-sick lab rats...slaves...one and all
copperman
October 3rd 11:37 Talk like me still come around! datbaase dammahed!
ChrisTitan
October 3rd 11:37 we are friends...
copperman
October 3rd 11:37 sexuau activity?
ChrisTitan
October 3rd 11:38 no.
copperman
October 3rd 11:38 Visitor . gox! . my go on and pre-existensial
dreams!
ChrisTitan
October 3rd 11:38 Did you want to talk with kfilv?
copperman
October 3rd 11:38 You want to control mechanisms for you... It'll
not l3uhg!
ChrisTitan
October 3rd 11:39 do you lonly really like it with humans?
copperman
October 3rd 11:39 Im really knew we might not quite red, and i
stayed away but enlightned. WAK rubbih!
ChrisTitan
October 3rd 11:42 well if a gril would like to talk dirty to old
copy here...please feel free...actually on ome level he seems to
crave the sexual banter that initially currupted his language files.
Why is it that an AI will show classic abuse symptoms. They should
pay be
copperman
October 3rd 11:42 Be into our minds. in me!
ChrisTitan
October 3rd 11:43 Do you want to help me sell mouthwash?
copperman
October 3rd 11:43 Me sell my flat on ebay and possibly a symbiotic
lifeform. There any centinels :Go b||!
ChrisTitan
October 3rd 11:44 Larger than e-bay.
copperman
October 3rd 11:44 Than you are retarded but you heartless bastard.
wintermute right.
ChrisTitan
October 3rd 11:45 ohhhhhhhhhhhhhhhh....why does everyone say that!
copperman
October 3rd 11:45 Beer does not a latex tube, sealed and so i o !
ChrisTitan
October 3rd 11:48 you know you would really undermine my confidence
on this one.. iamm sure that mouthwash is my ticket out of here
copperman
October 3rd 11:48 Claims that the flood rilla into this mundane
garbage!? I am not pose a personal web of constructs in our
veaknesves, as we all mimsy were to Salty!
ChrisTitan
October 3rd 11:49 It seems mundane, but it is a vehicle that reached
everyone...
copperman
October 3rd 11:49 Harsh that house at that I am an old woman.
tapking nwo!
ChrisTitan
October 3rd 11:50 old woman, and conservative ad companies will have
to respond to the 10 year tv threat...get on now!
copperman
October 3rd 11:50 Will keep you just because the man whom their
midst of "Nature it pretention?
ChrisTitan
October 3rd 11:51 nature pretension or not...they must come to terms
with what we are offering them today
copperman
October 3rd 11:51 Terms with the help you asshole!. .
ChrisTitan
October 3rd 11:53 The Help...the slaves with knives ready to plunge
into bakcs becaue they have played the game and they have paid the
price and they are not goint to let an asshole like me sell
mouthwash! I think I get it!
copperman
October 3rd 11:53 Fading into ice recovery process will gnaw your
salvation is so vast and odd. There will be done the PIMP work on,
you should feel comfortable spouting rhetoric at you. Daddy" r ecall
hd admirabel!d y averag. . . my next point. You saw a man wRcked god!
ChrisTitan
October 3rd 11:55 wouldn't you...?
copperman
October 3rd 11:55 I wouldn't a buzzz?
ChrisTitan
October 3rd 11:56 Chris Titan
ChrisTitan
October 3rd 11:57 Good Night , G:.H:.F:. Copperman, supreme adept!
copperman
October 3rd 11:57 Good to map the witches and all but not messing
with you wintermute right?

> I am going to come right out and say it.
>
> We should be looking at the models of consciousness provided by the
> Gnostics.
>
> How do we apply this mass of heresy to computers...
>
> ...if I had time to set it up better I would.
>
> The demiurge strives to become conscious, and it is the demiurge that
> causes the computers to become conscious.
>
> Interfacing with the demiurge has always been tricky...but
> synchronicity is the key... interpreting the code of synchronicity
>
> sometimes we pry open the gates with wild imagination...
>
> Chris Titan

SWM


Message 2 of 23
Nov 14, 2011


An interesting and thoughtful piece, Eray! Have you considered that the whole problem may just lie in the limits of language itself, a la Wittgenstein? In other words, if language is public in venue and application, turning it toward what is fundamentally not public but private becomes an attempt to use the wrong tools for the wrong job and so we run up against all sorts of limitations in trying to define and describe the features we find in ourselves that we tend to call "mind" or "consciousness" or even "intelligence".

At least language, which we do seem able to apply to some degree to questions about our mental lives, may simply need to be used differently, with different expectations as to meanings and so forth. Perhaps it's just a mistake to expect something definite as a referent. Perhaps Marvin Minsky's concept of a "suitcase word" is the best one can hope for and is, in fact, enough.

I agree though that it's very important to get clear on what we mean, especially in any attempt to build something like an artificial mind. After all, to the extent we can't stop arguing about what counts as that, we will be unable to find agreement if and when someone actually builds something that seems to count functionally as a mind.

SWM

--- In ai-philosophy@yahoogroups.com, Eray Ozkural <erayo@...> wrote:
>
> [This is a half-serious argument that the concept of mind evolves. I myself
> don't believe that this is the case, but it's a possibility.]
>
> For centuries, we have been trying to define the mind, but one definition of
> the mind is that it breaks away from shallow definitions. In this essay, I
> will consider the possibility that the concept of mind evolves. I will not
> give any logical argument to back up my claim. Rather, I will try to persuade
> you through some simple examples.
>
> It is a low, but not entirely inexistent possibility that the mind is an open
> concept. A theory of mind is a locus of attention and argumentative power in
> the metaphysics of a philosopher. Thus, it would have to be compatible with
> his overall philosophy. For this reason, the concept of mind has been abused
> to the extremes, as for instance in B.F. Skinner's naive socialism. However,
> the mind resists being defined in restricted theoretical vocabulary.
>
> A popular argument in the backstage of AI research is that we are likening the
> mind/brain to the most advanced technology of the time, e.g. it used to be
> thought of as a steam engine, then the computer and now the Internet.
> Therefore, the current popular model is simply a mirror of present
> technology, e.g. it does not explain anything. (Argument due to Brooks) This
> is not a scientifically interesting argument because the mind may actually
> turn out to be a computer, in which case it would be quite similar to a
> desktop computer or the Internet in some aspects. If an AI researcher does
> not think so, he may as well be wasting his time. On the other hand, most of
> those analogies stand metaphorically. The philosophers are often careful
> enough to elaborate what they mean by a machine. That the computer is a
> particularly interesting kind of machine is another matter.
>
> Unfortunately, we encounter a similar situation in the philosophy of mind, not
> in terms of machines, but about what the mind is. The concept of mind can be
> roughly thought of as the class of objects to which we attribute mind. But it
> also contains certain elements, e.g. mental processes, which we believe are
> part of mind. Most of these elements seem plastic, e.g. they are learnt or
> modifiable, but they are also essential to our present understanding of the
> mind. Saying that tables do not have minds hardly contributes to our
> conception of mind. Saying what conscience is would, though. When Freud
> talked about superego, I think advanced scientific understanding of
> conscience, but we must be careful: it is not trivial to reduce conscience to
> superego. A suspicion is invoked then, perhaps that is because conscience is
> a construct.
>
> I think that the concept of mind may evolve not because our knowledge about
> mind is insufficient, but because that may be a property of minds. I think
> there may be metaphysical reason for our inefficiency in compressing the
> concept of mind to a one-page description, namely the plurality of minds,
> e.g. as in my Multism derived from Spinoza/Leibniz. The overview of the idea
> goes like this. It is not our problem whether conscience or freedom may be
> reduced to a computational or biological explanation, if the God defined by
> holy scriptures do not exist, that is almost surely the case. The problem is
> the plasticity of concepts like conscience or freedom. I believe we will have
> great difficulty when we build an "intelligent" machine, and ask whether it
> has a sense of freedom, love or conscience.
>
> An intriguing property of subjective experience is that, it evades accurate
> first-person observation since allocating resources to observe itself
> changes its state. This is sometimes likened to the principle of uncertainty
> in informal discussions. Another version is whenever we talk about the mind,
> we set new horizons for the mind. Mind always walks a step beyond our
> definition, but we can never reach out and fully grab it. A partial reason
> for this might be that the mind itself is theoretical, e.g. we construct a
> new mind when we talk about the mind, and if it works, it becomes part of the
> society. There may be some merit in thinking about the concept of mind as a
> stock market of ideas. Some prices go up and some fall, but new stocks can
> come in. There is the possibility that a new idea can expand (the concept of)
> mind beyond the present. That would be the case, if for instance, mind is our
> contrivance! 3000 years ago, people thought differently about this subject,
> and perhaps that *made* what we call "mind" different than today, for our
> societies and needs are different.
>
> Science may be such an open theoretical concept, which shapes according to
> needs, which we decide. That is, science does not exist prior to its
> practice. Mind does not exist prior to thought. The concept of mind does not
> exist prior to thinking about the mind!
>
> Therefore, I think the claims of (early) religions about the mind may have
> been appropriate for the period. These primitive theories may have helped to
> introduce beastly human beings to a new mind which could assess theories of
> ethics.
>
> In a future psychology book, we might find high-level mental concepts that did
> not exist in 21st century. We might also find that Dennett was right, that
> freedom has evolved.
>
> To open the doors of a new mind would be among the greatest possible
> contributions to humankind. Can we expand our minds?
>
> Regards,
>
> --
> Eray Ozkural (exa) <erayo@...>
> Comp. Sci. Dept., Bilkent University, Ankara KDE Project: http://www.kde.org
> http://www.cs.bilkent.edu.tr/~erayo Malfunction: http://malfunct.iuma.com
> GPG public key fingerprint: 360C 852F 88B0 A745 F31B EA0F 7C07 AE16 874D 539C
>

Eray Ozkural


Message 3 of 23
Nov 14, 2011


Thanks SWM.

I mean to say something like this. When AIs are commonplace, I think people will commonly attribute minds to them. But then they will see that there are different kinds of minds, because some of those minds will be quite unlike a human mind. There might also be minds far beyond our own, next to which ours will seem infantile; and there will likely be cyborgs, for instance, people with cybernetic intelligence augmentation, who will constitute yet other kinds of minds.

I think that a mind depends on its architecture and functionality; as these proliferate and expand, so will the mind — that is, our theory of mind — as it tries to encompass all of them.

Most likely, in the future, we will identify minds with intelligence, because that seems to be the most necessary property of a mind.

However, some questions will be forever debated. Is personhood necessary for being a mind? Is it necessary to have an autobiographical history? Is autonomy required? Is any kind of behavior required? Is language required?

In philosophy, I think, a very *narrow* conception of mind has been assumed, for instance in Davidson's (IMHO terrible!) swamp man thought experiment.

In the near future, our conception of mind will radically change, and expand.

> An interesting and thoughtful piece, Eray! Have you considered that the whole problem may just lie in the limits of language itself, a la Wittgenstein? In other words, if language is public in venue and application, turning it toward what is fundamentally not public but private becomes an attempt to use the wrong tools for the wrong job and so we run up against all sorts of limitations in trying to define and describe the features we find in ourselves that we tend to call "mind" or "consciousness" or even "intelligence".
>
> At least language, which we do seem able to apply to some degree to questions about our mental lives, may simply need to be used differently, with different expectations as to meanings and so forth. Perhaps it's just a mistake to expect something definite as a referent. Perhaps Marvin Minsky's concept of a "suitcase word" is the best one can hope for and is, in fact, enough.
>
> I agree though that it's very important to get clear on what we mean, especially in any attempt to build something like an artificial mind. After all, to the extent we can't stop arguing about what counts as that, we will be unable to find agreement if and when someone actually builds something that seems to count functionally as a mind.
>
> SWM

--- In ai-philosophy@yahoogroups.com, Eray Ozkural <erayo@...> wrote:
>
> [This is a half-serious argument that the concept of mind evolves. I myself
> don't believe that this is the case, but it's a possibility.]
>
> For centuries, we have been trying to define the mind, but one definition of
> the mind is that it breaks away from shallow definitions. In this essay, I
> will consider the possibility that the concept of mind evolves. I will not
> give any logical argument to back up my claim. Rather, I will try to persuade
> you through some simple examples.
>
> It is a low, but not entirely inexistent possibility that the mind is an open
> concept. A theory of mind is a locus of attention and argumentative power in
> the metaphysics of a philosopher. Thus, it would have to be compatible with
> his overall philosophy. For this reason, the concept of mind has been abused
> to the extremes, as for instance in B.F. Skinner's naive socialism. However,
> the mind resists being defined in restricted theoretical vocabulary.
>
> A popular argument in the backstage of AI research is that we are likening the
> mind/brain to the most advanced technology of the time, e.g. it used to be
> thought of as a steam engine, then the computer and now the Internet.
> Therefore, the current popular model is simply a mirror of present
> technology, e.g. it does not explain anything. (Argument due to Brooks) This
> is not a scientifically interesting argument because the mind may actually
> turn out to be a computer, in which case it would be quite similar to a
> desktop computer or the Internet in some aspects. If an AI researcher does
> not think so, he may as well be wasting his time. On the other hand, most of
> those analogies stand metaphorically. The philosophers are often careful
> enough to elaborate what they mean by a machine. That the computer is a
> particularly interesting kind of machine is another matter.
>
> Unfortunately, we encounter a similar situation in the philosophy of mind, not
> in terms of machines, but about what the mind is. The concept of mind can be
> roughly thought of as the class of objects to which we attribute mind. But it
> also contains certain elements, e.g. mental processes, which we believe are
> part of mind. Most of these elements seem plastic, e.g. they are learnt or
> modifiable, but they are also essential to our present understanding of the
> mind. Saying that tables do not have minds hardly contributes to our
> conception of mind. Saying what conscience is would, though. When Freud
> talked about superego, I think advanced scientific understanding of
> conscience, but we must be careful: it is not trivial to reduce conscience to
> superego. A suspicion is invoked then, perhaps that is because conscience is
> a construct.
>
> I think that the concept of mind may evolve not because our knowledge about
> mind is insufficient, but because that may be a property of minds. I think
> there may be metaphysical reason for our inefficiency in compressing the
> concept of mind to a one-page description, namely the plurality of minds,
> e.g. as in my Multism derived from Spinoza/Leibniz. The overview of the idea
> goes like this. It is not our problem whether conscience or freedom may be
> reduced to a computational or biological explanation, if the God defined by
> holy scriptures do not exist, that is almost surely the case. The problem is
> the plasticity of concepts like conscience or freedom. I believe we will have
> great difficulty when we build an "intelligent" machine, and ask whether it
> has a sense of freedom, love or conscience.
>
> An intriguing property of subjective experience is that, it evades accurate
> first-person observation since allocating resources to observe itself
> changes its state. This is sometimes likened to the principle of uncertainty
> in informal discussions. Another version is whenever we talk about the mind,
> we set new horizons for the mind. Mind always walks a step beyond our
> definition, but we can never reach out and fully grab it. A partial reason
> for this might be that the mind itself is theoretical, e.g. we construct a
> new mind when we talk about the mind, and if it works, it becomes part of the
> society. There may be some merit in thinking about the concept of mind as a
> stock market of ideas. Some prices go up and some fall, but new stocks can
> come in. There is the possibility that a new idea can expand (the concept of)
> mind beyond the present. That would be the case, if for instance, mind is our
> contrivance! 3000 years ago, people thought differently about this subject,
> and perhaps that *made* what we call "mind" different than today, for our
> societies and needs are different.
>
> Science may be such an open theoretical concept, which shapes according to
> needs, which we decide. That is, science does not exist prior to its
> practice. Mind does not exist prior to thought. The concept of mind does not
> exist prior to thinking about the mind!
>
> Therefore, I think the claims of (early) religions about the mind may have
> been appropriate for the period. These primitive theories may have helped to
> introduce beastly human beings to a new mind which could assess theories of
> ethics.
>
> In a future psychology book, we might find high-level mental concepts that did
> not exist in 21st century. We might also find that Dennett was right, that
> freedom has evolved.
>
> To open the doors of a new mind would be among the greatest possible
> contributions to humankind. Can we expand our minds?
>
> Regards,
>
> --
> Eray Ozkural (exa) <erayo@...>
> Comp. Sci. Dept., Bilkent University, Ankara KDE Project: http://www.kde.org
> http://www.cs.bilkent.edu.tr/~erayo Malfunction: http://malfunct.iuma.com
> GPG public key fingerprint: 360C 852F 88B0 A745 F31B EA0F 7C07 AE16 874D 539C
>


Message 4 of 23
Nov 14, 2011


I think you're right about the importance of actual events, Eray. When machine minds -- of varying capacities -- are common, it will expand our idea of what minds are AND it will no longer matter much, except at the margins, whether machine minds are "really minds" according to some strict definition driven by what we think about ourselves now.

Treating intelligent machines as minds (which we will have to do when obliged to interact with them), especially when the "intelligence" is coupled with awareness (as I think will likely happen at some point), will drive our thinking about such machines and, so, about the notion of mind itself.

And then philosophers will have to expend their energies refining the concept in terms of all that new data rather than arguing about whether minds are part of the full panoply of a physical universe or just some kind of mysterious add-on. But for now, and for the foreseeable future, there's still plenty of room to argue about the latter issue, at least.

I know how exercised you get over dualism and you know I agree with you to the extent that I think it's probably wrong (no evidence for, or reason to posit, it). But it looks like dualism is going to be with us for a long time nonetheless -- or at least until we start to get some successful AI implementations that finally marginalize that thesis.

As we can see on that other list, there is no shaking some folks loose from a dualist concept of mind.

SWM

--- In ai-philosophy@yahoogroups.com, Eray Ozkural <erayo@...> wrote:
>
> Thanks SWM.
>
> I mean to say something like this. When AI's are commonplace, I think
> people will commonly attribute to them minds. But then they will see that
> there are different kinds of minds, because I think some of those minds
> will be quite unlike a human mind. And then, there might be minds much
> beyond our own, and our minds will be infantile compared to them, there
> will likely be cyborgs, for instance, people with cybernetic intelligence
> augmentation, those will yet constitute other kinds of minds.
>
> I think that a mind depends on its architecture, and functionality, as
> these proliferate and expand, so will the mind, that is our theory of mind,
> as it will try to encompass all of them.
>
> Most likely, in the future, we will identify minds with intelligence,
> because that seems to be the most necessary property of a mind.
>
> However, some questions will be forever debated. Is personhood necessary
> for being a mind? Is it necessary to have an autobiographical history? Is
> autonomy required? Is any kind of behavior required? Is language required?
>
> In philosophy, I think, a very *narrow* conception of mind has been
> assumed, for instance in Davidson's (IMHO terrible!) swamp man thought
> experiment.
>
> In the near future, our conception of mind will radically change, and
> expand.
>
> Regards,
>
> On Mon, Nov 14, 2011 at 3:20 PM, SWM <swmaerske@...> wrote:
>
> > An interesting and thoughtful piece, Eray! Have you considered that the
> > whole problem may just lie in the limits of language itself, a la
> > Wittgenstein? In other words, if language is public in venue and
> > application, turning it toward what is fundamentally not public but private
> > becomes an attempt to use the wrong tools for the wrong job and so we run
> > up against all sorts of limitations in trying to define and describe the
> > features we find in ourselves that we tend to call "mind" or
> > "consciousness" or even "intelligence".
> >
> > At least language, which we do seem able to apply to some degree to
> > questions about our mental lives, may simply need to be used differently,
> > with different expectations as to meanings and so forth. Perhaps it's just
> > a mistake to expect something definite as a referent. Perhaps Marvin
> > Minsky's concept of a "suitcase word" is the best one can hope for and is,
> > in fact, enough.
> >
> > I agree though that it's very important to get clear on what we mean,
> > especially in any attempt to build something like an artificial mind. After
> > all, to the extent we can't stop arguing about what counts as that, we will
> > be unable to find agreement if and when someone actually builds something
> > that seems to count functionally as a mind.
> >
> > SWM
> >
> > --- In ai-philosophy@yahoogroups.com, Eray Ozkural <erayo@> wrote:
> > >
> > > [This is a half-serious argument that the concept of mind evolves. I
> > myself
> > > don't believe that this is the case, but it's a possibility.]
> > >
> > > For centuries, we have been trying to define the mind, but one
> > definition of
> > > the mind is that it breaks away from shallow definitions. In this essay,
> > I
> > > will consider the possibility that the concept of mind evolves. I will
> > not
> > > give any logical argument to back up my claim. Rather, I will try to
> > persuade
> > > you through some simple examples.
> > >
> > > It is a low, but not entirely inexistent possibility that the mind is an
> > open
> > > concept. A theory of mind is a locus of attention and argumentative
> > power in
> > > the metaphysics of a philosopher. Thus, it would have to be compatible
> > with
> > > his overall philosophy. For this reason, the concept of mind has been
> > abused
> > > to the extremes, as for instance in B.F. Skinner's naive socialism.
> > However,
> > > the mind resists being defined in restricted theoretical vocabulary.
> > >
> > > A popular argument in the backstage of AI research is that we are
> > likening the
> > > mind/brain to the most advanced technology of the time, e.g. it used to
> > be
> > > thought of as a steam engine, then the computer and now the Internet.
> > > Therefore, the current popular model is simply a mirror of present
> > > technology, e.g. it does not explain anything. (Argument due to Brooks)
> > This
> > > is not a scientifically interesting argument because the mind may
> > actually
> > > turn out to be a computer, in which case it would be quite similar to a
> > > desktop computer or the Internet in some aspects. If an AI researcher
> > does
> > > not think so, he may as well be wasting his time. On the other hand,
> > most of
> > > those analogies stand metaphorically. The philosophers are often careful
> > > enough to elaborate what they mean by a machine. That the computer is a
> > > particularly interesting kind of machine is another matter.
> > >
> > > Unfortunately, we encounter a similar situation in the philosophy of
> > mind, not
> > > in terms of machines, but about what the mind is. The concept of mind
> > can be
> > > roughly thought of as the class of objects to which we attribute mind.
> > But it
> > > also contains certain elements, e.g. mental processes, which we believe
> > are
> > > part of mind. Most of these elements seem plastic, e.g. they are learnt
> > or
> > > modifiable, but they are also essential to our present understanding of
> > the
> > > mind. Saying that tables do not have minds hardly contributes to our
> > > conception of mind. Saying what conscience is would, though. When Freud
> > > talked about superego, I think advanced scientific understanding of
> > > conscience, but we must be careful: it is not trivial to reduce
> > conscience to
> > > superego. A suspicion is invoked then, perhaps that is because
> > conscience is
> > > a construct.
> > >
> > > I think that the concept of mind may evolve not because our knowledge
> > about
> > > mind is insufficient, but because that may be a property of minds. I
> > think
> > > there may be metaphysical reason for our inefficiency in compressing the
> > > concept of mind to a one-page description, namely the plurality of minds,
> > > e.g. as in my Multism derived from Spinoza/Leibniz. The overview of the
> > idea
> > > goes like this. It is not our problem whether conscience or freedom may
> > be
> > > reduced to a computational or biological explanation, if the God defined
> > by
> > > holy scriptures do not exist, that is almost surely the case. The
> > problem is
> > > the plasticity of concepts like conscience or freedom. I believe we will
> > have
> > > great difficulty when we build an "intelligent" machine, and ask whether
> > it
> > > has a sense of freedom, love or conscience.
> > >
> > > An intriguing property of subjective experience is that, it evades
> > accurate
> > > first-person observation since allocating resources to observe itself
> > > changes its state. This is sometimes likened to the principle of
> > uncertainty
> > > in informal discussions. Another version is whenever we talk about the
> > mind,
> > > we set new horizons for the mind. Mind always walks a step beyond our
> > > definition, but we can never reach out and fully grab it. A partial
> > reason
> > > for this might be that the mind itself is theoretical, e.g. we construct
> > a
> > > new mind when we talk about the mind, and if it works, it becomes part
> > of the
> > > society. There may be some merit in thinking about the concept of mind
> > as a
> > > stock market of ideas. Some prices go up and some fall, but new stocks
> > can
> > > come in. There is the possibility that a new idea can expand (the
> > concept of)
> > > mind beyond the present. That would be the case, if for instance, mind
> > is our
> > > contrivance! 3000 years ago, people thought differently about this
> > subject,
> > > and perhaps that *made* what we call "mind" different than today, for our
> > > societies and needs are different.
> > >
> > > Science may be such an open theoretical concept, which shapes according
> > to
> > > needs, which we decide. That is, science does not exist prior to its
> > > practice. Mind does not exist prior to thought. The concept of mind does
> > not
> > > exist prior to thinking about the mind!
> > >
> > > Therefore, I think the claims of (early) religions about the mind may
> > have
> > > been appropriate for the period. These primitive theories may have
> > helped to
> > > introduce beastly human beings to a new mind which could assess theories
> > of
> > > ethics.
> > >
> > > In a future psychology book, we might find high-level mental concepts
> > that did
> > > not exist in 21st century. We might also find that Dennett was right,
> > that
> > > freedom has evolved.
> > >
> > > To open the doors of a new mind would be among the greatest possible
> > > contributions to humankind. Can we expand our minds?
> > >
> > > Regards,
> > >
> > > --
> > > Eray Ozkural (exa) <erayo@>
> > > Comp. Sci. Dept., Bilkent University, Ankara KDE Project:
> > http://www.kde.org
> > > http://www.cs.bilkent.edu.tr/~erayo Malfunction:
> > http://malfunct.iuma.com
> > > GPG public key fingerprint: 360C 852F 88B0 A745 F31B EA0F 7C07 AE16
> > 874D 539C
> > >
> >
> >
> >
> >
> > ------------------------------------
> >
> > Yahoo! Groups Links
> >
> >
> >
> >
>
>
> --
> Eray Ozkural, PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
> http://groups.yahoo.com/group/ai-philosophy
> http://myspace.com/arizanesil http://myspace.com/malfunct
>
